1. Chen H, Yan G, Wen MH, Brooks KN, Zhang Y, Huang PS, Chen TY. Advancements and Practical Considerations for Biophysical Research: Navigating the Challenges and Future of Super-resolution Microscopy. Chemical & Biomedical Imaging 2024; 2:331-344. PMID: 38817319; PMCID: PMC11134610; DOI: 10.1021/cbmi.4c00019.
Abstract
The introduction of super-resolution microscopy (SRM) has significantly advanced our understanding of cellular and molecular dynamics, offering a detailed view previously beyond our reach. Implementing SRM in biophysical research, however, presents numerous challenges. This review addresses the crucial aspects of utilizing SRM effectively, from selecting appropriate fluorophores and preparing samples to analyzing complex data sets. We explore recent technological advancements and methodological improvements that enhance the capabilities of SRM. Emphasizing the integration of SRM with other analytical methods, we aim to overcome inherent limitations and expand the scope of biological insights achievable. By providing a comprehensive guide for choosing the most suitable SRM methods based on specific research objectives, we aim to empower researchers to explore complex biological processes with enhanced precision and clarity, thereby advancing the frontiers of biophysical research.
Affiliation(s)
- Huanhuan Chen, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Guangjie Yan, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Meng-Hsuan Wen, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Kameron N. Brooks, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Yuteng Zhang, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Pei-San Huang, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
- Tai-Yen Chen, Department of Chemistry, University of Houston, Houston, Texas 77204, United States
2. Kuhn TM, Paulsen M, Cuylen-Haering S. Accessible high-speed image-activated cell sorting. Trends Cell Biol 2024:S0962-8924(24)00094-1. PMID: 38789300; DOI: 10.1016/j.tcb.2024.04.007.
Abstract
Over the past six decades, fluorescence-activated cell sorting (FACS) has become an essential technology for basic and clinical research by enabling the isolation of cells of interest in high throughput. Recent technological advancements have started a new era of flow cytometry. By combining the spatial resolution of microscopy with high-speed cell sorting, new instruments allow cell sorting based on simple image-derived parameters or sophisticated image analysis algorithms, thereby greatly expanding the scope of applications. In this review, we discuss the systems that are commercially available or have been described in enough methodological and engineering detail to allow their replication. We summarize their strengths and limitations and highlight applications that have the potential to transform various fields in basic life science research and clinical settings.
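As a toy illustration of the "simple image-derived parameters" such sorters gate on (hypothetical thresholds, not any instrument's implementation), a sort decision can be as lightweight as thresholding a cell image and gating on its bright-pixel area:

```python
# Hypothetical sketch of an image-derived sort gate: binarize a small
# grayscale cell image and accept the cell only if its bright-pixel
# area falls inside a user-chosen window. All thresholds are made up.

def sort_decision(image, intensity_threshold=0.5, min_area=4, max_area=50):
    """Return True (sort) if the thresholded area lies in [min_area, max_area]."""
    area = sum(p > intensity_threshold for row in image for p in row)
    return min_area <= area <= max_area

cell = [[0.0, 0.9, 0.9, 0.0],
        [0.8, 0.9, 0.9, 0.7],
        [0.0, 0.9, 0.8, 0.0]]   # 8 pixels above threshold -> inside the gate
```

The appeal of such parameters is that they can be computed in microseconds per event, which is what makes image-activated sorting feasible at cytometry throughput.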
Affiliation(s)
- Terra M Kuhn, Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Malte Paulsen, Novo Nordisk Foundation Center for Stem Cell Medicine, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Sara Cuylen-Haering, Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
3. Dotti P, Fernandez-Tenorio M, Janicek R, Márquez-Neila P, Wullschleger M, Sznitman R, Egger M. A deep learning-based approach for efficient detection and classification of local Ca²⁺ release events in full-frame confocal imaging. Cell Calcium 2024; 121:102893. PMID: 38701707; DOI: 10.1016/j.ceca.2024.102893.
Abstract
The release of Ca2+ ions from intracellular stores plays a crucial role in many cellular processes, acting as a secondary messenger in various cell types, including cardiomyocytes, smooth muscle cells, hepatocytes, and many others. Detecting and classifying the associated local Ca2+ release events is particularly important, as these events provide insight into the mechanisms, interplay, and interdependencies of the local Ca2+ release events underlying global intracellular Ca2+ signaling. However, time-consuming and labor-intensive procedures often complicate analysis, especially with low signal-to-noise ratio imaging data. Here, we present an innovative deep learning-based approach for automatically detecting and classifying local Ca2+ release events, exemplified with rapid full-frame confocal imaging data recorded in isolated cardiomyocytes. To demonstrate the robustness and accuracy of our method, we first use conventional evaluation methods, comparing the intersection between manual annotations and the segmentation of Ca2+ release events produced by the deep learning method, as well as the annotated and recognized instances of individual events. We also compare the performance of the proposed model with annotations from six experts in the field. Our model recognizes more than 75% of the annotated Ca2+ release events and correctly classifies more than 75% of them. A key result is that there were no significant differences between the annotations produced by human experts and the output of the proposed deep learning model. We conclude that the proposed approach is a robust and time-saving alternative to conventional full-frame confocal imaging analysis of local intracellular Ca2+ events.
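The overlap-based evaluation the abstract describes can be sketched as intersection-over-union (IoU) between annotated and predicted event pixels, plus an event-level recall; this is a generic sketch of that style of evaluation, not the authors' exact protocol, and the pixel representation and 0.5 matching threshold are assumptions:

```python
# Generic sketch of segmentation evaluation: events are sets of
# (frame, y, x) pixels; IoU measures overlap, and detection_recall
# counts annotated events matched by at least one prediction.

def iou(annotation, segmentation):
    """Intersection-over-union of two pixel collections; 1.0 = perfect overlap."""
    annotation, segmentation = set(annotation), set(segmentation)
    union = annotation | segmentation
    if not union:
        return 1.0  # both empty: treat as perfect agreement
    return len(annotation & segmentation) / len(union)

def detection_recall(events, detections, threshold=0.5):
    """Fraction of annotated events matched by some detection with IoU >= threshold
    (cf. the >75% recognition rate reported in the abstract)."""
    matched = sum(
        1 for ev in events
        if any(iou(ev, det) >= threshold for det in detections)
    )
    return matched / len(events) if events else 1.0
```

For example, two events sharing one of three total pixels score an IoU of 1/3 and would not count as a match at the 0.5 threshold.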
Affiliation(s)
- Prisca Dotti, Department of Physiology, Universität Bern, Bern, Switzerland; ARTORG Center, Universität Bern, Bern, Switzerland
- Marcel Egger, Department of Physiology, Universität Bern, Bern, Switzerland
4. Ibrahim KA, Naidu AS, Miljkovic H, Radenovic A, Yang W. Label-Free Techniques for Probing Biomolecular Condensates. ACS Nano 2024; 18:10738-10757. PMID: 38609349; DOI: 10.1021/acsnano.4c01534.
Abstract
Biomolecular condensates play important roles in a wide array of fundamental biological processes, such as cellular compartmentalization, cellular regulation, and other biochemical reactions. Since their discovery, an extensive library of tools has been developed to investigate their properties, encompassing structural and compositional information, material properties, and their evolution throughout the life cycle from formation to eventual dissolution. This Review presents an overview of the tools and methods that researchers use to probe the properties of biomolecular condensates across diverse scales of length, concentration, stiffness, and time. In particular, we review the exciting development of label-free techniques and methodologies in recent years. We broadly organize the set of tools into three categories: (1) imaging-based techniques, such as transmitted-light microscopy (TLM) and Brillouin microscopy (BM); (2) force spectroscopy techniques, such as atomic force microscopy (AFM) and optical tweezers (OT); and (3) microfluidic platforms and emerging technologies. We point out each tool's key opportunities, challenges, and future perspectives and analyze its correlative potential and compatibility with other techniques. Additionally, we review emerging techniques, namely differential dynamic microscopy (DDM) and interferometric scattering microscopy (iSCAT), that hold great potential for future applications in studying biomolecular condensates. Finally, we highlight how some of these techniques can be translated for diagnostic and therapeutic purposes. We hope this Review serves as a useful guide for new researchers in this field and aids in advancing the development of new biophysical tools to study biomolecular condensates.
5. Brückner DB, Broedersz CP. Learning dynamical models of single and collective cell migration: a review. Reports on Progress in Physics 2024; 87:056601. PMID: 38518358; DOI: 10.1088/1361-6633/ad36d2.
Abstract
Single and collective cell migration are fundamental processes critical for physiological phenomena ranging from embryonic development and immune response to wound healing and cancer metastasis. To understand cell migration from a physical perspective, a broad variety of models for the underlying physical mechanisms that govern cell motility have been developed. A key challenge in the development of such models is how to connect them to experimental observations, which often exhibit complex stochastic behaviours. In this review, we discuss recent advances in data-driven theoretical approaches that directly connect with experimental data to infer dynamical models of stochastic cell migration. Leveraging advances in nanofabrication, image analysis, and tracking technology, experimental studies now provide unprecedented large datasets on cellular dynamics. In parallel, theoretical efforts have been directed towards integrating such datasets into physical models from the single cell to the tissue scale with the aim of conceptualising the emergent behaviour of cells. We first review how this inference problem has been addressed in both freely migrating and confined cells. Next, we discuss why these dynamics typically take the form of underdamped stochastic equations of motion, and how such equations can be inferred from data. We then review applications of data-driven inference and machine learning approaches to heterogeneity in cell behaviour, subcellular degrees of freedom, and to the collective dynamics of multicellular systems. Across these applications, we emphasise how data-driven methods can be integrated with physical active matter models of migrating cells, and help reveal how underlying molecular mechanisms control cell behaviour. Together, these data-driven approaches are a promising avenue for building physical models of cell migration directly from experimental data, and for providing conceptual links between different length-scales of description.
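The core idea of inferring underdamped stochastic equations of motion from trajectory data can be shown in miniature (this is a generic illustration, not the authors' inference framework): simulate dv = -γ·v·dt + σ·dW and recover the friction coefficient γ by regressing the velocity increments on the velocity itself.

```python
import random

# Minimal illustration of data-driven inference of a stochastic equation
# of motion: the conditional mean of the velocity increment, <dv | v>,
# is -gamma*v*dt, so a least-squares slope of dv against v yields gamma.
# Parameter values here are arbitrary toy choices.

def simulate(gamma=2.0, sigma=0.5, dt=0.01, steps=200_000, seed=1):
    """Euler-Maruyama simulation of dv = -gamma*v*dt + sigma*dW."""
    rng = random.Random(seed)
    v, vs, dvs = 0.0, [], []
    for _ in range(steps):
        dv = -gamma * v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        vs.append(v)
        dvs.append(dv)
        v += dv
    return vs, dvs

def infer_gamma(vs, dvs, dt):
    """Least-squares slope of dv vs v (through the origin), rescaled by dt."""
    num = sum(v * dv for v, dv in zip(vs, dvs))
    den = sum(v * v for v in vs)
    return -num / (den * dt)

vs, dvs = simulate()
gamma_hat = infer_gamma(vs, dvs, dt=0.01)  # should land near the true gamma = 2.0
```

Real trajectories add complications the review discusses at length (measurement noise, discretization bias, state-dependent terms), but the conditional-moment logic is the same.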
Affiliation(s)
- David B Brückner, Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria
- Chase P Broedersz, Department of Physics and Astronomy, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands; Arnold Sommerfeld Center for Theoretical Physics and Center for NanoScience, Department of Physics, Ludwig-Maximilian-University Munich, Theresienstr. 37, D-80333 Munich, Germany
6. Ma J, Chen H. Efficient Supervised Pretraining of Swin-Transformer for Virtual Staining of Microscopy Images. IEEE Transactions on Medical Imaging 2024; 43:1388-1399. PMID: 38010933; DOI: 10.1109/tmi.2023.3337253.
Abstract
Fluorescence staining is an important technique in the life sciences for labeling cellular constituents. However, it is time-consuming and makes simultaneous labeling of multiple constituents difficult. Virtual staining, which does not rely on chemical labeling, has therefore been introduced. Recently, deep learning models such as transformers have been applied to virtual staining tasks, but their performance relies on large-scale pretraining, hindering their adoption in the field. To reduce the reliance on large amounts of computation and data, we construct a Swin-transformer model and propose an efficient supervised pretraining method based on the masked autoencoder (MAE). Specifically, we adopt downsampling and grid sampling to mask 75% of pixels and reduce the number of tokens; the pretraining time of our method is only 1/16 that of the original MAE. We also design a supervised proxy task that predicts stained images in multiple styles instead of masked pixels. Additionally, most virtual staining approaches are based on private datasets and evaluated with different metrics, making fair comparison difficult. We therefore develop a standard benchmark based on three public datasets and build a baseline for future researchers. Extensive experiments on the three benchmark datasets show that the proposed method achieves the best performance both quantitatively and qualitatively, and ablation studies confirm the effectiveness of the proposed pretraining method. The benchmark and code are available at https://github.com/birkhoffkiki/CAS-Transformer.
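The grid-sampling idea behind the 75% mask ratio can be sketched as follows (a simplified reading of the abstract, not the paper's actual implementation, which operates on patch tokens rather than raw pixel lists): keeping one pixel from every 2×2 block retains 25% of the pixels and shrinks the token grid 4-fold before the encoder sees it.

```python
# Sketch of grid sampling for MAE-style pretraining: stride-2 sampling
# in both dimensions keeps 1 pixel per 2x2 block, i.e. a 75% mask ratio.

def grid_sample(image, stride=2, offset=(0, 0)):
    """Keep every `stride`-th pixel in both dimensions of a 2D list."""
    oy, ox = offset
    return [row[ox::stride] for row in image[oy::stride]]

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
visible = grid_sample(image)  # 2x2 grid of visible pixels
mask_ratio = 1 - (len(visible) * len(visible[0])) / (len(image) * len(image[0]))
```

Because the surviving pixels form a regular coarse grid, they can be fed to the encoder as a smaller dense image instead of a sparse token set, which is what makes the scheme cheap.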
7. Trettner KJ, Hsieh J, Xiao W, Lee JSH, Armani AM. Nondestructive, quantitative viability analysis of 3D tissue cultures using machine learning image segmentation. APL Bioeng 2024; 8:016121. PMID: 38566822; PMCID: PMC10985731; DOI: 10.1063/5.0189222.
Abstract
Ascertaining the collective viability of cells in different cell culture conditions has typically relied on averaging colorimetric indicators and is often reported as a simple binary readout. Recent research has combined viability assessment techniques with image-based deep-learning models to automate the characterization of cellular properties. However, further development of viability measurements is needed to assess the continuum of possible cellular states and responses to perturbation across cell culture conditions. In this work, we demonstrate an image processing algorithm for quantifying features associated with cellular viability in 3D cultures without the need for assay-based indicators. We show that our algorithm performs comparably to a pair of human experts on whole-well images over a range of days and culture matrix compositions. To demonstrate its potential utility, we perform a longitudinal study investigating the impact of a known therapeutic on pancreatic cancer spheroids. Using images taken with a high-content imaging system, the algorithm successfully tracks viability at both the individual-spheroid and whole-well level, and reduces analysis time by 97% compared with the experts. Because the method is independent of the microscope or imaging system used, this approach lays the foundation for accelerating progress in 3D culture analysis and for improving its robustness and reproducibility across biological and clinical research.
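A deliberately minimal, hypothetical version of an assay-free viability readout (the paper's algorithm is far richer, with segmentation and per-spheroid tracking) is to threshold a grayscale image and report the fraction of bright pixels as a continuous viability score rather than a binary call:

```python
# Toy assay-free viability readout: the fraction of pixels above an
# intensity threshold serves as a continuous score in [0, 1].
# The 0.5 threshold and bright-equals-viable convention are assumptions.

def viable_fraction(image, threshold=0.5):
    """Fraction of pixels above `threshold`; 0.0 for an empty image."""
    pixels = [p for row in image for p in row]
    if not pixels:
        return 0.0
    return sum(p > threshold for p in pixels) / len(pixels)

toy = [[0.9, 0.8], [0.2, 0.7]]  # 3 of 4 pixels above threshold
```

Reporting a fraction instead of a live/dead verdict is what lets such a readout track gradual responses to a therapeutic over days, as in the longitudinal study described above.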
Affiliation(s)
- Jeremy Hsieh, Pasadena Polytechnic High School, Pasadena, California 91106, USA
- Weikun Xiao, Ellison Institute of Technology, Los Angeles, California 90064, USA
8. Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, et al. Understanding metric-related pitfalls in image analysis validation. arXiv 2024: arXiv:2302.01790v4. PMID: 36945687; PMCID: PMC10029046.
Abstract
Validation metrics are key for the reliable tracking of scientific progress and for bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that particularly in image analysis, metrics are often chosen inadequately in relation to the underlying research problem. This could be attributed to a lack of accessibility of metric-related knowledge: While taking into account the individual strengths, weaknesses, and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multi-stage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides the first reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. To facilitate comprehension, illustrations and specific examples accompany each pitfall. As a structured body of information accessible to researchers of all levels of expertise, this work enhances global comprehension of a key topic in image analysis validation.
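A concrete instance of the kind of pitfall this work catalogs (my own toy numbers, not an example from the paper): on a class-imbalanced segmentation task, plain pixel accuracy can look excellent while an overlap metric such as the Dice score exposes that the rare foreground was never found.

```python
# Metric pitfall under class imbalance: a model that predicts "background
# everywhere" scores high accuracy but zero Dice on a 2%-foreground mask.

def accuracy(pred, truth):
    """Fraction of matching labels."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def dice(pred, truth):
    """Dice score 2*TP / (|pred| + |truth|); assumes truth has foreground."""
    tp = sum(p and t for p, t in zip(pred, truth))
    return 2 * tp / (sum(pred) + sum(truth))

truth = [1] * 2 + [0] * 98  # 2% foreground, e.g. a small lesion
pred = [0] * 100            # predicts "no foreground" everywhere

acc = accuracy(pred, truth)
dsc = dice(pred, truth)
```

The accuracy of 0.98 would pass most naive validation thresholds, which is exactly why metric choice has to match the research problem.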
Affiliation(s)
- Annika Reinke, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Minu D. Tizabi, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- Michael Baumgartner, German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Matthias Eisenmann, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Doreen Heckmann-Nötzel, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- A. Emre Kavur, HI Applied Computer Vision Lab, Division of Medical Image Computing; German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Tim Rädsch, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany
- Carole H. Sudre, MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Laura Acion, Instituto de Cálculo, CONICET – Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli, School of Biomedical Engineering and Imaging Science, King's College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel, Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montreal, Canada
- Spyridon Bakas, Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, USA; Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis, Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel; European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Matthew B. Blaschko, Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Leuven, Belgium
- Florian Buettner, German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Germany; Goethe University Frankfurt, Department of Medicine, Germany; Goethe University Frankfurt, Department of Informatics, Germany; Frankfurt Cancer Institute, Germany
- M. Jorge Cardoso, School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina, Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen, Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Dortmund, Germany
- Evangelia Christodoulou, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany
- Beth A. Cimini, Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, Massachusetts, USA
- Gary S. Collins, Centre for Statistics in Medicine, University of Oxford, Oxford, UK
- Keyvan Farahani, Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer, Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Universitaria, Ciudad Autónoma de Buenos Aires, Argentina
- Adrian Galdran, Universitat Pompeu Fabra, Barcelona, Spain; University of Adelaide, Adelaide, Australia
- Bram van Ginneken, Fraunhofer MEVIS, Bremen, Germany; Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Ben Glocker, Department of Computing, Imperial College London, London, UK
- Patrick Godau, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
- Robert Haase, now with: Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig University, Leipzig, Germany; DFG Cluster of Excellence "Physics of Life", Technische Universität (TU) Dresden, Dresden, Germany; Center for Systems Biology, Dresden, Germany
- Daniel A. Hashimoto, Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA; General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M. Hoffman, Princess Margaret Cancer Centre, University Health Network, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Canada; Vector Institute for Artificial Intelligence, Toronto, Canada
- Merel Huisman, Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Fabian Isensee, German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing and HI Applied Computer Vision Lab, Germany
- Pierre Jannin, Laboratoire Traitement du Signal et de l'Image – UMR_S 1099, Université de Rennes 1, Rennes, France; INSERM, Paris Cedex, France
- Charles E. Kahn, Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller, Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany; University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz, Department of Computing, Faculty of Engineering, Imperial College London, London, UK; Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Hannes Kenngott, Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Jens Kleesiek, Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Michal Kozubek, Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk, Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc, Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Geert Litjens, Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Amin Madani, Department of Surgery, University Health Network, Toronto, ON, Canada
- Klaus Maier-Hein, German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing and HI Helmholtz Imaging, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L. Martel, Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Erik Meijering, School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Bjoern Menze, Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G.M. Moons, Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, The Netherlands
- Henning Müller, Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- Felix Nickel, Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen, German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Germany
- Nasir Rajpoot, Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes, ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A. Riegler, Simula Metropolitan Center for Digital Engineering, Oslo, Norway; UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez, Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany; Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I. Sánchez, Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- Abdel A. Taha, Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin, Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster, Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium; Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, The Netherlands
- Gaël Varoquaux, Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R. Yaniv, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD, USA
- Paul F. Jäger, German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group and HI Helmholtz Imaging, Germany
- Lena Maier-Hein, German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems and HI Helmholtz Imaging, Germany; Faculty of Mathematics and Computer Science and Medical Faculty, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Germany
9. Abadie K, Clark EC, Valanparambil RM, Ukogu O, Yang W, Daza RM, Ng KKH, Fathima J, Wang AL, Lee J, Nasti TH, Bhandoola A, Nourmohammad A, Ahmed R, Shendure J, Cao J, Kueh HY. Reversible, tunable epigenetic silencing of TCF1 generates flexibility in the T cell memory decision. Immunity 2024; 57:271-286.e13. PMID: 38301652; PMCID: PMC10922671; DOI: 10.1016/j.immuni.2023.12.006.
Abstract
The immune system encodes information about the severity of a pathogenic threat in the quantity and type of memory cells it forms. This encoding emerges from lymphocyte decisions to maintain or lose self-renewal and memory potential during a challenge. By tracking CD8+ T cells at the single-cell and clonal lineage level using time-resolved transcriptomics, quantitative live imaging, and an acute infection model, we find that T cells will maintain or lose memory potential early after antigen recognition. However, following pathogen clearance, T cells may regain memory potential if initially lost. Mechanistically, this flexibility is implemented by a stochastic cis-epigenetic switch that tunably and reversibly silences the memory regulator, TCF1, in response to stimulation. Mathematical modeling shows how this flexibility allows memory T cell numbers to scale robustly with pathogen virulence and immune response magnitudes. We propose that flexibility and stochasticity in cellular decisions ensure optimal immune responses against diverse threats.
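The flavor of the tunable, reversible stochastic switch can be captured in a toy simulation (this is an illustrative caricature, not the authors' mathematical model; all rates are invented): each cell silences its memory regulator with some probability per stimulation and may reactivate after clearance, so the final memory fraction declines smoothly with challenge severity instead of flipping all-or-none.

```python
import random

# Toy two-state stochastic switch: cells start with the memory regulator
# "on", silence it with probability p_off per stimulation round, and
# reactivate with probability p_on once the challenge is cleared.

def memory_fraction(n_cells, n_stimulations, p_off, p_on, seed=0):
    """Fraction of cells with the regulator still on after the full cycle."""
    rng = random.Random(seed)
    on = [True] * n_cells
    for _ in range(n_stimulations):  # challenge phase: stochastic silencing
        on = [state and rng.random() > p_off for state in on]
    # post-clearance phase: reversibility lets some silenced cells recover
    on = [state or rng.random() < p_on for state in on]
    return sum(on) / n_cells

mild = memory_fraction(10_000, 2, p_off=0.2, p_on=0.1)    # weak challenge
severe = memory_fraction(10_000, 8, p_off=0.2, p_on=0.1)  # strong challenge
```

Even this caricature reproduces the qualitative point: the memory pool scales continuously with stimulation history, and reversibility keeps it from collapsing to zero under strong challenge.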
Affiliation(s)
- Kathleen Abadie, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Elisa C Clark, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Rajesh M Valanparambil, Emory Vaccine Center and Department of Microbiology and Immunology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Obinna Ukogu, Department of Applied Mathematics, University of Washington, Seattle, WA 98105, USA
- Wei Yang, Department of Genome Sciences, University of Washington, Seattle, WA 98195, USA
- Riza M Daza, Department of Genome Sciences, University of Washington, Seattle, WA 98195, USA
- Kenneth K H Ng, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Jumana Fathima, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Allan L Wang, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Judong Lee, Emory Vaccine Center and Department of Microbiology and Immunology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Tahseen H Nasti, Emory Vaccine Center and Department of Microbiology and Immunology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Avinash Bhandoola, T-Cell Biology and Development Unit, Laboratory of Genome Integrity, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Armita Nourmohammad, Department of Applied Mathematics, University of Washington, Seattle, WA 98105, USA; Department of Physics, University of Washington, Seattle, WA 98105, USA; Fred Hutchinson Cancer Research Center, Seattle, WA 98109, USA
- Rafi Ahmed, Emory Vaccine Center and Department of Microbiology and Immunology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Jay Shendure, Department of Genome Sciences, University of Washington, Seattle, WA 98195, USA; Brotman Baty Institute for Precision Medicine, Seattle, WA 98195, USA; Allen Discovery Center for Cell Lineage Tracing, Seattle, WA 98109, USA; Howard Hughes Medical Institute, Seattle, WA 98195, USA; Institute for Stem Cell and Regenerative Medicine, University of Washington, Seattle, WA 98109, USA
| | - Junyue Cao
- Department of Genome Sciences, University of Washington, Seattle, WA 98195, USA; Laboratory of Single-Cell Genomics and Population Dynamics, The Rockefeller University, New York, NY 10065, USA.
| | - Hao Yuan Kueh
- Department of Bioengineering, University of Washington, Seattle, WA 98195, USA; Institute for Stem Cell and Regenerative Medicine, University of Washington, Seattle, WA 98109, USA.
| |
Collapse
10
Hatzakis N, Kaestel-Hansen J, de Sautu M, Saminathan A, Scanavachi G, Correia R, Nielsen AJ, Bleshoey S, Boomsma W, Kirchhausen T. Deep learning assisted single particle tracking for automated correlation between diffusion and function. RESEARCH SQUARE 2024:rs.3.rs-3716053. [PMID: 38352328 PMCID: PMC10862944 DOI: 10.21203/rs.3.rs-3716053/v1] [Indexed: 02/21/2024]
Abstract
Sub-cellular diffusion in living systems reflects cellular processes and interactions. Recent advances in optical microscopy allow the tracking of this nanoscale diffusion of individual objects with an unprecedented level of precision. However, the agnostic and automated extraction of functional information from the diffusion of molecules and organelles within the sub-cellular environment is labor-intensive and poses a significant challenge. Here we introduce DeepSPT, a deep learning framework that interprets the diffusional 2D or 3D temporal behavior of objects rapidly, efficiently, and without prior assumptions. Demonstrating its versatility, we have applied DeepSPT to the automated mapping of the early events of viral infection, identifying distinct types of endosomal organelles, and detecting clathrin-coated pits and vesicles with up to 95% accuracy, within seconds instead of weeks. The fact that DeepSPT effectively extracts biological information from diffusion alone illustrates that, besides structure, motion encodes function at the molecular and subcellular level.
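The core idea, classifying motion type from a trajectory's diffusional signature, can be sketched without any deep learning (an illustrative baseline using the MSD scaling exponent; DeepSPT itself learns far richer temporal features, and all names and thresholds here are ours):

```python
import numpy as np

def msd(traj):
    """Time-averaged mean squared displacement of an (N, d) trajectory
    for lag times 1 .. N//4."""
    n = len(traj)
    lags = np.arange(1, max(2, n // 4))
    values = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                       for lag in lags])
    return lags, values

def diffusion_exponent(traj):
    """Fit MSD ~ lag**alpha on a log-log scale; alpha ~ 1 for normal
    diffusion, ~ 2 for directed transport, < 1 for confined motion."""
    lags, values = msd(traj)
    alpha, _ = np.polyfit(np.log(lags), np.log(values), 1)
    return alpha

def classify(traj, confined_below=0.7, directed_above=1.3):
    """Crude three-way label from the fitted exponent alone."""
    alpha = diffusion_exponent(traj)
    if alpha < confined_below:
        return "confined"
    if alpha > directed_above:
        return "directed"
    return "normal"
```

A purely ballistic track gives an exponent of 2 and is labeled "directed", while a simulated random walk stays near 1; real classifiers must of course handle noise, heterogeneity, and switching within one track.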
11
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Indexed: 02/08/2024]
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the illumination that excites fluorescent molecules also harms the samples, jeopardizing the validity of results, particularly in techniques such as super-resolution microscopy, which demand extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed: AI should be used to extract rich insights from gentle imaging rather than to recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
- Joanna W. Pylvänäinen: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Guillaume Jacquemet: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland; InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
- Ricardo Henriques: Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
12
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Buettner F, Cardoso MJ, Cheplygina V, Chen J, Christodoulou E, Cimini BA, Farahani K, Ferrer L, Galdran A, van Ginneken B, Glocker B, Godau P, Hashimoto DA, Hoffman MM, Huisman M, Isensee F, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Kleesiek J, Kofler F, Kooi T, Kopp-Schneider A, Kozubek M, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rafelski SM, Rajpoot N, Reyes M, Riegler MA, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Yaniv ZR, Jäger PF, Maier-Hein L. Understanding metric-related pitfalls in image analysis validation. Nat Methods 2024; 21:182-194. [PMID: 38347140 DOI: 10.1038/s41592-023-02150-0] [Received: 02/09/2023] [Accepted: 12/12/2023] [Indexed: 02/15/2024]
Abstract
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
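One classic pitfall this paper catalogues, the dependence of overlap metrics on structure size, is easy to demonstrate (a minimal sketch; the masks and sizes are our own toy example, not taken from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient of two boolean segmentation masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def square_with_shaved_row(side, shape=(64, 64)):
    """Ground-truth square of `side` pixels per edge and a prediction that
    misses exactly one boundary row -- the same absolute error at any size."""
    gt = np.zeros(shape, dtype=bool)
    gt[:side, :side] = True
    pred = gt.copy()
    pred[side - 1, :] = False
    return gt, pred

small_dice = dice(*square_with_shaved_row(4))    # 4x4 structure
large_dice = dice(*square_with_shaved_row(32))   # 32x32 structure
```

An identical one-row boundary error costs the small structure far more Dice (about 0.86) than the large one (about 0.98), so an aggregate Dice score silently down-weights errors on small targets, exactly the kind of metric property the taxonomy makes explicit.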
Affiliation(s)
- Annika Reinke: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Minu D Tizabi: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Michael Baumgartner: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Matthias Eisenmann: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Doreen Heckmann-Nötzel: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- A Emre Kavur: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Tim Rädsch: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
- Carole H Sudre: MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Laura Acion: Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli: School of Biomedical Engineering and Imaging Science, King's College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel: Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas: Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA; Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis: Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel; European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Florian Buettner: German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany; German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Goethe University Frankfurt, Department of Medicine, Frankfurt am Main, Germany; Goethe University Frankfurt, Department of Informatics, Frankfurt am Main, Germany; Frankfurt Cancer Institute, Frankfurt am Main, Germany
- M Jorge Cardoso: School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina: Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen: Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany
- Evangelia Christodoulou: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Beth A Cimini: Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Keyvan Farahani: Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer: Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran: Universitat Pompeu Fabra, Barcelona, Spain; University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken: Fraunhofer MEVIS, Bremen, Germany; Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Ben Glocker: Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Patrick Godau: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Daniel A Hashimoto: Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA; General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman: Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Merel Huisman: Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fabian Isensee: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Pierre Jannin: Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France; INSERM, Paris, France
- Charles E Kahn: Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller: Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany; University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz: Department of Computing, Faculty of Engineering, Imperial College London, London, UK; Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Jens Kleesiek: Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Annette Kopp-Schneider: German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Michal Kozubek: Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk: Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani: Department of Surgery, University Health Network, Philadelphia, PA, USA
- Klaus Maier-Hein: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel: Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze: Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons: Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk: MILA (Quebec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel: Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot: Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler: Simula Metropolitan Center for Digital Engineering, Oslo, Norway; UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez: Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany; Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez: Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Ronald M Summers: National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha: Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster: Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium; Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux: Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R Yaniv: National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA
- Paul F Jäger: German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
- Lena Maier-Hein: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany; Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
13
Gregor BW, Coston ME, Adams EM, Arakaki J, Borensztejn A, Do TP, Fuqua MA, Haupt A, Hendershott MC, Leung W, Mueller IA, Nath A, Nelson AM, Rafelski SM, Sanchez EE, Swain-Bowden MJ, Tang WJ, Thirstrup DJ, Wiegraebe W, Whitney BP, Yan C, Gunawardane RN, Gaudreault N. Automated human induced pluripotent stem cell culture and sample preparation for 3D live-cell microscopy. Nat Protoc 2024; 19:565-594. [PMID: 38087082 DOI: 10.1038/s41596-023-00912-w] [Received: 02/27/2023] [Accepted: 09/08/2023] [Indexed: 02/12/2024]
Abstract
To produce abundant cell culture samples to generate large, standardized image datasets of human induced pluripotent stem (hiPS) cells, we developed an automated workflow on a Hamilton STAR liquid handler system. This was developed specifically for culturing hiPS cell lines expressing fluorescently tagged proteins, which we have used to study the principles by which cells establish and maintain robust dynamic localization of cellular structures. This protocol includes all details for the maintenance, passage and seeding of cells, as well as Matrigel coating of 6-well plastic plates and 96-well optical-grade glass plates. We also developed an automated image-based hiPS cell colony segmentation and feature extraction pipeline to streamline the process of predicting cell count and selecting wells with consistent morphology for high-resolution three-dimensional (3D) microscopy. The imaging samples produced with this protocol have been used to study the integrated intracellular organization and cell-to-cell variability of hiPS cells, to train and develop deep learning-based label-free predictions from transmitted-light microscopy images, and to develop deep learning-based generative models of single-cell organization. This protocol requires some experience with robotic equipment. However, we provide details and source code to facilitate implementation by biologists less experienced with robotics. The protocol is completed in less than 10 h with minimal human interaction. Overall, automation of our cell culture procedures increased our imaging samples' standardization, reproducibility, scalability and consistency. It also reduced the need for stringent culturist training and eliminated culturist-to-culturist variability, both of which were previous pain points of our original manual pipeline workflow.
Affiliation(s)
- Joy Arakaki: Allen Institute for Cell Science, Seattle, WA, USA
- Thao P Do: Allen Institute for Cell Science, Seattle, WA, USA
- Amanda Haupt: Allen Institute for Cell Science, Seattle, WA, USA
- Winnie Leung: Allen Institute for Cell Science, Seattle, WA, USA
- Aditya Nath: Allen Institute for Cell Science, Seattle, WA, USA
- W Joyce Tang: Allen Institute for Cell Science, Seattle, WA, USA
- Calysta Yan: Allen Institute for Cell Science, Seattle, WA, USA
14
Sun H, Li J, Murphy RF. Expanding the coverage of spatial proteomics: a machine learning approach. Bioinformatics 2024; 40:btae062. [PMID: 38310340 PMCID: PMC10873576 DOI: 10.1093/bioinformatics/btae062] [Received: 02/15/2024] [Revised: 02/15/2024] [Accepted: 02/15/2024] [Indexed: 02/05/2024]
Abstract
MOTIVATION: Multiplexed protein imaging methods use a chosen set of markers and provide valuable information about complex tissue structure and cellular heterogeneity. However, the number of markers that can be measured in the same tissue sample is inherently limited.
RESULTS: In this paper, we present an efficient method to choose a minimal predictive subset of markers that, for the first time, allows the prediction of full images for a much larger set of markers. We demonstrate that our approach also outperforms previous methods for predicting cell-level protein composition. Most importantly, we demonstrate that our approach can be used to select a marker set that enables prediction of a much larger set than could be measured concurrently.
AVAILABILITY AND IMPLEMENTATION: All code and intermediate results are available in a Reproducible Research Archive at https://github.com/murphygroup/CODEXPanelOptimization.
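The flavor of marker-panel selection can be sketched with a greedy forward search that scores candidate panels by how well they linearly predict the left-out markers (an illustrative stand-in with a plain least-squares predictor, not the authors' pipeline, which predicts full images; all function names are ours):

```python
import numpy as np

def panel_error(X, chosen, targets):
    """Mean squared error of predicting `targets` columns of X from the
    `chosen` columns with ordinary least squares (plus an intercept)."""
    A = np.column_stack([X[:, chosen], np.ones(len(X))])
    coef, *_ = np.linalg.lstsq(A, X[:, targets], rcond=None)
    return float(np.mean((A @ coef - X[:, targets]) ** 2))

def greedy_panel(X, k):
    """Greedily add, k times, the marker that best predicts the markers
    not yet in the panel."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining,
                   key=lambda j: panel_error(X, chosen + [j],
                                             [c for c in remaining if c != j]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

On synthetic data where five "markers" are linear mixtures of two latent signals, a panel of two suffices to reconstruct the remaining three almost exactly, the minimal-predictive-subset idea in miniature.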
Affiliation(s)
- Huangqingbo Sun: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Jiayi Li: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Robert F Murphy: Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA 15213, United States
15
Opstad IS, Birgisdottir ÅB, Agarwal K. Fluorescence microscopy and correlative brightfield videos of mitochondria and vesicles in H9c2 cardiomyoblasts. Sci Data 2024; 11:125. [PMID: 38272930 PMCID: PMC10810863 DOI: 10.1038/s41597-024-02970-5] [Received: 08/29/2023] [Accepted: 01/15/2024] [Indexed: 01/27/2024]
Abstract
This paper presents data acquired to study the dynamics and interactions of mitochondria and subcellular vesicles in living cardiomyoblasts. The study was motivated by the importance of mitochondrial quality control and turnover in cardiovascular health. Although fluorescence microscopy is an invaluable tool, it presents several limitations. Correlative fluorescence and brightfield images (label-free) were therefore acquired with the purpose of achieving virtual labelling via machine learning. In comparison with the fluorescence images of mitochondria, the brightfield images show vesicles and subcellular components, providing additional insights about sub-cellular components. A large part of the data contains correlative fluorescence images of lysosomes and/or endosomes over a duration of up to 400 timepoints (>30 min). The data can be reused for biological inferences about mitochondrial and vesicular morphology, dynamics, and interactions. Furthermore, virtual labelling of mitochondria or subcellular vesicles can be achieved using these datasets. Finally, the data can inspire new imaging experiments for cellular investigations or computational developments. The data is available through two large, open datasets on DataverseNO.
Affiliation(s)
- Ida S Opstad: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
- Åsa B Birgisdottir: Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, Norway; Division of Cardiothoracic and Respiratory Medicine, University Hospital of North Norway, Tromsø, Norway
- Krishna Agarwal: Department of Physics and Technology, UiT The Arctic University of Norway, Tromsø, Norway
16
Waliman M, Johnson RL, Natesan G, Tan S, Santella A, Hong RL, Shah PK. Automated Cell Lineage Reconstruction using Label-Free 4D Microscopy. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.20.576449. [PMID: 38328064 PMCID: PMC10849476 DOI: 10.1101/2024.01.20.576449] [Indexed: 02/09/2024]
Abstract
Here we describe embGAN, a deep learning pipeline that addresses the challenge of automated cell detection and tracking in label-free 3D time-lapse imaging. embGAN requires no manual data annotation for training, learns robust detections that exhibit a high degree of scale invariance, and generalizes well to images acquired in multiple labs on multiple instruments.
Affiliation(s)
- Matthew Waliman: Department of Electrical and Computer Engineering, University of California, Los Angeles, California, United States of America
- Ryan L Johnson: Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America
- Gunalan Natesan: Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America
- Shiqin Tan: Department of Computational and Systems Biology, University of California, Los Angeles, California, United States of America
- Anthony Santella: Molecular Cytology Core, Memorial Sloan Kettering Cancer Center, New York, New York, United States of America
- Ray L Hong: Department of Biology, California State University, Northridge, California, United States of America
- Pavak K Shah: Department of Molecular, Cell and Developmental Biology, University of California, Los Angeles, California, United States of America; Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, California, United States of America
17
Copperman J, Mclean IC, Gross SM, Chang YH, Zuckerman DM, Heiser LM. Single-cell morphodynamical trajectories enable prediction of gene expression accompanying cell state change. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.18.576248. [PMID: 38293173 PMCID: PMC10827140 DOI: 10.1101/2024.01.18.576248] [Indexed: 02/01/2024]
Abstract
Extracellular signals induce changes to molecular programs that modulate multiple cellular phenotypes, including proliferation, motility, and differentiation status. The connection between dynamically adapting phenotypic states and the molecular programs that define them is not well understood. Here we develop data-driven models of single-cell phenotypic responses to extracellular stimuli by linking gene transcription levels to "morphodynamics": changes in cell morphology and motility observable in time-lapse image data. We adopt a dynamics-first view of cell state by grouping single-cell trajectories into states with shared morphodynamic responses. The single-cell trajectories enable development of a first-of-its-kind computational approach to map live-cell dynamics to snapshot gene transcript levels, which we term MMIST, Molecular and Morphodynamics-Integrated Single-cell Trajectories. The key conceptual advance of MMIST is that cell behavior can be quantified based on dynamically defined states and that extracellular signals alter the overall distribution of cell states by altering rates of switching between states. We find a cell state landscape that is bound by epithelial and mesenchymal endpoints, with distinct sequences of epithelial to mesenchymal transition (EMT) and mesenchymal to epithelial transition (MET) intermediates. The analysis yields predictions for gene expression changes consistent with curated EMT gene sets and provides a prediction of thousands of RNA transcripts through extracellular signal-induced EMT and MET with near-continuous time resolution. The MMIST framework leverages true single-cell dynamical behavior to generate molecular-level omics inferences and is broadly applicable to other biological domains, time-lapse imaging approaches and molecular snapshot data.
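The central conceptual claim, that signals act by changing the rates of switching between dynamically defined states rather than deterministically converting cells, can be sketched with a two-state Markov model (a toy illustration, not MMIST itself; the transition matrices are invented):

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of a row-stochastic transition matrix,
    taken as the normalized eigenvector of P.T with eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# States: 0 = epithelial-like, 1 = mesenchymal-like.
P_baseline = np.array([[0.95, 0.05],     # cells mostly stay in their state
                       [0.05, 0.95]])
P_stimulated = np.array([[0.80, 0.20],   # signal raises only the E -> M rate
                         [0.05, 0.95]])
```

Raising a single switching rate shifts the steady-state occupancy from 50/50 to 20/80 mesenchymal: the stimulus reshapes the distribution over states while individual cells continue to switch stochastically in both directions.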
Collapse
Affiliation(s)
- Jeremy Copperman: Cancer Early Detection Advanced Research Center, Oregon Health and Science University, Portland, OR 97239, USA
- Ian C. Mclean: Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR 97239, USA
- Young Hwan Chang: Department of Biomedical Engineering; Knight Cancer Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Daniel M. Zuckerman: Department of Biomedical Engineering; Knight Cancer Institute, Oregon Health and Science University, Portland, OR 97239, USA
- Laura M. Heiser: Department of Biomedical Engineering; Knight Cancer Institute, Oregon Health and Science University, Portland, OR 97239, USA
18
Park R, Kang MS, Heo G, Shin YC, Han DW, Hong SW. Regulated Behavior in Living Cells with Highly Aligned Configurations on Nanowrinkled Graphene Oxide Substrates: Deep Learning Based on Interplay of Cellular Contact Guidance. ACS Nano 2024; 18:1325-1344. [PMID: 38099607] [DOI: 10.1021/acsnano.2c09815] [Indexed: 01/17/2024]
Abstract
Micro-/nanotopographical cues have emerged as a practical and promising strategy for controlling cell fate and reprogramming, playing a key role as biophysical regulators in diverse cellular processes and behaviors. Extracellular biophysical factors can trigger intracellular physiological signaling via mechanotransduction and promote cellular responses such as cell adhesion, migration, proliferation, gene/protein expression, and differentiation. Here, we engineered a highly ordered nanowrinkled graphene oxide (GO) surface via the mechanical deformation of an ultrathin GO film on an elastomeric substrate to observe specific cellular responses based on surface-mediated topographical cues. The ultrathin GO film, self-assembled on a uniaxially prestrained elastomeric substrate and then subjected to compressive force, produced GO nanowrinkles with periodic amplitude. To examine the acute cellular behaviors on the GO-based cell interface with nanostructured arrays of wrinkles, we cultured L929 fibroblasts and HT22 hippocampal neuronal cells. The developed cell-culture substrate clearly provided a directional guidance effect. In addition, we adapted a deep learning (DL)-based data processing technique to precisely interpret cell behaviors on the nanowrinkled GO surfaces. Using the learning/transfer learning protocol of the DL network, we detected cell boundaries, elongation, and orientation and quantitatively evaluated cell velocity, traveling distance, displacement, and orientation. These experimental results suggest that the nanotopographical microenvironment can direct the morphological polarization of living cells, enabling their assembly into useful tissue chips consisting of multiple cell types.
Affiliation(s)
- Rowoon Park: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Moon Sung Kang: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Gyeonghwa Heo: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Yong Cheol Shin: Department of Inflammation and Immunity, Lerner Research Institute, Cleveland Clinic, Ohio 44195, United States
- Dong-Wook Han: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Suck Won Hong: Department of Cogno-Mechatronics Engineering; Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Pusan National University, Busan 46241, Republic of Korea
19
Kobayashi-Kirschvink KJ, Comiter CS, Gaddam S, Joren T, Grody EI, Ounadjela JR, Zhang K, Ge B, Kang JW, Xavier RJ, So PTC, Biancalani T, Shu J, Regev A. Prediction of single-cell RNA expression profiles in live cells by Raman microscopy with Raman2RNA. Nat Biotechnol 2024:10.1038/s41587-023-02082-2. [PMID: 38200118] [DOI: 10.1038/s41587-023-02082-2] [Received: 11/04/2021] [Accepted: 12/01/2023] [Indexed: 01/12/2024]
Abstract
Single-cell RNA sequencing and other profiling assays have helped interrogate cells at unprecedented resolution and scale, but are inherently destructive. Raman microscopy reports on the vibrational energy levels of proteins and metabolites in a label-free and nondestructive manner at subcellular spatial resolution, but it lacks genetic and molecular interpretability. Here we present Raman2RNA (R2R), a method to infer single-cell expression profiles in live cells through label-free hyperspectral Raman microscopy images and domain translation. We predict single-cell RNA sequencing profiles nondestructively from Raman images using either anchor-based integration with single molecule fluorescence in situ hybridization, or anchor-free generation with adversarial autoencoders. R2R outperformed inference from brightfield images (cosine similarities: R2R >0.85 and brightfield <0.15). In reprogramming of mouse fibroblasts into induced pluripotent stem cells, R2R inferred the expression profiles of various cell states. With live-cell tracking of mouse embryonic stem cell differentiation, R2R traced the early emergence of lineage divergence and differentiation trajectories, overcoming discontinuities in expression space. R2R lays a foundation for future exploration of live genomic dynamics.
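The benchmark metric quoted in this abstract, cosine similarity between predicted and measured expression vectors, is a standard calculation. A minimal sketch (the vectors below are illustrative placeholders, not data from the paper):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two expression vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical predicted vs. measured profiles for one cell:
predicted = [0.9, 0.1, 0.4, 0.8]
measured = [1.0, 0.0, 0.5, 0.7]
print(round(cosine_similarity(predicted, measured), 3))
```

A value near 1 indicates the inferred profile closely matches the measured one, which is the sense in which R2R (>0.85) is reported to outperform brightfield inference (<0.15).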
Affiliation(s)
- Koseki J Kobayashi-Kirschvink: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Charles S Comiter: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Shreya Gaddam: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Genentech, South San Francisco, CA, USA
- Taylor Joren: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Emanuelle I Grody: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Johain R Ounadjela: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ke Zhang: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Baoliang Ge: Department of Mechanical and Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jeon Woong Kang: Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ramnik J Xavier: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Center for Computational and Integrative Biology and Department of Molecular Biology, Massachusetts General Hospital, Boston, MA, USA
- Peter T C So: Laser Biomedical Research Center, G. R. Harrison Spectroscopy Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Mechanical and Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Tommaso Biancalani: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Genentech, South San Francisco, CA, USA
- Jian Shu: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Aviv Regev: Klarman Cell Observatory, Broad Institute of MIT and Harvard, Cambridge, MA, USA; Genentech, South San Francisco, CA, USA
20
Sonneck J, Zhou Y, Chen J. MMV_Im2Im: an open-source microscopy machine vision toolbox for image-to-image transformation. GigaScience 2024; 13:giad120. [PMID: 38280188] [PMCID: PMC10821710] [DOI: 10.1093/gigascience/giad120] [Received: 04/06/2023] [Revised: 09/30/2023] [Accepted: 12/28/2023] [Indexed: 01/29/2024]
Abstract
Over the past decade, deep learning (DL) research in computer vision has been growing rapidly, with many advances in DL-based image analysis methods for biomedical problems. In this work, we introduce MMV_Im2Im, a new open-source Python package for image-to-image transformation in bioimaging applications. MMV_Im2Im is designed with a generic image-to-image transformation framework that can be used for a wide range of tasks, including semantic segmentation, instance segmentation, image restoration, image generation, and so on. Our implementation takes advantage of state-of-the-art machine learning engineering techniques, allowing researchers to focus on their research without worrying about engineering details. We demonstrate the effectiveness of MMV_Im2Im on more than 10 different biomedical problems, showcasing its general potential and applicability. For computational biomedical researchers, MMV_Im2Im provides a starting point for developing new biomedical image analysis or machine learning algorithms, where they can either reuse the code in this package or fork and extend this package to facilitate the development of new methods. Experimental biomedical researchers can benefit from this work by gaining a comprehensive view of the image-to-image transformation concept through diversified examples and use cases. We hope this work inspires the community to explore how DL-based image-to-image transformation can be integrated into the assay development process, enabling new biomedical studies that cannot be done with traditional experimental assays alone. To help researchers get started, we have provided source code, documentation, and tutorials for MMV_Im2Im at [https://github.com/MMV-Lab/mmv_im2im] under the MIT license.
Affiliation(s)
- Justin Sonneck: Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany; Faculty of Computer Science, Ruhr-University Bochum, Universitätsstraße 150, Bochum 44801, Germany
- Yu Zhou: Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
- Jianxu Chen: Leibniz-Institut für Analytische Wissenschaften – ISAS – e.V., Bunsen-Kirchhoff-Str. 11, Dortmund 44139, Germany
21
Chen C, Smith ZJ, Fang J, Chu K. Organelle-specific phase contrast microscopy (OS-PCM) enables facile correlation study of organelles and proteins. Biomed Opt Express 2024; 15:199-211. [PMID: 38223195] [PMCID: PMC10783919] [DOI: 10.1364/boe.510243] [Received: 10/25/2023] [Revised: 11/29/2023] [Accepted: 12/03/2023] [Indexed: 01/16/2024]
Abstract
Current methods for studying organelle and protein interactions and correlations depend on multiplex fluorescent labeling, which is experimentally complex and harmful to cells. Here we propose to solve this challenge via OS-PCM, where organelles are imaged and segmented without labels, and combined with standard fluorescence microscopy of protein distributions. In this work, we develop new neural networks to obtain unlabeled organelle, nucleus and membrane predictions from a single 2D image. Automated analysis is also implemented to obtain quantitative information regarding the spatial distribution and co-localization of both protein and organelle, as well as their relationship to the landmark structures of nucleus and membrane. Using mitochondria and DRP1 protein as a proof-of-concept, we conducted a correlation study where only DRP1 is labeled, with results consistent with prior reports utilizing multiplex labeling. Thus our work demonstrates that OS-PCM simplifies the correlation study of organelles and proteins.
Affiliation(s)
- Chen Chen: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Zachary J Smith: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China; Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China
- Jingde Fang: Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230027, China
- Kaiqin Chu: Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230027, China; Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, Jiangsu 215123, China
22
Ivanov IE, Hirata-Miyasaki E, Chandler T, Kovilakam RC, Liu Z, Liu C, Leonetti MD, Huang B, Mehta SB. Mantis: high-throughput 4D imaging and analysis of the molecular and physical architecture of cells. bioRxiv [Preprint] 2023:2023.12.19.572435. [PMID: 38187521] [PMCID: PMC10769231] [DOI: 10.1101/2023.12.19.572435] [Indexed: 01/09/2024]
Abstract
High-throughput dynamic imaging of cells and organelles is important for parsing complex cellular responses. We report a high-throughput 4D microscope, named Mantis, that combines two complementary, gentle, live-imaging technologies: remote-refocus label-free microscopy and oblique light-sheet fluorescence microscopy. We also report open-source software for automated acquisition, registration, and reconstruction, and virtual staining software for single-cell segmentation and phenotyping. Mantis enabled high-content correlative imaging of molecular components and the physical architecture of 20 cell lines every 15 minutes over 7.5 hours, and also detailed measurements of the impacts of viral infection on the architecture of host cells and host proteins. The Mantis platform can enable high-throughput profiling of intracellular dynamics, long-term imaging and analysis of cellular responses to stress, and live cell optical screens to dissect gene regulatory networks.
Affiliation(s)
- Ivan E. Ivanov: Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Talon Chandler: Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Rasmi Cheloor Kovilakam: Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
- Ziwen Liu: Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Chad Liu: Chan Zuckerberg Biohub San Francisco, San Francisco, United States
- Bo Huang: Chan Zuckerberg Biohub San Francisco, San Francisco, United States; Department of Pharmaceutical Chemistry, University of California San Francisco, San Francisco, United States
- Shalin B. Mehta: Chan Zuckerberg Biohub San Francisco, San Francisco, United States
23
Sun G, Liu S, Shi C, Liu X, Guo Q. 3DCNAS: A universal method for predicting the location of fluorescent organelles in living cells in three-dimensional space. Exp Cell Res 2023; 433:113807. [PMID: 37852350] [DOI: 10.1016/j.yexcr.2023.113807] [Received: 07/24/2023] [Revised: 10/09/2023] [Accepted: 10/09/2023] [Indexed: 10/20/2023]
Abstract
Cellular biology research relies on microscopic imaging techniques for studying the complex structures and dynamic processes within cells. Fluorescence microscopy provides high sensitivity and subcellular resolution but has limitations such as photobleaching and sample preparation challenges. Transmission light microscopy offers a label-free alternative but lacks contrast for detailed interpretation. Deep learning methods have shown promise in analyzing cell images and extracting meaningful information. However, accurately learning and simulating diverse subcellular structures remain challenging. In this study, we propose a method named three-dimensional cell neural architecture search (3DCNAS) to predict subcellular structures of fluorescence using unlabeled transmitted light microscope images. By leveraging the automated search capability of differentiable neural architecture search (NAS), our method partially mitigates the issues of overfitting and underfitting caused by the distinct details of various subcellular structures. Furthermore, we apply our method to analyze cell dynamics in genome-edited human induced pluripotent stem cells during mitotic events. This allows us to study the functional roles of organelles and their involvement in cellular processes, contributing to a comprehensive understanding of cell biology and offering insights into disease pathogenesis.
Affiliation(s)
- Guocheng Sun: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Shitou Liu: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Chaojing Shi: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Xi Liu: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
- Qianjin Guo: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, 102617, China
24
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. Biophys Rep 2023; 3:100133. [PMID: 38026685] [PMCID: PMC10663640] [DOI: 10.1016/j.bpr.2023.100133] [Received: 06/16/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
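The core idea of ensemble-based uncertainty, that the spread of predictions across independently trained models tracks the prediction error, can be sketched with stand-in scalar "models". The paper trains an ensemble of image-translation networks; here each member is just the true mapping plus its own random learned bias, which is purely illustrative:

```python
import random
import statistics

random.seed(0)

def ensemble_predict(models, x):
    """Return the ensemble mean prediction and the std across members (the uncertainty)."""
    preds = [m(x) for m in models]
    return statistics.mean(preds), statistics.stdev(preds)

# Stand-in "models": the true mapping y = 2x plus a per-model bias frozen at creation.
true_fn = lambda x: 2.0 * x
models = [lambda x, b=random.gauss(0, 0.1): 2.0 * x + b * x for _ in range(8)]

for x in [1.0, 5.0]:
    mean, std = ensemble_predict(models, x)
    err = abs(mean - true_fn(x))
    print(f"x={x}: prediction={mean:.2f}, uncertainty={std:.2f}, error={err:.2f}")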
Affiliation(s)
- Sara Imboden: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Xuanqing Liu: Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Marie C. Payne: Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Cho-Jui Hsieh: Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Neil Y.C. Lin: Department of Mechanical and Aerospace Engineering; Department of Bioengineering; Institute for Quantitative and Computational Biosciences; California NanoSystems Institute; Jonsson Comprehensive Cancer Center; Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
25
Xu X, Xiao Z, Zhang F, Wang C, Wei B, Wang Y, Cheng B, Jia Y, Li Y, Li B, Guo H, Xu F. CellVisioner: A Generalizable Cell Virtual Staining Toolbox based on Few-Shot Transfer Learning for Mechanobiological Analysis. Research (Wash DC) 2023; 6:0285. [PMID: 38434246] [PMCID: PMC10907024] [DOI: 10.34133/research.0285] [Received: 07/12/2023] [Accepted: 11/16/2023] [Indexed: 03/05/2024]
Abstract
Visualizing cellular structures, especially the cytoskeleton and the nucleus, is crucial for understanding mechanobiology, but traditional fluorescence staining has inherent limitations such as phototoxicity and photobleaching. Virtual staining techniques provide an alternative approach to addressing these issues but often require a substantial amount of user training data. In this study, we develop a generalizable cell virtual staining toolbox (termed CellVisioner) based on few-shot transfer learning that requires a substantially reduced amount of user training data. CellVisioner can virtually stain F-actin and nuclei for various types of cells and extract single-cell parameters relevant to mechanobiology research. Taking label-free single-cell images as input, CellVisioner can predict cell mechanobiological status (e.g., Yes-associated protein nuclear/cytoplasmic ratio) and perform long-term monitoring of living cells. We envision that CellVisioner will be a powerful tool to facilitate on-site mechanobiological research.
Affiliation(s)
- Xiayu Xu: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Zhanfeng Xiao: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Fan Zhang: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Changxiang Wang: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bo Wei: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yaohui Wang: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bo Cheng: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yuanbo Jia: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Yuan Li: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Bin Li: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
- Hui Guo: Department of Medical Oncology, The First Affiliated Hospital of Xi’an Jiaotong University, Xi’an 710061, P.R. China
- Feng Xu: The Key Laboratory of Biomedical Information Engineering of Ministry of Education, Xi’an Jiaotong University, Xi’an 710049, P.R. China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi’an Jiaotong University, Xi’an 710049, P.R. China
26
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927] [DOI: 10.1016/j.ceb.2023.102271] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past few years, the development of bioimage analysis tools, including deep learning, has been changing how we perform live imaging. Here we briefly cover important computational methods that aid live imaging by carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
Affiliation(s)
- Joanna W Pylvänäinen: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
- Ricardo Henriques: Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
- Guillaume Jacquemet: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland
27
Cao K, Xia Y, Yao J, Han X, Lambert L, Zhang T, Tang W, Jin G, Jiang H, Fang X, Nogues I, Li X, Guo W, Wang Y, Fang W, Qiu M, Hou Y, Kovarnik T, Vocka M, Lu Y, Chen Y, Chen X, Liu Z, Zhou J, Xie C, Zhang R, Lu H, Hager GD, Yuille AL, Lu L, Shao C, Shi Y, Zhang Q, Liang T, Zhang L, Lu J. Large-scale pancreatic cancer detection via non-contrast CT and deep learning. Nat Med 2023; 29:3033-3043. [PMID: 37985692] [PMCID: PMC10719100] [DOI: 10.1038/s41591-023-02640-w] [Received: 02/09/2023] [Accepted: 10/12/2023] [Indexed: 11/22/2023]
Abstract
Pancreatic ductal adenocarcinoma (PDAC), the most deadly solid malignancy, is typically detected late and at an inoperable stage. Early or incidental detection is associated with prolonged survival, but screening asymptomatic individuals for PDAC using a single test remains unfeasible due to the low prevalence and potential harms of false positives. Non-contrast computed tomography (CT), routinely performed for clinical indications, offers the potential for large-scale screening; however, identification of PDAC using non-contrast CT has long been considered impossible. Here, we develop a deep learning approach, pancreatic cancer detection with artificial intelligence (PANDA), that can detect and classify pancreatic lesions with high accuracy via non-contrast CT. PANDA is trained on a dataset of 3,208 patients from a single center. PANDA achieves an area under the receiver operating characteristic curve (AUC) of 0.986-0.996 for lesion detection in a multicenter validation involving 6,239 patients across 10 centers, outperforms the mean radiologist performance by 34.1% in sensitivity and 6.3% in specificity for PDAC identification, and achieves a sensitivity of 92.9% and specificity of 99.9% for lesion detection in a real-world multi-scenario validation consisting of 20,530 consecutive patients. Notably, PANDA utilized with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes. PANDA could potentially serve as a new tool for large-scale pancreatic cancer screening.
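The sensitivity and specificity figures in this abstract follow directly from confusion-matrix counts, and the low-prevalence screening argument is easy to make concrete. A minimal sketch with illustrative counts chosen only to echo the reported rates (these are not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical rare-disease screen of 20,000 people with 70 true lesions:
# the detector catches 65 of them and falsely flags 20 healthy people.
sens, spec = sensitivity_specificity(tp=65, fn=5, tn=19910, fp=20)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```

At such low prevalence, even the 20 false positives here outnumber a quarter of the 65 true positives, which is why near-perfect specificity is the binding constraint for population screening.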
Affiliation(s)
- Kai Cao
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Yingda Xia
- DAMO Academy, Alibaba Group, New York, NY, USA
| | - Jiawen Yao
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Xu Han
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China
| | - Lukas Lambert
- Department of Radiology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Tingting Zhang
- Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
| | - Gang Jin
- Department of Surgery, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Hui Jiang
- Department of Pathology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Xu Fang
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Isabella Nogues
- Department of Biostatistics, Harvard University T.H. Chan School of Public Health, Cambridge, MA, USA
| | - Xuezhou Li
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Wenchao Guo
- Hupan Laboratory, Hangzhou, China
- Damo Academy, Alibaba Group, Hangzhou, China
| | - Yu Wang
- Hupan Laboratory, Hangzhou, China
- DAMO Academy, Alibaba Group, Hangzhou, China
| | - Wei Fang
- Hupan Laboratory, Hangzhou, China
- DAMO Academy, Alibaba Group, Hangzhou, China
| | - Mingyan Qiu
- Hupan Laboratory, Hangzhou, China
- DAMO Academy, Alibaba Group, Hangzhou, China
| | - Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Tomas Kovarnik
- Department of Invasive Cardiology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Michal Vocka
- Department of Oncology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
| | - Yimei Lu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
| | - Yingli Chen
- Department of Surgery, Shanghai Institution of Pancreatic Disease, Shanghai, China
| | - Xin Chen
- Department of Radiology, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital, Guangzhou, China
| | - Jian Zhou
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Chuanmiao Xie
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Rong Zhang
- Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China
| | - Hong Lu
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
| | - Gregory D Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Alan L Yuille
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Le Lu
- DAMO Academy, Alibaba Group, New York, NY, USA
| | - Chengwei Shao
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China.
| | - Yu Shi
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China.
| | - Qi Zhang
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China.
| | - Tingbo Liang
- Department of Hepatobiliary and Pancreatic Surgery, First Affiliated Hospital of Zhejiang University, Hangzhou, China.
| | - Ling Zhang
- DAMO Academy, Alibaba Group, New York, NY, USA.
| | - Jianping Lu
- Department of Radiology, Shanghai Institution of Pancreatic Disease, Shanghai, China.
| |
Collapse
|
28
|
Ten Eyck A, Chen YC, Gifford L, Torres-Rivera D, Dyer EL, Melikyan GB. Label-free imaging of nuclear membrane for analysis of nuclear import of viral complexes. J Virol Methods 2023; 322:114834. [PMID: 37875225 PMCID: PMC10841631 DOI: 10.1016/j.jviromet.2023.114834] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2023] [Revised: 10/09/2023] [Accepted: 10/20/2023] [Indexed: 10/26/2023]
Abstract
HIV-1 enters the nucleus of non-dividing cells through the nuclear pore complex where it integrates into the host genome. The mechanism of HIV-1 nuclear import remains poorly understood. A powerful means to investigate the docking of HIV-1 at the nuclear pore and nuclear import of viral complexes is through single virus tracking in live cells. This approach necessitates fluorescence labeling of HIV-1 particles and the nuclear envelope, which may be challenging, especially in the context of primary cells. Here, we leveraged a deep neural network model for label-free visualization of the nuclear envelope using transmitted light microscopy. A training image set of cells with fluorescently labeled nuclear Lamin B1 (ground truth), along with the corresponding transmitted light images, was acquired and used to train our model to predict the morphology of the nuclear envelope in fixed cells. This protocol yielded accurate predictions of the nuclear membrane and was used in conjunction with virus infection to examine the nuclear entry of fluorescently labeled HIV-1 complexes. Analyses of HIV-1 nuclear import as a function of virus input yielded identical numbers of fluorescent viral complexes per nucleus using the ground truth and predicted nuclear membrane images. We also demonstrate the utility of predicting the nuclear envelope based on transmitted light images for multicolor fluorescence microscopy of infected cells. Importantly, we show that our model can be adapted to predict the nuclear membrane of live cells imaged at 37 °C, making this approach compatible with single virus tracking. Collectively, these findings demonstrate the utility of deep learning approaches for label-free imaging of cellular structures during early stages of virus infection.
Collapse
Affiliation(s)
- Andrew Ten Eyck
- Department of Biomedical Engineering, Georgia Institute of Technology-Emory School of Medicine, Atlanta, GA, USA
| | - Yen-Cheng Chen
- Division of Infectious Diseases, Department of Pediatrics, Emory University, Atlanta, GA, USA
| | - Levi Gifford
- Division of Infectious Diseases, Department of Pediatrics, Emory University, Atlanta, GA, USA; Graduate Division of Biological and Biomedical Sciences, Biochemistry, Cell and Developmental Biology Program, Emory University, Atlanta, GA, USA
| | - Dariana Torres-Rivera
- Division of Infectious Diseases, Department of Pediatrics, Emory University, Atlanta, GA, USA; Graduate Division of Biological and Biomedical Sciences, Biochemistry, Cell and Developmental Biology Program, Emory University, Atlanta, GA, USA
| | - Eva L Dyer
- Department of Biomedical Engineering, Georgia Institute of Technology-Emory School of Medicine, Atlanta, GA, USA; Department of Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
| | - Gregory B Melikyan
- Department of Biomedical Engineering, Georgia Institute of Technology-Emory School of Medicine, Atlanta, GA, USA; Division of Infectious Diseases, Department of Pediatrics, Emory University, Atlanta, GA, USA; Children's Healthcare of Atlanta, GA, USA.
| |
Collapse
|
29
|
Ibrahim KA, Grußmayer KS, Riguet N, Feletti L, Lashuel HA, Radenovic A. Label-free identification of protein aggregates using deep learning. Nat Commun 2023; 14:7816. [PMID: 38016971 PMCID: PMC10684545 DOI: 10.1038/s41467-023-43440-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Accepted: 11/09/2023] [Indexed: 11/30/2023] Open
Abstract
Protein misfolding and aggregation play central roles in the pathogenesis of various neurodegenerative diseases (NDDs), including Huntington's disease, which is caused by a genetic mutation in exon 1 of the huntingtin gene, encoding the Httex1 protein fragment. The fluorescent labels commonly used to visualize and monitor the dynamics of protein expression have been shown to alter the biophysical properties of proteins and the final ultrastructure, composition, and toxic properties of the formed aggregates. To overcome this limitation, we present a method for label-free identification of NDD-associated aggregates (LINA). Our approach utilizes deep learning to detect unlabeled and unaltered Httex1 aggregates in living cells from transmitted-light images, without the need for fluorescent labeling. Our models are robust across imaging conditions and on aggregates formed by different constructs of Httex1. LINA enables the dynamic identification of label-free aggregates and measurement of their dry mass and area changes during their growth process, offering high speed, specificity, and simplicity to analyze protein aggregation dynamics and obtain high-fidelity information.
Collapse
Affiliation(s)
- Khalid A Ibrahim
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Kristin S Grußmayer
- Department of Bionanoscience and Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, Netherlands.
| | - Nathan Riguet
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Lely Feletti
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Hilal A Lashuel
- Laboratory of Molecular and Chemical Biology of Neurodegeneration, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| | - Aleksandra Radenovic
- Laboratory of Nanoscale Biology, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
| |
Collapse
|
30
|
Yilmaz A, Aydin T, Varol R. Virtual staining for pixel-wise and quantitative analysis of single cell images. Sci Rep 2023; 13:19178. [PMID: 37932315 PMCID: PMC10628122 DOI: 10.1038/s41598-023-45150-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2023] [Accepted: 10/16/2023] [Indexed: 11/08/2023] Open
Abstract
Immunocytochemical staining of microorganisms and cells has long been a popular method for examining their specific subcellular structures in greater detail. Recently, generative networks have emerged as an alternative to traditional immunostaining techniques. These networks infer fluorescence signatures from various imaging modalities and then virtually apply staining to the images in a digital environment. In numerous studies, virtual staining models have been trained on histopathology slides or intricate subcellular structures to enhance their accuracy and applicability. Despite the advancements in virtual staining technology, utilizing this method for quantitative analysis of microscopic images still poses a significant challenge. To address this issue, we propose a straightforward and automated approach for pixel-wise image-to-image translation. Our primary objective in this research is to leverage advanced virtual staining techniques to accurately measure the DNA fragmentation index in unstained sperm images. This not only offers a non-invasive approach to gauging sperm quality, but also paves the way for streamlined and efficient analyses without the constraints and potential biases introduced by traditional staining processes. This novel approach takes into account the limitations of conventional techniques and incorporates improvements to bolster the reliability of the virtual staining process. To further refine the results, we discuss various denoising techniques that can be employed to reduce the impact of background noise on the digital images. Additionally, we present a pixel-wise image matching algorithm designed to minimize the error caused by background noise and to prevent the introduction of bias into the analysis. By combining these approaches, we aim to develop a more effective and reliable method for quantitative analysis of virtually stained microscopic images, ultimately enhancing the study of microorganisms and cells at the subcellular level.
Collapse
Affiliation(s)
- Abdurrahim Yilmaz
- Universität der Bundeswehr München, 85579, Neubiberg, Germany
- Imperial College London, London, SW7 2BX, United Kingdom
| | - Tuelay Aydin
- Universität der Bundeswehr München, 85579, Neubiberg, Germany
| | | |
Collapse
|
31
|
Eddy CZ, Naylor A, Cunningham CT, Sun B. Facilitating cell segmentation with the projection-enhancement network. Phys Biol 2023; 20:10.1088/1478-3975/acfe53. [PMID: 37769666 PMCID: PMC10586931 DOI: 10.1088/1478-3975/acfe53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 09/28/2023] [Indexed: 10/03/2023]
Abstract
Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations in microscopy systems or efforts to prevent phototoxicity commonly require recording sub-optimally sampled data that greatly reduces the utility of such 3D data, especially in crowded sample space with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for cell morphology and easier to annotate. In this work, we propose the projection enhancement network (PEN), a novel convolutional module which processes the sub-sampled 3D data and produces a 2D RGB semantic compression, and is trained in conjunction with an instance segmentation network of choice to produce 2D segmentations. Our approach combines augmentation to increase cell density using a low-density cell image dataset to train PEN, and curated datasets to evaluate PEN. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance in comparison to maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks like Mask-RCNN. Finally, we dissect the segmentation strength against cell density of PEN with CellPose on disseminated cells from side-by-side spheroids. We present PEN as a data-driven solution to form compressed representations of 3D data that improve 2D segmentations from instance segmentation networks.
Collapse
Affiliation(s)
| | - Austin Naylor
- Oregon State University, Department of Physics, Corvallis, 97331, USA
| | | | - Bo Sun
- Oregon State University, Department of Physics, Corvallis, 97331, USA
| |
Collapse
|
32
|
Song Y, Wang L, Xu T, Zhang G, Zhang X. Emerging open-channel droplet arrays for biosensing. Natl Sci Rev 2023; 10:nwad106. [PMID: 38027246 PMCID: PMC10662666 DOI: 10.1093/nsr/nwad106] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 11/23/2022] [Accepted: 12/07/2022] [Indexed: 12/01/2023] Open
Abstract
Open-channel droplet arrays have attracted much attention in the fields of biochemical analysis, biofluid monitoring, biomarker recognition and cell interactions, as they have advantages with regard to miniaturization, parallelization, high-throughput, simplicity and accessibility. Such droplet arrays not only improve the sensitivity and accuracy of a biosensor, but also do not require sophisticated equipment or tedious processes, showing great potential in next-generation miniaturized sensing platforms. This review summarizes typical examples of open-channel microdroplet arrays and focuses on diversified biosensing integrated with multiple signal-output approaches (fluorescence, colorimetric, surface-enhanced Raman scattering (SERS), electrochemical, etc.). The limitations and development prospects of open-channel droplet arrays in biosensing are also discussed with regard to the increasing demand for biosensors.
Collapse
Affiliation(s)
- Yongchao Song
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
- Intelligent Wearable Engineering Research Center of Qingdao, Research Center for Intelligent and Wearable Technology, College of Textiles and Clothing, State Key Laboratory of Bio-Fibers and Eco-Textiles, Qingdao University, Qingdao 266071, China
| | - Lirong Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
| | - Tailin Xu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
| | - Guangyao Zhang
- Intelligent Wearable Engineering Research Center of Qingdao, Research Center for Intelligent and Wearable Technology, College of Textiles and Clothing, State Key Laboratory of Bio-Fibers and Eco-Textiles, Qingdao University, Qingdao 266071, China
| | - Xueji Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, China
| |
Collapse
|
33
|
Timonen VA, Kerkelä E, Impola U, Penna L, Partanen J, Kilpivaara O, Arvas M, Pitkänen E. DeepIFC: Virtual fluorescent labeling of blood cells in imaging flow cytometry data with deep learning. Cytometry A 2023; 103:807-817. [PMID: 37276178 DOI: 10.1002/cyto.a.24770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 05/16/2023] [Accepted: 06/02/2023] [Indexed: 06/07/2023]
Abstract
Imaging flow cytometry (IFC) combines flow cytometry with microscopy, allowing rapid characterization of cellular and molecular properties via high-throughput single-cell fluorescent imaging. However, fluorescent labeling is costly and time-consuming. We present a computational method called DeepIFC based on the Inception U-Net neural network architecture, able to generate fluorescent marker images and learn morphological features from IFC brightfield and darkfield images. Furthermore, the DeepIFC workflow identifies cell types from the generated fluorescent images and visualizes the single-cell features generated in a 2D space. We demonstrate that rarer cell types are predicted well when a balanced data set is used to train the model, and the model is able to recognize red blood cells not seen during model training as a distinct entity. In summary, DeepIFC allows accurate cell reconstruction, typing and recognition of unseen cell types from brightfield and darkfield images via virtual fluorescent labeling.
Collapse
Affiliation(s)
- Veera A Timonen
- Institute for Molecular Medicine Finland (FIMM), Helsinki Institute of Life Science (HiLIFE), University of Helsinki, Helsinki, Finland
- Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
| | - Erja Kerkelä
- Advanced Cell Therapy Centre, Finnish Red Cross Blood Service, Vantaa, Finland
| | - Ulla Impola
- Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
| | - Leena Penna
- Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
| | - Jukka Partanen
- Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
| | - Outi Kilpivaara
- Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Department of Medical and Clinical Genetics, Medicum, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- HUSLAB Laboratory of Genetics, HUS Diagnostic Center, Helsinki University Hospital, Helsinki, Finland
- iCAN Digital Precision Cancer Medicine Flagship, Helsinki, Finland
| | - Mikko Arvas
- Research and Development, Finnish Red Cross Blood Service, Helsinki, Finland
| | - Esa Pitkänen
- Institute for Molecular Medicine Finland (FIMM), Helsinki Institute of Life Science (HiLIFE), University of Helsinki, Helsinki, Finland
- Applied Tumor Genomics Research Program, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- iCAN Digital Precision Cancer Medicine Flagship, Helsinki, Finland
| |
Collapse
|
34
|
Johnson GT, Agmon E, Akamatsu M, Lundberg E, Lyons B, Ouyang W, Quintero-Carmona OA, Riel-Mehan M, Rafelski S, Horwitz R. Building the next generation of virtual cells to understand cellular biology. Biophys J 2023; 122:3560-3569. [PMID: 37050874 PMCID: PMC10541477 DOI: 10.1016/j.bpj.2023.04.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 03/19/2023] [Accepted: 04/06/2023] [Indexed: 04/14/2023] Open
Abstract
Cell science has made significant progress by focusing on understanding individual cellular processes through reductionist approaches. However, the sheer volume of knowledge collected presents challenges in integrating this information across different scales of space and time to comprehend cellular behaviors, as well as making the data and methods more accessible for the community to tackle complex biological questions. This perspective proposes the creation of next-generation virtual cells, which are dynamic 3D models that integrate information from diverse sources, including simulations, biophysical models, image-based models, and evidence-based knowledge graphs. These virtual cells would provide statistically accurate and holistic views of real cells, bridging the gap between theoretical concepts and experimental data, and facilitating productive new collaborations among researchers across related fields.
Collapse
Affiliation(s)
| | - Eran Agmon
- Center for Cell Analysis and Modeling, University of Connecticut Health, Farmington, Connecticut
| | - Matthew Akamatsu
- Department of Biology, University of Washington, Seattle, Washington
| | - Emma Lundberg
- Department of Applied Physics, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden; Department of Bioengineering, Stanford University, Stanford, California; Department of Pathology, Stanford University, Stanford, California; Chan Zuckerberg Biohub, San Francisco, California
| | - Blair Lyons
- Allen Institute for Cell Science, Seattle, Washington
| | - Wei Ouyang
- Department of Applied Physics, Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | | | | | | | - Rick Horwitz
- Allen Institute for Cell Science, Seattle, Washington.
| |
Collapse
|
35
|
Strawbridge SE, Kurowski A, Corujo-Simon E, Fletcher AN, Nichols J, Fletcher AG. insideOutside: an accessible algorithm for classifying interior and exterior points, with applications in embryology. Biol Open 2023; 12:bio060055. [PMID: 37623821 PMCID: PMC10461464 DOI: 10.1242/bio.060055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Accepted: 07/27/2023] [Indexed: 08/26/2023] Open
Abstract
A crucial aspect of embryology is relating the position of individual cells to the broader geometry of the embryo. A classic example of this is the first cell-fate decision of the mouse embryo, where interior cells become inner cell mass and exterior cells become trophectoderm. Fluorescent labelling, imaging, and quantification of tissue-specific proteins have advanced our understanding of this dynamic process. However, instances arise where these markers are either not available, or not reliable, and we are left only with the cells' spatial locations. Therefore, a simple, robust method for classifying interior and exterior cells of an embryo using spatial information is required. Here, we describe a simple mathematical framework and an unsupervised machine learning approach, termed insideOutside, for classifying interior and exterior points of a three-dimensional point-cloud, a common output from imaged cells within the early mouse embryo. We benchmark our method against other published methods to demonstrate that it yields greater accuracy in classification of nuclei from pre-implantation mouse embryos and greater accuracy when challenged with local surface concavities. We have made MATLAB and Python implementations of the method freely available. This method should prove useful for embryology, with broader applications to similar data arising in the life sciences.
Collapse
Affiliation(s)
- Stanley E. Strawbridge
- Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK
- Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK
| | - Agata Kurowski
- Department of Pharmacological Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Elena Corujo-Simon
- Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK
- Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK
- MRC Human Genetics Unit, University of Edinburgh, Edinburgh, UK
| | - Alastair N. Fletcher
- Department of Mathematical Sciences, Northern Illinois University, DeKalb, IL, USA
| | - Jennifer Nichols
- Wellcome-MRC Cambridge Stem Cell Institute, University of Cambridge, Cambridge, UK
- Department of Physiology, Neuroscience and Development, University of Cambridge, Cambridge, UK
- MRC Human Genetics Unit, University of Edinburgh, Edinburgh, UK
- Centre for Trophoblast Research, University of Cambridge, Cambridge, UK
| | - Alexander G. Fletcher
- School of Mathematics and Statistics, University of Sheffield, Sheffield, UK
- The Bateson Centre, University of Sheffield, Sheffield, UK
| |
Collapse
|
36
|
Jiang Y, Sha H, Liu S, Qin P, Zhang Y. AutoUnmix: an autoencoder-based spectral unmixing method for multi-color fluorescence microscopy imaging. Biomed Opt Express 2023; 14:4814-4827. [PMID: 37791286 PMCID: PMC10545201 DOI: 10.1364/boe.498421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 08/12/2023] [Accepted: 08/14/2023] [Indexed: 10/05/2023]
Abstract
Multiplexed fluorescence microscopy imaging is widely used in biomedical applications. However, simultaneous imaging of multiple fluorophores can result in spectral leaks and overlapping, which greatly degrades image quality and subsequent analysis. Existing popular spectral unmixing methods are mainly based on computationally intensive linear models, and the performance is heavily dependent on the reference spectra, which may greatly preclude its further applications. In this paper, we propose a deep learning-based blind spectral unmixing method, termed AutoUnmix, to imitate the physical spectral mixing process. A transfer learning framework is further devised to allow our AutoUnmix to adapt to a variety of imaging systems without retraining the network. Our proposed method has demonstrated real-time unmixing capabilities, surpassing existing methods by up to 100-fold in terms of unmixing speed. We further validate the reconstruction performance on both synthetic datasets and biological samples. The unmixing results of AutoUnmix achieve the highest SSIM of 0.99 in both three- and four-color imaging, nearly 20% higher than other popular unmixing methods. For experiments where spectral profiles and morphology are akin to simulated data, our method realizes the quantitative performance demonstrated above. Due to the desirable property of data independency and superior blind unmixing performance, we believe AutoUnmix is a powerful tool for studying the interaction process of different organelles labeled by multiple fluorophores.
Collapse
Affiliation(s)
- Yuan Jiang
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
| | - Hao Sha
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
| | - Shuai Liu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, Guangdong Province 518055, China
| | - Peiwu Qin
- Center of Precision Medicine and Healthcare, Tsinghua-Berkeley Shenzhen Institute, Guangdong Province, 518055, China
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen, Guangdong Province, 518055, China
| | - Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong 518055, China
| |
Collapse
|
37
|
Michalska JM, Lyudchik J, Velicky P, Štefaničková H, Watson JF, Cenameri A, Sommer C, Amberg N, Venturino A, Roessler K, Czech T, Höftberger R, Siegert S, Novarino G, Jonas P, Danzl JG. Imaging brain tissue architecture across millimeter to nanometer scales. Nat Biotechnol 2023:10.1038/s41587-023-01911-8. [PMID: 37653226 DOI: 10.1038/s41587-023-01911-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 07/20/2023] [Indexed: 09/02/2023]
Abstract
Mapping the complex and dense arrangement of cells and their connectivity in brain tissue demands nanoscale spatial resolution imaging. Super-resolution optical microscopy excels at visualizing specific molecules and individual cells but fails to provide tissue context. Here we developed Comprehensive Analysis of Tissues across Scales (CATS), a technology to densely map brain tissue architecture from millimeter regional to nanometer synaptic scales in diverse chemically fixed brain preparations, including rodent and human. CATS uses fixation-compatible extracellular labeling and optical imaging, including stimulated emission depletion or expansion microscopy, to comprehensively delineate cellular structures. It enables three-dimensional reconstruction of single synapses and mapping of synaptic connectivity by identification and analysis of putative synaptic cleft regions. Applying CATS to the mouse hippocampal mossy fiber circuitry, we reconstructed and quantified the synaptic input and output structure of identified neurons. We furthermore demonstrate applicability to clinically derived human tissue samples, including formalin-fixed paraffin-embedded routine diagnostic specimens, for visualizing the cellular architecture of brain tissue in health and disease.
Collapse
Affiliation(s)
- Julia M Michalska
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Julia Lyudchik
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Philipp Velicky
- Institute of Science and Technology Austria, Klosterneuburg, Austria
- Core Facility Imaging, Medical University of Vienna, Vienna, Austria
| | - Hana Štefaničková
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Jake F Watson
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Alban Cenameri
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Christoph Sommer
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Nicole Amberg
- Department of Neurology, Division of Neuropathology and Neurochemistry, Medical University of Vienna, Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, Vienna, Austria
| | | | - Karl Roessler
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, Vienna, Austria
- Department of Neurosurgery, Medical University of Vienna, Vienna, Austria
| | - Thomas Czech
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, Vienna, Austria
- Department of Neurosurgery, Medical University of Vienna, Vienna, Austria
| | - Romana Höftberger
- Department of Neurology, Division of Neuropathology and Neurochemistry, Medical University of Vienna, Vienna, Austria
- Comprehensive Center for Clinical Neurosciences and Mental Health, Medical University of Vienna, Vienna, Austria
| | - Sandra Siegert
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Gaia Novarino
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Peter Jonas
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Johann G Danzl
- Institute of Science and Technology Austria, Klosterneuburg, Austria.
| |
Collapse
|
38
|
Fanous MJ, Pillar N, Ozcan A. Digital staining facilitates biomedical microscopy. FRONTIERS IN BIOINFORMATICS 2023; 3:1243663. [PMID: 37564725 PMCID: PMC10411189 DOI: 10.3389/fbinf.2023.1243663] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 07/17/2023] [Indexed: 08/12/2023] Open
Abstract
Traditional staining of biological specimens for microscopic imaging entails time-consuming, laborious, and costly procedures, in addition to producing inconsistent labeling and causing irreversible sample damage. In recent years, computational "virtual" staining using deep learning techniques has evolved into a robust and comprehensive application for streamlining the staining process without typical histochemical staining-related drawbacks. Such virtual staining techniques can also be combined with neural networks designed to correct various microscopy aberrations, such as out-of-focus or motion blur artifacts, and improve upon diffraction-limited resolution. Here, we highlight how such methods lead to a host of new opportunities that can significantly improve both sample preparation and imaging in biomedical microscopy.
Collapse
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
| | - Nir Pillar
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
| | - Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, United States
- Bioengineering Department, University of California, Los Angeles, CA, United States
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, United States
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, United States
| |
Collapse
|
39
|
Atwell S, Waibel DJE, Boushehri SS, Wiedenmann S, Marr C, Meier M. Label-free imaging of 3D pluripotent stem cell differentiation dynamics on chip. CELL REPORTS METHODS 2023; 3:100523. [PMID: 37533640 PMCID: PMC10391578 DOI: 10.1016/j.crmeth.2023.100523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 05/09/2023] [Accepted: 06/15/2023] [Indexed: 08/04/2023]
Abstract
Massive, parallelized 3D stem cell cultures for engineering in vitro human cell types require imaging methods with high temporal and spatial resolution to fully exploit technological advances in cell culture technologies. Here, we introduce a large-scale integrated microfluidic chip platform for automated 3D stem cell differentiation. To fully enable dynamic high-content imaging on the chip platform, we developed a label-free deep learning method called Bright2Nuc to predict in silico nuclear staining in 3D from confocal microscopy bright-field images. Bright2Nuc was trained and applied to hundreds of 3D human induced pluripotent stem cell cultures differentiating toward definitive endoderm on a microfluidic platform. Combined with existing image analysis tools, Bright2Nuc segmented individual nuclei from bright-field images, quantified their morphological properties, predicted stem cell differentiation state, and tracked the cells over time. Our methods are available in an open-source pipeline, enabling researchers to upscale image acquisition and phenotyping of 3D cell culture.
Collapse
Affiliation(s)
- Scott Atwell
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
| | - Dominik Jens Elias Waibel
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- TUM School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
| | - Sayedali Shetab Boushehri
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Department of Mathematics, Technical University of Munich, Munich, Germany
- Data & Analytics, Pharmaceutical Research and Early Development, Roche Innovation Center Munich (RICM), Penzberg, Germany
| | - Sandra Wiedenmann
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
| | - Carsten Marr
- Institute of AI for Health, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
| | - Matthias Meier
- Helmholtz Pioneer Campus, Helmholtz Zentrum München - German Research Center for Environmental Health, Neuherberg, Germany
- Center for Biotechnology and Biomedicine, University of Leipzig, Leipzig, Germany
| |
Collapse
|
40
|
Körber N. MIA is an open-source standalone deep learning application for microscopic image analysis. CELL REPORTS METHODS 2023; 3:100517. [PMID: 37533647 PMCID: PMC10391334 DOI: 10.1016/j.crmeth.2023.100517] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 02/10/2023] [Accepted: 06/02/2023] [Indexed: 08/04/2023]
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep-learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats, which are compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets.
Collapse
Affiliation(s)
- Nils Körber
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
| |
Collapse
|
41
|
Chen J, Viana MP, Rafelski SM. When seeing is not believing: application-appropriate validation matters for quantitative bioimage analysis. Nat Methods 2023; 20:968-970. [PMID: 37433995 DOI: 10.1038/s41592-023-01881-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/13/2023]
Affiliation(s)
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany
| | | | | |
Collapse
|
42
|
Schwartz M, Israel U, Wang XJ, Laubscher E, Yu C, Dilip R, Li Q, Mari J, Soro J, Yu K, Pradhan E, Ates A, Gallandt D, Barnowski R, Pao E, Van Valen D. Scaling biological discovery at the interface of deep learning and cellular imaging. Nat Methods 2023; 20:956-957. [PMID: 37434003 DOI: 10.1038/s41592-023-01931-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/13/2023]
Affiliation(s)
- Morgan Schwartz
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Uriah Israel
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Xuefei Julie Wang
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Emily Laubscher
- Department of Chemistry, California Institute of Technology, Pasadena, CA, USA
| | - Changhua Yu
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Rohit Dilip
- Department of Computer Science, California Institute of Technology, Pasadena, CA, USA
| | - Qilin Li
- Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA
| | - Joud Mari
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Johnathon Soro
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Kevin Yu
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Elora Pradhan
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Ada Ates
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Danielle Gallandt
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Ross Barnowski
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - Edward Pao
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
| | - David Van Valen
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA.
| |
Collapse
|
43
|
Wang R, Butt D, Cross S, Verkade P, Achim A. Bright-field to fluorescence microscopy image translation for cell nuclei health quantification. BIOLOGICAL IMAGING 2023; 3:e12. [PMID: 38510164 PMCID: PMC10951917 DOI: 10.1017/s2633903x23000120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 04/05/2023] [Accepted: 05/29/2023] [Indexed: 03/22/2024]
Abstract
Microscopy is a widely used method in biological research to observe the morphology and structure of cells. Amongst the plethora of microscopy techniques, fluorescent labeling with dyes or antibodies is the most popular method for revealing specific cellular organelles. However, fluorescent labeling also introduces new challenges to cellular observation, as it increases the workload, and the process may result in nonspecific labeling. Recent advances in deep visual learning have shown that there are systematic relationships between fluorescent and bright-field images, thus facilitating image translation between the two. In this article, we propose the cross-attention conditional generative adversarial network (XAcGAN) model. It employs state-of-the-art generative adversarial networks (GANs) to solve the image translation task. The model uses supervised learning and combines attention-based networks to explore spatial information during translation. In addition, we demonstrate the successful application of XAcGAN to infer the health state of translated nuclei from bright-field microscopy images. The results show that our approach achieves excellent performance both in terms of image translation and nuclei state inference.
Collapse
Affiliation(s)
- Ruixiong Wang
- Visual Information Laboratory, University of Bristol, Bristol, United Kingdom
| | - Daniel Butt
- School of Biochemistry, University of Bristol, Bristol, United Kingdom
| | - Stephen Cross
- Wolfson Bioimaging Facility, University of Bristol, Bristol, United Kingdom
| | - Paul Verkade
- School of Biochemistry, University of Bristol, Bristol, United Kingdom
| | - Alin Achim
- Visual Information Laboratory, University of Bristol, Bristol, United Kingdom
| |
Collapse
|
44
|
Yang X, Chen D, Sun Q, Wang Y, Xia Y, Yang J, Lin C, Dang X, Cen Z, Liang D, Wei R, Xu Z, Xi G, Xue G, Ye C, Wang LP, Zou P, Wang SQ, Rivera-Fuentes P, Püntener S, Chen Z, Liu Y, Zhang J, Zhao Y. A live-cell image-based machine learning strategy for reducing variability in PSC differentiation systems. Cell Discov 2023; 9:53. [PMID: 37280224 DOI: 10.1038/s41421-023-00543-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Accepted: 03/13/2023] [Indexed: 06/08/2023] Open
Abstract
The differentiation of pluripotent stem cells (PSCs) into diverse functional cell types provides a promising solution to support drug discovery, disease modeling, and regenerative medicine. However, functional cell differentiation is currently limited by the substantial line-to-line and batch-to-batch variabilities, which severely impede the progress of scientific research and the manufacturing of cell products. For instance, PSC-to-cardiomyocyte (CM) differentiation is vulnerable to inappropriate doses of CHIR99021 (CHIR) that are applied in the initial stage of mesoderm differentiation. Here, by harnessing live-cell bright-field imaging and machine learning (ML), we realize real-time cell recognition in the entire differentiation process, e.g., CMs, cardiac progenitor cells (CPCs), PSC clones, and even misdifferentiated cells. This enables non-invasive prediction of differentiation efficiency, purification of ML-recognized CMs and CPCs for reducing cell contamination, early assessment of the CHIR dose for correcting the misdifferentiation trajectory, and evaluation of initial PSC colonies for controlling the start point of differentiation, all of which yield a more robust differentiation method resistant to variability. Moreover, with the established ML models as a readout for the chemical screen, we identify a CDK8 inhibitor that can further improve the cell resistance to the overdose of CHIR. Together, this study indicates that artificial intelligence is able to guide and iteratively optimize PSC differentiation to achieve consistently high efficiency across cell lines and batches, providing a better understanding and rational modulation of the differentiation process for functional cell manufacturing in biomedical applications.
Collapse
Affiliation(s)
- Xiaochun Yang
- State Key Laboratory of Natural and Biomimetic Drugs, MOE Key Laboratory of Cell Proliferation and Differentiation, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing, China
| | - Daichao Chen
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Qiushi Sun
- Beijing Key Lab of Traffic Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| | - Yao Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Yu Xia
- College of Engineering, Peking University, Beijing, China
| | - Jinyu Yang
- College of Engineering, Peking University, Beijing, China
| | - Chang Lin
- College of Chemistry and Molecular Engineering, Synthetic and Functional Biomolecules Center, Beijing National Laboratory for Molecular Sciences, Key Laboratory of Bioorganic Chemistry and Molecular Engineering of Ministry of Education, Peking University, Beijing, China
| | - Xin Dang
- State Key Laboratory of Natural and Biomimetic Drugs, MOE Key Laboratory of Cell Proliferation and Differentiation, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing, China
| | - Zimu Cen
- State Key Laboratory of Natural and Biomimetic Drugs, MOE Key Laboratory of Cell Proliferation and Differentiation, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing, China
| | - Dongdong Liang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Rong Wei
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Ze Xu
- State Key Laboratory of Membrane Biology, College of Life Sciences, Peking University, Beijing, China
| | - Guangyin Xi
- State Key Laboratory of Natural and Biomimetic Drugs, MOE Key Laboratory of Cell Proliferation and Differentiation, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing, China
| | - Gang Xue
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
| | - Can Ye
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
| | - Li-Peng Wang
- State Key Laboratory of Membrane Biology, College of Life Sciences, Peking University, Beijing, China
| | - Peng Zou
- College of Chemistry and Molecular Engineering, Synthetic and Functional Biomolecules Center, Beijing National Laboratory for Molecular Sciences, Key Laboratory of Bioorganic Chemistry and Molecular Engineering of Ministry of Education, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
| | - Shi-Qiang Wang
- State Key Laboratory of Membrane Biology, College of Life Sciences, Peking University, Beijing, China
| | | | - Salome Püntener
- Department of Chemistry, University of Zurich, Zurich, Switzerland
- Institute of Chemical Sciences and Engineering, Ecole Polytechnique Fédéral de Lausanne, Lausanne, Switzerland
| | - Zhixing Chen
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Institute of Molecular Medicine, National Biomedical Imaging Center, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, College of Future Technology, Peking University, Beijing, China
| | - Yi Liu
- Beijing Key Lab of Traffic Data Analysis and Mining, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China.
| | - Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China.
- College of Engineering, Peking University, Beijing, China.
| | - Yang Zhao
- State Key Laboratory of Natural and Biomimetic Drugs, MOE Key Laboratory of Cell Proliferation and Differentiation, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, College of Future Technology, Peking University, Beijing, China.
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China.
| |
Collapse
|
45
|
Zhang B, Sun X, Mai J, Wang W. Deep learning-enhanced fluorescence microscopy via confocal physical imaging model. OPTICS EXPRESS 2023; 31:19048-19064. [PMID: 37381330 DOI: 10.1364/oe.490037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2023] [Accepted: 05/09/2023] [Indexed: 06/30/2023]
Abstract
Confocal microscopy is one of the most widely used tools for high-resolution cellular and tissue imaging and for industrial inspection. Micrograph reconstruction based on deep learning has become an effective tool in modern microscopy, but most deep learning methods neglect the physical imaging mechanism and require substantial effort to solve the aliasing problem of multi-scale image pairs. We show that these limitations can be mitigated via an image degradation model based on the Richards-Wolf vectorial diffraction integral and confocal imaging theory. The low-resolution images required for network training are generated by degrading their high-resolution counterparts with this model, thereby eliminating the need for accurate image alignment and ensuring the generalization and fidelity of the confocal images. Combining a residual neural network and a lightweight feature attention module with the confocal degradation model preserves both fidelity and generalization. Experiments on different measured data show that, compared with two deconvolution algorithms (non-negative least squares and Richardson-Lucy), the structural similarity index between the network output and the real image exceeds 0.82, and the peak signal-to-noise ratio improves by more than 0.6 dB. The approach also transfers well to different deep learning networks.
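The two figures of merit quoted in this abstract, structural similarity (SSIM) and peak signal-to-noise ratio (PSNR), are straightforward to compute. Below is a minimal NumPy sketch, assuming images normalized to [0, 1] and using a simplified single-window SSIM (global statistics rather than the sliding Gaussian window of the full metric); the test images are synthetic.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, img, data_range=1.0):
    """Simplified SSIM computed from global statistics (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = np.mean((ref - mu_x) * (img - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Synthetic "ground truth" and a noisy observation of it.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(psnr(clean, noisy), global_ssim(clean, noisy))
```

For reported results one would normally use the windowed SSIM implementation (e.g. from scikit-image) rather than this global approximation.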
Collapse
|
46
|
Lapierre-Landry M, Liu Y, Bayat M, Wilson DL, Jenkins MW. Digital labeling for 3D histology: segmenting blood vessels without a vascular contrast agent using deep learning. BIOMEDICAL OPTICS EXPRESS 2023; 14:2416-2431. [PMID: 37342724 PMCID: PMC10278624 DOI: 10.1364/boe.480230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 01/12/2023] [Accepted: 02/20/2023] [Indexed: 06/23/2023]
Abstract
Recent advances in optical tissue clearing and three-dimensional (3D) fluorescence microscopy have enabled high resolution in situ imaging of intact tissues. Using simply prepared samples, we demonstrate here "digital labeling," a method to segment blood vessels in 3D volumes solely based on the autofluorescence signal and a nuclei stain (DAPI). We trained a deep-learning neural network based on the U-net architecture using a regression loss instead of a commonly used segmentation loss to achieve better detection of small vessels. We achieved high vessel detection accuracy and obtained accurate vascular morphometrics such as vessel length density and orientation. In the future, such a digital labeling approach could easily be transferred to other biological structures.
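The design choice highlighted here, a regression loss on a continuous target instead of a binary segmentation loss, can be illustrated with a minimal NumPy sketch. The masks, the Gaussian "soft vesselness" target, and the constant prediction below are all hypothetical; the paper's actual targets and network are not reproduced here.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy, the commonly used segmentation loss."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def regression_loss(pred, soft_target):
    """Mean-squared error against a continuous target."""
    return np.mean((pred - soft_target) ** 2)

# Hypothetical 1-pixel-wide vessel in a 7x7 patch: a hard binary mask ...
mask = np.zeros((7, 7))
mask[3, :] = 1.0

# ... versus a soft target where the thin vessel is widened into a smooth
# ridge, so faint or small structures contribute a graded training signal.
yy = np.arange(7)[:, None]
soft = np.exp(-((yy - 3) ** 2) / 2.0) * np.ones((1, 7))

pred = np.full((7, 7), 0.2)  # an under-confident prediction
print(bce_loss(pred, mask), regression_loss(pred, soft))
```

With a hard mask, a thin vessel occupies very few positive pixels; smoothing the target spreads its contribution over more pixels, which is one intuition for why a regression loss can help small-vessel detection.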
Collapse
Affiliation(s)
| | - Yehe Liu
- Department of Biomedical Engineering, Case Western Reserve University, USA
| | - Mahdi Bayat
- Department of Electrical, Computer and Systems Engineering, Case Western Reserve University, USA
| | - David L. Wilson
- Department of Biomedical Engineering, Case Western Reserve University, USA
- Department of Radiology, Case Western Reserve University, USA
| | - Michael W. Jenkins
- Department of Biomedical Engineering, Case Western Reserve University, USA
- Department of Pediatrics, School of Medicine, Case Western Reserve University, USA
| |
Collapse
|
47
|
Abstract
Super-resolution fluorescence microscopy allows the investigation of cellular structures at nanoscale resolution using light. Current developments in super-resolution microscopy have focused on reliable quantification of the underlying biological data. In this review, we first describe the basic principles of super-resolution microscopy techniques such as stimulated emission depletion (STED) microscopy and single-molecule localization microscopy (SMLM), and then give a broad overview of methodological developments to quantify super-resolution data, particularly those geared toward SMLM data. We cover commonly used techniques such as spatial point pattern analysis, colocalization, and protein copy number quantification but also describe more advanced techniques such as structural modeling, single-particle tracking, and biosensing. Finally, we provide an outlook on exciting new research directions to which quantitative super-resolution microscopy might be applied.
Collapse
Affiliation(s)
- Siewert Hugelier
- Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| | - P L Colosi
- Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| | - Melike Lakadamyali
- Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Department of Cell and Developmental Biology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Epigenetics Institute, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
| |
Collapse
|
48
|
Copperman J, Gross SM, Chang YH, Heiser LM, Zuckerman DM. Morphodynamical cell state description via live-cell imaging trajectory embedding. Commun Biol 2023; 6:484. [PMID: 37142678 PMCID: PMC10160022 DOI: 10.1038/s42003-023-04837-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2022] [Accepted: 04/10/2023] [Indexed: 05/06/2023] Open
Abstract
Time-lapse imaging is a powerful approach to gain insight into the dynamic responses of cells, but the quantitative analysis of morphological changes over time remains challenging. Here, we exploit the concept of "trajectory embedding" to analyze cellular behavior using morphological feature trajectory histories, that is, multiple time points considered simultaneously, rather than the more common practice of examining single-timepoint (snapshot) morphological features. We apply this approach to analyze live-cell images of MCF10A mammary epithelial cells after treatment with a panel of microenvironmental perturbagens that strongly modulate cell motility, morphology, and cell cycle behavior. Our morphodynamical trajectory embedding analysis constructs a shared cell state landscape revealing ligand-specific regulation of cell state transitions and enables quantitative and descriptive models of single-cell trajectories. Additionally, we show that incorporation of trajectories into single-cell morphological analysis enables (i) systematic characterization of cell state trajectories, (ii) better separation of phenotypes, and (iii) more descriptive models of ligand-induced differences as compared to snapshot-based analysis. This morphodynamical trajectory embedding is broadly applicable to the quantitative analysis of cell responses via live-cell imaging across many biological and biomedical applications.
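The core idea of trajectory embedding, replacing per-timepoint snapshots with concatenated feature histories, can be sketched in a few lines of NumPy. The function name, array shapes, and toy data below are illustrative, not the authors' implementation.

```python
import numpy as np

def embed_trajectories(features, window):
    """Concatenate each cell's morphological feature vectors over a sliding
    window of time points, turning snapshots into trajectory histories.

    features: array of shape (n_cells, n_timepoints, n_features)
    returns:  array of shape (n_cells, n_timepoints - window + 1,
                              window * n_features)
    """
    n_cells, n_t, n_f = features.shape
    return np.stack(
        [features[:, t:t + window].reshape(n_cells, window * n_f)
         for t in range(n_t - window + 1)],
        axis=1)

# Toy example: 5 cells, 10 time points, 3 morphological features per snapshot.
rng = np.random.default_rng(1)
snaps = rng.random((5, 10, 3))
traj = embed_trajectories(snaps, window=4)
print(traj.shape)  # (5, 7, 12)
```

Each embedded vector now carries four consecutive snapshots, so downstream clustering or landscape construction operates on short dynamical histories instead of isolated time points.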
Collapse
Affiliation(s)
- Jeremy Copperman
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, 97239, USA.
| | - Sean M Gross
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, 97239, USA
| | - Young Hwan Chang
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, 97239, USA
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, 97239, USA
| | - Laura M Heiser
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, 97239, USA.
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, 97239, USA.
| | - Daniel M Zuckerman
- Department of Biomedical Engineering, Oregon Health and Science University, Portland, OR, 97239, USA.
- Knight Cancer Institute, Oregon Health and Science University, Portland, OR, 97239, USA.
| |
Collapse
|
49
|
Tsai HF, Podder S, Chen PY. Microsystem Advances through Integration with Artificial Intelligence. MICROMACHINES 2023; 14:826. [PMID: 37421059 DOI: 10.3390/mi14040826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 04/04/2023] [Accepted: 04/06/2023] [Indexed: 07/09/2023]
Abstract
Microfluidics is a rapidly growing discipline that involves studying and manipulating fluids at reduced length scale and volume, typically on the scale of micro- or nanoliters. Under the reduced length scale and larger surface-to-volume ratio, advantages of low reagent consumption, faster reaction kinetics, and more compact systems are evident in microfluidics. However, miniaturization of microfluidic chips and systems introduces challenges of stricter tolerances in designing and controlling them for interdisciplinary applications. Recent advances in artificial intelligence (AI) have brought innovation to microfluidics from design, simulation, automation, and optimization to bioanalysis and data analytics. In microfluidics, the Navier-Stokes equations, the partial differential equations describing viscous fluid motion that in complete form are known to have no general analytical solution, can be simplified, and numerical approximation performs well owing to the low inertia and laminar flow. Approximation using neural networks trained on rules of physical knowledge introduces a new possibility to predict the physicochemical behavior. The combination of microfluidics and automation can produce large amounts of data, from which machine learning can extract features and patterns that are difficult for a human to discern. Therefore, integration with AI has the potential to revolutionize the microfluidic workflow by enabling precise control and automated data analysis. Deployment of smart microfluidics may be tremendously beneficial in various applications in the future, including high-throughput drug discovery, rapid point-of-care testing (POCT), and personalized medicine. In this review, we summarize key microfluidic advances integrated with AI and discuss the outlook and possibilities of combining AI and microfluidics.
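The simplification alluded to above can be made concrete with the textbook plane Poiseuille case, where at low Reynolds number the Navier-Stokes equations collapse to a one-dimensional balance with a closed-form solution. The sketch below verifies that balance numerically with finite differences; the parameter values are purely illustrative.

```python
import numpy as np

# Plane Poiseuille flow between parallel plates at y = 0 and y = h:
# at low Reynolds number the Navier-Stokes equations reduce to
#   mu * d2u/dy2 = dp/dx,
# with analytical solution u(y) = G / (2 * mu) * y * (h - y)
# for a constant pressure gradient dp/dx = -G.
mu, G, h = 1.0e-3, 10.0, 1.0e-4  # illustrative values (Pa*s, Pa/m, m)
y = np.linspace(0.0, h, 201)
u = G / (2 * mu) * y * (h - y)

# Finite-difference check that the profile satisfies the reduced equation.
dy = y[1] - y[0]
d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dy ** 2
residual = mu * d2u - (-G)  # should vanish at every interior point
print(np.max(np.abs(residual)))
```

Physics-informed neural networks exploit exactly this kind of residual: instead of checking a known solution, they penalize the PDE residual of a learned one during training.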
Collapse
Affiliation(s)
- Hsieh-Fu Tsai
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
- Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan
- Center for Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
| | - Soumyajit Podder
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
| | - Pin-Yuan Chen
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
- Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan
| |
Collapse
|
50
|
Küppers M, Albrecht D, Kashkanova AD, Lühr J, Sandoghdar V. Confocal interferometric scattering microscopy reveals 3D nanoscopic structure and dynamics in live cells. Nat Commun 2023; 14:1962. [PMID: 37029107 PMCID: PMC10081331 DOI: 10.1038/s41467-023-37497-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Accepted: 03/16/2023] [Indexed: 04/09/2023] Open
Abstract
Bright-field light microscopy and related phase-sensitive techniques play an important role in life sciences because they provide facile and label-free insights into biological specimens. However, lack of three-dimensional imaging and low sensitivity to nanoscopic features hamper their application in many high-end quantitative studies. Here, we demonstrate that interferometric scattering (iSCAT) microscopy operated in the confocal mode provides unique label-free solutions for live-cell studies. We reveal the nanometric topography of the nuclear envelope, quantify the dynamics of the endoplasmic reticulum, detect single microtubules, and map nanoscopic diffusion of clathrin-coated pits undergoing endocytosis. Furthermore, we introduce the combination of confocal and wide-field iSCAT modalities for simultaneous imaging of cellular structures and high-speed tracking of nanoscopic entities such as single SARS-CoV-2 virions. We benchmark our findings against simultaneously acquired fluorescence images. Confocal iSCAT can be readily implemented as an additional contrast mechanism in existing laser scanning microscopes. The method is ideally suited for live studies on primary cells that face labeling challenges and for very long measurements beyond photobleaching times.
Collapse
Affiliation(s)
- Michelle Küppers
- Max Planck Institute for the Science of Light, 91058, Erlangen, Germany
- Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
- Department of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
| | - David Albrecht
- Max Planck Institute for the Science of Light, 91058, Erlangen, Germany
- Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Anna D Kashkanova
- Max Planck Institute for the Science of Light, 91058, Erlangen, Germany
- Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Jennifer Lühr
- Max Planck Institute for the Science of Light, 91058, Erlangen, Germany
- Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany
| | - Vahid Sandoghdar
- Max Planck Institute for the Science of Light, 91058, Erlangen, Germany.
- Max-Planck-Zentrum für Physik und Medizin, 91058, Erlangen, Germany.
- Department of Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058, Erlangen, Germany.
| |
Collapse
|