51. Juhas M. Artificial Intelligence in Microbiology. Brief Lessons in Microbiology 2023:93-109. [DOI: 10.1007/978-3-031-29544-7_8]
52.
Abstract
The ability of bacteria to respond to changes in their environment is critical to their survival, allowing them to withstand stress, form complex communities, and induce virulence responses during host infection. A remarkable feature of many of these bacterial responses is that they are often variable across individual cells, despite occurring in an isogenic population exposed to a homogeneous environmental change, a phenomenon known as phenotypic heterogeneity. Phenotypic heterogeneity can enable bet-hedging or division of labor strategies that allow bacteria to survive fluctuating conditions. Investigating the significance of phenotypic heterogeneity in environmental transitions requires dynamic, single-cell data. Technical advances in quantitative single-cell measurements, imaging, and microfluidics have led to a surge of publications on this topic. Here, we review recent discoveries on single-cell bacterial responses to environmental transitions of various origins and complexities, from simple diauxic shifts to community behaviors in biofilm formation to virulence regulation during infection. We describe how these studies firmly establish that this form of heterogeneity is prevalent and a conserved mechanism by which bacteria cope with fluctuating conditions. We end with an outline of current challenges and future directions for the field. While it remains challenging to predict how an individual bacterium will respond to a given environmental input, we anticipate that capturing the dynamics of the process will begin to resolve this and facilitate rational perturbation of environmental responses for therapeutic and bioengineering purposes.
53. Hardo G, Noka M, Bakshi S. Synthetic Micrographs of Bacteria (SyMBac) allows accurate segmentation of bacterial cells using deep neural networks. BMC Biol 2022; 20:263. [PMID: 36447211] [PMCID: PMC9710168] [DOI: 10.1186/s12915-022-01453-6]
Abstract
BACKGROUND Deep-learning-based image segmentation models are required for accurate processing of high-throughput timelapse imaging data of bacterial cells. However, the performance of any such model strictly depends on the quality and quantity of training data, which is difficult to generate for bacterial cell images. Here, we present a novel method of bacterial image segmentation using machine learning models trained with Synthetic Micrographs of Bacteria (SyMBac). RESULTS We have developed SyMBac, a tool that allows for rapid, automatic creation of arbitrary amounts of training data, combining detailed models of cell growth, physical interactions, and microscope optics to create synthetic images which closely resemble real micrographs, and is capable of training accurate image segmentation models. The major advantages of our approach are as follows: (1) synthetic training data can be generated virtually instantly and on demand; (2) these synthetic images are accompanied by perfect ground truth positions of cells, meaning no data curation is required; (3) different biological conditions, imaging platforms, and imaging modalities can be rapidly simulated, meaning any change in one's experimental setup no longer requires the laborious process of manually generating new training data for each change. Deep-learning models trained with SyMBac data are capable of analysing data from various imaging platforms and are robust to drastic changes in cell size and morphology. Our benchmarking results demonstrate that models trained on SyMBac data generate more accurate cell identifications and precise cell masks than those trained on human-annotated data, because the model learns the true position of the cell irrespective of imaging artefacts. We illustrate the approach by analysing the growth and size regulation of bacterial cells during entry and exit from dormancy, which revealed novel insights about the physiological dynamics of cells under various growth conditions. CONCLUSIONS The SyMBac approach will help to adapt and improve the performance of deep-learning-based image segmentation models for accurate processing of high-throughput timelapse image data.
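To make the synthetic-training-data idea concrete, the following is a minimal sketch (not the authors' code): render simple cell masks with known positions, blur them with an approximate point-spread function, and add noise, so each image comes with a perfect ground-truth label map. Cell shapes, PSF width, and noise levels are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
H, W = 256, 256
mask = np.zeros((H, W), dtype=np.uint16)   # ground-truth instance labels
yy, xx = np.mgrid[0:H, 0:W]

for label in range(1, 11):                 # ten elliptical "cells"
    cy, cx = rng.integers(20, H - 20), rng.integers(20, W - 20)
    a, b = rng.uniform(4, 6), rng.uniform(10, 18)   # rod-like aspect ratio
    cell = ((yy - cy) / a) ** 2 + ((xx - cx) / b) ** 2 <= 1.0
    mask[cell] = label

# Simulated micrograph: blurred cell interiors plus camera noise
image = gaussian_filter((mask > 0).astype(float), sigma=2.0)   # crude PSF blur
image += rng.normal(0.0, 0.05, size=image.shape)               # shot/read noise

# 'image' is what a segmentation network would see; 'mask' is its exact label map,
# so no manual annotation or curation step is needed.
```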
Affiliation(s)
- Georgeos Hardo: Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- Maximilian Noka: Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- Somenath Bakshi: Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
54. Cutler KJ, Stringer C, Lo TW, Rappez L, Stroustrup N, Brook Peterson S, Wiggins PA, Mougous JD. Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. Nat Methods 2022; 19:1438-1448. [PMID: 36253643] [PMCID: PMC9636021] [DOI: 10.1038/s41592-022-01639-4]
Abstract
Advances in microscopy hold great promise for allowing quantitative and precise measurement of morphological and molecular phenomena at the single-cell level in bacteria; however, the potential of this approach is ultimately limited by the availability of methods to faithfully segment cells independent of their morphological or optical characteristics. Here, we present Omnipose, a deep neural network image-segmentation algorithm. Unique network outputs such as the gradient of the distance field allow Omnipose to accurately segment cells on which current algorithms, including its predecessor, Cellpose, produce errors. We show that Omnipose achieves unprecedented segmentation performance on mixed bacterial cultures, antibiotic-treated cells and cells of elongated or branched morphology. Furthermore, the benefits of Omnipose extend to non-bacterial subjects, varied imaging modalities and three-dimensional objects. Finally, we demonstrate the utility of Omnipose in the characterization of extreme morphological phenotypes that arise during interbacterial antagonism. Our results distinguish Omnipose as a powerful tool for characterizing diverse and arbitrarily shaped cell types from imaging data.
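The sketch below illustrates (it is not Omnipose itself) the kind of intermediate target the abstract mentions: a distance field computed inside a cell mask and its spatial gradient, which points from the boundary toward the cell's medial axis.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

mask = np.zeros((64, 64), dtype=bool)
mask[20:44, 10:54] = True                       # a single rod-shaped "cell"

dist = distance_transform_edt(mask)             # Euclidean distance to background
dist = gaussian_filter(dist, sigma=1.0)         # smooth before differentiating
gy, gx = np.gradient(dist)                      # flow-like field used as a target

norm = np.hypot(gx, gy) + 1e-9
flow = np.stack([gy / norm, gx / norm])         # unit vectors inside the cell
# A network trained to predict (dist, flow) can recover touching, elongated, or
# branched cells by following the field back to distinct sinks, one per cell.
```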
Affiliation(s)
- Kevin J Cutler: Department of Physics, University of Washington, Seattle, WA, USA
- Teresa W Lo: Department of Physics, University of Washington, Seattle, WA, USA
- Luca Rappez: Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Nicholas Stroustrup: Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain; Universitat Pompeu Fabra (UPF), Barcelona, Spain
- S Brook Peterson: Department of Microbiology, University of Washington, Seattle, WA, USA
- Paul A Wiggins: Department of Physics, University of Washington, Seattle, WA, USA; Department of Bioengineering, University of Washington, Seattle, WA, USA
- Joseph D Mougous: Department of Microbiology, University of Washington, Seattle, WA, USA; Howard Hughes Medical Institute, University of Washington, Seattle, WA, USA
55. GEMA-An Automatic Segmentation Method for Real-Time Analysis of Mammalian Cell Growth in Microfluidic Devices. J Imaging 2022; 8:jimaging8100281. [PMID: 36286375] [PMCID: PMC9605644] [DOI: 10.3390/jimaging8100281]
Abstract
Image analysis now plays a central role in most scientific and research areas: it is used to extract and interpret information from images in order to derive models, knowledge, and decision rules. In biology, image sequences are acquired to describe the behavior of agents such as cells over time, using mathematical and computational approaches that can feed automatic control systems. In this paper, MCF7 cells are used to model growth and death following drug exposure; this mammalian cell line is widely used to study behavior, gene expression, and drug resistance in breast cancer. An automatic segmentation method called GEMA is presented to analyze the apoptosis and confluence stages of a culture by measuring the increase or decrease of the image area occupied by cells in microfluidic devices. In vitro, the biological experiments can be analyzed through a sequence of images taken at specific time intervals. To automate the segmentation, the proposed algorithm is based on a Gabor filter, a coefficient of variation (CV), and linear regression, which allows images to be processed in real time as the biological experiment evolves. GEMA was compared with three representative methods: a gold standard (manual segmentation), a morphological-gradient approach, and a semi-automatic algorithm in FIJI. The experiments show promising results, with the proposed algorithm achieving an accuracy above 90% and a low computation time of about 1 s per image, making it suitable for image-based real-time automation of biological lab-on-a-chip experiments.
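A rough sketch of the ingredients the abstract names (Gabor filtering, a coefficient of variation, and linear regression on occupied area over time); the parameter values, tile size, and thresholding rule below are assumptions for illustration, not the published implementation.

```python
import numpy as np
from skimage.filters import gabor

def occupied_fraction(frame, frequency=0.2, cv_threshold=0.2, tile=8):
    """Estimate the fraction of the image covered by textured (cell) regions."""
    real, imag = gabor(frame.astype(float), frequency=frequency)
    texture = np.hypot(real, imag)
    h, w = (s - s % tile for s in texture.shape)   # crop to a multiple of the tile size
    tiles = texture[:h, :w].reshape(h // tile, tile, w // tile, tile)
    # Cells give locally variable Gabor responses; empty background is more uniform.
    cv = tiles.std(axis=(1, 3)) / (tiles.mean(axis=(1, 3)) + 1e-9)
    return float((cv > cv_threshold).mean())

def growth_trend(area_fractions, dt_minutes=10.0):
    """Linear regression of occupied area vs. time: slope > 0 suggests growth, < 0 decline."""
    t = np.arange(len(area_fractions)) * dt_minutes
    slope, intercept = np.polyfit(t, area_fractions, deg=1)
    return slope, intercept
```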
56. Scatigno C, Festa G. Neutron Imaging and Learning Algorithms: New Perspectives in Cultural Heritage Applications. J Imaging 2022; 8:jimaging8100284. [PMID: 36286378] [PMCID: PMC9605401] [DOI: 10.3390/jimaging8100284]
Abstract
Recently, learning algorithms such as Convolutional Neural Networks have been successfully applied at different stages of imaging data processing, from acquisition to analysis. These algorithms aim to reduce data dimensionality and computational effort, find benchmarks and extract features, and improve the resolution and reproducibility of imaging data. To date, Neutron Imaging combined with learning algorithms has not been applied in the cultural heritage domain, but future applications could help solve the challenges of this research field. Here, we review pioneering works exploiting Machine Learning and Deep Learning models for X-ray imaging and Neutron Imaging data processing, spanning biomedicine, microbiology, and materials science, to offer new perspectives on future cultural heritage applications.
57. Allard P, Papazotos F, Potvin-Trottier L. Microfluidics for long-term single-cell time-lapse microscopy: Advances and applications. Front Bioeng Biotechnol 2022; 10:968342. [PMID: 36312536] [PMCID: PMC9597311] [DOI: 10.3389/fbioe.2022.968342]
Abstract
Cells are inherently dynamic, whether they are responding to environmental conditions or simply at equilibrium, with biomolecules constantly being made and destroyed. Due to their small volumes, the chemical reactions inside cells are stochastic, such that genetically identical cells display heterogeneous behaviors and gene expression profiles. Studying these dynamic processes is challenging, but the development of microfluidic methods enabling the tracking of individual prokaryotic cells with microscopy over long time periods under controlled growth conditions has led to many discoveries. This review focuses on the recent developments of one such microfluidic device nicknamed the mother machine. We overview the original device design, experimental setup, and challenges associated with this platform. We then describe recent methods for analyzing experiments using automated image segmentation and tracking. We further discuss modifications to the experimental setup that allow for time-varying environmental control, replicating batch culture conditions, cell screening based on their dynamic behaviors, and to accommodate a variety of microbial species. Finally, this review highlights the discoveries enabled by this technology in diverse fields, such as cell-size control, genetic mutations, cellular aging, and synthetic biology.
Affiliation(s)
- Paige Allard: Department of Biology, Concordia University, Montréal, QC, Canada
- Fotini Papazotos: Department of Biology, Concordia University, Montréal, QC, Canada
- Laurent Potvin-Trottier (correspondence): Department of Biology, Concordia University, Montréal, QC, Canada; Department of Physics, Concordia University, Montréal, QC, Canada; Centre for Applied Synthetic Biology, Concordia University, Montréal, QC, Canada
58. Cuny AP, Ponti A, Kündig T, Rudolf F, Stelling J. Cell region fingerprints enable highly precise single-cell tracking and lineage reconstruction. Nat Methods 2022; 19:1276-1285. [PMID: 36138173] [DOI: 10.1038/s41592-022-01603-2]
Abstract
Experimental studies of cell growth, inheritance and their associated processes by microscopy require accurate single-cell observations of sufficient duration to reconstruct the genealogy. However, cell tracking-assigning identical cells on consecutive images to a track-is often challenging, resulting in laborious manual verification. Here, we propose fingerprints to identify problematic assignments rapidly. A fingerprint distance compares the structural information contained in the low frequencies of a Fourier transform to measure the similarity between cells in two consecutive images. We show that fingerprints are broadly applicable across cell types and image modalities, provided the image has sufficient structural information. Our tracker (TracX) uses fingerprints to reject unlikely assignments, thereby increasing tracking performance on published and newly generated long-term data sets. For Saccharomyces cerevisiae, we propose a comprehensive model for cell size control at the single-cell and population level centered on the Whi5 regulator, demonstrating how precise tracking can help uncover previously undescribed single-cell biology.
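The sketch below illustrates the fingerprint idea described in the abstract (it is not the TracX code): compare only the low spatial frequencies of two cell-region crops, so small segmentation jitter matters less than gross structural change between frames. The crop size and number of retained frequencies are assumptions.

```python
import numpy as np

def fingerprint(crop, keep=8):
    """Magnitudes of the lowest keep x keep Fourier frequencies of an image crop."""
    spectrum = np.fft.fftshift(np.fft.fft2(crop.astype(float)))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    low = spectrum[cy - keep // 2:cy + keep // 2, cx - keep // 2:cx + keep // 2]
    mag = np.abs(low).ravel()
    return mag / (np.linalg.norm(mag) + 1e-12)

def fingerprint_distance(crop_t, crop_t1, keep=8):
    """Small distance: likely the same cell; large distance: flag the assignment for review."""
    return float(np.linalg.norm(fingerprint(crop_t, keep) - fingerprint(crop_t1, keep)))
```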
Affiliation(s)
- Andreas P Cuny: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland
- Aaron Ponti: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Tomas Kündig: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Fabian Rudolf: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland
- Jörg Stelling: Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland
59. Ren J, Wang N, Guo P, Fan Y, Lin F, Wu J. Recent advances in microfluidics-based cell migration research. Lab Chip 2022; 22:3361-3376. [PMID: 35993877] [DOI: 10.1039/d2lc00397j]
Abstract
Cell migration is crucial for many biological processes, including normal development, immune response, and tissue homeostasis and many pathological processes such as cancer metastasis and wound healing. Microfluidics has revolutionized the research in cell migration since its inception as it reduces the cost of studies and allows precise manipulation of different parameters that affect cell migratory response. Over the past decade, the field has made great strides in many directions, such as techniques for better control of the cellular microenvironment, application-oriented physiological-like models, and machine-assisted cell image analysis methods. Here we review recent developments in the field of microfluidic cell migration through the following aspects: 1) the co-culture models for studying host-pathogen interactions at single-cell resolution; 2) the spatiotemporal manipulation of the chemical gradients guiding cell migration; 3) the organ-on-chip models to study cell transmigration; and 4) the deep learning image processing strategies for cell migration data analysis. We further discuss the challenges, possible improvement and future perspectives of using microfluidic techniques to study cell migration.
Affiliation(s)
- Jiaqi Ren: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Ning Wang: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Piao Guo: Department of Radiation Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, China; Zhejiang University Cancer Center, Hangzhou, 310003, China
- Yanping Fan: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Francis Lin: Department of Physics and Astronomy, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Jiandong Wu: Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
60. Beardall WA, Stan GB, Dunlop MJ. Deep Learning Concepts and Applications for Synthetic Biology. GEN Biotechnology 2022; 1:360-371. [PMID: 36061221] [PMCID: PMC9428732] [DOI: 10.1089/genbio.2022.0017]
Abstract
Synthetic biology has a natural synergy with deep learning. It can be used to generate large data sets to train models, for example by using DNA synthesis, and deep learning models can be used to inform design, such as by generating novel parts or suggesting optimal experiments to conduct. Recently, research at the interface of engineering biology and deep learning has highlighted this potential through successes including the design of novel biological parts, protein structure prediction, automated analysis of microscopy data, optimal experimental design, and biomolecular implementations of artificial neural networks. In this review, we present an overview of synthetic biology-relevant classes of data and deep learning architectures. We also highlight emerging studies in synthetic biology that capitalize on deep learning to enable novel understanding and design, and discuss challenges and future opportunities in this space.
Affiliation(s)
- William A.V. Beardall: Department of Bioengineering, Imperial College London, London, United Kingdom; Imperial College Centre of Excellence in Synthetic Biology, Imperial College London, London, United Kingdom
- Guy-Bart Stan: Department of Bioengineering, Imperial College London, London, United Kingdom; Imperial College Centre of Excellence in Synthetic Biology, Imperial College London, London, United Kingdom
- Mary J. Dunlop: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA; Biological Design Center, Boston University, Boston, Massachusetts, USA
61. Spahn C, Gómez-de-Mariscal E, Laine RF, Pereira PM, von Chamier L, Conduit M, Pinho MG, Jacquemet G, Holden S, Heilemann M, Henriques R. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun Biol 2022; 5:688. [PMID: 35810255] [PMCID: PMC9271087] [DOI: 10.1038/s42003-022-03634-z]
Abstract
This work demonstrates and guides how to use a range of state-of-the-art artificial neural-networks to analyse bacterial microscopy images using the recently developed ZeroCostDL4Mic platform. We generated a database of image datasets used to train networks for various image analysis tasks and present strategies for data acquisition and curation, as well as model training. We showcase different deep learning (DL) approaches for segmenting bright field and fluorescence images of different bacterial species, use object detection to classify different growth stages in time-lapse imaging data, and carry out DL-assisted phenotypic profiling of antibiotic-treated cells. To also demonstrate the ability of DL to enhance low-phototoxicity live-cell microscopy, we showcase how image denoising can allow researchers to attain high-fidelity data in faster and longer imaging. Finally, artificial labelling of cell membranes and predictions of super-resolution images allow for accurate mapping of cell shape and intracellular targets. Our purposefully-built database of training and testing data aids in novice users' training, enabling them to quickly explore how to analyse their data through DL. We hope this lays a fertile ground for the efficient application of DL in microbiology and fosters the creation of tools for bacterial cell biology and antibiotic research.
Affiliation(s)
- Christoph Spahn: Department of Natural Products in Organismic Interaction, Max Planck Institute for Terrestrial Microbiology, Marburg, Germany; Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Romain F Laine: MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK; Micrographia Bio, Translation and Innovation hub, 84 Wood lane, W120BZ, London, UK
- Pedro M Pereira: Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Lucas von Chamier: MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- Mia Conduit: Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE24AX, United Kingdom
- Mariana G Pinho: Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Guillaume Jacquemet: Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland; Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
- Séamus Holden: Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE24AX, United Kingdom
- Mike Heilemann: Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Ricardo Henriques: Instituto Gulbenkian de Ciência, 2780-156, Oeiras, Portugal; MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK
62. Rickert CA, Lieleg O. Machine learning approaches for biomolecular, biophysical, and biomaterials research. Biophysics Reviews 2022; 3:021306. [PMID: 38505413] [PMCID: PMC10914139] [DOI: 10.1063/5.0082179]
Abstract
A fluent conversation with a virtual assistant, person-tailored news feeds, and deep-fake images created within seconds-all those things that have been unthinkable for a long time are now a part of our everyday lives. What these examples have in common is that they are realized by different means of machine learning (ML), a technology that has fundamentally changed many aspects of the modern world. The possibility to process enormous amount of data in multi-hierarchical, digital constructs has paved the way not only for creating intelligent systems but also for obtaining surprising new insight into many scientific problems. However, in the different areas of biosciences, which typically rely heavily on the collection of time-consuming experimental data, applying ML methods is a bit more challenging: Here, difficulties can arise from small datasets and the inherent, broad variability, and complexity associated with studying biological objects and phenomena. In this Review, we give an overview of commonly used ML algorithms (which are often referred to as "machines") and learning strategies as well as their applications in different bio-disciplines such as molecular biology, drug development, biophysics, and biomaterials science. We highlight how selected research questions from those fields were successfully translated into machine readable formats, discuss typical problems that can arise in this context, and provide an overview of how to resolve those encountered difficulties.
63. Gwatimba A, Rosenow T, Stick SM, Kicic A, Iosifidis T, Karpievitch YV. AI-Driven Cell Tracking to Enable High-Throughput Drug Screening Targeting Airway Epithelial Repair for Children with Asthma. J Pers Med 2022; 12:jpm12050809. [PMID: 35629232] [PMCID: PMC9146422] [DOI: 10.3390/jpm12050809]
Abstract
The airway epithelium of children with asthma is characterized by aberrant repair that may be therapeutically modifiable. The development of epithelial-targeting therapeutics that enhance airway repair could provide a novel treatment avenue for childhood asthma. Drug discovery efforts utilizing high-throughput live cell imaging of patient-derived airway epithelial culture-based wound repair assays can be used to identify compounds that modulate airway repair in childhood asthma. Manual cell tracking has been used to determine cell trajectories and wound closure rates, but is time consuming, subject to bias, and infeasible for high-throughput experiments. We therefore developed software, EPIC, that automatically tracks low-resolution low-framerate cells using artificial intelligence, analyzes high-throughput drug screening experiments and produces multiple wound repair metrics and publication-ready figures. Additionally, unlike available cell trackers that perform cell segmentation, EPIC tracks cells using bounding boxes and thus has simpler and faster training data generation requirements for researchers working with other cell types. EPIC outperformed publicly available software in our wound repair datasets by achieving human-level cell tracking accuracy in a fraction of the time. We also showed that EPIC is not limited to airway epithelial repair for children with asthma but can be applied in other cellular contexts by outperforming the same software in the Cell Tracking with Mitosis Detection Challenge (CTMC) dataset. The CTMC is the only established cell tracking benchmark dataset that is designed for cell trackers utilizing bounding boxes. We expect our open-source and easy-to-use software to enable high-throughput drug screening targeting airway epithelial repair for children with asthma.
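A hedged sketch of bounding-box tracking of the kind described above (not the EPIC code): detections in consecutive frames are associated by maximising box overlap with the Hungarian algorithm. The box format and the IoU cut-off are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_boxes(prev_boxes, next_boxes, min_iou=0.3):
    """Return (prev_index, next_index) pairs for boxes judged to be the same cell."""
    cost = np.array([[1.0 - iou(p, n) for n in next_boxes] for p in prev_boxes])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

# Example: two cells drifting slightly between frames keep their identities.
print(match_boxes([(0, 0, 10, 10), (20, 20, 30, 30)],
                  [(1, 1, 11, 11), (21, 19, 31, 29)]))
```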
Affiliation(s)
- Alphons Gwatimba (correspondence): Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; School of Computer Science and Software Engineering, University of Western Australia, Nedlands, WA 6009, Australia
- Tim Rosenow: Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; Centre for Microscopy, Characterisation and Analysis, University of Western Australia, Nedlands, WA 6009, Australia
- Stephen M. Stick: Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; Division of Paediatrics, Medical School, University of Western Australia, Nedlands, WA 6009, Australia; Department of Respiratory and Sleep Medicine, Perth Children's Hospital, Nedlands, WA 6009, Australia; Centre for Cell Therapy and Regenerative Medicine, School of Medicine, University of Western Australia, Nedlands, WA 6009, Australia
- Anthony Kicic: Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; Division of Paediatrics, Medical School, University of Western Australia, Nedlands, WA 6009, Australia; Centre for Cell Therapy and Regenerative Medicine, School of Medicine, University of Western Australia, Nedlands, WA 6009, Australia; School of Population Health, Curtin University, Bentley, WA 6102, Australia
- Thomas Iosifidis: Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; Centre for Cell Therapy and Regenerative Medicine, School of Medicine, University of Western Australia, Nedlands, WA 6009, Australia; School of Population Health, Curtin University, Bentley, WA 6102, Australia
- Yuliya V. Karpievitch: Wal-Yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Perth, WA 6009, Australia; School of Biomedical Sciences, University of Western Australia, Nedlands, WA 6009, Australia
64.
Abstract
Microscopy image analysis has recently made enormous progress both in terms of accuracy and speed thanks to machine learning methods and improved computational resources. This greatly facilitates the online adaptation of microscopy experimental plans using real-time information of the observed systems and their environments. Applications in which reactiveness is needed are multifarious. Here we report MicroMator, an open and flexible software for defining and driving reactive microscopy experiments. It provides a Python software environment and an extensible set of modules that greatly facilitate the definition of events with triggers and effects interacting with the experiment. We provide a pedagogic example performing dynamic adaptation of fluorescence illumination on bacteria, and demonstrate MicroMator's potential via two challenging case studies in yeast, single-cell control and single-cell recombination, both requiring real-time tracking and light targeting at the single-cell level. In microscopy, applications in which reactiveness is needed are multifarious. Here the authors report MicroMator, a Python software package for reactive experiments, which they use for applications requiring real-time tracking and light-targeting at the single-cell level.
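A purely conceptual sketch of the trigger/effect pattern the abstract describes, written as a tiny event loop. The class and function names here are invented for illustration and are not the MicroMator API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Event:
    trigger: Callable[[Any], bool]   # inspects the latest analysis result
    effect: Callable[[Any], None]    # changes the experiment (illumination, stage, ...)

def run_reactive_loop(acquire, analyse, events, n_frames=100):
    """Acquire -> analyse -> fire any events whose trigger matches the result."""
    for _ in range(n_frames):
        frame = acquire()
        result = analyse(frame)
        for event in events:
            if event.trigger(result):
                event.effect(result)

# e.g. switch on targeted light when a tracked cell's reporter exceeds a threshold:
# events = [Event(trigger=lambda r: r["fluorescence"] > 500,
#                 effect=lambda r: print("activate light at", r["cell_position"]))]
```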
65. Dynamic gene expression and growth underlie cell-to-cell heterogeneity in Escherichia coli stress response. Proc Natl Acad Sci U S A 2022; 119:e2115032119. [PMID: 35344432] [PMCID: PMC9168488] [DOI: 10.1073/pnas.2115032119]
Abstract
Individual bacteria that share identical genomes and growth environments can display substantial cell-to-cell differences in expression of stress-response genes and single-cell growth rates. This phenotypic heterogeneity can impact the survival of single cells facing sudden stress. However, the windows of time that cells spend in vulnerable or tolerant states are often unknown. We quantify the temporal expression of a suite of stress-response reporters, while simultaneously monitoring growth. We observe pulsatile expression across genes with a range of stress-response functions, finding that single-cell growth rates are often anticorrelated with reporter levels. These dynamic phenotypic differences have a concrete link to function, in which individual cells undergoing a pulse of elevated expression and slow growth are predisposed to survive antibiotic exposure. Cell-to-cell heterogeneity in gene expression and growth can have critical functional consequences, such as determining whether individual bacteria survive or die following stress. Although phenotypic variability is well documented, the dynamics that underlie it are often unknown. This information is important because dramatically different outcomes can arise from gradual versus rapid changes in expression and growth. Using single-cell time-lapse microscopy, we measured the temporal expression of a suite of stress-response reporters in Escherichia coli, while simultaneously monitoring growth rate. In conditions without stress, we found several examples of pulsatile expression. Single-cell growth rates were often anticorrelated with reporter levels, with changes in growth preceding changes in expression. These dynamics have functional consequences, which we demonstrate by measuring survival after challenging cells with the antibiotic ciprofloxacin. Our results suggest that fluctuations in both gene expression and growth dynamics in stress-response networks have direct consequences on survival.
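A small sketch of the kind of single-cell analysis described above: correlating a cell's growth-rate trace with its reporter trace at different lags to ask whether growth changes precede expression changes. The data here are synthetic placeholders, not the study's measurements.

```python
import numpy as np

def lagged_correlation(growth, reporter, max_lag=10):
    """Pearson correlation of reporter(t) with growth(t - lag) for each lag."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            g, r = growth[:-lag], reporter[lag:]
        elif lag < 0:
            g, r = growth[-lag:], reporter[:lag]
        else:
            g, r = growth, reporter
        out[lag] = float(np.corrcoef(g, r)[0, 1])
    return out

rng = np.random.default_rng(1)
growth = rng.normal(1.0, 0.1, 200)
reporter = -np.roll(growth, 3) + rng.normal(0, 0.05, 200)   # anticorrelated, delayed
lags = lagged_correlation(growth, reporter)
print(min(lags, key=lags.get))   # most negative correlation near lag = 3
```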
66. Kempster C, Butler G, Kuznecova E, Taylor KA, Kriek N, Little G, Sowa MA, Sage T, Johnson LJ, Gibbins JM, Pollitt AY. Fully automated platelet differential interference contrast image analysis via deep learning. Sci Rep 2022; 12:4614. [PMID: 35301400] [PMCID: PMC8931011] [DOI: 10.1038/s41598-022-08613-2]
Abstract
Platelets mediate arterial thrombosis, a leading cause of myocardial infarction and stroke. During injury, platelets adhere and spread over exposed subendothelial matrix substrates of the damaged blood vessel wall. The mechanisms which govern platelet activation and their interaction with a range of substrates are therefore regularly investigated using platelet spreading assays. These assays often use differential interference contrast (DIC) microscopy to assess platelet morphology, with analysis performed by manual annotation. Here, a convolutional neural network (CNN) allowed fully automated analysis of platelet spreading assays captured by DIC microscopy. The CNN was trained using 120 generalised training images. Increasing the number of training images increases the mean average precision of the CNN. The CNN performance was compared to six manual annotators. Significant variation was observed between annotators, highlighting bias when manual analysis is performed. The CNN effectively analysed platelet morphology when platelets spread over a range of substrates (CRP-XL, vWF and fibrinogen), in the presence and absence of inhibitors (dasatinib, ibrutinib and PRT-060318) and agonist (thrombin), with quantification of spread platelet area consistent with published literature. The application of a CNN enables, for the first time, automated analysis of platelet spreading assays captured by DIC microscopy.
Affiliation(s)
- Carly Kempster: School of Biological Sciences, University of Reading, Reading, UK
- George Butler: School of Biological Sciences, University of Reading, Reading, UK; The Brady Urological Institute, Johns Hopkins School of Medicine, Baltimore, USA
- Elina Kuznecova: School of Biological Sciences, University of Reading, Reading, UK
- Kirk A Taylor: School of Biological Sciences, University of Reading, Reading, UK
- Neline Kriek: School of Biological Sciences, University of Reading, Reading, UK
- Gemma Little: School of Biological Sciences, University of Reading, Reading, UK
- Marcin A Sowa: School of Biological Sciences, University of Reading, Reading, UK
- Tanya Sage: School of Biological Sciences, University of Reading, Reading, UK
- Louise J Johnson: School of Biological Sciences, University of Reading, Reading, UK
- Alice Y Pollitt: School of Biological Sciences, University of Reading, Reading, UK
67. Carrión H, Jafari M, Bagood MD, Yang HY, Isseroff RR, Gomez M. Automatic wound detection and size estimation using deep learning algorithms. PLoS Comput Biol 2022; 18:e1009852. [PMID: 35275923] [PMCID: PMC8942216] [DOI: 10.1371/journal.pcbi.1009852]
Abstract
Evaluating and tracking wound size is a fundamental metric for the wound assessment process. Good location and size estimates can enable proper diagnosis and effective treatment. Traditionally, laboratory wound healing studies include a collection of images at uniform time intervals exhibiting the wounded area and the healing process in the test animal, often a mouse. These images are then manually observed to determine key metrics (such as wound size progress) relevant to the study. However, this task is a time-consuming and laborious process. In addition, defining the wound edge can be subjective and can vary from one individual to another, even among experts. Furthermore, as our understanding of the healing process grows, so does our need to efficiently and accurately track these key factors at high throughput (e.g., over large-scale and long-term experiments). Thus, in this study, we develop a deep learning-based image analysis pipeline that aims to intake non-uniform wound images and extract relevant information such as the location of interest, wound-only image crops, and wound periphery size over time. In particular, our work focuses on images of wounded laboratory mice that are widely used for translationally relevant wound studies and leverages a commonly used ring-shaped splint present in most images to predict wound size. We apply the method to a dataset that was never meant to be quantified and thus presents many visual challenges. Additionally, the dataset was not meant for training deep learning models and so is relatively small, with only 256 images. We compare results to expert measurements and demonstrate preservation of information relevant to predicting wound closure despite variability from machine-to-expert and even expert-to-expert. The proposed system produced high-fidelity results on unseen data with minimal human intervention. Furthermore, the pipeline estimates acceptable wound sizes even when fewer than 50% of the images are missing reference objects.
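A toy illustration (not the published pipeline) of why the ring-shaped splint helps: a reference object of known physical size converts wound area from pixels to mm^2, making images taken at different distances comparable. The splint diameter and mask shapes below are made up.

```python
import numpy as np

def wound_area_mm2(wound_mask, splint_mask, splint_outer_diameter_mm=10.0):
    """Scale a wound pixel count by the mm-per-pixel factor inferred from the splint."""
    # Approximate the splint's outer diameter in pixels from its bounding box.
    ys, xs = np.nonzero(splint_mask)
    diameter_px = max(ys.max() - ys.min(), xs.max() - xs.min())
    mm_per_px = splint_outer_diameter_mm / diameter_px
    return float(wound_mask.sum() * mm_per_px ** 2)

yy, xx = np.mgrid[0:200, 0:200]
radius = np.hypot(yy - 100, xx - 100)
splint = (radius > 80) & (radius < 90)     # mock ring-shaped splint, 10 mm across
wound = radius < 30                        # mock wound region
print(round(wound_area_mm2(wound, splint), 2), "mm^2")
```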
Affiliation(s)
- Héctor Carrión: Department of Computer Science and Engineering, University of California, Santa Cruz, California, United States of America
- Mohammad Jafari: Department of Earth and Space Sciences, Columbus State University, Columbus, Georgia, United States of America
- Michelle Dawn Bagood: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Hsin-ya Yang: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Roslyn Rivkah Isseroff: Department of Dermatology, University of California, Davis, Sacramento, California, United States of America
- Marcella Gomez: Department of Applied Mathematics, University of California, Santa Cruz, California, United States of America
68. Ye G, Kaya M. Automated Cell Foreground–Background Segmentation with Phase-Contrast Microscopy Images: An Alternative to Machine Learning Segmentation Methods with Small-Scale Data. Bioengineering (Basel) 2022; 9:bioengineering9020081. [PMID: 35200434] [PMCID: PMC8869246] [DOI: 10.3390/bioengineering9020081]
Abstract
Cell segmentation is a critical step for image-based experimental analysis. Existing cell segmentation methods are either not entirely automated or do not perform well under basic laboratory microscopy. This study proposes an efficient, automated cell segmentation method based on morphological operations for phase-contrast microscopy. Manual/visual segmentation serves as the control (156 ground-truth images) to evaluate the proposed method's performance. The method's adaptive performance is assessed under varying conditions, including artificial blurriness, illumination, and image size. Compared to the Trainable Weka Segmentation method, the Empirical Gradient Threshold method, and the ilastik segmentation software, the proposed method achieved better segmentation accuracy (Dice coefficient: 90.07, IoU: 82.16%, and an average relative error of 6.51% on measured cell area). The proposed method is also reliable under unfavorable imaging conditions in which manual labeling or human intervention is inefficient. Additionally, similar degrees of segmentation accuracy were confirmed when the ground-truth data and the data generated by the proposed method were each used to train modified U-Net models (16,848 images). These results demonstrate good accuracy and high practicality of the proposed cell segmentation method for phase-contrast microscopy image data.
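A generic classical pipeline in the spirit of the approach described above (edge emphasis followed by morphological clean-up); the specific operators, footprint sizes, and demo image differ from the paper, so treat this purely as an illustration of the idea.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def segment_foreground(frame, min_size=50):
    """Binary cell/background mask from a single phase-contrast frame."""
    edges = filters.sobel(frame.astype(float))               # cells show strong local texture
    fg = edges > filters.threshold_otsu(edges)               # automatic global threshold
    fg = morphology.binary_closing(fg, morphology.disk(2))   # bridge small gaps
    fg = ndimage.binary_fill_holes(fg)                       # solid cell bodies
    fg = morphology.remove_small_objects(fg, min_size=min_size)
    return fg

rng = np.random.default_rng(0)
demo = rng.normal(0.5, 0.02, (128, 128))
demo[40:80, 40:90] += 0.3                                    # mock bright cell patch
print(segment_foreground(demo).mean())                       # fraction of foreground pixels
```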
69. Bioimaging approaches for quantification of individual cell behavior during cell fate decisions. Biochem Soc Trans 2022; 50:513-527. [PMID: 35166330] [DOI: 10.1042/bst20210534]
Abstract
Tracking individual cells has allowed a new understanding of cellular behavior in human health and disease by adding a dynamic component to the already complex heterogeneity of single cells. Technically, despite countless advances, numerous experimental variables can affect data collection and interpretation and need to be considered. In this review, we discuss the main technical aspects and biological findings in the analysis of the behavior of individual cells. We discuss the most relevant contributions provided by these approaches in clinically relevant human conditions like embryo development, stem cells biology, inflammation, cancer and microbiology, along with the cellular mechanisms and molecular pathways underlying these conditions. We also discuss the key technical aspects to be considered when planning and performing experiments involving the analysis of individual cells over long periods. Despite the challenges in automatic detection, features extraction and long-term tracking that need to be tackled, the potential impact of single-cell bioimaging is enormous in understanding the pathogenesis and development of new therapies in human pathophysiology.
70. Hernández-Herrera P, Ugartechea-Chirino Y, Torres-Martínez HH, Arzola AV, Chairez-Veloz JE, García-Ponce B, Sánchez MDLP, Garay-Arroyo A, Álvarez-Buylla ER, Dubrovsky JG, Corkidi G. Live Plant Cell Tracking: Fiji plugin to analyze cell proliferation dynamics and understand morphogenesis. Plant Physiol 2022; 188:846-860. [PMID: 34791452] [PMCID: PMC8825436] [DOI: 10.1093/plphys/kiab530]
Abstract
Arabidopsis (Arabidopsis thaliana) primary and lateral roots (LRs) are well suited for 3D and 4D microscopy, and their development provides an ideal system for studying morphogenesis and cell proliferation dynamics. With fast-advancing microscopy techniques used for live-imaging, whole tissue data are increasingly available, yet present the great challenge of analyzing complex interactions within cell populations. We developed a plugin "Live Plant Cell Tracking" (LiPlaCeT) coupled to the publicly available ImageJ image analysis program and generated a pipeline that allows, with the aid of LiPlaCeT, 4D cell tracking and lineage analysis of populations of dividing and growing cells. The LiPlaCeT plugin contains ad hoc ergonomic curating tools, making it very simple to use for manual cell tracking, especially when the signal-to-noise ratio of images is low or variable in time or 3D space and when automated methods may fail. Performing time-lapse experiments and using cell-tracking data extracted with the assistance of LiPlaCeT, we accomplished deep analyses of cell proliferation and clonal relations in the whole developing LR primordia and constructed genealogical trees. We also used cell-tracking data for endodermis cells of the root apical meristem (RAM) and performed automated analyses of cell population dynamics using ParaView software (also publicly available). Using the RAM as an example, we also showed how LiPlaCeT can be used to generate information at the whole-tissue level regarding cell length, cell position, cell growth rate, cell displacement rate, and proliferation activity. The pipeline will be useful in live-imaging studies of roots and other plant organs to understand complex interactions within proliferating and growing cell populations. The plugin includes a step-by-step user manual and a dataset example that are available at https://www.ibt.unam.mx/documentos/diversos/LiPlaCeT.zip.
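A minimal sketch (not the plugin itself) of the per-cell quantities listed above, computed from a track exported as per-frame length and position measurements; the input layout is an assumption for illustration.

```python
import numpy as np

def track_metrics(times_h, lengths_um, positions_um):
    """Growth rate, displacement rate, and net displacement for one tracked cell."""
    times_h = np.asarray(times_h, dtype=float)
    lengths_um = np.asarray(lengths_um, dtype=float)
    positions_um = np.asarray(positions_um, dtype=float)       # shape (n_frames, 2)
    growth_rate = np.polyfit(times_h, lengths_um, 1)[0]        # um per hour
    steps = np.linalg.norm(np.diff(positions_um, axis=0), axis=1)
    displacement_rate = steps.sum() / (times_h[-1] - times_h[0])
    net = float(np.linalg.norm(positions_um[-1] - positions_um[0]))
    return {"growth_um_per_h": float(growth_rate),
            "displacement_um_per_h": float(displacement_rate),
            "net_displacement_um": net}

print(track_metrics([0, 1, 2, 3], [10, 11.2, 12.1, 13.4],
                    [(0, 0), (1, 0.5), (2.2, 1.0), (3.0, 1.8)]))
```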
Affiliation(s)
- Paul Hernández-Herrera: Laboratorio de Imágenes y Visión por Computadora, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Yamel Ugartechea-Chirino: Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Héctor H Torres-Martínez: Departamento de Biología Molecular de Plantas, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Alejandro V Arzola: Instituto de Física, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- José Eduardo Chairez-Veloz: Departamento de Control Automático, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Cd. de México, C.P. 07350, Mexico
- Berenice García-Ponce: Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- María de la Paz Sánchez: Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Adriana Garay-Arroyo: Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Elena R Álvarez-Buylla: Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico; Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Joseph G Dubrovsky: Departamento de Biología Molecular de Plantas, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Gabriel Corkidi: Laboratorio de Imágenes y Visión por Computadora, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
71. Scebba G, Zhang J, Catanzaro S, Mihai C, Distler O, Berli M, Karlen W. Detect-and-segment: A deep learning approach to automate wound image segmentation. Informatics in Medicine Unlocked 2022. [DOI: 10.1016/j.imu.2022.100884]
72. Watson ER, Taherian Fard A, Mar JC. Computational Methods for Single-Cell Imaging and Omics Data Integration. Front Mol Biosci 2022; 8:768106. [PMID: 35111809] [PMCID: PMC8801747] [DOI: 10.3389/fmolb.2021.768106]
Abstract
Integrating single cell omics and single cell imaging allows for a more effective characterisation of the underlying mechanisms that drive a phenotype at the tissue level, creating a comprehensive profile at the cellular level. Although the use of imaging data is well established in biomedical research, its primary application has been to observe phenotypes at the tissue or organ level, often using medical imaging techniques such as MRI, CT, and PET. These imaging technologies complement omics-based data in biomedical research because they are helpful for identifying associations between genotype and phenotype, along with functional changes occurring at the tissue level. Single cell imaging can act as an intermediary between these levels. Meanwhile new technologies continue to arrive that can be used to interrogate the genome of single cells and its related omics datasets. As these two areas, single cell imaging and single cell omics, each advance independently with the development of novel techniques, the opportunity to integrate these data types becomes more and more attractive. This review outlines some of the technologies and methods currently available for generating, processing, and analysing single-cell omics- and imaging data, and how they could be integrated to further our understanding of complex biological phenomena like ageing. We include an emphasis on machine learning algorithms because of their ability to identify complex patterns in large multidimensional data.
Affiliation(s)
- Atefeh Taherian Fard: Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
- Jessica Cara Mar: Australian Institute for Bioengineering and Nanotechnology, The University of Queensland, Brisbane, QLD, Australia
73. Robust optical flow algorithm for general single cell segmentation. PLoS One 2022; 17:e0261763. [PMID: 35030184] [PMCID: PMC8759635] [DOI: 10.1371/journal.pone.0261763]
Abstract
Cell segmentation is crucial to the field of cell biology, as the accurate extraction of single-cell morphology, migration, and ultimately behavior from time-lapse live cell imagery are of paramount importance to elucidate and understand basic cellular processes. In an effort to increase available segmentation tools that can perform across research groups and platforms, we introduce a novel segmentation approach centered around optical flow and show that it achieves robust segmentation of single cells by validating it on multiple cell types, phenotypes, optical modalities, and in-vitro environments with or without labels. By leveraging cell movement in time-lapse imagery as a means to distinguish cells from their background and augmenting the output with machine vision operations, our algorithm reduces the number of adjustable parameters needed for manual optimization to two. We show that this approach offers the advantage of quicker processing times compared to contemporary machine learning based methods that require manual labeling for training, and in most cases achieves higher quality segmentation as well. This algorithm is packaged within MATLAB, offering an accessible means for general cell segmentation in a time-efficient manner.
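The sketch below illustrates the core idea (in Python with OpenCV rather than the authors' MATLAB package, and with illustrative parameter values): pixels that move between consecutive frames are likely cells, so thresholding the optical-flow magnitude separates them from the static background; a morphological clean-up then removes speckle, mirroring the "two adjustable parameters" described above.

```python
import numpy as np
import cv2

def segment_by_motion(prev_frame, next_frame, mag_threshold=0.5, min_size=30):
    """Binary mask of moving objects between two 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_frame, next_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    mask = (magnitude > mag_threshold).astype(np.uint8)
    # Morphological clean-up: remove speckle and close small gaps inside cells.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    big = np.flatnonzero(stats[:, cv2.CC_STAT_AREA] >= min_size)
    big = big[big != 0]                   # drop the background label
    return np.isin(labels, big)
```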
74. Li W, Zhang X, Stern A, Birtwistle M, Iuricich F. CellTrackVis: analyzing the performance of cell tracking algorithms. Eurographics/IEEE VGTC Symposium on Visualization (EuroVis) 2022:115-119. [PMID: 36656607] [PMCID: PMC9841471] [DOI: 10.2312/evs.20221103]
Abstract
Live-cell imaging is a common data acquisition technique used by biologists to analyze cell behavior. Since manually tracking cells in a video sequence is extremely time-consuming, many automatic algorithms have been developed in the last twenty years to accomplish the task. However, none of these algorithms can yet claim robust tracking performance across varying acquisition conditions (e.g., cell type, acquisition device, cell treatments). While many visualization tools exist to help with cell behavior analysis, there are no tools to help with algorithm validation. This paper proposes CellTrackVis, a new visualization tool for evaluating cell tracking algorithms. CellTrackVis allows comparing automatically generated cell tracks with ground truth data to help biologists select the algorithm best suited to their experimental pipeline. Moreover, CellTrackVis can be used as a debugging tool while developing a new cell tracking algorithm, to investigate where, when, and why each tracking error occurred.
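For illustration only (this is not CellTrackVis), one simple way to score a tracker against ground truth is to compare the sets of frame-to-frame links; the identifier scheme below is an assumed example in which cells are already matched between the two track sets.

```python
def link_errors(gt_links, pred_links):
    """gt_links / pred_links: sets of (frame, cell_id, next_cell_id) tuples."""
    missed = gt_links - pred_links        # links the tracker failed to make
    spurious = pred_links - gt_links      # links it invented (e.g. identity switches)
    return {"missed": len(missed), "spurious": len(spurious),
            "recall": 1 - len(missed) / max(len(gt_links), 1)}

gt = {(0, "a", "a"), (0, "b", "b"), (1, "a", "a"), (1, "b", "b")}
pred = {(0, "a", "a"), (0, "b", "a"), (1, "a", "a"), (1, "b", "b")}
print(link_errors(gt, pred))   # one missed link and one spurious link
```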
Affiliation(s)
- W. Li: School of Computing, Clemson University, United States
- X. Zhang: School of Computing, Clemson University, United States
- A. Stern: Icahn School of Medicine at Mount Sinai, New York, United States
- M. Birtwistle: Department of Chemical and Biomolecular Engineering, Clemson University, United States
- F. Iuricich: School of Computing, Clemson University, United States
Collapse
|
75
|
O’Connor OM, Alnahhas RN, Lugagne JB, Dunlop MJ. DeLTA 2.0: A deep learning pipeline for quantifying single-cell spatial and temporal dynamics. PLoS Comput Biol 2022; 18:e1009797. [PMID: 35041653 PMCID: PMC8797229 DOI: 10.1371/journal.pcbi.1009797] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 01/28/2022] [Accepted: 12/25/2021] [Indexed: 12/04/2022] Open
Abstract
Improvements in microscopy software and hardware have dramatically increased the pace of image acquisition, making analysis a major bottleneck in generating quantitative, single-cell data. Although tools for segmenting and tracking bacteria within time-lapse images exist, most require human input, are specialized to the experimental set up, or lack accuracy. Here, we introduce DeLTA 2.0, a purely Python workflow that can rapidly and accurately analyze images of single cells on two-dimensional surfaces to quantify gene expression and cell growth. The algorithm uses deep convolutional neural networks to extract single-cell information from time-lapse images, requiring no human input after training. DeLTA 2.0 retains all the functionality of the original version, which was optimized for bacteria growing in the mother machine microfluidic device, but extends results to two-dimensional growth environments. Two-dimensional environments represent an important class of data because they are more straightforward to implement experimentally, they offer the potential for studies using co-cultures of cells, and they can be used to quantify spatial effects and multi-generational phenomena. However, segmentation and tracking are significantly more challenging tasks in two-dimensions due to exponential increases in the number of cells. To showcase this new functionality, we analyze mixed populations of antibiotic resistant and susceptible cells, and also track pole age and growth rate across generations. In addition to the two-dimensional capabilities, we also introduce several major improvements to the code that increase accessibility, including the ability to accept many standard microscopy file formats as inputs and the introduction of a Google Colab notebook so users can try the software without installing the code on their local machine. DeLTA 2.0 is rapid, with run times of less than 10 minutes for complete movies with hundreds of cells, and is highly accurate, with error rates around 1%, making it a powerful tool for analyzing time-lapse microscopy data.
Collapse
Affiliation(s)
- Owen M. O’Connor
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Biological Design Center, Boston University, Boston, Massachusetts, United States of America
| | - Razan N. Alnahhas
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Biological Design Center, Boston University, Boston, Massachusetts, United States of America
| | - Jean-Baptiste Lugagne
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Biological Design Center, Boston University, Boston, Massachusetts, United States of America
| | - Mary J. Dunlop
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America
- Biological Design Center, Boston University, Boston, Massachusetts, United States of America
| |
Collapse
|
76
|
Basak AK, Mirzaei M, Strzałka K, Yamada K. Texture feature extraction from microscope images enables a robust estimation of ER body phenotype in Arabidopsis. PLANT METHODS 2021; 17:109. [PMID: 34702318 PMCID: PMC8549183 DOI: 10.1186/s13007-021-00810-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Accepted: 10/17/2021] [Indexed: 05/31/2023]
Abstract
BACKGROUND Cellular components are controlled by genetic and physiological factors that define their shape and size. However, quantitatively capturing the morphological characteristics and movement of cellular organelles from micrograph images is challenging, because the analysis deals with complexities of images that frequently lead to inaccuracy in the estimation of the features. Here we present a quantitative method to overcome biases and inaccuracies when estimating these features from confocal micrographs of biological samples. RESULTS We generated 2D images of cell walls and spindle-shaped cellular organelles, namely ER bodies, with a maximum contrast projection of 3D confocal fluorescent microscope images. The projected images were further processed and segmented by adaptive thresholding of the fluorescent levels in the cell walls. Micrographs are composed of pixels, which carry information on position and intensity. From the pixel information we calculated three types of features (spatial, intensity and Haralick) for the ER bodies in each segmented cell. The spatial features include basic information on shape, e.g., surface area and perimeter. The intensity features include the mean, standard deviation and quantiles of fluorescence intensities within an ER body. Haralick features describe texture and are calculated mathematically from the interrelationships between pixel intensities. Together these parameters were subjected to multivariate analysis to estimate the morphological diversity. Additionally, we calculated the displacement of the ER bodies using the positional information in time-lapse images. We captured similar morphological diversity and movement within ER body phenotypes in several microscopy experiments performed in different settings and scanned under different objectives. We then described differences in morphology and movement of ER bodies between A. thaliana wild type and mutants deficient in ER body-related genes. CONCLUSIONS The findings unexpectedly revealed multiple genetic factors that are involved in the shape and size of ER bodies in A. thaliana. This is the first report showing morphological characteristics in addition to the movement of cellular components, and it quantitatively summarises plant phenotypic differences even in plants that show similar cellular components. The estimation of morphological diversity was independent of the cell staining method and the objective lens used in the microscopy. Hence, our study enables a robust estimation of plant phenotypes by recognizing small differences in complex cell organelle shapes and their movement, which is beneficial for a comprehensive analysis of the molecular mechanisms of cell organelle formation that is independent of technical variations.
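For illustration, the three feature classes named in this abstract (spatial, intensity, and Haralick texture) can be computed per segmented object with scikit-image roughly as follows; this is not the authors' pipeline, and the `label_img`/`intensity_img` inputs are assumed to come from a prior segmentation step:

```python
# Illustrative sketch only; feature choices and normalisation are assumptions.
import numpy as np
from skimage.measure import regionprops
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in scikit-image < 0.19

def er_body_features(label_img, intensity_img):
    rows = []
    for region in regionprops(label_img, intensity_image=intensity_img):
        inside = region.intensity_image[region.image]        # intensities inside the object mask
        patch = np.clip((region.intensity_image.astype(float) - inside.min())
                        / (np.ptp(inside) + 1e-9), 0, 1)
        patch = (patch * 255).astype(np.uint8)
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        rows.append({
            "area": region.area,                              # spatial features
            "perimeter": region.perimeter,
            "mean_int": inside.mean(),                        # intensity features
            "std_int": inside.std(),
            "q90_int": np.quantile(inside, 0.9),
            "contrast": graycoprops(glcm, "contrast").mean(),      # Haralick-type texture
            "homogeneity": graycoprops(glcm, "homogeneity").mean(),
        })
    return rows
```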
Collapse
Affiliation(s)
- Arpan Kumar Basak
- Faculty of Biology, Jagiellonian University, Krakow, Poland
- Malopolska Centre of Biotechnology, Jagiellonian University, Krakow, Poland
| | | | - Kazimierz Strzałka
- Malopolska Centre of Biotechnology, Jagiellonian University, Krakow, Poland
- Faculty of Biochemistry, Biophysics and Biotechnology, Department of Plant Physiology and Biochemistry, Jagiellonian University, Krakow, Poland
| | - Kenji Yamada
- Malopolska Centre of Biotechnology, Jagiellonian University, Krakow, Poland.
| |
Collapse
|
77
|
Ulicna K, Vallardi G, Charras G, Lowe AR. Automated Deep Lineage Tree Analysis Using a Bayesian Single Cell Tracking Approach. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2021.734559] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Single-cell methods are beginning to reveal the intrinsic heterogeneity in cell populations, arising from the interplay of deterministic and stochastic processes. However, it remains challenging to quantify single-cell behaviour from time-lapse microscopy data, owing to the difficulty of extracting reliable cell trajectories and lineage information over long time-scales and across several generations. Therefore, we developed a hybrid deep learning and Bayesian cell tracking approach to reconstruct lineage trees from live-cell microscopy data. We implemented a residual U-Net model coupled with a classification CNN to allow accurate instance segmentation of the cell nuclei. To track the cells over time and through cell divisions, we developed a Bayesian cell tracking methodology that uses input features from the images to enable the retrieval of multi-generational lineage information from a corpus of thousands of hours of live-cell imaging data. Using our approach, we extracted 20,000+ fully annotated single-cell trajectories from over 3,500 h of video footage, organised into multi-generational lineage trees spanning up to eight generations and fourth-cousin distances. Benchmarking tests, including lineage tree reconstruction assessments, demonstrate that our approach yields high-fidelity results with our data, with minimal requirement for manual curation. To demonstrate the robustness of our minimally supervised cell tracking methodology, we retrieve cell cycle durations and their extended inter- and intra-generational family relationships in 5,000+ fully annotated cell lineages. We observe vanishing cycle duration correlations across ancestral relatives, yet reveal correlated cycle durations between cells sharing the same generation in extended lineages. These findings expand the depth and breadth of investigated cell lineage relationships, using approximately two orders of magnitude more data than previous studies of cell cycle heritability, which were reliant on semi-manual lineage data analysis.
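The inter- and intra-generational correlations described here can be illustrated with a small, hypothetical lineage table (this is not the authors' code); once each tracked cell carries its mother's identifier and its cycle duration, mother-daughter and sister-sister correlations follow from simple joins:

```python
# Illustrative sketch with made-up numbers.
import pandas as pd

cells = pd.DataFrame({
    "cell_id":   [1, 2, 3, 4, 5, 6, 7],
    "mother_id": [None, 1, 1, 2, 2, 3, 3],
    "cycle_h":   [16.0, 15.2, 17.1, 14.8, 15.5, 16.9, 17.4],
})

# Mother-daughter pairs: join each cell to its mother and correlate cycle durations
pairs = cells.merge(cells, left_on="mother_id", right_on="cell_id",
                    suffixes=("_daughter", "_mother"))
print("mother-daughter r =", pairs["cycle_h_daughter"].corr(pairs["cycle_h_mother"]))

# Sister pairs: cells of the same generation sharing a mother
sisters = cells.dropna(subset=["mother_id"]).merge(cells, on="mother_id", suffixes=("_a", "_b"))
sisters = sisters[sisters["cell_id_a"] < sisters["cell_id_b"]]
print("sister-sister r =", sisters["cycle_h_a"].corr(sisters["cycle_h_b"]))
```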
Collapse
|
78
|
Yeast cell segmentation in microstructured environments with deep learning. Biosystems 2021; 211:104557. [PMID: 34634444 DOI: 10.1016/j.biosystems.2021.104557] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 09/09/2021] [Accepted: 09/30/2021] [Indexed: 11/23/2022]
Abstract
Cell segmentation is a major bottleneck in extracting quantitative single-cell information from microscopy data. The challenge is exacerbated in the setting of microstructured environments. While deep learning approaches have proven useful for general cell segmentation tasks, previously available segmentation tools for the yeast-microstructure setting rely on traditional machine learning approaches. Here we present convolutional neural networks trained for multiclass segmentation of individual yeast cells and for discerning these from cell-similar microstructures. A U-Net-based semantic segmentation approach is demonstrated, as well as a direct instance segmentation approach with a Mask R-CNN. We give an overview of the datasets recorded for training, validating and testing the networks, as well as a typical use-case. We showcase the methods' contribution to segmenting yeast in microstructured environments with a typical systems or synthetic biology application. The models achieve robust segmentation results, outperforming the previous state-of-the-art in both accuracy and speed. The combination of fast and accurate segmentation is not only beneficial for a posteriori data processing, it also makes online monitoring of thousands of trapped cells or closed-loop optimal experimental design feasible from an image processing perspective. Code and data samples are available at https://git.rwth-aachen.de/bcs/projects/tp/multiclass-yeast-seg.
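To illustrate how a multiclass semantic output of this kind can be reduced to per-cell instances while ignoring the microstructure class, a generic post-processing step might look as follows (not the published code; the class ordering, the random stand-in for the network output, and the watershed parameters are assumptions):

```python
# Illustrative sketch only.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

rng = np.random.default_rng(0)
logits = rng.random((3, 256, 256))      # stand-in for network output: background / cell / microstructure

CLASS_CELL = 1                          # assumed class order
class_map = np.argmax(logits, axis=0)
cell_mask = class_map == CLASS_CELL     # microstructure pixels (class 2) are simply discarded

# Split touching cells with a distance-transform watershed
distance = ndi.distance_transform_edt(cell_mask)
peaks = peak_local_max(distance, min_distance=5, labels=cell_mask)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
instances = watershed(-distance, markers, mask=cell_mask)
print(f"{instances.max()} cell instances")
```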
Collapse
|
79
|
Bao R, Al-Shakarji NM, Bunyak F, Palaniappan K. DMNet: Dual-Stream Marker Guided Deep Network for Dense Cell Segmentation and Lineage Tracking. ... IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2021; 2021:3354-3363. [PMID: 35386855 PMCID: PMC8982054 DOI: 10.1109/iccvw54120.2021.00375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Accurate segmentation and tracking of cells in microscopy image sequences is extremely beneficial in clinical diagnostic applications and biomedical research. A continuing challenge is the segmentation of dense touching cells and deforming cells with indistinct boundaries in low signal-to-noise-ratio images. In this paper, we present a dual-stream marker-guided network (DMNet) for segmentation of touching cells in microscopy videos of many cell types. DMNet uses an explicit cell marker-detection stream, with a separate mask-prediction stream using a distance map penalty function, which enables supervised training to focus attention on touching and nearby cells. For multi-object cell tracking we use the M2Track tracking-by-detection approach with multi-step data association. Our M2Track with mask overlap includes short-term track-to-cell association followed by track-to-track association to re-link tracklets with missing segmentation masks over a short sequence of frames. Our combined detection, segmentation and tracking algorithm has proven its potential on the IEEE ISBI 2021 6th Cell Tracking Challenge (CTC-6), where we achieved multiple top-three rankings for diverse cell types. Our team name is MU-Ba-US, and the implementation of DMNet is available at http://celltrackingchallenge.net/participants/MU-Ba-US/.
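The distance-map penalty idea can be sketched with a U-Net-style weight map that up-weights pixels lying close to two different cells, so the loss concentrates on touching boundaries; this is a generic stand-in for the mechanism, not the DMNet implementation, and the constants are arbitrary:

```python
# Illustrative sketch only.
import numpy as np
import torch
import torch.nn.functional as F
from scipy import ndimage as ndi

def touching_cell_weights(instance_labels, w0=10.0, sigma=5.0):
    """Per-pixel weights from the distances to the nearest and second-nearest cell."""
    dists = [ndi.distance_transform_edt(instance_labels != lab)
             for lab in np.unique(instance_labels) if lab != 0]
    if len(dists) < 2:
        return np.ones_like(instance_labels, dtype=np.float32)
    d = np.sort(np.stack(dists), axis=0)
    d1, d2 = d[0], d[1]
    return (1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))).astype(np.float32)

def weighted_mask_loss(pred_logits, target_mask, weight_map):
    """Pixel-weighted binary cross-entropy for the mask-prediction stream (all torch tensors)."""
    loss = F.binary_cross_entropy_with_logits(pred_logits, target_mask, reduction="none")
    return (loss * weight_map).mean()
```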
Collapse
Affiliation(s)
- Rina Bao
- University of Missouri-Columbia, MO 65211, USA
| | | | | | | |
Collapse
|
80
|
Hallou A, Yevick HG, Dumitrascu B, Uhlmann V. Deep learning for bioimage analysis in developmental biology. Development 2021; 148:dev199616. [PMID: 34490888 PMCID: PMC8451066 DOI: 10.1242/dev.199616] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Deep learning has transformed the way large and complex image datasets can be processed, reshaping what is possible in bioimage analysis. As the complexity and size of bioimage data continues to grow, this new analysis paradigm is becoming increasingly ubiquitous. In this Review, we begin by introducing the concepts needed for beginners to understand deep learning. We then review how deep learning has impacted bioimage analysis and explore the open-source resources available to integrate it into a research project. Finally, we discuss the future of deep learning applied to cell and developmental biology. We analyze how state-of-the-art methodologies have the potential to transform our understanding of biological systems through new image-based analysis and modelling that integrate multimodal inputs in space and time.
Collapse
Affiliation(s)
- Adrien Hallou
- Cavendish Laboratory, Department of Physics, University of Cambridge, Cambridge, CB3 0HE, UK
- Wellcome Trust/Cancer Research UK Gurdon Institute, University of Cambridge, Cambridge, CB2 1QN, UK
- Wellcome Trust/Medical Research Council Stem Cell Institute, University of Cambridge, Cambridge, CB2 1QR, UK
| | - Hannah G. Yevick
- Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, 02142, USA
| | - Bianca Dumitrascu
- Computer Laboratory, University of Cambridge, Cambridge, CB3 0FD, UK
| | - Virginie Uhlmann
- European Bioinformatics Institute, European Molecular Biology Laboratory, Cambridge, CB10 1SD, UK
| |
Collapse
|
81
|
Pérez-Dones D, Ledesma-Terrón M, Míguez DG. Quantitative Approaches to Study Retinal Neurogenesis. Biomedicines 2021; 9:1222. [PMID: 34572408 PMCID: PMC8471905 DOI: 10.3390/biomedicines9091222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 09/07/2021] [Accepted: 09/11/2021] [Indexed: 11/16/2022] Open
Abstract
The study of the development of the vertebrate retina can be addressed from several perspectives: from a purely qualitative to a more quantitative approach that takes into account its spatio-temporal features, its three-dimensional structure and also the regulation and properties at the systems level. Here, we review the ongoing transition toward a full four-dimensional characterization of the developing vertebrate retina, focusing on the challenges at the experimental, image acquisition, image processing and quantification stages. Using the developing zebrafish retina, we illustrate how quantitative data extracted from this type of highly dense, three-dimensional tissue depend strongly on the image quality and on the image processing and algorithms used to segment and quantify. Therefore, we propose that the scientific community that focuses on developmental systems could strongly benefit from a more detailed disclosure of the tools and pipelines used to process and analyze images from biological samples.
Collapse
Affiliation(s)
- Diego Pérez-Dones
- Centro de Biología Molecular Severo Ochoa, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Física de la Materia Condensada (IFIMAC), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain
| | - Mario Ledesma-Terrón
- Centro de Biología Molecular Severo Ochoa, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Física de la Materia Condensada (IFIMAC), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain
| | - David G Míguez
- Centro de Biología Molecular Severo Ochoa, Universidad Autónoma de Madrid, 28049 Madrid, Spain
- Física de la Materia Condensada (IFIMAC), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain
| |
Collapse
|
82
|
Panigrahi S, Murat D, Le Gall A, Martineau E, Goldlust K, Fiche JB, Rombouts S, Nöllmann M, Espinosa L, Mignot T. Misic, a general deep learning-based method for the high-throughput cell segmentation of complex bacterial communities. eLife 2021; 10:65151. [PMID: 34498586 PMCID: PMC8478410 DOI: 10.7554/elife.65151] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Accepted: 09/07/2021] [Indexed: 02/01/2023] Open
Abstract
Studies of bacterial communities, biofilms and microbiomes are multiplying due to their impact on health and ecology. Live imaging of microbial communities requires new tools for the robust identification of bacterial cells in dense and often inter-species populations, sometimes over very large scales. Here, we developed MiSiC, a general deep-learning-based 2D segmentation method that automatically segments single bacteria in complex images of interacting bacterial communities with very little parameter adjustment, independent of the microscopy settings and imaging modality. Using a bacterial predator-prey interaction model, we demonstrate that MiSiC enables the analysis of interspecies interactions, resolving processes at subcellular scales and discriminating between species in millimeter-sized datasets. The simple implementation of MiSiC and its relatively low computing requirements make its use broadly accessible to fields interested in bacterial interactions and cell biology.
Collapse
Affiliation(s)
- Swapnesh Panigrahi
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| | - Dorothée Murat
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| | - Antoine Le Gall
- Centre de Biochimie Structurale, CNRS UMR 5048, INSERM U1054, Université de Montpellier, Montpellier, France
| | - Eugénie Martineau
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| | - Kelly Goldlust
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| | - Jean-Bernard Fiche
- Centre de Biochimie Structurale, CNRS UMR 5048, INSERM U1054, Université de Montpellier, Montpellier, France
| | - Sara Rombouts
- Centre de Biochimie Structurale, CNRS UMR 5048, INSERM U1054, Université de Montpellier, Montpellier, France
| | - Marcelo Nöllmann
- Centre de Biochimie Structurale, CNRS UMR 5048, INSERM U1054, Université de Montpellier, Montpellier, France
| | - Leon Espinosa
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| | - Tâm Mignot
- CNRS-Aix-Marseille University, Laboratoire de Chimie Bactérienne, Institut de Microbiologie de la Méditerranée and Turing Center for Living Systems, Marseille, France
| |
Collapse
|
83
|
Jeckel H, Drescher K. Advances and opportunities in image analysis of bacterial cells and communities. FEMS Microbiol Rev 2021; 45:fuaa062. [PMID: 33242074 PMCID: PMC8371272 DOI: 10.1093/femsre/fuaa062] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 11/20/2020] [Indexed: 12/16/2022] Open
Abstract
The cellular morphology and sub-cellular spatial structure critically influence the function of microbial cells. Similarly, the spatial arrangement of genotypes and phenotypes in microbial communities has important consequences for cooperation, competition, and community functions. Fluorescence microscopy techniques are widely used to measure spatial structure inside living cells and communities, which often results in large numbers of images that are difficult or impossible to analyze manually. The rapidly evolving progress in computational image analysis has recently enabled the quantification of a large number of properties of single cells and communities, based on traditional analysis techniques and convolutional neural networks. Here, we provide a brief introduction to core concepts of automated image processing, recent software tools and how to validate image analysis results. We also discuss recent advances in image analysis of microbial cells and communities, and how these advances open up opportunities for quantitative studies of spatiotemporal processes in microbiology, based on image cytometry and adaptive microscope control.
Collapse
Affiliation(s)
- Hannah Jeckel
- Max Planck Institute for Terrestrial Microbiology, Karl-von-Frisch-Str. 16, 35043 Marburg, Germany
- Department of Physics, Philipps-Universität Marburg, Karl-von-Frisch-Str. 16, 35043 Marburg, Germany
| | - Knut Drescher
- Max Planck Institute for Terrestrial Microbiology, Karl-von-Frisch-Str. 16, 35043 Marburg, Germany
- Department of Physics, Philipps-Universität Marburg, Karl-von-Frisch-Str. 16, 35043 Marburg, Germany
- Synmikro Center for Synthetic Microbiology, Karl-von-Frisch-Str. 16, 35043 Marburg, Germany
| |
Collapse
|
84
|
Fernandez-Gonzalez R, Balaghi N, Wang K, Hawkins R, Rothenberg K, McFaul C, Schimmer C, Ly M, do Carmo AM, Scepanovic G, Erdemci-Tandogan G, Castle V. PyJAMAS: open-source, multimodal segmentation and analysis of microscopy images. Bioinformatics 2021; 38:594-596. [PMID: 34390579 PMCID: PMC8722751 DOI: 10.1093/bioinformatics/btab589] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 07/16/2021] [Accepted: 08/12/2021] [Indexed: 02/03/2023] Open
Abstract
SUMMARY Our increasing ability to resolve fine details using light microscopy is matched by an increasing need to quantify images in order to detect and measure phenotypes. Despite their central role in cell biology, many image analysis tools require a financial investment, are released as proprietary software, or are implemented in languages not friendly for beginners, and thus are used as black boxes. To overcome these limitations, we have developed PyJAMAS, an open-source tool for image processing and analysis written in Python. PyJAMAS provides a variety of segmentation tools, including watershed and machine learning-based methods; takes advantage of Jupyter notebooks for the display and reproducibility of data analyses; and can be used through a cross-platform graphical user interface or as part of Python scripts via a comprehensive application programming interface. AVAILABILITY AND IMPLEMENTATION PyJAMAS is open-source and available at https://bitbucket.org/rfg_lab/pyjamas. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
| | - Negar Balaghi
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Kelly Wang
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Ray Hawkins
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Katheryn Rothenberg
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Christopher McFaul
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Clara Schimmer
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Michelle Ly
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Ana Maria do Carmo
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Gordana Scepanovic
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto, ON M5S 3G5, Canada
| | - Gonca Erdemci-Tandogan
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| | - Veronica Castle
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Translational Biology and Engineering Program, Ted Rogers Centre for Heart Research, University of Toronto, Toronto, ON M5G 1M1, Canada
| |
Collapse
|
85
|
Steiner UK. Senescence in Bacteria and Its Underlying Mechanisms. Front Cell Dev Biol 2021; 9:668915. [PMID: 34222238 PMCID: PMC8249858 DOI: 10.3389/fcell.2021.668915] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Accepted: 05/14/2021] [Indexed: 12/11/2022] Open
Abstract
Bacteria have been thought to flee senescence by dividing into two identical daughter cells, but this notion of immortality has changed over the last two decades. Asymmetry between the resulting daughter cells after binary fission is revealed in physiological function, cell growth, and survival probabilities and is expected from theoretical understanding. Since the discovery of senescence in morphologically identical but physiologically asymmetrically dividing bacteria, the mechanisms of bacterial aging have been explored across levels of biological organization. Quantitative investigations are heavily biased toward Escherichia coli and toward the role of inclusion bodies (clusters of misfolded proteins). Despite intensive efforts to date, it is not evident if and how inclusion bodies, a phenotype linked to the loss of proteostasis and one of the consequences of a chain of reactions triggered by reactive oxygen species, contribute to senescence in bacteria. Recent findings in bacteria question whether inclusion bodies are only deleterious, as illustrated by fitness advantages of cells holding inclusion bodies under varying environmental conditions. The contributions of other hallmarks of aging, identified for metazoans, remain elusive. For instance, genomic instability appears to be age independent, epigenetic alterations might show little age specificity, and other hallmarks do not play a major role in bacterial systems. What is surprising is that, on the one hand, classical senescence patterns, such as an early exponential increase in mortality followed by late-age mortality plateaus, are found, but, on the other hand, identifying mechanisms that link to these patterns is challenging. Senescence patterns are sensitive to environmental conditions and to genetic background, even within species, which suggests diverse evolutionary selective forces on senescence that go beyond the generalized expectations of classical evolutionary theories of aging. These challenges are surprising given the molecular toolkits available in bacteria, the tight control of experimental conditions, high-throughput data collection using microfluidic systems, and the ease of live-cell imaging of fluorescently marked transcription, translation, and proteomic dynamics, in combination with the simple demographics of bacterial growth, division, and mortality. The diversity of mechanisms and patterns revealed, and their environmental dependencies, not only present challenges but also open exciting opportunities for the discovery and deeper understanding of aging and its mechanisms, perhaps even beyond bacteria.
Collapse
Affiliation(s)
- Ulrich Karl Steiner
- Evolutionary Demography Group, Institute of Biology, Freie Universität Berlin, Berlin, Germany
| |
Collapse
|
86
|
Mattiazzi Usaj M, Yeung CHL, Friesen H, Boone C, Andrews BJ. Single-cell image analysis to explore cell-to-cell heterogeneity in isogenic populations. Cell Syst 2021; 12:608-621. [PMID: 34139168 PMCID: PMC9112900 DOI: 10.1016/j.cels.2021.05.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2020] [Revised: 04/26/2021] [Accepted: 05/12/2021] [Indexed: 12/26/2022]
Abstract
Single-cell image analysis provides a powerful approach for studying cell-to-cell heterogeneity, which is an important attribute of isogenic cell populations, from microbial cultures to individual cells in multicellular organisms. This phenotypic variability must be explained at a mechanistic level if biologists are to fully understand cellular function and address the genotype-to-phenotype relationship. Variability in single-cell phenotypes is obscured by bulk readouts or averaging of phenotypes from individual cells in a sample; thus, single-cell image analysis enables a higher resolution view of cellular function. Here, we consider examples of both small- and large-scale studies carried out with isogenic cell populations assessed by fluorescence microscopy, and we illustrate the advantages, challenges, and the promise of quantitative single-cell image analysis.
Collapse
Affiliation(s)
- Mojca Mattiazzi Usaj
- Department of Chemistry and Biology, Ryerson University, Toronto, ON M5B 2K3, Canada
| | - Clarence Hue Lok Yeung
- The Donnelly Centre, University of Toronto, Toronto, ON M5S 3E1, Canada; Department of Molecular Genetics, University of Toronto, Toronto, ON M5S 3E1, Canada
| | - Helena Friesen
- The Donnelly Centre, University of Toronto, Toronto, ON M5S 3E1, Canada
| | - Charles Boone
- The Donnelly Centre, University of Toronto, Toronto, ON M5S 3E1, Canada; Department of Molecular Genetics, University of Toronto, Toronto, ON M5S 3E1, Canada; RIKEN Centre for Sustainable Resource Science, Wako, Saitama 351-0198, Japan
| | - Brenda J Andrews
- The Donnelly Centre, University of Toronto, Toronto, ON M5S 3E1, Canada; Department of Molecular Genetics, University of Toronto, Toronto, ON M5S 3E1, Canada.
| |
Collapse
|
87
|
Cottle L, Gilroy I, Deng K, Loudovaris T, Thomas HE, Gill AJ, Samra JS, Kebede MA, Kim J, Thorn P. Machine Learning Algorithms, Applied to Intact Islets of Langerhans, Demonstrate Significantly Enhanced Insulin Staining at the Capillary Interface of Human Pancreatic β Cells. Metabolites 2021; 11:metabo11060363. [PMID: 34200432 PMCID: PMC8229564 DOI: 10.3390/metabo11060363] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 05/27/2021] [Accepted: 05/28/2021] [Indexed: 11/16/2022] Open
Abstract
Pancreatic β cells secrete the hormone insulin into the bloodstream and are critical in the control of blood glucose concentrations. β cells are clustered in the micro-organs of the islets of Langerhans, which have a rich capillary network. Recent work has highlighted the intimate spatial connections between β cells and these capillaries, which lead to the targeting of insulin secretion to the region where the β cells contact the capillary basement membrane. In addition, β cells orientate with respect to the capillary contact point and many proteins are differentially distributed at the capillary interface compared with the rest of the cell. Here, we set out to develop an automated image analysis approach to identify individual β cells within intact islets and to determine if the distribution of insulin across the cells was polarised. Our results show that a U-Net machine learning algorithm correctly identified β cells and their orientation with respect to the capillaries. Using this information, we then quantified insulin distribution across the β cells to show enrichment at the capillary interface. We conclude that machine learning is a useful analytical tool to interrogate large image datasets and analyse sub-cellular organisation.
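One way to express the enrichment measurement described here is a simple ratio of marker intensity in cell pixels near the capillary versus the rest of the cell; the sketch below is an illustrative calculation (the masks, the insulin channel, and the 3-pixel interface width are assumptions), not the authors' U-Net pipeline:

```python
# Illustrative sketch only.
import numpy as np
from scipy import ndimage as ndi

def interface_enrichment(cell_mask, capillary_mask, insulin, width_px=3):
    """Ratio > 1 means the marker is enriched at the capillary-facing side of the cell."""
    dist_to_capillary = ndi.distance_transform_edt(~capillary_mask)
    interface = cell_mask & (dist_to_capillary <= width_px)   # cell pixels at the contact zone
    rest = cell_mask & ~interface
    if interface.sum() == 0 or rest.sum() == 0:
        return np.nan
    return insulin[interface].mean() / insulin[rest].mean()
```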
Collapse
Affiliation(s)
- Louise Cottle
- Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
| | - Ian Gilroy
- School of Computer Science, University of Sydney, Camperdown 2006, Australia
| | - Kylie Deng
- Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
| | | | - Helen E Thomas
- St Vincent's Institute, Fitzroy 3065, Australia
- Department of Medicine, St Vincent's Hospital, University of Melbourne, Fitzroy 3065, Australia
| | - Anthony J Gill
- Northern Clinical School, University of Sydney, St Leonards 2065, Australia
- Department of Anatomical Pathology, Royal North Shore Hospital, St Leonards 2065, Australia
- Cancer Diagnosis and Pathology Research Group, Kolling Institute of Medical Research, St Leonards 2065, Australia
| | - Jaswinder S Samra
- Northern Clinical School, University of Sydney, St Leonards 2065, Australia
- Upper Gastrointestinal Surgical Unit, Royal North Shore Hospital, St Leonards 2065, Australia
| | - Melkam A Kebede
- Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
| | - Jinman Kim
- School of Computer Science, University of Sydney, Camperdown 2006, Australia
| | - Peter Thorn
- Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
| |
Collapse
|
88
|
Liu Z, Jin L, Chen J, Fang Q, Ablameyko S, Yin Z, Xu Y. A survey on applications of deep learning in microscopy image analysis. Comput Biol Med 2021; 134:104523. [PMID: 34091383 DOI: 10.1016/j.compbiomed.2021.104523] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 05/13/2021] [Accepted: 05/17/2021] [Indexed: 01/12/2023]
Abstract
Advanced microscopy enables us to acquire large quantities of time-lapse images to visualize the dynamic characteristics of tissues, cells or molecules. Microscopy images typically vary in signal-to-noise ratio and include a wealth of information which requires multiple parameters and time-consuming iterative algorithms for processing. Precise analysis and statistical quantification are often needed to understand the biological mechanisms underlying these dynamic image sequences, which has become a major challenge in the field. As deep learning technologies develop quickly, they have been applied in bioimage processing more and more frequently. Novel deep learning models based on convolutional neural networks have been developed and shown to achieve inspiring outcomes. This review article introduces the applications of deep learning algorithms in microscopy image analysis, which include image classification, region segmentation, object tracking and super-resolution reconstruction. We also discuss the drawbacks of existing deep learning-based methods, especially the challenges of training dataset acquisition and evaluation, and propose potential solutions. Furthermore, the latest developments in augmented intelligent microscopy based on deep learning technology may lead to a revolution in biomedical research.
Collapse
Affiliation(s)
- Zhichao Liu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Luhong Jin
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Jincheng Chen
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China
| | - Qiuyu Fang
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China
| | - Sergey Ablameyko
- National Academy of Sciences, United Institute of Informatics Problems, Belarusian State University, Minsk, 220012, Belarus
| | - Zhaozheng Yin
- AI Institute, Department of Biomedical Informatics and Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
| | - Yingke Xu
- Department of Biomedical Engineering, MOE Key Laboratory of Biomedical Engineering, State Key Laboratory of Modern Optical Instrumentation, Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, 310027, China; Department of Endocrinology, The Affiliated Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, 310016, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Hangzhou, 310058, China.
| |
Collapse
|
89
|
Deep Learning and Transfer Learning for Automatic Cell Counting in Microscope Images of Human Cancer Cell Lines. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11114912] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
In biology and medicine, cell counting is one of the most important elements of cytometry, with applications to research and clinical practice. For instance, the complete cell count could help to determine the conditions under which cancer cells can or cannot grow. However, cell counting is a laborious and time-consuming process, and its automation is in high demand. Here, we propose the use of a Convolutional Neural Network-based regressor, a regression model trained end-to-end, to provide the cell count. First, unlike most of the related work, we formulate the problem of cell counting as a regression task rather than a classification task. This not only reduces the required annotation information (i.e., the number of cells instead of pixel-level annotations) but also removes the burden of segmenting potential cells and then classifying them. Second, we propose the use of xResNet, a successful convolutional architecture with residual connections, together with transfer learning (using a pretrained model) to achieve human-level performance. We demonstrate the performance of our approach on real-life data from two cell lines, human osteosarcoma and human leukemia, collected at the University of Amsterdam (133 training images and 32 test images). We show that the proposed method (deep learning with transfer learning) outperforms currently used machine learning methods, achieving a test mean absolute error of 12 (±15), against 32 (±33) for deep learning without transfer learning and 41 (±37) for the best-performing classical machine learning pipeline (Random Forest regression with Histogram of Gradients features).
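The counting-by-regression formulation can be sketched with a pretrained torchvision backbone and a single-output head; this illustrates the general approach rather than the paper's xResNet/fastai setup, data loading is omitted, and the weights API shown requires torchvision 0.13 or later:

```python
# Illustrative sketch only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # transfer learning
model.fc = nn.Linear(model.fc.in_features, 1)                           # regression head: cell count

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(images, counts):
    """images: (B, 3, H, W) float tensor; counts: (B,) float tensor of annotated cell numbers."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images).squeeze(1), counts)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def mean_absolute_error(images, counts):
    model.eval()
    return (model(images).squeeze(1) - counts).abs().mean().item()
```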
Collapse
|
90
|
Pedone E, de Cesare I, Zamora-Chimal CG, Haener D, Postiglione L, La Regina A, Shannon B, Savery NJ, Grierson CS, di Bernardo M, Gorochowski TE, Marucci L. Cheetah: A Computational Toolkit for Cybergenetic Control. ACS Synth Biol 2021; 10:979-989. [PMID: 33904719 DOI: 10.1021/acssynbio.0c00463] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Advances in microscopy, microfluidics, and optogenetics enable single-cell monitoring and environmental regulation and offer the means to control cellular phenotypes. The development of such systems is challenging and often results in bespoke setups that hinder reproducibility. To address this, we introduce Cheetah, a flexible computational toolkit that simplifies the integration of real-time microscopy analysis with algorithms for cellular control. Central to the platform is an image segmentation system based on the versatile U-Net convolutional neural network. This is supplemented with functionality to robustly count, characterize, and control cells over time. We demonstrate Cheetah's core capabilities by analyzing long-term bacterial and mammalian cell growth and by dynamically controlling protein expression in mammalian cells. In all cases, Cheetah's segmentation accuracy exceeds that of a commonly used thresholding-based method, allowing for more accurate control signals to be generated. Availability of this easy-to-use platform will make control engineering techniques more accessible and offer new ways to probe and manipulate living cells.
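The closed-loop principle described here (segment each frame, derive a population readout, and compute a control input toward a setpoint) can be sketched generically as below; the segmentation and actuation calls are placeholders and this is not Cheetah's API:

```python
# Illustrative sketch only.
import numpy as np

def mean_cell_fluorescence(mask, fluor):
    """Average reporter fluorescence over all segmented cell pixels."""
    return float(fluor[mask > 0].mean()) if mask.any() else 0.0

def make_pi_controller(setpoint, kp=0.5, ki=0.05):
    integral = 0.0
    def step(measurement, dt=1.0):
        nonlocal integral
        error = setpoint - measurement
        integral += error * dt
        return float(np.clip(kp * error + ki * integral, 0.0, 100.0))  # e.g. % light intensity
    return step

controller = make_pi_controller(setpoint=50.0)
# For each newly acquired frame (pseudocode):
#   mask = segment(frame)                                   # e.g. a U-Net forward pass
#   u = controller(mean_cell_fluorescence(mask, reporter_channel))
#   apply_stimulus(u)                                       # hardware-specific actuation
```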
Collapse
Affiliation(s)
- Elisa Pedone
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
- School of Cellular and Molecular Medicine, University of Bristol, Biomedical Sciences Building, University Walk, BS8 1TD Bristol, United Kingdom
| | - Irene de Cesare
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
| | - Criseida G. Zamora-Chimal
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
| | - David Haener
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
| | - Lorena Postiglione
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
| | - Antonella La Regina
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
- School of Cellular and Molecular Medicine, University of Bristol, Biomedical Sciences Building, University Walk, BS8 1TD Bristol, United Kingdom
| | - Barbara Shannon
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
- School of Biochemistry, University of Bristol, Biomedical Sciences Building, University Walk, BS8 1TD Bristol, United Kingdom
| | - Nigel J. Savery
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
- School of Biochemistry, University of Bristol, Biomedical Sciences Building, University Walk, BS8 1TD Bristol, United Kingdom
| | - Claire S. Grierson
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
- School of Biological Sciences, University of Bristol, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
| | - Mario di Bernardo
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
- Department of EE and ICT, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
| | - Thomas E. Gorochowski
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
- School of Biological Sciences, University of Bristol, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
| | - Lucia Marucci
- Department of Engineering Mathematics, University of Bristol, Ada Lovelace Building, University Walk, BS8 1TW Bristol, United Kingdom
- School of Cellular and Molecular Medicine, University of Bristol, Biomedical Sciences Building, University Walk, BS8 1TD Bristol, United Kingdom
- BrisSynBio, Life Sciences Building, Tyndall Avenue, BS8 1TQ Bristol, United Kingdom
| |
Collapse
|
91
|
Pratapa A, Doron M, Caicedo JC. Image-based cell phenotyping with deep learning. Curr Opin Chem Biol 2021; 65:9-17. [PMID: 34023800 DOI: 10.1016/j.cbpa.2021.04.001] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 04/10/2021] [Indexed: 12/25/2022]
Abstract
A cell's phenotype is the culmination of several cellular processes through a complex network of molecular interactions that ultimately result in a unique morphological signature. Visual cell phenotyping is the characterization and quantification of these observable cellular traits in images. Recently, cellular phenotyping has undergone a massive overhaul in terms of scale, resolution, and throughput, which is attributable to advances across electronic, optical, and chemical technologies for imaging cells. Coupled with the rapid acceleration of deep learning-based computational tools, these advances have opened up new avenues for innovation across a wide variety of high-throughput cell biology applications. Here, we review applications wherein deep learning is powering the recognition, profiling, and prediction of visual phenotypes to answer important biological questions. As the complexity and scale of imaging assays increase, deep learning offers computational solutions to elucidate the details of previously unexplored cellular phenotypes.
Collapse
|
92
|
Multi-layer segmentation framework for cell nuclei using improved GVF Snake model, Watershed, and ellipse fitting. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102516] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
93
|
Hardo G, Bakshi S. Challenges of analysing stochastic gene expression in bacteria using single-cell time-lapse experiments. Essays Biochem 2021; 65:67-79. [PMID: 33835126 PMCID: PMC8056041 DOI: 10.1042/ebc20200015] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 03/02/2021] [Accepted: 03/04/2021] [Indexed: 02/07/2023]
Abstract
Stochastic gene expression causes phenotypic heterogeneity in a population of genetically identical bacterial cells. Such non-genetic heterogeneity can have important consequences for population fitness, and cells therefore implement regulation strategies to either suppress or exploit such heterogeneity to adapt to their circumstances. By employing time-lapse microscopy of single cells, the fluctuation dynamics of gene expression may be analysed, and their regulatory mechanisms thus deciphered. However, careful consideration of the experimental design and data analysis is needed to produce data from which meaningful insights can be derived. In the present paper, the individual steps and challenges involved in a time-lapse experiment are discussed, and a rigorous framework for designing, performing, and extracting single-cell gene expression dynamics data from such experiments is outlined.
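As an illustration of the fluctuation statistics such single-cell time-lapse data yield, the sketch below computes a population noise measure (squared coefficient of variation) and a crude correlation time from hypothetical expression traces; it is not code from the cited paper:

```python
# Illustrative sketch with simulated stand-in data.
import numpy as np

rng = np.random.default_rng(1)
traces = rng.gamma(shape=20.0, scale=5.0, size=(200, 120))   # (cells x time points) fluorescence

# Population noise at each time point: squared coefficient of variation
cv_sq = traces.var(axis=0) / traces.mean(axis=0) ** 2

# Fluctuation timescale of one cell: normalised autocorrelation of its trace
def autocorrelation(x):
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]

tau_frames = int(np.argmax(autocorrelation(traces[0]) < 0.5))  # first lag below 0.5
print(f"mean CV^2 = {cv_sq.mean():.3f}, correlation time ~ {tau_frames} frames")
```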
Collapse
Affiliation(s)
- Georgeos Hardo
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| | - Somenath Bakshi
- Department of Engineering, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
94
|
Xu YKT, Call CL, Sulam J, Bergles DE. Automated in vivo Tracking of Cortical Oligodendrocytes. Front Cell Neurosci 2021; 15:667595. [PMID: 33912017 PMCID: PMC8072161 DOI: 10.3389/fncel.2021.667595] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2021] [Accepted: 03/19/2021] [Indexed: 11/18/2022] Open
Abstract
Oligodendrocytes exert a profound influence on neural circuits by accelerating action potential conduction, altering excitability, and providing metabolic support. As oligodendrogenesis continues in the adult brain and is essential for myelin repair, uncovering the factors that control their dynamics is necessary to understand the consequences of adaptive myelination and develop new strategies to enhance remyelination in diseases such as multiple sclerosis. Unfortunately, few methods exist for analysis of oligodendrocyte dynamics, and even fewer are suitable for in vivo investigation. Here, we describe the development of a fully automated cell tracking pipeline using convolutional neural networks (Oligo-Track) that provides rapid volumetric segmentation and tracking of thousands of cells over weeks in vivo. This system reliably replicated human analysis, outperformed traditional analytic approaches, and extracted injury and repair dynamics at multiple cortical depths, establishing that oligodendrogenesis after cuprizone-mediated demyelination is suppressed in deeper cortical layers. Volumetric data provided by this analysis revealed that oligodendrocyte soma size progressively decreases after their generation, and declines further prior to death, providing a means to predict cell age and eventual cell death from individual time points. This new CNN-based analysis pipeline offers a rapid, robust method to quantitatively analyze oligodendrocyte dynamics in vivo, which will aid in understanding how changes in these myelinating cells influence circuit function and recovery from injury and disease.
Collapse
Affiliation(s)
- Yu Kang T. Xu
- The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, United States
- Kavli Neuroscience Discovery Institute, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Cody L. Call
- The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, United States
| | - Jeremias Sulam
- Kavli Neuroscience Discovery Institute, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
| | - Dwight E. Bergles
- The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, United States
- Kavli Neuroscience Discovery Institute, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, United States
| |
Collapse
|
95
|
Imboden S, Liu X, Lee BS, Payne MC, Hsieh CJ, Lin NYC. Investigating heterogeneities of live mesenchymal stromal cells using AI-based label-free imaging. Sci Rep 2021; 11:6728. [PMID: 33762607 PMCID: PMC7991643 DOI: 10.1038/s41598-021-85905-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 03/08/2021] [Indexed: 12/27/2022] Open
Abstract
Mesenchymal stromal cells (MSCs) are multipotent cells that have great potential for regenerative medicine, tissue repair, and immunotherapy. Unfortunately, the outcomes of MSC-based research and therapies can be highly inconsistent and difficult to reproduce, largely due to the inherently significant heterogeneity in MSCs, which has not been well investigated. To quantify cell heterogeneity, a standard approach is to measure marker expression on the protein level via immunochemistry assays. Performing such measurements non-invasively and at scale has remained challenging as conventional methods such as flow cytometry and immunofluorescence microscopy typically require cell fixation and laborious sample preparation. Here, we developed an artificial intelligence (AI)-based method that converts transmitted light microscopy images of MSCs into quantitative measurements of protein expression levels. By training a U-Net+ conditional generative adversarial network (cGAN) model that accurately (mean [Formula: see text] = 0.77) predicts expression of 8 MSC-specific markers, we showed that expression of surface markers provides a heterogeneity characterization that is complementary to conventional cell-level morphological analyses. Using this label-free imaging method, we also observed a multi-marker temporal-spatial fluctuation of protein distributions in live MSCs. These demonstrations suggest that our AI-based microscopy can be utilized to perform quantitative, non-invasive, single-cell, and multi-marker characterizations of heterogeneous live MSC culture. Our method provides a foundational step toward the instant integrative assessment of MSC properties, which is critical for high-throughput screening and quality control in cellular therapies.
Collapse
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA.
| | - Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, 90095, USA
| | - Brandon S Lee
- Department of Bioengineering, University of California, Los Angeles, 90095, USA
| | - Marie C Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA
| | - Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, 90095, USA
| | - Neil Y C Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA
- Department of Bioengineering, University of California, Los Angeles, 90095, USA
- Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, 90095, USA
| |
Collapse
|
96
|
YeastNet: Deep-Learning-Enabled Accurate Segmentation of Budding Yeast Cells in Bright-Field Microscopy. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11062692] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Accurate and efficient segmentation of live-cell images is critical for maximizing data extraction and knowledge generation from high-throughput biology experiments. Despite the recent development of deep-learning tools for biomedical imaging applications, there remains great demand for automated segmentation tools for high-resolution live-cell microscopy images to accelerate analysis. We have designed and trained a U-Net convolutional network (named YeastNet) to perform semantic segmentation on bright-field microscopy images and generate segmentation masks for cell labeling and tracking. YeastNet dramatically improves on the performance of non-trainable classic algorithms and performs considerably better than current state-of-the-art yeast-cell segmentation tools, enabling accurate automatic segmentation and tracking of yeast cells in biomedical applications. YeastNet is freely provided with model weights as a Python package on GitHub.
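As a rough sketch of the general workflow (not the YeastNet package API), a semantic segmentation network outputs a per-pixel cell probability map that is thresholded and split into connected components to obtain per-cell masks for labeling and tracking; `probability_map_to_instances` and its parameters are illustrative assumptions.

```python
# Minimal post-processing sketch: probability map -> labelled single-cell masks.
import numpy as np
from scipy import ndimage

def probability_map_to_instances(prob_map, threshold=0.5, min_area=30):
    """Convert a semantic probability map into labelled cell instances."""
    binary = prob_map > threshold
    labels, n = ndimage.label(binary)            # connected components
    for i in range(1, n + 1):                    # drop tiny spurious objects
        if (labels == i).sum() < min_area:
            labels[labels == i] = 0
    return labels

# Hypothetical usage, assuming `unet(image)` returns a probability map:
# instances = probability_map_to_instances(unet(bright_field_image))
```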
Collapse
|
97
|
Gilmore MC, Ritzl-Rinkenberger B, Cava F. An updated toolkit for exploring bacterial cell wall structure and dynamics. Fac Rev 2021; 10:14. [PMID: 33659932 PMCID: PMC7894271 DOI: 10.12703/r/10-14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The bacterial cell wall is made primarily from peptidoglycan, a complex biomolecule which forms a bag-like exoskeleton that envelops the cell. As it is unique to bacteria and typically essential for their growth and survival, it represents one of the most successful targets for antibiotics. Although peptidoglycan has been studied intensively for over 50 years, the past decade has seen major steps in our understanding of this molecule because of the advent of new analytical and imaging methods. Here, we outline the most recent developments in tools that have helped to elucidate peptidoglycan structure and dynamics.
Collapse
Affiliation(s)
- Michael C Gilmore
- Laboratory for Molecular Infection Medicine Sweden (MIMS), Department of Molecular Biology, Umeå University, Umeå, Sweden
| | - Barbara Ritzl-Rinkenberger
- Laboratory for Molecular Infection Medicine Sweden (MIMS), Department of Molecular Biology, Umeå University, Umeå, Sweden
| | - Felipe Cava
- Laboratory for Molecular Infection Medicine Sweden (MIMS), Department of Molecular Biology, Umeå University, Umeå, Sweden
| |
Collapse
|
98
|
de Cesare I, Zamora-Chimal CG, Postiglione L, Khazim M, Pedone E, Shannon B, Fiore G, Perrino G, Napolitano S, di Bernardo D, Savery NJ, Grierson C, di Bernardo M, Marucci L. ChipSeg: An Automatic Tool to Segment Bacterial and Mammalian Cells Cultured in Microfluidic Devices. ACS OMEGA 2021; 6:2473-2476. [PMID: 33553865 PMCID: PMC7859942 DOI: 10.1021/acsomega.0c03906] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 11/20/2020] [Indexed: 05/14/2023]
Abstract
Extracting quantitative measurements from time-lapse images is necessary in external feedback control applications, where segmentation results are used to inform control algorithms. We describe ChipSeg, a computational tool that segments bacterial and mammalian cells cultured in microfluidic devices and imaged by time-lapse microscopy, and which can also be used in the context of external feedback control. The method is based on thresholding and uses the same core functions for both cell types. It allows us to segment individual cells in high-cell-density microfluidic devices, to quantify fluorescent protein expression over a time-lapse experiment, and to track individual mammalian cells. ChipSeg enables robust segmentation in external feedback control experiments and can be easily customized for other experimental settings and research aims.
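A minimal sketch of the underlying idea, not the ChipSeg implementation: threshold-based segmentation of a microscopy frame followed by per-cell quantification of a fluorescence channel, the kind of measurement that can feed an external feedback controller. The function name, the choice of Otsu thresholding, and the assumption that cells appear darker than background are all illustrative.

```python
# Sketch: segment one time-lapse frame by thresholding and report per-cell fluorescence.
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def segment_and_quantify(phase_img, fluo_img, min_area=50):
    """Return mean fluorescence per segmented cell in one frame."""
    mask = phase_img < threshold_otsu(phase_img)   # cells darker than background (assumption)
    labels = label(mask)
    return [r.mean_intensity
            for r in regionprops(labels, intensity_image=fluo_img)
            if r.area >= min_area]
```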
Collapse
Affiliation(s)
- Irene de Cesare
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
| | - Criseida G. Zamora-Chimal
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
| | - Lorena Postiglione
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
| | - Mahmoud Khazim
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
| | - Elisa Pedone
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
| | - Barbara Shannon
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biochemistry, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
| | - Gianfranco Fiore
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
| | - Giansimone Perrino
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
| | - Sara Napolitano
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
| | - Diego di Bernardo
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Department of Chemical, Materials and Industrial Production Engineering, University of Naples Federico II, 80125 Naples, Italy
| | - Nigel J. Savery
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biochemistry, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
| | - Claire Grierson
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biological Sciences, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
| | - Mario di Bernardo
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- Department of EE and ICT, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
| | - Lucia Marucci
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
| |
Collapse
|
99
|
Deep Learning for Imaging and Detection of Microorganisms. Trends Microbiol 2021; 29:569-572. [PMID: 33531192 DOI: 10.1016/j.tim.2021.01.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Revised: 01/12/2021] [Accepted: 01/13/2021] [Indexed: 01/03/2023]
Abstract
Despite tremendous recent interest, the application of deep learning in microbiology has not yet reached its full potential. To tackle the challenges of human-operated microscopy, deep-learning-based methods have been proposed for microscopic image analysis of a wide range of microorganisms, including viruses, bacteria, fungi, and parasites. We believe that deep-learning-based systems will be at the front line of the monitoring and investigation of microorganisms.
Collapse
|
100
|
Fraikin N, Van Melderen L, Goormaghtigh F. Phenotypic Characterization of Antibiotic Persisters at the Single-Cell Level: From Data Acquisition to Data Analysis. Methods Mol Biol 2021; 2357:95-106. [PMID: 34590254 DOI: 10.1007/978-1-0716-1621-5_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Persister cells are present at low frequency in isogenic populations and are only distinguishable from the bulk of the population during recovery, after antibiotic treatment has ended. Time-lapse microscopy is therefore the gold-standard method for investigating this phenomenon. Here, we describe an exhaustive procedure for acquiring single-cell data that is particularly suitable for persister cell analysis but can be applied to any other field of research involving single-cell time-lapse microscopy. In addition, we discuss the challenges and critical aspects of the procedure with respect to the generation of robust data.
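A hypothetical sketch (not the authors' protocol) of how tracked single-cell traces from such an experiment might be classified downstream: a persister can be operationally defined as a cell that does not elongate during antibiotic exposure but resumes growth during recovery, in contrast to susceptible cells that grow and then lyse. The function name, data layout, and thresholds are illustrative assumptions.

```python
# Sketch: flag persister candidates from per-cell length traces, assuming every
# trace spans both the treatment and recovery windows.
import numpy as np

def classify_persisters(lengths, t, treatment_window, recovery_window, growth_thresh=0.05):
    """lengths: dict cell_id -> array of cell lengths; t: array of timepoints (h)."""
    in_treat = (t >= treatment_window[0]) & (t <= treatment_window[1])
    in_recov = (t >= recovery_window[0]) & (t <= recovery_window[1])
    persisters = []
    for cell_id, L in lengths.items():
        growth_treat = (L[in_treat][-1] - L[in_treat][0]) / L[in_treat][0]
        growth_recov = (L[in_recov][-1] - L[in_recov][0]) / L[in_recov][0]
        # Dormant under antibiotic, growing again afterwards -> persister candidate.
        if growth_treat < growth_thresh and growth_recov > growth_thresh:
            persisters.append(cell_id)
    return persisters
```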
Collapse
Affiliation(s)
- Nathan Fraikin
- Cellular and Molecular Microbiology, Faculté des Sciences, Université Libre de Bruxelles (ULB), Gosselies, Belgium
| | - Laurence Van Melderen
- Cellular and Molecular Microbiology, Faculté des Sciences, Université Libre de Bruxelles (ULB), Gosselies, Belgium
| | | |
Collapse
|