1. Tokunaga T, Sato N, Arai M, Nakamura T, Ishihara T. Mechanism of sensory perception unveiled by simultaneous measurement of membrane voltage and intracellular calcium. Commun Biol 2024;7:1150. PMID: 39284959; PMCID: PMC11405522; DOI: 10.1038/s42003-024-06778-2.
Abstract
Measuring neuronal activity is essential for understanding neuronal function. Ca2+ imaging with genetically encoded calcium indicators (GECIs) is a powerful way to measure neuronal activity, and it has revealed important aspects of neuronal function. However, measuring the neuronal membrane voltage is also important, because voltage changes trigger neuronal activation. Recent progress in genetically encoded voltage indicators (GEVIs) enables fast and precise measurement of neuronal membrane voltage. To clarify the relationship between membrane voltage and intracellular Ca2+, we analyzed the odorant responses of the olfactory neuron AWA in Caenorhabditis elegans using GCaMP6f (a GECI) and paQuasAr3 (a GEVI). We found that the membrane voltage encodes the timing of stimulus changes and encodes stimulus duration through weak, semi-stable depolarization, whereas the change in intracellular Ca2+ encodes stimulus strength. Furthermore, ODR-3, a G-protein alpha subunit, was shown to be important for stabilizing the membrane voltage. These results suggest that combining calcium and voltage imaging provides a deeper understanding of the information carried in neural circuits.
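The encoding scheme described in this abstract (voltage carrying stimulus timing and duration, Ca2+ carrying stimulus strength) can be illustrated with a minimal trace-analysis sketch. This is not the paper's analysis pipeline; the threshold, trace shapes, and function names are assumptions for illustration only.

```python
import numpy as np

def depolarization_timing_and_duration(voltage, thresh):
    """Return (onset index, duration in samples) of the first
    super-threshold depolarization in a voltage trace, or (None, 0)."""
    above = voltage > thresh
    if not above.any():
        return None, 0
    onset = int(np.argmax(above))          # first super-threshold sample
    rest = above[onset:]
    # duration = run length of the first super-threshold stretch
    duration = int(np.argmax(~rest)) if (~rest).any() else len(rest)
    return onset, duration

def calcium_response_amplitude(dff):
    """Peak dF/F of a calcium trace, as a proxy for stimulus strength."""
    return float(np.max(dff))
```

Timing/duration come from thresholding the voltage trace, while amplitude is read from the calcium trace, mirroring the complementary readouts the study describes.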
Affiliation(s)
- Terumasa Tokunaga: Department of Artificial Intelligence, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, Fukuoka, Japan
- Noriko Sato: Department of Artificial Intelligence, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, Fukuoka, Japan; Department of Biology, Faculty of Science, Kyushu University, Fukuoka, Japan
- Mary Arai: Department of Biology, Faculty of Science, Kyushu University, Fukuoka, Japan
- Takumi Nakamura: Department of Artificial Intelligence, Faculty of Computer Science and Systems Engineering, Kyushu Institute of Technology, Fukuoka, Japan
- Takeshi Ishihara: Department of Biology, Faculty of Science, Kyushu University, Fukuoka, Japan
2. James E, Caetano AJ, Sharpe PT. Computational Methods for Image Analysis in Craniofacial Development and Disease. J Dent Res 2024:220345241265048. PMID: 39272216; DOI: 10.1177/00220345241265048.
Abstract
Observation is at the center of all biological sciences. Advances in imaging technologies are therefore essential for deriving novel biological insights that better explain the complex workings of living systems. Recent high-throughput sequencing and imaging techniques allow researchers to simultaneously address complex molecular variation spatially and temporally in tissues and organs. The availability of increasingly large datasets has driven the evolution of robust deep learning models designed to interrogate biomedical imaging data, and these models are emerging as transformative tools in diagnostic medicine. Combined, these advances allow for dynamic, quantitative, and predictive observations of entire organisms and tissues. Here, we address three main tasks of bioimage analysis: image restoration, segmentation, and tracking, and we discuss new computational tools that enable 3-dimensional spatial genomics maps. Finally, we demonstrate how these advances have been applied in studies of craniofacial development and oral disease pathogenesis.
Affiliation(s)
- E James: Centre for Oral Immunobiology and Regenerative Medicine, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK
- A J Caetano: Centre for Oral Immunobiology and Regenerative Medicine, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, UK
- P T Sharpe: Centre for Craniofacial and Regenerative Biology, Faculty of Dentistry, Oral and Craniofacial Sciences, King's College London, London, UK
3. Liu J, Du H, Huang L, Xie W, Liu K, Zhang X, Chen S, Zhang Y, Li D, Pan H. AI-Powered Microfluidics: Shaping the Future of Phenotypic Drug Discovery. ACS Appl Mater Interfaces 2024;16:38832-38851. PMID: 39016521; DOI: 10.1021/acsami.4c07665.
Abstract
Phenotypic drug discovery (PDD), which involves harnessing biological systems directly to uncover effective drugs, has undergone a resurgence in recent years. The rapid advancement of artificial intelligence (AI) over the past few years presents numerous opportunities for augmenting phenotypic drug screening on microfluidic platforms, leveraging its predictive capabilities, data analysis, and efficient data processing. Microfluidics coupled with AI is poised to revolutionize the landscape of phenotypic drug discovery. By integrating advanced microfluidic platforms with AI algorithms, researchers can rapidly screen large libraries of compounds, identify novel drug candidates, and elucidate complex biological pathways with unprecedented speed and efficiency. This review provides an overview of recent advances and challenges in AI-based microfluidics and their applications in drug discovery. We discuss the synergistic combination of microfluidic systems for high-throughput screening and AI-driven analysis for phenotype characterization, drug-target interactions, and predictive modeling. In addition, we highlight the potential of AI-powered microfluidics to achieve an automated drug screening system. Overall, AI-powered microfluidics represents a promising approach to shaping the future of phenotypic drug discovery by enabling rapid, cost-effective, and accurate identification of therapeutically relevant compounds.
Affiliation(s)
- Junchi Liu: Department of Anesthesiology, The First Hospital of Jilin University, 71 Xinmin Street, Changchun 130012, China
- Hanze Du: Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Lei Huang: Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
- Wangni Xie: Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
- Kexuan Liu: Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
- Xue Zhang: Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
- Shi Chen: Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
- Yuan Zhang: Department of Anesthesiology, The First Hospital of Jilin University, 71 Xinmin Street, Changchun 130012, China
- Daowei Li: Jilin Provincial Key Laboratory of Tooth Development and Bone Remodeling, School and Hospital of Stomatology, Jilin University, 1500 Qinghua Road, Changchun 130012, China
- Hui Pan: Department of Endocrinology, Key Laboratory of Endocrinology of National Health Commission, Translation Medicine Centre, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Beijing 100730, China; Key Laboratory of Endocrinology of National Health Commission, Department of Endocrinology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing 100730, China
4. Sprague DY, Rusch K, Dunn RL, Borchardt JM, Ban S, Bubnis G, Chiu GC, Wen C, Suzuki R, Chaudhary S, Lee HJ, Yu Z, Dichter B, Ly R, Onami S, Lu H, Kimura KD, Yemini E, Kato S. Unifying community-wide whole-brain imaging datasets enables robust automated neuron identification and reveals determinants of neuron positioning in C. elegans. bioRxiv [preprint] 2024:2024.04.28.591397. PMID: 38746302; PMCID: PMC11092512; DOI: 10.1101/2024.04.28.591397.
Abstract
We develop a data harmonization approach for C. elegans volumetric microscopy data, still or video, consisting of a standardized format, data pre-processing techniques, and a set of human-in-the-loop, machine-learning-based analysis software tools. We unify a diverse collection of 118 whole-brain neural activity imaging datasets from 5 labs, storing these and accompanying tools in an online repository called WormID (wormid.org). We use this repository to train three existing automated cell identification algorithms, enabling for the first time neural identification accuracy that generalizes across labs, approaching human performance in some cases. We mine this repository to identify factors that influence the developmental positioning of neurons. To facilitate communal use of this repository, we created open-source software, code, web-based tools, and tutorials to explore and curate datasets for contribution to the scientific community. This repository provides a growing resource for experimentalists, theorists, and toolmakers to (a) study neuroanatomical organization and neural activity across diverse experimental paradigms, (b) develop and benchmark algorithms for automated neuron detection, segmentation, cell identification, tracking, and activity extraction, and (c) inform models of neurobiological development and function.
Affiliation(s)
- Kevin Rusch: Department of Neurobiology, UMass Chan Medical School
- Raymond L Dunn: Department of Neurology, University of California San Francisco
- Steven Ban: Department of Neurology, University of California San Francisco
- Greg Bubnis: Department of Neurology, University of California San Francisco
- Grace C Chiu: Department of Neurology, University of California San Francisco
- Chentao Wen: RIKEN Center for Biosystems Dynamics Research
- Ryoga Suzuki: Graduate School of Science, Nagoya City University
- Shivesh Chaudhary: School of Chemical and Biomolecular Engineering, Georgia Institute of Technology
- Hyun Jee Lee: School of Chemical and Biomolecular Engineering, Georgia Institute of Technology
- Zikai Yu: School of Chemical and Biomolecular Engineering, Georgia Institute of Technology
- Ryan Ly: Scientific Data Division, Lawrence Berkeley National Laboratory
- Hang Lu: School of Chemical and Biomolecular Engineering, Georgia Institute of Technology
- Saul Kato: Department of Neurology, University of California San Francisco
5. Holtzen SE, Rakshit A, Palmer AE. Protocol for measuring cell cycle Zn2+ dynamics using a FRET-based biosensor. STAR Protoc 2024;5:103122. PMID: 38861382; PMCID: PMC11209638; DOI: 10.1016/j.xpro.2024.103122.
Abstract
The exchangeable Zn2+ pool in cells is not static but responds to perturbations and fluctuates naturally through the cell cycle. Here, we present a protocol to carry out long-term live-cell imaging of cells expressing a cytosolic Zn2+ sensor. We then describe how to track cells using the published pipeline EllipTrack and how to analyze the single-cell traces to determine changes in labile Zn2+ in response to perturbation. For complete details on the use and execution of this protocol, please refer to Rakshit and Holtzen et al.
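The analysis step of a FRET-sensor protocol such as this (converting per-cell two-channel intensities into a labile Zn2+ readout) typically reduces to a background-corrected channel ratio, normalized to a baseline window. A minimal sketch under assumed array layouts and background estimates; it is not the published EllipTrack-based pipeline:

```python
import numpy as np

def fret_ratio_traces(fret, donor, bg_fret=0.0, bg_donor=0.0):
    """Background-correct two channels and return per-cell FRET/donor ratios.

    fret, donor: arrays of shape (n_cells, n_timepoints) holding mean
    cell intensities per frame (layout assumed for illustration).
    """
    fret = np.asarray(fret, dtype=float) - bg_fret
    donor = np.asarray(donor, dtype=float) - bg_donor
    return fret / donor

def normalize_to_baseline(ratios, n_baseline=5):
    """Express each trace as fold change over its initial baseline ratio."""
    baseline = ratios[:, :n_baseline].mean(axis=1, keepdims=True)
    return ratios / baseline
```

Normalizing each cell to its own baseline makes perturbation responses comparable across cells despite differing sensor expression levels.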
Affiliation(s)
- Samuel E Holtzen: Molecular, Cellular & Developmental Biology, University of Colorado Boulder, Boulder, CO 80309, USA; Department of Biochemistry and BioFrontiers Institute, 3415 Colorado Avenue, University of Colorado Boulder, Boulder, CO 80303, USA
- Ananya Rakshit: Department of Biochemistry and BioFrontiers Institute, 3415 Colorado Avenue, University of Colorado Boulder, Boulder, CO 80303, USA
- Amy E Palmer: Department of Biochemistry and BioFrontiers Institute, 3415 Colorado Avenue, University of Colorado Boulder, Boulder, CO 80303, USA
6. Joubbi S, Micheli A, Milazzo P, Maccari G, Ciano G, Cardamone D, Medini D. Antibody design using deep learning: from sequence and structure design to affinity maturation. Brief Bioinform 2024;25:bbae307. PMID: 38960409; PMCID: PMC11221890; DOI: 10.1093/bib/bbae307.
Abstract
Deep learning has achieved impressive results in fields such as computer vision and natural language processing, making it a powerful tool in biology. Its applications now encompass cellular image classification, genomic studies, and drug discovery. While deep learning applications in drug development have traditionally focused on small molecules, recent innovations have extended them to the discovery and development of biological molecules, particularly antibodies. Researchers have devised novel techniques to streamline antibody development, combining in vitro and in silico methods. In particular, computational power expedites lead candidate generation, scaling, and the development of antibodies against complex antigens. This survey highlights significant advancements in protein design and optimization, focusing specifically on antibodies, including design, folding, antibody-antigen docking, and affinity maturation.
Affiliation(s)
- Sara Joubbi: Department of Computer Science, University of Pisa, Largo B. Pontecorvo, 3, 56127, Pisa, Italy; Data Science for Health (DaScH) Lab, Fondazione Toscana Life Sciences, Via Fiorentina, 1, 53100, Siena, Italy
- Alessio Micheli: Department of Computer Science, University of Pisa, Largo B. Pontecorvo, 3, 56127, Pisa, Italy
- Paolo Milazzo: Department of Computer Science, University of Pisa, Largo B. Pontecorvo, 3, 56127, Pisa, Italy
- Giuseppe Maccari: Data Science for Health (DaScH) Lab, Fondazione Toscana Life Sciences, Via Fiorentina, 1, 53100, Siena, Italy
- Giorgio Ciano: Data Science for Health (DaScH) Lab, Fondazione Toscana Life Sciences, Via Fiorentina, 1, 53100, Siena, Italy
- Dario Cardamone: Data Science for Health (DaScH) Lab, Fondazione Toscana Life Sciences, Via Fiorentina, 1, 53100, Siena, Italy
- Duccio Medini: Data Science for Health (DaScH) Lab, Fondazione Toscana Life Sciences, Via Fiorentina, 1, 53100, Siena, Italy
7. Zhou FY, Yapp C, Shang Z, Daetwyler S, Marin Z, Islam MT, Nanes B, Jenkins E, Gihana GM, Chang BJ, Weems A, Dustin M, Morrison S, Fiolka R, Dean K, Jamieson A, Sorger PK, Danuser G. A general algorithm for consensus 3D cell segmentation from 2D segmented stacks. bioRxiv [preprint] 2024:2024.05.03.592249. PMID: 38766074; PMCID: PMC11100681; DOI: 10.1101/2024.05.03.592249.
Abstract
Cell segmentation is a fundamental task in bioimage analysis: only by segmenting cells can we define the quantitative spatial unit for collecting the measurements from which biological conclusions are drawn. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive; the labels are often ambiguous, necessitating cross-referencing against other orthoviews; and there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without additional training, as demonstrated on 11 real-life datasets comprising >70,000 cells and spanning single cells, cell aggregates, and tissue.
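A generic baseline for turning per-slice 2D segmentations into a 3D segmentation is to greedily link labels across adjacent z-slices by overlap (IoU). The sketch below illustrates only that generic idea, not u-Segment3D's actual consensus algorithm; the threshold and names are assumptions:

```python
import numpy as np

def stitch_2d_labels(stack, min_iou=0.5):
    """Greedily link 2D label masks across z into 3D labels by IoU overlap.

    stack: (Z, Y, X) integer array; 0 = background, per-slice labels otherwise.
    Returns an array of the same shape with z-consistent 3D labels.
    """
    out = np.zeros_like(stack)
    next_label = 1
    # relabel the first slice with fresh 3D labels
    for lab in np.unique(stack[0]):
        if lab == 0:
            continue
        out[0][stack[0] == lab] = next_label
        next_label += 1
    for z in range(1, stack.shape[0]):
        prev, cur = out[z - 1], stack[z]
        for lab in np.unique(cur):
            if lab == 0:
                continue
            mask = cur == lab
            # find the 3D label in the previous slice with the highest IoU
            overlap = prev[mask]
            best, best_iou = 0, 0.0
            for c in np.unique(overlap[overlap > 0]):
                inter = np.logical_and(mask, prev == c).sum()
                union = np.logical_or(mask, prev == c).sum()
                iou = inter / union
                if iou > best_iou:
                    best, best_iou = c, iou
            if best_iou >= min_iou:
                out[z][mask] = best      # continue the existing 3D cell
            else:
                out[z][mask] = next_label  # start a new 3D cell
                next_label += 1
    return out
```

Greedy z-linking breaks down for touching cells and oblique boundaries, which is precisely the regime where a principled consensus method is needed.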
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
| | - Zhiguo Shang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Md Torikul Islam
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Benjamin Nanes
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Michael Dustin
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Sean Morrison
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Kevin Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
| | - Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| |
8. Li Y, Lai C, Wang M, Wu J, Li Y, Peng H, Qu L. Automated segmentation and recognition of C. elegans whole-body cells. Bioinformatics 2024;40:btae324. PMID: 38775410; PMCID: PMC11139520; DOI: 10.1093/bioinformatics/btae324.
Abstract
Motivation: Accurate segmentation and recognition of C. elegans cells are critical for various biological studies, including analyses of gene expression, cell lineages, and cell fates at the single-cell level. However, the highly dense distribution, similar shapes, and inhomogeneous intensity profiles of whole-body cells in 3D fluorescence microscopy images make automatic cell segmentation and recognition a challenging task. Existing methods either rely on additional fiducial markers or handle only a subset of cells. Given the difficulty or expense associated with generating fiducial features in many experimental settings, a marker-free approach capable of reliably segmenting and recognizing C. elegans whole-body cells is highly desirable.
Results: We report a new pipeline, automated segmentation and recognition (ASR) of cells, and apply it to 3D fluorescence microscopy images of L1-stage C. elegans with 558 whole-body cells. A novel displacement-vector-field-based deep learning model addresses the problem of reliably segmenting highly crowded cells with blurred boundaries. Cell recognition is then realized by encoding and exploiting statistical priors on cell positions and the structural similarities of neighboring cells. To the best of our knowledge, this is the first method successfully applied to the segmentation and recognition of C. elegans whole-body cells. The ASR segmentation module achieves an F1-score of 0.8956 on a dataset of 116 C. elegans image stacks with 64,728 cells (accuracy 0.9880, AJI 0.7813). Based on the segmentation results, the ASR recognition module achieves an average accuracy of 0.8879. We also show ASR's applicability to other cell types, e.g. Platynereis and rat kidney cells.
Availability and implementation: The code is available at https://github.com/reaneyli/ASR.
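Exploiting statistical priors on cell positions for recognition can be sketched, under strong simplifying assumptions, as assigning detected centroids to atlas mean positions by distance. This greedy nearest-pair matcher is illustrative only and is not ASR's actual recognition model:

```python
import numpy as np

def assign_identities(centroids, atlas_means):
    """Name detected cells by greedily matching closest (cell, atlas) pairs.

    centroids: (n, 3) detected positions; atlas_means: dict name -> mean position.
    Returns a list of names (or None for unmatched cells), one per centroid.
    """
    centroids = np.asarray(centroids, dtype=float)
    names = list(atlas_means)
    means = np.array([atlas_means[n] for n in names], dtype=float)
    # pairwise distances between every detection and every atlas position
    cost = np.linalg.norm(centroids[:, None, :] - means[None, :, :], axis=-1)
    result = [None] * len(centroids)
    # repeatedly take the globally closest remaining pair
    while np.isfinite(cost).any():
        r, c = np.unravel_index(np.argmin(cost), cost.shape)
        result[r] = names[c]
        cost[r, :] = np.inf  # this cell is named
        cost[:, c] = np.inf  # this name is used
    return result
```

A globally optimal assignment (e.g. the Hungarian algorithm) would replace the greedy loop in practice; the greedy version keeps the sketch dependency-free.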
Affiliation(s)
- Yuanyuan Li: Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Chuxiao Lai: Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Meng Wang: Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Jun Wu: Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China
- Yongbin Li: College of Life Sciences, Capital Normal University, Beijing 100048, China
- Hanchuan Peng: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lei Qu: Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, Anhui 230039, China; SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230039, China; Hefei National Laboratory, University of Science and Technology of China, Hefei, Anhui 230039, China
9. Ryu J, Nejatbakhsh A, Torkashvand M, Gangadharan S, Seyedolmohadesin M, Kim J, Paninski L, Venkatachalam V. Versatile multiple object tracking in sparse 2D/3D videos via deformable image registration. PLoS Comput Biol 2024;20:e1012075. PMID: 38768230; PMCID: PMC11142724; DOI: 10.1371/journal.pcbi.1012075.
Abstract
Tracking body parts in behaving animals, extracting fluorescence signals from cells embedded in deforming tissue, and analyzing cell migration patterns during development all require tracking objects with partially correlated motion. As dataset sizes increase, manual tracking of objects becomes prohibitively inefficient and slow, necessitating automated and semi-automated computational tools. Unfortunately, existing methods for multiple object tracking (MOT) are either developed for specific datasets and hence do not generalize well to other datasets, or require large amounts of training data that are not readily available. This is further exacerbated when tracking fluorescent sources in moving and deforming tissues, where the lack of unique features and sparsely populated images create a challenging environment, especially for modern deep learning techniques. By leveraging technology recently developed for spatial transformer networks, we propose ZephIR, an image registration framework for semi-supervised MOT in 2D and 3D videos. ZephIR can generalize to a wide range of biological systems by incorporating adjustable parameters that encode spatial (sparsity, texture, rigidity) and temporal priors of a given data class. We demonstrate the accuracy and versatility of our approach in a variety of applications, including tracking the body parts of a behaving mouse and neurons in the brain of a freely moving C. elegans. We provide an open-source package along with a web-based graphical user interface that allows users to provide small numbers of annotations to interactively improve tracking results.
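Registration-style tracking of sparse keypoints can be approximated, at its simplest, by local template matching between consecutive frames. The sketch below is a crude stand-in for ZephIR's differentiable image-registration approach (which jointly optimizes all keypoints with spatial and temporal priors); function names and window sizes are assumptions:

```python
import numpy as np

def track_keypoints(prev_frame, next_frame, points, patch=3, search=5):
    """Propagate (y, x) keypoints one frame forward by template matching.

    For each point, take a (2*patch+1)^2 template around it in prev_frame
    and find the shift within +/-search pixels in next_frame that
    minimizes the sum of squared differences (SSD).
    """
    tracked = []
    for y, x in points:
        tmpl = prev_frame[y - patch:y + patch + 1, x - patch:x + patch + 1]
        best, best_err = (y, x), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                yy, xx = y + dy, x + dx
                cand = next_frame[yy - patch:yy + patch + 1,
                                  xx - patch:xx + patch + 1]
                if cand.shape != tmpl.shape:
                    continue  # window fell off the image edge
                err = ((cand.astype(float) - tmpl.astype(float)) ** 2).sum()
                if err < best_err:
                    best, best_err = (yy, xx), err
        tracked.append(best)
    return tracked
```

Tracking each keypoint independently ignores the partially correlated motion that the paper exploits, which is exactly why a registration framework with rigidity priors performs better on deforming tissue.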
Affiliation(s)
- James Ryu: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Amin Nejatbakhsh: Department of Neuroscience, Columbia University, New York, New York, United States of America
- Mahdi Torkashvand: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Sahana Gangadharan: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Maedeh Seyedolmohadesin: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Jinmahn Kim: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
- Liam Paninski: Department of Neuroscience, Columbia University, New York, New York, United States of America
- Vivek Venkatachalam: Department of Physics, Northeastern University, Boston, Massachusetts, United States of America
10. Lanza E, Lucente V, Nicoletti M, Schwartz S, Cavallo IF, Caprini D, Connor CW, Saifuddin MFA, Miller JM, L’Etoile ND, Folli V. See Elegans: Simple-to-use, accurate, and automatic 3D detection of neural activity from densely packed neurons. PLoS One 2024;19:e0300628. PMID: 38517838; PMCID: PMC10959381; DOI: 10.1371/journal.pone.0300628.
Abstract
In the emerging field of whole-brain imaging at single-cell resolution, which represents one of the new frontiers to investigate the link between brain activity and behavior, the nematode Caenorhabditis elegans offers one of the most characterized models for systems neuroscience. Whole-brain recordings consist of 3D time series of volumes that need to be processed to obtain neuronal traces. Current solutions for this task are either computationally demanding or limited to specific acquisition setups. Here, we propose See Elegans, a direct programming algorithm that combines different techniques for automatic neuron segmentation and tracking without the need for the RFP channel, and we compare it with other available algorithms. While outperforming them in most cases, our solution offers a novel method to guide the identification of a subset of head neurons based on position and activity. The built-in interface allows the user to follow and manually curate each of the processing steps. See Elegans is thus a simple-to-use interface aimed at speeding up the post-processing of volumetric calcium imaging recordings while maintaining a high level of accuracy and low computational demands. (Contact: enrico.lanza@iit.it).
Affiliation(s)
- Enrico Lanza: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy
- Valeria Lucente: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy; D-tails s.r.l., Rome, Italy
- Martina Nicoletti: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy; Department of Engineering, Campus Bio-Medico University, Rome, Italy
- Silvia Schwartz: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy
- Ilaria F. Cavallo: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy; D-tails s.r.l., Rome, Italy
- Davide Caprini: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy
- Christopher W. Connor: Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States of America
- Mashel Fatema A. Saifuddin: Department of Cell and Tissue Biology, University of California, San Francisco, San Francisco, CA, United States of America
- Julia M. Miller: Department of Cell and Tissue Biology, University of California, San Francisco, San Francisco, CA, United States of America
- Noelle D. L’Etoile: Department of Cell and Tissue Biology, University of California, San Francisco, San Francisco, CA, United States of America
- Viola Folli: Center for Life Nano- and Neuro-Science@Sapienza, Istituto Italiano di Tecnologia (IIT), Rome, Italy; D-tails s.r.l., Rome, Italy
11
Chen H, Murphy RF. 3DCellComposer - A Versatile Pipeline Utilizing 2D Cell Segmentation Methods for 3D Cell Segmentation. bioRxiv 2024:2024.03.08.584082. PMID: 38559093; PMCID: PMC10979887; DOI: 10.1101/2024.03.08.584082.
Abstract
Background: Cell segmentation is crucial in bioimage informatics, as its accuracy directly impacts the conclusions drawn from cellular analyses. While many approaches to 2D cell segmentation have been described, 3D cell segmentation has received much less attention. 3D segmentation faces significant challenges, including limited availability of training data (owing to the difficulty of the task for human annotators) and inherent three-dimensional complexity. As a result, existing 3D cell segmentation methods often lack broad applicability across different imaging modalities.
Results: To address this, we developed a generalizable approach for using 2D cell segmentation methods to produce accurate 3D cell segmentations. We implemented this approach in 3DCellComposer, a versatile, open-source package that allows users to choose any existing 2D segmentation model appropriate for their tissue or cell type(s) without requiring any additional training. Importantly, we have enhanced our open-source CellSegmentationEvaluator quality-evaluation tool to support 3D images. It provides metrics that allow selection of the best approach for a given imaging source and modality, without the need for human annotations to assess performance. Using these metrics, we demonstrated that our approach produced high-quality 3D segmentations of tissue images, and that it could outperform an existing 3D segmentation method on the cell culture images with which that method was trained.
Conclusions: 3DCellComposer, when paired with well-trained 2D segmentation models, provides an important alternative to acquiring human-annotated 3D images for new sample types or imaging modalities and then training 3D segmentation models on them. It is expected to be of significant value for large-scale projects such as the Human BioMolecular Atlas Program.
Affiliation(s)
- Haoran Chen: Computational Biology Department, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213, USA
- Robert F Murphy: Computational Biology Department, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213, USA
12
Liu B, Zhu Y, Yang Z, Yan HHN, Leung SY, Shi J. Deep Learning-Based 3D Single-Cell Imaging Analysis Pipeline Enables Quantification of Cell-Cell Interaction Dynamics in the Tumor Microenvironment. Cancer Res 2024; 84:517-526. PMID: 38085180; DOI: 10.1158/0008-5472.can-23-1100.
Abstract
The three-dimensional (3D) tumor microenvironment (TME) comprises multiple interacting cell types that critically impact tumor pathology and therapeutic response. Efficient 3D imaging assays and analysis tools could facilitate profiling and quantifying distinctive cell-cell interaction dynamics in the TMEs of a wide spectrum of human cancers. Here, we developed a 3D live-cell imaging assay using confocal microscopy of patient-derived tumor organoids and a software tool, SiQ-3D (single-cell image quantifier for 3D), that optimizes deep learning (DL)-based 3D image segmentation, single-cell phenotype classification, and tracking to automatically acquire multidimensional dynamic data for different interacting cell types in the TME. An organoid model of tumor cells interacting with natural killer cells was used to demonstrate the effectiveness of the 3D imaging assay to reveal immuno-oncology dynamics as well as the accuracy and efficiency of SiQ-3D to extract quantitative data from large 3D image datasets. SiQ-3D is Python-based, publicly available, and customizable to analyze data from both in vitro and in vivo 3D imaging. The DL-based 3D imaging analysis pipeline can be employed to study not only tumor interaction dynamics with diverse cell types in the TME but also various cell-cell interactions involved in other tissue/organ physiology and pathology.
Significance: A 3D single-cell imaging pipeline that quantifies cancer cell interaction dynamics with other TME cell types using primary patient-derived samples can elucidate how cell-cell interactions impact tumor behavior and treatment responses.
Affiliation(s)
- Bodong Liu: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China
- Yanting Zhu: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China; Laboratory for Synthetic Chemistry and Chemical Biology Limited, Hong Kong SAR, P.R. China
- Zhenye Yang: MOE Key Laboratory for Cellular Dynamics, The CAS Key Laboratory of Innate Immunity and Chronic Disease, School of Basic Medical Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, P.R. China
- Helen H N Yan: Department of Pathology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong SAR, P.R. China
- Suet Yi Leung: Department of Pathology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong SAR, P.R. China
- Jue Shi: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China; Laboratory for Synthetic Chemistry and Chemical Biology Limited, Hong Kong SAR, P.R. China
13
Jan M, Spangaro A, Lenartowicz M, Mattiazzi Usaj M. From pixels to insights: Machine learning and deep learning for bioimage analysis. Bioessays 2024; 46:e2300114. PMID: 38058114; DOI: 10.1002/bies.202300114.
Abstract
Bioimage analysis plays a critical role in extracting information from biological images, enabling deeper insights into cellular structures and processes. The integration of machine learning and deep learning techniques has revolutionized the field, enabling the automated, reproducible, and accurate analysis of biological images. Here, we provide an overview of the history and principles of machine learning and deep learning in the context of bioimage analysis. We discuss the essential steps of the bioimage analysis workflow, emphasizing how machine learning and deep learning have improved preprocessing, segmentation, feature extraction, object tracking, and classification. We provide examples that showcase the application of machine learning and deep learning in bioimage analysis. We examine user-friendly software and tools that enable biologists to leverage these techniques without extensive computational expertise. This review is a resource for researchers seeking to incorporate machine learning and deep learning in their bioimage analysis workflows and enhance their research in this rapidly evolving field.
Affiliation(s)
- Mahta Jan: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Allie Spangaro: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Michelle Lenartowicz: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Mojca Mattiazzi Usaj: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
14
Burattini M, Lo Muzio FP, Hu M, Bonalumi F, Rossi S, Pagiatakis C, Salvarani N, Fassina L, Luciani GB, Miragoli M. Unlocking cardiac motion: assessing software and machine learning for single-cell and cardioid kinematic insights. Sci Rep 2024; 14:1782. PMID: 38245558; PMCID: PMC10799933; DOI: 10.1038/s41598-024-52081-9.
Abstract
The heart coordinates its functional parameters for optimal beat-to-beat mechanical activity. Reliable detection and quantification of these parameters still represent a hot topic in cardiovascular research. Nowadays, computer vision allows the development of open-source algorithms to measure cellular kinematics; however, the analysis software can vary depending on the specimen analyzed. In this study, we compared the performance of different software packages on an in-silico model, on in-vitro adult mouse ventricular cardiomyocytes, and on cardioids. We acquired in-vitro high-resolution videos during suprathreshold stimulation at 0.5, 1, and 2 Hz, adapting the protocol for the cardioids. Moreover, we exposed the samples to inotropic and depolarizing substances. We analyzed the in-silico and in-vitro videos with (i) MUSCLEMOTION, the gold standard among open-source software; (ii) CONTRACTIONWAVE, a recently developed tracking software; and (iii) ViKiE, an in-house customized video kinematic evaluation software. We enriched the study with three machine-learning algorithms to test the robustness of the motion-tracking approaches. Our results revealed that all software produced comparable estimates of cardiac mechanical parameters. For instance, in cardioids, beat-duration measurements at 0.5 Hz were 1053.58 ms (MUSCLEMOTION), 1043.59 ms (CONTRACTIONWAVE), and 937.11 ms (ViKiE). ViKiE exhibited higher sensitivity in exposed samples owing to its localized kinematic analysis, while MUSCLEMOTION and CONTRACTIONWAVE offered temporal correlation, combining global assessment with time-efficient analysis. Finally, the machine-learning algorithms achieved greater accuracy when trained on the MUSCLEMOTION dataset than on those from the other software (accuracy > 83%). In conclusion, our findings provide valuable insights for the accurate selection and integration of software tools into kinematic analysis pipelines, tailored to the experimental protocol.
Affiliation(s)
- Margherita Burattini: Department of Surgery, Dentistry and Maternity, University of Verona, Verona, Italy; Department of Medicine and Surgery, University of Parma, Parma, Italy
- Francesco Paolo Lo Muzio: Department of Medicine and Surgery, University of Parma, Parma, Italy; Deutsches Herzzentrum der Charité, Department of Cardiology, Angiology and Intensive Care Medicine, Berlin, Germany
- Mirko Hu: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Flavia Bonalumi: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Stefano Rossi: Department of Medicine and Surgery, University of Parma, Parma, Italy
- Christina Pagiatakis: Humanitas Research Hospital, IRCCS, Rozzano (Milan), Italy; Department of Biotechnology and Life Sciences, University of Insubria, Varese, Italy
- Nicolò Salvarani: Humanitas Research Hospital, IRCCS, Rozzano (Milan), Italy; Institute of Genetic and Biomedical Research (IRGB), UOS of Milan, National Research Council of Italy, Milan, Italy
- Lorenzo Fassina: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Michele Miragoli: Department of Medicine and Surgery, University of Parma, Parma, Italy; Humanitas Research Hospital, IRCCS, Rozzano (Milan), Italy
15
Zhang X, Saberigarakani A, Almasian M, Hassan S, Nekkanti M, Ding Y. 4D Light-sheet Imaging of Zebrafish Cardiac Contraction. J Vis Exp 2024. PMID: 38251787; PMCID: PMC10939705; DOI: 10.3791/66263.
Abstract
The zebrafish is an intriguing model organism known for its remarkable cardiac regeneration capacity. Studying the contracting heart in vivo is essential for gaining insights into structural and functional changes in response to injury. However, obtaining high-resolution and high-speed four-dimensional (4D, 3D spatial + 1D temporal) images of the zebrafish heart to assess cardiac architecture and contractility remains challenging. In this context, an in-house light-sheet microscope (LSM) and customized computational analysis are used to overcome these technical limitations. This strategy, involving LSM system construction, retrospective synchronization, single-cell tracking, and user-directed analysis, enables investigation of the micro-structure and contractile function across the entire heart at single-cell resolution in transgenic Tg(myl7:nucGFP) zebrafish larvae. Additionally, microinjection of small-molecule compounds can be incorporated to induce cardiac injury in a precise and controlled manner. Overall, this framework allows one to track physiological and pathophysiological changes, as well as regional mechanics at the single-cell level, during cardiac morphogenesis and regeneration.
Affiliation(s)
- Xinyuan Zhang: Department of Bioengineering, The University of Texas at Dallas
- Milad Almasian: Department of Bioengineering, The University of Texas at Dallas
- Sohail Hassan: Department of Bioengineering, The University of Texas at Dallas
- Manasa Nekkanti: Department of Bioengineering, The University of Texas at Dallas
- Yichen Ding: Department of Bioengineering, The University of Texas at Dallas; Center for Imaging and Surgical Innovation, The University of Texas at Dallas; Hamon Center for Regenerative Science and Medicine, UT Southwestern Medical Center
16
Wen C. Deep Learning-Based Cell Tracking in Deforming Organs and Moving Animals. Methods Mol Biol 2024; 2800:203-215. PMID: 38709486; DOI: 10.1007/978-1-0716-3834-7_14.
Abstract
Cell tracking is an essential step in extracting cellular signals from moving cells, which is vital for understanding the mechanisms underlying various biological functions and processes, particularly in organs such as the brain and heart. However, cells in living organisms often exhibit extensive and complex movements caused by organ deformation and whole-body motion. These movements pose a challenge in obtaining high-quality time-lapse cell images and tracking the intricate cell movements in the captured images. Recent advances in deep learning techniques provide powerful tools for detecting cells in low-quality images with densely packed cell populations, as well as estimating cell positions for cells undergoing large nonrigid movements. This chapter introduces the challenges of cell tracking in deforming organs and moving animals, outlines the solutions to these challenges, and presents a detailed protocol for data preparation, as well as for performing cell segmentation and tracking using the latest version of 3DeeCellTracker. This protocol is expected to enable researchers to gain deeper insights into organ dynamics and biological processes.
Affiliation(s)
- Chentao Wen: RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
17
Park CF, Barzegar-Keshteli M, Korchagina K, Delrocq A, Susoy V, Jones CL, Samuel ADT, Rahi SJ. Automated neuron tracking inside moving and deforming C. elegans using deep learning and targeted augmentation. Nat Methods 2024; 21:142-149. PMID: 38052988; DOI: 10.1038/s41592-023-02096-3.
Abstract
Reading out neuronal activity from three-dimensional (3D) functional imaging requires segmenting and tracking individual neurons. This is challenging in behaving animals if the brain moves and deforms. The traditional approach is to train a convolutional neural network with ground-truth (GT) annotations of images representing different brain postures. For 3D images, this is very labor intensive. We introduce 'targeted augmentation', a method to automatically synthesize artificial annotations from a few manual annotations. Our method ('Targettrack') learns the internal deformations of the brain to synthesize annotations for new postures by deforming GT annotations. This reduces the need for manual annotation and proofreading. A graphical user interface allows the application of the method end-to-end. We demonstrate Targettrack on recordings where neurons are labeled as key points or 3D volumes. Analyzing freely moving animals exposed to odor pulses, we uncover rich patterns in interneuron dynamics, including switching neuronal entrainment on and off.
Affiliation(s)
- Core Francisco Park: Department of Physics and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Mahsa Barzegar-Keshteli: Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Kseniia Korchagina: Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Ariane Delrocq: Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Vladislav Susoy: Department of Physics and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Corinne L Jones: Swiss Data Science Center, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Aravinthan D T Samuel: Department of Physics and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Sahand Jamal Rahi: Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
18
Roberto de Barros N, Wang C, Maity S, Peirsman A, Nasiri R, Herland A, Ermis M, Kawakita S, Gregatti Carvalho B, Hosseinzadeh Kouchehbaghi N, Donizetti Herculano R, Tirpáková Z, Mohammad Hossein Dabiri S, Lucas Tanaka J, Falcone N, Choroomi A, Chen R, Huang S, Zisblatt E, Huang Y, Rashad A, Khorsandi D, Gangrade A, Voskanian L, Zhu Y, Li B, Akbari M, Lee J, Remzi Dokmeci M, Kim HJ, Khademhosseini A. Engineered organoids for biomedical applications. Adv Drug Deliv Rev 2023; 203:115142. PMID: 37967768; PMCID: PMC10842104; DOI: 10.1016/j.addr.2023.115142.
Abstract
As miniaturized and simplified stem cell-derived 3D organ-like structures, organoids are rapidly emerging as powerful tools for biomedical applications. With their potential for personalized therapeutic interventions and high-throughput drug screening, organoids have gained significant attention recently. In this review, we discuss the latest developments in engineering organoids and using materials engineering, biochemical modifications, and advanced manufacturing technologies to improve organoid culture and replicate vital anatomical structures and functions of human tissues. We then explore the diverse biomedical applications of organoids, including drug development and disease modeling, and highlight the tools and analytical techniques used to investigate organoids and their microenvironments. We also examine the latest clinical trials and patents related to organoids that show promise for future clinical translation. Finally, we discuss the challenges and future perspectives of using organoids to advance biomedical research and potentially transform personalized medicine.
Affiliation(s)
- Canran Wang: Andrew and Peggy Cherng Department of Medical Engineering, Division of Engineering and Applied Science, California Institute of Technology, Pasadena, CA 91125, USA
- Surjendu Maity: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Arne Peirsman: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Plastic and Reconstructive Surgery, Ghent University Hospital, Ghent, Belgium
- Rohollah Nasiri: Division of Nanobiotechnology, Department of Protein Science, Science for Life Laboratory, KTH Royal Institute of Technology, 17165 Solna, Sweden
- Anna Herland: Division of Nanobiotechnology, Department of Protein Science, Science for Life Laboratory, KTH Royal Institute of Technology, 17165 Solna, Sweden
- Menekse Ermis: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Satoru Kawakita: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Bruna Gregatti Carvalho: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Department of Material and Bioprocess Engineering, School of Chemical Engineering, University of Campinas (UNICAMP), 13083-970 Campinas, Brazil
- Negar Hosseinzadeh Kouchehbaghi: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Department of Textile Engineering, Amirkabir University of Technology (Tehran Polytechnic), Hafez Avenue, 1591634311 Tehran, Iran
- Rondinelli Donizetti Herculano: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Autonomy Research Center for STEAHM (ARCS), California State University, Northridge, CA 91324, USA; São Paulo State University (UNESP), Bioengineering and Biomaterials Group, School of Pharmaceutical Sciences, Araraquara, SP, Brazil
- Zuzana Tirpáková: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Department of Biology and Physiology, University of Veterinary Medicine and Pharmacy in Kosice, Komenskeho 73, 04181 Kosice, Slovakia
- Seyed Mohammad Hossein Dabiri: Laboratory for Innovations in Micro Engineering (LiME), Department of Mechanical Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada
- Jean Lucas Tanaka: Butantan Institute, Viral Biotechnology Laboratory, São Paulo, SP, Brazil; University of São Paulo (USP), São Paulo, SP, Brazil
- Natashya Falcone: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Auveen Choroomi: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- RunRun Chen: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Autonomy Research Center for STEAHM (ARCS), California State University, Northridge, CA 91324, USA
- Shuyi Huang: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Autonomy Research Center for STEAHM (ARCS), California State University, Northridge, CA 91324, USA
- Elisheva Zisblatt: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Yixuan Huang: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Ahmad Rashad: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Danial Khorsandi: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Ankit Gangrade: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Leon Voskanian: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Yangzhi Zhu: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
- Bingbing Li: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; Autonomy Research Center for STEAHM (ARCS), California State University, Northridge, CA 91324, USA
- Mohsen Akbari: Laboratory for Innovations in Micro Engineering (LiME), Department of Mechanical Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada
- Junmin Lee: Department of Materials Science and Engineering, Pohang University of Science and Technology (POSTECH), Pohang, Gyeongbuk 37673, Republic of Korea
- Han-Jun Kim: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA; College of Pharmacy, Korea University, Sejong 30019, Republic of Korea
- Ali Khademhosseini: Terasaki Institute for Biomedical Innovation (TIBI), Los Angeles, CA 90064, USA
19
Ngo TKN, Yang SJ, Mao BH, Nguyen TKM, Ng QD, Kuo YL, Tsai JH, Saw SN, Tu TY. A deep learning-based pipeline for analyzing the influences of interfacial mechanochemical microenvironments on spheroid invasion using differential interference contrast microscopic images. Mater Today Bio 2023; 23:100820. PMID: 37810748; PMCID: PMC10558776; DOI: 10.1016/j.mtbio.2023.100820.
Abstract
Metastasis is the leading cause of cancer-related deaths. During this process, cancer cells are likely to navigate discrete tissue-tissue interfaces, enabling them to infiltrate and spread throughout the body. Three-dimensional (3D) spheroid modeling is receiving increasing attention owing to its strengths in studying the invasive behavior of metastatic cancer cells. While microscopy is a conventional approach for investigating 3D invasion, post-invasion image analysis, a time-consuming process, remains a significant challenge for researchers. In this study, we presented an image processing pipeline that utilized a deep learning (DL) solution, with an encoder-decoder architecture, to assess and characterize the invasion dynamics of tumor spheroids. The developed models, equipped with feature extraction and measurement capabilities, could be successfully utilized for the automated segmentation of the invasive protrusions as well as the core region of spheroids situated within interfacial microenvironments with distinct mechanochemical factors. Our findings suggest that combining spheroid culture with DL-based image analysis enables identification of time-lapse migratory patterns for tumor spheroids above matrix-substrate interfaces, thus laying the foundation for delineating the mechanism of local invasion during cancer metastasis.
Affiliation(s)
- Thi Kim Ngan Ngo: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Sze Jue Yang: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Bin-Hsu Mao: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Thi Kim Mai Nguyen: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Qi Ding Ng: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Yao-Lung Kuo: Department of Surgery, College of Medicine, National Cheng Kung University, Tainan, 70101, Taiwan; Department of Surgery, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Jui-Hung Tsai: Department of Internal Medicine, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Shier Nee Saw: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Ting-Yuan Tu: Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan; Medical Device Innovation Center, National Cheng Kung University, Tainan, 70101, Taiwan
20
Alieva M, Wezenaar AKL, Wehrens EJ, Rios AC. Bridging live-cell imaging and next-generation cancer treatment. Nat Rev Cancer 2023; 23:731-745. PMID: 37704740; DOI: 10.1038/s41568-023-00610-5.
Abstract
By providing spatial, molecular and morphological data over time, live-cell imaging can provide a deeper understanding of the cellular and signalling events that determine cancer response to treatment. Understanding this dynamic response has the potential to enhance clinical outcome by identifying biomarkers or actionable targets to improve therapeutic efficacy. Here, we review recent applications of live-cell imaging for uncovering both tumour heterogeneity in treatment response and the mode of action of cancer-targeting drugs. Given the increasing uses of T cell therapies, we discuss the unique opportunity of time-lapse imaging for capturing the interactivity and motility of immunotherapies. Although live-cell imaging has traditionally been limited in the number of molecular features it captures, novel developments in multidimensional imaging and multi-omics data integration offer strategies to connect single-cell dynamics to molecular phenotypes. We review the effect of these recent technological advances on our understanding of the cellular dynamics of tumour targeting and discuss their implication for next-generation precision medicine.
Affiliation(s)
- Maria Alieva: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Instituto de Investigaciones Biomedicas Sols-Morreale (IIBM), CSIC-UAM, Madrid, Spain
- Amber K L Wezenaar: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
- Ellen J Wehrens: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
- Anne C Rios: Princess Máxima Center for Pediatric Oncology, Utrecht, The Netherlands; Oncode Institute, Utrecht, The Netherlands
Collapse
|
21
|
Wang J, Yu H, Yin Chiang P. Dual-mode LiDAR SoC with an on-chip interframe filter and common optical platform design for keystone correction and auto-focus. APPLIED OPTICS 2023; 62:7658-7668. [PMID: 37855473 DOI: 10.1364/ao.499533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Accepted: 09/12/2023] [Indexed: 10/20/2023]
Abstract
This paper presents an innovative methodology that incorporates direct time-of-flight technology into intelligent sensing for projectors, along with a lightweight, dual-mode optically integrated LiDAR system. The proposed LiDAR system-on-chip, which utilizes a single-photon avalanche diode and time to digital converter with 0.13 µm bipolar CMOS DMOS technology, integrates an on-chip interframe filter, a common optical platform design, and a lightweight keystone correction assist algorithm. This comprehensive integration enables the system to achieve a measurement range of 11 m with 1% relative precision (simulations indicate the potential to achieve 30 m) in auto-focus mode. Additionally, it facilitates high frame-per-second keystone correction within a range of ±30∘ with an error of ±2∘ under illumination conditions of 20 klux.
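The ranging principle behind the system above is direct time-of-flight: range is half the round-trip photon travel time measured by the TDC. A minimal sketch of that conversion (the 100 ps bin width and count value are illustrative assumptions, not figures from the paper):

```python
# Hypothetical sketch of direct time-of-flight ranging as used in SPAD+TDC
# LiDAR: distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(tdc_counts: int, bin_width_s: float) -> float:
    """Convert a TDC code (number of timing bins) to a range in metres."""
    round_trip_s = tdc_counts * bin_width_s
    return C * round_trip_s / 2.0

# An 11 m target returns after ~73.4 ns; with assumed 100 ps bins that is ~734 counts.
print(round(tof_distance_m(734, 100e-12), 2))  # prints 11.0
```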
Collapse
|
22
|
Tee LF, Young JJ, Maruyama K, Kimura S, Suzuki R, Endo Y, Kimura KD. Electric shock causes a fleeing-like persistent behavioral response in the nematode Caenorhabditis elegans. Genetics 2023; 225:iyad148. [PMID: 37595066 PMCID: PMC10550322 DOI: 10.1093/genetics/iyad148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 07/27/2023] [Indexed: 08/20/2023] Open
Abstract
Behavioral persistency reflects internal brain states, which are the foundations of multiple brain functions. However, experimental paradigms enabling genetic analyses of behavioral persistency and its associated brain functions have been limited. Here, we report novel persistent behavioral responses caused by electric stimuli in the nematode Caenorhabditis elegans. When animals on bacterial food are stimulated by alternating current, their movement speed suddenly increases 2- to 3-fold, persisting for more than 1 minute even after a 5-second stimulation. Genetic analyses reveal that voltage-gated channels in the neurons are required for the response, possibly as the sensors, and that neuropeptide signaling regulates the duration of the persistent response. Additional behavioral analyses indicate that the animal's response to electric shock is scalable and has a negative valence. These properties, along with persistence, have recently been regarded as essential features of emotion, suggesting that the C. elegans response to electric shock may reflect a form of emotion, akin to fear.
Collapse
Affiliation(s)
- Ling Fei Tee
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
| | - Jared J Young
- Mills College at Northeastern University, Oakland, CA 94613, USA
| | - Keisuke Maruyama
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
| | - Sota Kimura
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
| | - Ryoga Suzuki
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
| | - Yuto Endo
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
- Department of Biological Sciences, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| | - Koutarou D Kimura
- Graduate School of Science, Nagoya City University, Nagoya 467-8501, Japan
- Department of Biological Sciences, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
| |
Collapse
|
23
|
Corallo D, Dalla Vecchia M, Lazic D, Taschner-Mandl S, Biffi A, Aveic S. The molecular basis of tumor metastasis and current approaches to decode targeted migration-promoting events in pediatric neuroblastoma. Biochem Pharmacol 2023; 215:115696. [PMID: 37481138 DOI: 10.1016/j.bcp.2023.115696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 07/12/2023] [Accepted: 07/12/2023] [Indexed: 07/24/2023]
Abstract
Cell motility is a crucial biological process that plays a critical role in the development of multicellular organisms and is essential for tissue formation and regeneration. However, uncontrolled cell motility can lead to the development of various diseases, including neoplasms. In this review, we discuss recent advances in the discovery of regulatory mechanisms underlying the metastatic spread of neuroblastoma, a solid pediatric tumor that originates in the embryonic migratory cells of the neural crest. The highly motile phenotype of metastatic neuroblastoma cells depends on intracellular and extracellular processes that, if targeted, could benefit the treatment of high-risk patients with neuroblastoma, for whom current therapies remain inadequate. The development of new potentially migration-inhibiting compounds and standardized preclinical approaches for the selection of anti-metastatic drugs in neuroblastoma will also be discussed.
Collapse
Affiliation(s)
- Diana Corallo
- Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy
| | - Marco Dalla Vecchia
- Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy
| | - Daria Lazic
- St. Anna Children's Cancer Research Institute, CCRI, Zimmermannplatz 10, 1090, Vienna, Austria
| | - Sabine Taschner-Mandl
- St. Anna Children's Cancer Research Institute, CCRI, Zimmermannplatz 10, 1090, Vienna, Austria
| | - Alessandra Biffi
- Pediatric Hematology, Oncology and Stem Cell Transplant Division, Woman's and Child Health Department, University of Padova, 35121 Padova, Italy
| | - Sanja Aveic
- Laboratory of Target Discovery and Biology of Neuroblastoma, Istituto di Ricerca Pediatrica (IRP), Fondazione Città della Speranza, 35127 Padova, Italy.
| |
Collapse
|
24
|
Körber N. MIA is an open-source standalone deep learning application for microscopic image analysis. CELL REPORTS METHODS 2023; 3:100517. [PMID: 37533647 PMCID: PMC10391334 DOI: 10.1016/j.crmeth.2023.100517] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 02/10/2023] [Accepted: 06/02/2023] [Indexed: 08/04/2023]
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep-learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats, which are compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets.
Collapse
Affiliation(s)
- Nils Körber
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
| |
Collapse
|
25
|
Zhang X, Almasian M, Hassan SS, Jotheesh R, Kadam VA, Polk AR, Saberigarakani A, Rahat A, Yuan J, Lee J, Carroll K, Ding Y. 4D Light-sheet imaging and interactive analysis of cardiac contractility in zebrafish larvae. APL Bioeng 2023; 7:026112. [PMID: 37351330 PMCID: PMC10283270 DOI: 10.1063/5.0153214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 06/05/2023] [Indexed: 06/24/2023] Open
Abstract
Despite ongoing efforts in cardiovascular research, the acquisition of high-resolution, high-speed images for assessing cardiac contraction remains challenging. Light-sheet fluorescence microscopy (LSFM) offers superior spatiotemporal resolution and minimal photodamage, providing an indispensable opportunity for the in vivo study of cardiac micro-structure and contractile function in zebrafish larvae. To track the myocardial architecture and contractility, we have developed an imaging strategy spanning LSFM system construction, retrospective synchronization, single-cell tracking, and user-directed virtual reality (VR) analysis. Our system enables the four-dimensional (4D) investigation of individual cardiomyocytes across the entire atrium and ventricle during multiple cardiac cycles in a zebrafish larva at cellular resolution. To enhance the throughput of our model reconstruction and assessment, we have developed a parallel computing-assisted algorithm for 4D synchronization, resulting in a nearly tenfold enhancement of reconstruction efficiency. The machine learning-based nuclei segmentation and VR-based interaction further allow us to quantify cellular dynamics in the myocardium from end-systole to end-diastole. Collectively, our strategy facilitates noninvasive cardiac imaging and user-directed data interpretation with improved efficiency and accuracy, holding great promise for characterizing functional changes and regional mechanics at the single-cell level during cardiac development and regeneration.
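The retrospective synchronization step described above aligns image sequences by cardiac phase after acquisition. A toy 1-D sketch of the underlying idea, phase alignment of periodic signals by maximizing circular cross-correlation (the sinusoidal signal and the shift are invented for illustration; the paper's parallelized 4D algorithm is far more involved):

```python
import numpy as np

def align_phase(ref: np.ndarray, sig: np.ndarray) -> int:
    """Return the circular shift that best aligns sig to ref,
    i.e. the argmax of the circular cross-correlation."""
    n = len(ref)
    corrs = [float(np.dot(ref, np.roll(sig, k))) for k in range(n)]
    return int(np.argmax(corrs))

# A periodic "cardiac" signal and a phase-shifted copy of it:
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False))
sig = np.roll(ref, -7)          # same cycle, out of phase by 7 samples
print(align_phase(ref, sig))    # recovers the shift: 7
```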
Collapse
Affiliation(s)
- Xinyuan Zhang
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Milad Almasian
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Sohail S. Hassan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Rosemary Jotheesh
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Vinay A. Kadam
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Austin R. Polk
- Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Alireza Saberigarakani
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Aayan Rahat
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Jie Yuan
- Department of Bioengineering, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, Texas 75080, USA
| | - Juhyun Lee
- Department of Bioengineering, The University of Texas at Arlington, Arlington, Texas 76019, USA
| | - Kelli Carroll
- Department of Biology, Austin College, Sherman, Texas 75090, USA
| | - Yichen Ding
- Author to whom correspondence should be addressed. Tel.: 972-883-7217
| |
Collapse
|
26
|
Van Os L, Engelhardt B, Guenat OT. Integration of immune cells in organs-on-chips: a tutorial. Front Bioeng Biotechnol 2023; 11:1191104. [PMID: 37324438 PMCID: PMC10267470 DOI: 10.3389/fbioe.2023.1191104] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Accepted: 05/10/2023] [Indexed: 06/17/2023] Open
Abstract
Viral and bacterial infections continue to pose significant challenges for numerous individuals globally. To develop novel therapies to combat infections, more insight into the actions of the human innate and adaptive immune system during infection is necessary. Human in vitro models, such as organs-on-chip (OOC) models, have proven to be a valuable addition to the tissue modeling toolbox. The incorporation of an immune component is needed to bring OOC models to the next level and enable them to mimic complex biological responses. The immune system affects many (patho)physiological processes in the human body, such as those taking place during an infection. This tutorial review introduces the reader to the building blocks of an OOC model of acute infection to investigate recruitment of circulating immune cells into the infected tissue. The multi-step extravasation cascade in vivo is described, followed by an in-depth guide on how to model this process on a chip. In addition to chip design, the creation of a chemotactic gradient, and the incorporation of endothelial, epithelial, and immune cells, the review focuses on the hydrogel extracellular matrix (ECM) needed to accurately model the interstitial space through which extravasated immune cells migrate towards the site of infection. Overall, this tutorial review is a practical guide for developing an OOC model of immune cell migration from the blood into the interstitial space during infection.
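When designing the chemotactic gradient discussed in the tutorial, a useful back-of-the-envelope check is the diffusive timescale t ≈ L²/(2D) for a chemoattractant to cross the hydrogel. A hedged sketch (the channel width and diffusion coefficient below are assumed values for illustration, not figures from the review):

```python
# Order-of-magnitude estimate of gradient establishment time across a
# hydrogel-filled channel, using the 1-D diffusion timescale t ~ L^2 / (2*D).
def diffusion_time_s(length_m: float, D_m2_s: float) -> float:
    """Characteristic time for diffusion over distance length_m."""
    return length_m ** 2 / (2.0 * D_m2_s)

# Assumed: 500 um channel, D ~ 1e-10 m^2/s for a small molecule in hydrogel.
t = diffusion_time_s(500e-6, 1e-10)
print(round(t / 60.0, 1))  # prints 20.8 (minutes)
```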
Collapse
Affiliation(s)
- Lisette Van Os
- Organs-on-Chip Technologies, ARTORG Center for Biomedical Engineering, University of Bern, Bern, Switzerland
- Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
| | | | - Olivier T. Guenat
- Organs-on-Chip Technologies, ARTORG Center for Biomedical Engineering, University of Bern, Bern, Switzerland
- Department of Pulmonary Medicine, Inselspital, University Hospital of Bern, Bern, Switzerland
- Department of General Thoracic Surgery, Inselspital, University Hospital of Bern, Bern, Switzerland
| |
Collapse
|
27
|
Malik H, Idris AS, Toha SF, Mohd Idris I, Daud MF, Azmi NL. A review of open-source image analysis tools for mammalian cell culture: algorithms, features and implementations. PeerJ Comput Sci 2023; 9:e1364. [PMID: 37346656 PMCID: PMC10280419 DOI: 10.7717/peerj-cs.1364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 04/04/2023] [Indexed: 06/23/2023]
Abstract
Cell culture is undeniably important for multiple scientific applications, including pharmaceuticals, transplants, and cosmetics. However, cell culture involves multiple manual steps, such as regularly analyzing cell images for their health and morphology. Computer scientists have developed algorithms to automate cell imaging analysis, but they are not widely adopted by biologists, especially those lacking an interactive platform. To address the issue, we compile and review existing open-source cell image processing tools that provide interactive interfaces for management and prediction tasks. We highlight the prediction tools that can detect, segment, and track different mammalian cell morphologies across various image modalities and present a comparison of algorithms and unique features of these tools, whether they work locally or in the cloud. This would guide non-experts to determine which is best suited for their purposes and, developers to acknowledge what is worth further expansion. In addition, we provide a general discussion on potential implementations of the tools for a more extensive scope, which guides the reader to not restrict them to prediction tasks only. Finally, we conclude the article by stating new considerations for the development of interactive cell imaging tools and suggesting new directions for future research.
Collapse
Affiliation(s)
- Hafizi Malik
- Healthcare Engineering and Rehabilitation Research, Department of Mechatronics Engineering, International Islamic University Malaysia, Gombak, Selangor, Malaysia
| | - Ahmad Syahrin Idris
- Department of Electrical and Electronic Engineering, University of Southampton Malaysia, Iskandar Puteri, Johor, Malaysia
| | - Siti Fauziah Toha
- Healthcare Engineering and Rehabilitation Research, Department of Mechatronics Engineering, International Islamic University Malaysia, Gombak, Selangor, Malaysia
| | - Izyan Mohd Idris
- Institute for Medical Research (IMR), National Institutes of Health (NIH), Ministry of Health Malaysia, Shah Alam, Selangor, Malaysia
| | - Muhammad Fauzi Daud
- Institute of Medical Science Technology, Universiti Kuala Lumpur, Kajang, Selangor, Malaysia
| | - Nur Liyana Azmi
- Healthcare Engineering and Rehabilitation Research, Department of Mechatronics Engineering, International Islamic University Malaysia, Gombak, Selangor, Malaysia
| |
Collapse
|
28
|
Rivera‐Arbeláez JM, Keekstra D, Cofiño‐Fabres C, Boonen T, Dostanic M, ten Den SA, Vermeul K, Mastrangeli M, van den Berg A, Segerink LI, Ribeiro MC, Strisciuglio N, Passier R. Automated assessment of human engineered heart tissues using deep learning and template matching for segmentation and tracking. Bioeng Transl Med 2023; 8:e10513. [PMID: 37206226 PMCID: PMC10189437 DOI: 10.1002/btm2.10513] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 02/27/2023] [Accepted: 03/08/2023] [Indexed: 05/21/2023] Open
Abstract
The high rate of drug withdrawal from the market due to cardiovascular toxicity or lack of efficacy, the economic burden, and the extremely long time before a compound reaches the market have increased the relevance of human in vitro models, such as human (patient-derived) pluripotent stem cell (hPSC)-derived engineered heart tissues (EHTs), for evaluating the efficacy and toxicity of compounds at an early phase of the drug development pipeline. Consequently, EHT contractile properties are highly relevant parameters for the analysis of cardiotoxicity, disease phenotype, and longitudinal measurements of cardiac function over time. In this study, we developed and validated the software HAARTA (Highly Accurate, Automatic and Robust Tracking Algorithm), which automatically analyzes contractile properties of EHTs by segmenting and tracking brightfield videos, using deep learning and template matching with sub-pixel precision. We demonstrate the robustness, accuracy, and computational efficiency of the software by comparing it to the state-of-the-art method (MUSCLEMOTION) and by testing it with a data set of EHTs from three different hPSC lines. HAARTA will facilitate standardized analysis of the contractile properties of EHTs, which will be beneficial for in vitro drug screening and longitudinal measurements of cardiac function.
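The template-matching step that tracking pipelines of this kind build on can be sketched as brute-force normalized cross-correlation. The sketch below is a generic illustration, not the HAARTA implementation, and omits the deep-learning segmentation and sub-pixel peak refinement:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Return the (row, col) of the best normalized-cross-correlation match.
    Brute-force sketch; real trackers refine the correlation peak to
    sub-pixel precision by interpolation."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Usage: paste a random patch into a blank frame and recover its position.
rng = np.random.default_rng(0)
frame = np.zeros((20, 20))
patch = rng.random((4, 4))
frame[5:9, 7:11] = patch
print(match_template(frame, patch))  # prints (5, 7)
```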
Collapse
Affiliation(s)
- José M. Rivera‐Arbeláez
- Department of Applied Stem Cell Technologies, TechMed Centre, University of Twente, Enschede, the Netherlands
- BIOS Lab on a Chip Group, MESA+ Institute for Nanotechnology, TechMed Centre, Max Planck Institute for Complex Fluid Dynamics, University of Twente, Enschede, the Netherlands
| | - Danjel Keekstra
- Data Management & Biometrics (DMB) Group, University of Twente, Enschede, the Netherlands
| | - Carla Cofiño‐Fabres
- Department of Applied Stem Cell Technologies, TechMed Centre, University of Twente, Enschede, the Netherlands
| | | | | | - Simone A. ten Den
- Department of Applied Stem Cell Technologies, TechMed Centre, University of Twente, Enschede, the Netherlands
| | - Kim Vermeul
- Department of Applied Stem Cell Technologies, TechMed Centre, University of Twente, Enschede, the Netherlands
| | | | - Albert van den Berg
- BIOS Lab on a Chip Group, MESA+ Institute for Nanotechnology, TechMed Centre, Max Planck Institute for Complex Fluid Dynamics, University of Twente, Enschede, the Netherlands
| | - Loes I. Segerink
- BIOS Lab on a Chip Group, MESA+ Institute for Nanotechnology, TechMed Centre, Max Planck Institute for Complex Fluid Dynamics, University of Twente, Enschede, the Netherlands
| | | | - Nicola Strisciuglio
- Data Management & Biometrics (DMB) Group, University of Twente, Enschede, the Netherlands
| | - Robert Passier
- Department of Applied Stem Cell Technologies, TechMed Centre, University of Twente, Enschede, the Netherlands
- Department of Anatomy and Embryology, Leiden University Medical Centre, Leiden, the Netherlands
| |
Collapse
|
29
|
Tsai HF, Podder S, Chen PY. Microsystem Advances through Integration with Artificial Intelligence. MICROMACHINES 2023; 14:826. [PMID: 37421059 PMCID: PMC10141994 DOI: 10.3390/mi14040826] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 04/04/2023] [Accepted: 04/06/2023] [Indexed: 07/09/2023]
Abstract
Microfluidics is a rapidly growing discipline that involves studying and manipulating fluids at reduced length scales and volumes, typically on the scale of micro- or nanoliters. At these reduced length scales, with their larger surface-to-volume ratios, microfluidics offers clear advantages: low reagent consumption, faster reaction kinetics, and more compact systems. However, miniaturization of microfluidic chips and systems introduces the challenge of stricter tolerances in designing and controlling them for interdisciplinary applications. Recent advances in artificial intelligence (AI) have brought innovation to microfluidics, from design, simulation, automation, and optimization to bioanalysis and data analytics. In microfluidics, the Navier-Stokes equations, partial differential equations describing viscous fluid motion that in their complete form have no known general analytical solution, can be simplified and approximated numerically with fair accuracy because inertia is low and flow is laminar. Approximation using neural networks trained on rules of physical knowledge introduces a new possibility for predicting the physicochemical behavior of a system. The combination of microfluidics and automation can produce large amounts of data, from which machine learning can extract features and patterns that are difficult for a human to discern. Integration with AI therefore has the potential to revolutionize the microfluidic workflow by enabling precision control and automated data analysis. Deployment of smart microfluidics may be tremendously beneficial in various future applications, including high-throughput drug discovery, rapid point-of-care testing (POCT), and personalized medicine. In this review, we summarize key microfluidic advances integrated with AI and discuss the outlook and possibilities of combining AI and microfluidics.
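The claim that microfluidic flow is laminar and low-inertia, which justifies simplifying the Navier-Stokes equations, can be checked with the Reynolds number Re = ρ·v·Dh/μ. The channel geometry and flow speed below are assumed values chosen only to illustrate the magnitude:

```python
# Reynolds number for channel flow; Re << ~2000 means laminar.
def reynolds(rho_kg_m3: float, v_m_s: float, d_h_m: float, mu_pa_s: float) -> float:
    """Re = density * velocity * hydraulic diameter / dynamic viscosity."""
    return rho_kg_m3 * v_m_s * d_h_m / mu_pa_s

# Assumed: water (rho = 1000 kg/m^3, mu = 1e-3 Pa.s) flowing at 1 mm/s
# through a 100 um channel -- deep in the laminar regime.
print(round(reynolds(1000.0, 1e-3, 100e-6, 1e-3), 3))  # prints 0.1
```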
Collapse
Affiliation(s)
- Hsieh-Fu Tsai
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan;
- Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan
- Center for Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan
| | - Soumyajit Podder
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan;
| | - Pin-Yuan Chen
- Department of Biomedical Engineering, Chang Gung University, Taoyuan City 333, Taiwan;
- Department of Neurosurgery, Chang Gung Memorial Hospital, Keelung, Keelung City 204, Taiwan
| |
Collapse
|
30
|
Park SA, Sipka T, Krivá Z, Lutfalla G, Nguyen-Chi M, Mikula K. Segmentation-based tracking of macrophages in 2D+time microscopy movies inside a living animal. Comput Biol Med 2023; 153:106499. [PMID: 36599208 DOI: 10.1016/j.compbiomed.2022.106499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 12/19/2022] [Accepted: 12/27/2022] [Indexed: 12/31/2022]
Abstract
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, partial trajectories for cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the trajectories obtained with the proposed method are compared with those from semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy for macrophage data under challenging conditions: weak fluorescence intensity and the irregular shapes and motion of macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
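Otsu's thresholding, one stage of the segmentation pipeline above, selects the gray level that maximizes between-class variance of the intensity histogram. A generic global-threshold sketch (the paper applies it locally, after space-time filtering):

```python
import numpy as np

def otsu_threshold(values: np.ndarray, nbins: int = 256) -> float:
    """Otsu's method: choose the threshold maximizing the between-class
    variance w0*w1*(mu0-mu1)^2 over the intensity histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for k in range(1, nbins):
        w0, w1 = w[:k].sum(), w[k:].sum()
        if w0 == 0 or w1 == 0:
            continue  # all mass on one side; not a valid split
        mu0 = (w[:k] * centers[:k]).sum() / w0
        mu1 = (w[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t

# Usage: a bimodal image (dim background at 10, bright cells at 200)
# yields a threshold strictly between the two modes.
pixels = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(pixels)
print(10.0 < t < 200.0)  # prints True
```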
Collapse
Affiliation(s)
- Seol Ah Park
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia.
| | - Tamara Sipka
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France.
| | - Zuzana Krivá
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia.
| | - Georges Lutfalla
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France.
| | - Mai Nguyen-Chi
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France.
| | - Karol Mikula
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia.
| |
Collapse
|
31
|
Antonello P, Morone D, Pirani E, Uguccioni M, Thelen M, Krause R, Pizzagalli DU. Tracking unlabeled cancer cells imaged with low resolution in wide migration chambers via U-NET class-1 probability (pseudofluorescence). J Biol Eng 2023; 17:5. [PMID: 36694208 PMCID: PMC9872392 DOI: 10.1186/s13036-022-00321-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 12/27/2022] [Indexed: 01/26/2023] Open
Abstract
Cell migration is a pivotal biological process whose dysregulation is found in many diseases, including inflammation and cancer. Advances in microscopy technologies now allow cell migration to be studied in vitro, within engineered microenvironments that resemble in vivo conditions. However, to capture an entire 3D migration chamber for extended periods of time and with high temporal resolution, images are generally acquired at low resolution, which poses a challenge for data analysis. Indeed, cell detection and tracking are hampered by the large pixel size (i.e., cell diameters down to 2 pixels), a possibly low signal-to-noise ratio, and distortions in cell shape due to changes in z-axis position. Although fluorescent staining can be used to facilitate cell detection, it may alter cell behavior and may suffer from fluorescence loss over time (photobleaching). Here we describe a protocol that employs an established deep learning method (U-NET) to convert the transmitted light (TL) signal from unlabeled cells imaged at low resolution into a fluorescent-like signal (class-1 probability). We demonstrate its application to the study of cancer cell migration, obtaining a significant improvement in tracking accuracy while not suffering from photobleaching. This is reflected in the possibility of tracking cells for three-fold longer periods of time. To facilitate application of the protocol, we provide WID-U, an open-source plugin for FIJI and Imaris imaging software, the training dataset used in this paper, and the code to train the network for custom experimental settings.
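The tracking-accuracy gain reported above ultimately comes from linking detections across frames more reliably. Below is a deliberately simplified stand-in for that linking step, greedy nearest-neighbour assignment; the actual protocol instead feeds U-NET class-1 probability maps into FIJI/Imaris trackers, and the coordinates and distance threshold here are invented:

```python
import numpy as np

def link_frames(prev_pts: np.ndarray, next_pts: np.ndarray, max_dist: float = 10.0):
    """Greedily link each detection in the previous frame to its nearest
    unclaimed detection in the next frame, within max_dist pixels."""
    links, used = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)
        d[list(used)] = np.inf  # each next-frame detection claimed once
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            links.append((i, j))
            used.add(j)
    return links

# Usage: two cells drifting ~1 px between frames are linked 0->0, 1->1.
prev = np.array([[0.0, 0.0], [10.0, 10.0]])
nxt = np.array([[1.0, 0.0], [11.0, 10.0]])
print(link_frames(prev, nxt))  # prints [(0, 0), (1, 1)]
```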
Collapse
Affiliation(s)
- Paola Antonello
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Graduate School of Cellular and Molecular Sciences, University of Bern, CH-3012 Bern, Switzerland
| | - Diego Morone
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Graduate School of Cellular and Molecular Sciences, University of Bern, CH-3012 Bern, Switzerland
| | - Edisa Pirani
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
| | - Mariagrazia Uguccioni
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
| | - Marcus Thelen
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
| | - Rolf Krause
- Università della Svizzera italiana, Euler Institute, CH-6962 Lugano-Viganello, Switzerland
- FernUni, Faculty of Mathematics and Informatics, Brig, Switzerland
| | - Diego Ulisse Pizzagalli
- Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Università della Svizzera italiana, Euler Institute, CH-6962 Lugano-Viganello, Switzerland
| |
Collapse
|
32
|
BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022; 8:99. [PMID: 36529755 PMCID: PMC9760640 DOI: 10.1038/s41522-022-00362-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 11/29/2022] [Indexed: 12/23/2022] Open
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
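The core idea of BCM3D 2.0, translating raw images into intermediate representations that are easier to process with conventional mathematical tools, can be caricatured in 2D with a Euclidean distance transform whose thresholded peaks separate touching objects. This sketch assumes SciPy is available and is only an analogy to the CNN-generated representations used in the paper:

```python
import numpy as np
from scipy import ndimage

def segment_touching(mask: np.ndarray, seed_frac: float = 0.6):
    """Derive an intermediate representation (distance transform) from a
    binary mask, then label its thresholded peaks to split touching cells."""
    edt = ndimage.distance_transform_edt(mask)
    seeds, n = ndimage.label(edt > seed_frac * edt.max())
    return seeds, n

# Usage: two square "cells" joined by a one-pixel bridge form a single
# connected component, but their distance-transform peaks yield two seeds.
mask = np.zeros((9, 17), dtype=int)
mask[1:8, 1:8] = 1    # cell 1
mask[1:8, 9:16] = 1   # cell 2
mask[4, 8] = 1        # thin bridge joining them
_, n = segment_touching(mask)
print(n)  # prints 2
```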
Collapse
|
33
|
Computational Analysis of Cardiac Contractile Function. Curr Cardiol Rep 2022; 24:1983-1994. [PMID: 36301405 PMCID: PMC10091868 DOI: 10.1007/s11886-022-01814-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/14/2022] [Indexed: 01/11/2023]
Abstract
PURPOSE OF REVIEW Heart failure has a high incidence and mortality worldwide. Mechanical properties of the myocardium are critical determinants of cardiac function, with regional variations in myocardial contractility demonstrated within infarcted ventricles. Quantitative assessment of cardiac contractile function is therefore critical to identify myocardial infarction for early diagnosis and therapeutic intervention. RECENT FINDINGS Current advances in the assessment of cardiac function keep pace with the development of imaging techniques. Methods tailored to advanced imaging have been widely used in cardiac magnetic resonance, echocardiography, and optical microscopy. In this review, we introduce fundamental concepts and applications of representative methods for each imaging modality used in both fundamental research and clinical investigations. All these methods have been designed or developed to quantify time-dependent 2-dimensional (2D) or 3D cardiac mechanics, holding great potential to unravel global or regional myocardial deformation and contractile function from end-systole to end-diastole. Computational methods to assess cardiac contractile function provide quantitative insight into the analysis of myocardial mechanics during cardiac development, injury, and remodeling.
Collapse
|
34
|
Wu Y, Wu S, Wang X, Lang C, Zhang Q, Wen Q, Xu T. Rapid detection and recognition of whole brain activity in a freely behaving Caenorhabditis elegans. PLoS Comput Biol 2022; 18:e1010594. [PMID: 36215325 PMCID: PMC9584436 DOI: 10.1371/journal.pcbi.1010594] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 10/20/2022] [Accepted: 09/22/2022] [Indexed: 11/07/2022] Open
Abstract
Advanced volumetric imaging methods and genetically encoded activity indicators have permitted a comprehensive characterization of whole brain activity at single neuron resolution in Caenorhabditis elegans. The constant motion and deformation of the nematode nervous system, however, pose a great challenge for consistent identification of densely packed neurons in a behaving animal. Here, we propose a cascade solution for long-term and rapid recognition of head ganglion neurons in a freely moving C. elegans. First, potential neuronal regions from a stack of fluorescence images are detected by a deep learning algorithm. Second, 2-dimensional neuronal regions are fused into 3-dimensional neuron entities. Third, by exploiting the neuronal density distribution surrounding a neuron and relative positional information between neurons, a multi-class artificial neural network transforms engineered neuronal feature vectors into digital neuronal identities. With a small number of training samples, our bottom-up approach is able to process each volume (1024 × 1024 × 18 voxels) in less than 1 second and achieves an accuracy of 91% in neuronal detection and above 80% in neuronal tracking over a long video recording. Our work represents a step towards rapid and fully automated algorithms for decoding whole brain activity underlying naturalistic behaviors. An important question in neuroscience is to understand the relationship between brain dynamics and naturalistic behaviors when an animal is freely exploring its environment. In the last decade, it has become possible to genetically engineer animals whose neurons produce fluorescence reporters that change their brightness in response to brain activity. In small animals such as the nematode C. elegans, we can now record fluorescence changes in, and thereby infer neural activity from, most neurons in the head of a worm while the animal is freely moving. These neurons are densely packed in a small volume.
Since the brain and body are moving and their shapes undergo significant deformation, a human expert, even after long hours of inspection, may still have difficulty telling where the neurons are and which neuron is which. We sought to develop an automatic method for rapidly detecting and tracking most of these neurons in a moving animal. To do this, we asked a human expert to annotate all head neurons, including their locations and digital identities, across a small number of volumes. Then, we trained a computer to learn the locations and digital identities of these neurons across different imaging volumes. Our machine inference method is fast and accurate. While it takes a human expert several hours to complete a sequence of volumes, a machine can finish the task in a few minutes. We hope our method provides a better and more efficient engine for extracting knowledge from whole brain imaging datasets and animal behaviors.
Collapse
Affiliation(s)
- Yuxiang Wu
- Chinese Academy of Sciences Key Laboratory of Brain Function and Diseases, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
| | - Shang Wu
- Chinese Academy of Sciences Key Laboratory of Brain Function and Diseases, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
| | - Xin Wang
- John Hopcroft Center for Computer Science, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Chengtian Lang
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, China
| | - Quanshi Zhang
- John Hopcroft Center for Computer Science, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Quan Wen
- Chinese Academy of Sciences Key Laboratory of Brain Function and Diseases, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- * E-mail: (QW); (TX)
| | - Tianqi Xu
- Chinese Academy of Sciences Key Laboratory of Brain Function and Diseases, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Hefei National Laboratory for Physical Sciences at the Microscale, Center for Integrative Imaging, University of Science and Technology of China, Hefei, China
- * E-mail: (QW); (TX)
| |
Collapse
|
35
|
Hilzenrat G, Gill ET, McArthur SL. Imaging approaches for monitoring three-dimensional cell and tissue culture systems. JOURNAL OF BIOPHOTONICS 2022; 15:e202100380. [PMID: 35357086 DOI: 10.1002/jbio.202100380] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 03/27/2022] [Accepted: 03/28/2022] [Indexed: 06/14/2023]
Abstract
The past decade has seen an increasing demand for more complex, reproducible and physiologically relevant tissue cultures that can mimic the structural and biological features of living tissues. Monitoring the viability, development and responses of such tissues in real time is challenging due to the complexities of cell culture physical characteristics and the environments in which these cultures need to be maintained. Significant developments in optics, such as optical manipulation, improved detection and data analysis, have made optical imaging a preferred choice for many three-dimensional (3D) cell culture monitoring applications. The aim of this review is to discuss the challenges associated with imaging and monitoring 3D tissues and cell cultures, and to highlight topical label-free imaging tools that enable bioengineers and biophysicists to non-invasively characterise engineered living tissues.
Collapse
Affiliation(s)
- Geva Hilzenrat
- Bioengineering Engineering Group, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Biomedical Manufacturing, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Clayton, Victoria, Australia
| | - Emma T Gill
- Bioengineering Engineering Group, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Biomedical Manufacturing, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Clayton, Victoria, Australia
| | - Sally L McArthur
- Bioengineering Engineering Group, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Biomedical Manufacturing, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Clayton, Victoria, Australia
| |
Collapse
|
36
|
Skuhersky M, Wu T, Yemini E, Nejatbakhsh A, Boyden E, Tegmark M. Toward a more accurate 3D atlas of C. elegans neurons. BMC Bioinformatics 2022; 23:195. [PMID: 35643434 PMCID: PMC9145532 DOI: 10.1186/s12859-022-04738-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 05/17/2022] [Indexed: 11/10/2022] Open
Abstract
Background
Determining cell identity in volumetric images of tagged neuronal nuclei is an ongoing challenge in contemporary neuroscience. Frequently, cell identity is determined by aligning and matching tags to an “atlas” of labeled neuronal positions and other identifying characteristics. Previous analyses of such C. elegans datasets have been hampered by the limited accuracy of such atlases, especially for neurons present in the ventral nerve cord, and also by time-consuming manual elements of the alignment process.
Results
We present a novel automated alignment method for sparse and incomplete point clouds of the sort resulting from typical C. elegans fluorescence microscopy datasets. This method involves a tunable learning parameter and a kernel that enforces biologically realistic deformation. We also present a pipeline for creating alignment atlases from datasets of the recently developed NeuroPAL transgene. In combination, these advances allow us to label neurons in volumetric images with confidence much higher than previous methods.
Conclusions
We release, to the best of our knowledge, the most complete full-body C. elegans 3D positional neuron atlas, incorporating positional variability derived from at least 7 animals per neuron, for the purposes of cell-type identity prediction for myriad applications (e.g., imaging neuronal activity, gene expression, and cell-fate).
Collapse
|
37
|
Connecting the dots in ethology: applying network theory to understand neural and animal collectives. Curr Opin Neurobiol 2022; 73:102532. [DOI: 10.1016/j.conb.2022.102532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Revised: 02/04/2022] [Accepted: 03/02/2022] [Indexed: 11/24/2022]
|
38
|
Bioimaging approaches for quantification of individual cell behavior during cell fate decisions. Biochem Soc Trans 2022; 50:513-527. [PMID: 35166330 DOI: 10.1042/bst20210534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 01/10/2022] [Accepted: 01/24/2022] [Indexed: 11/17/2022]
Abstract
Tracking individual cells has allowed a new understanding of cellular behavior in human health and disease by adding a dynamic component to the already complex heterogeneity of single cells. Technically, despite countless advances, numerous experimental variables can affect data collection and interpretation and need to be considered. In this review, we discuss the main technical aspects and biological findings in the analysis of the behavior of individual cells. We discuss the most relevant contributions provided by these approaches in clinically relevant human conditions such as embryo development, stem cell biology, inflammation, cancer and microbiology, along with the cellular mechanisms and molecular pathways underlying these conditions. We also discuss the key technical aspects to be considered when planning and performing experiments involving the analysis of individual cells over long periods. Despite the challenges in automatic detection, feature extraction and long-term tracking that need to be tackled, the potential impact of single-cell bioimaging is enormous in understanding the pathogenesis and development of new therapies in human pathophysiology.
Collapse
|
39
|
Hernández-Herrera P, Ugartechea-Chirino Y, Torres-Martínez HH, Arzola AV, Chairez-Veloz JE, García-Ponce B, Sánchez MDLP, Garay-Arroyo A, Álvarez-Buylla ER, Dubrovsky JG, Corkidi G. Live Plant Cell Tracking: Fiji plugin to analyze cell proliferation dynamics and understand morphogenesis. PLANT PHYSIOLOGY 2022; 188:846-860. [PMID: 34791452 PMCID: PMC8825436 DOI: 10.1093/plphys/kiab530] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 10/19/2021] [Indexed: 05/13/2023]
Abstract
Arabidopsis (Arabidopsis thaliana) primary and lateral roots (LRs) are well suited for 3D and 4D microscopy, and their development provides an ideal system for studying morphogenesis and cell proliferation dynamics. With fast-advancing microscopy techniques used for live imaging, whole-tissue data are increasingly available, yet present the great challenge of analyzing complex interactions within cell populations. We developed a plugin, "Live Plant Cell Tracking" (LiPlaCeT), coupled to the publicly available ImageJ image analysis program and generated a pipeline that allows, with the aid of LiPlaCeT, 4D cell tracking and lineage analysis of populations of dividing and growing cells. The LiPlaCeT plugin contains ad hoc ergonomic curating tools, making it very simple to use for manual cell tracking, especially when the signal-to-noise ratio of images is low or variable in time or 3D space and when automated methods may fail. Performing time-lapse experiments and using cell-tracking data extracted with the assistance of LiPlaCeT, we accomplished deep analyses of cell proliferation and clonal relations in whole developing LR primordia and constructed genealogical trees. We also used cell-tracking data for endodermis cells of the root apical meristem (RAM) and performed automated analyses of cell population dynamics using ParaView software (also publicly available). Using the RAM as an example, we also showed how LiPlaCeT can be used to generate information at the whole-tissue level regarding cell length, cell position, cell growth rate, cell displacement rate, and proliferation activity. The pipeline will be useful in live-imaging studies of roots and other plant organs to understand complex interactions within proliferating and growing cell populations. The plugin includes a step-by-step user manual and a dataset example that are available at https://www.ibt.unam.mx/documentos/diversos/LiPlaCeT.zip.
Collapse
Affiliation(s)
- Paul Hernández-Herrera
- Laboratorio de Imágenes y Visión por Computadora, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Yamel Ugartechea-Chirino
- Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Héctor H Torres-Martínez
- Departamento de Biología Molecular de Plantas, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Alejandro V Arzola
- Instituto de Física, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - José Eduardo Chairez-Veloz
- Departamento de Control Automático, Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Cd. de México, C.P. 07350, Mexico
| | - Berenice García-Ponce
- Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - María de la Paz Sánchez
- Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Adriana Garay-Arroyo
- Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Elena R Álvarez-Buylla
- Departamento de Ecología Funcional, Instituto de Ecología, Laboratorio de Genética Molecular, Epigenética, Desarrollo y Evolución de Plantas, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
- Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Joseph G Dubrovsky
- Departamento de Biología Molecular de Plantas, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| | - Gabriel Corkidi
- Laboratorio de Imágenes y Visión por Computadora, Instituto de Biotecnología, Universidad Nacional Autónoma de México, Cd. de México, C.P. 04510, Mexico
| |
Collapse
|
40
|
Sugawara K, Çevrim Ç, Averof M. Tracking cell lineages in 3D by incremental deep learning. eLife 2022; 11:e69380. [PMID: 34989675 PMCID: PMC8741210 DOI: 10.7554/elife.69380] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 12/07/2021] [Indexed: 11/13/2022] Open
Abstract
Deep learning is emerging as a powerful approach for bioimage analysis. Its use in cell tracking is limited by the scarcity of annotated data for the training of deep-learning models. Moreover, annotation, training, prediction, and proofreading currently lack a unified user interface. We present ELEPHANT, an interactive platform for 3D cell tracking that addresses these challenges by taking an incremental approach to deep learning. ELEPHANT provides an interface that seamlessly integrates cell track annotation, deep learning, prediction, and proofreading. This enables users to implement cycles of incremental learning starting from a few annotated nuclei. Successive prediction-validation cycles enrich the training data, leading to rapid improvements in tracking performance. We test the software's performance against state-of-the-art methods and track lineages spanning the entire course of leg regeneration in a crustacean over 1 week (504 timepoints). ELEPHANT yields accurate, fully-validated cell lineages with a modest investment in time and effort.
Collapse
Affiliation(s)
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
| | - Çağrı Çevrim
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
| | - Michalis Averof
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
| |
Collapse
|
41
|
Abdelfattah AS, Ahuja S, Akkin T, Allu SR, Brake J, Boas DA, Buckley EM, Campbell RE, Chen AI, Cheng X, Čižmár T, Costantini I, De Vittorio M, Devor A, Doran PR, El Khatib M, Emiliani V, Fomin-Thunemann N, Fainman Y, Fernandez-Alfonso T, Ferri CGL, Gilad A, Han X, Harris A, Hillman EMC, Hochgeschwender U, Holt MG, Ji N, Kılıç K, Lake EMR, Li L, Li T, Mächler P, Miller EW, Mesquita RC, Nadella KMNS, Nägerl UV, Nasu Y, Nimmerjahn A, Ondráčková P, Pavone FS, Perez Campos C, Peterka DS, Pisano F, Pisanello F, Puppo F, Sabatini BL, Sadegh S, Sakadzic S, Shoham S, Shroff SN, Silver RA, Sims RR, Smith SL, Srinivasan VJ, Thunemann M, Tian L, Tian L, Troxler T, Valera A, Vaziri A, Vinogradov SA, Vitale F, Wang LV, Uhlířová H, Xu C, Yang C, Yang MH, Yellen G, Yizhar O, Zhao Y. Neurophotonic tools for microscopic measurements and manipulation: status report. NEUROPHOTONICS 2022; 9:013001. [PMID: 35493335 PMCID: PMC9047450 DOI: 10.1117/1.nph.9.s1.013001] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Neurophotonics was launched in 2014, coinciding with the launch of the BRAIN Initiative focused on the development of technologies for the advancement of neuroscience. For the last seven years, Neurophotonics' agenda has been well aligned with this focus on neurotechnologies, featuring new optical methods and tools applicable to brain studies. While the BRAIN Initiative 2.0 is pivoting towards applications of these novel tools in the quest to understand the brain, this status report reviews an extensive and diverse toolkit of novel methods to explore brain function that have emerged from the BRAIN Initiative and related large-scale efforts for the measurement and manipulation of brain structure and function. Here, we focus on neurophotonic tools mostly applicable to animal studies. A companion report, scheduled to appear later this year, will cover diffuse optical imaging methods applicable to noninvasive human studies. For each domain, we outline the current state of the art of the respective technologies, identify the areas where innovation is needed, and provide an outlook on future directions.
Collapse
Affiliation(s)
- Ahmed S. Abdelfattah
- Brown University, Department of Neuroscience, Providence, Rhode Island, United States
| | - Sapna Ahuja
- University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
| | - Taner Akkin
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, Minnesota, United States
| | - Srinivasa Rao Allu
- University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
| | - Joshua Brake
- Harvey Mudd College, Department of Engineering, Claremont, California, United States
| | - David A. Boas
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Erin M. Buckley
- Georgia Institute of Technology and Emory University, Wallace H. Coulter Department of Biomedical Engineering, Atlanta, Georgia, United States
- Emory University, Department of Pediatrics, Atlanta, Georgia, United States
| | - Robert E. Campbell
- University of Tokyo, Department of Chemistry, Tokyo, Japan
- University of Alberta, Department of Chemistry, Edmonton, Alberta, Canada
| | - Anderson I. Chen
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Xiaojun Cheng
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Tomáš Čižmár
- Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
| | - Irene Costantini
- University of Florence, European Laboratory for Non-Linear Spectroscopy, Department of Biology, Florence, Italy
- National Institute of Optics, National Research Council, Rome, Italy
| | - Massimo De Vittorio
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
| | - Anna Devor
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Massachusetts General Hospital, Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States
| | - Patrick R. Doran
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Mirna El Khatib
- University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States
- University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
| | | | - Natalie Fomin-Thunemann
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Yeshaiahu Fainman
- University of California San Diego, Department of Electrical and Computer Engineering, La Jolla, California, United States
| | - Tomas Fernandez-Alfonso
- University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
| | - Christopher G. L. Ferri
- University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
| | - Ariel Gilad
- The Hebrew University of Jerusalem, Institute for Medical Research Israel–Canada, Department of Medical Neurobiology, Faculty of Medicine, Jerusalem, Israel
| | - Xue Han
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Andrew Harris
- Weizmann Institute of Science, Department of Brain Sciences, Rehovot, Israel
| | | | - Ute Hochgeschwender
- Central Michigan University, Department of Neuroscience, Mount Pleasant, Michigan, United States
| | - Matthew G. Holt
- University of Porto, Instituto de Investigação e Inovação em Saúde (i3S), Porto, Portugal
| | - Na Ji
- University of California Berkeley, Department of Physics, Berkeley, California, United States
| | - Kıvılcım Kılıç
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Evelyn M. R. Lake
- Yale School of Medicine, Department of Radiology and Biomedical Imaging, New Haven, Connecticut, United States
| | - Lei Li
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
| | - Tianqi Li
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, Minnesota, United States
| | - Philipp Mächler
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - Evan W. Miller
- University of California Berkeley, Departments of Chemistry and Molecular & Cell Biology and Helen Wills Neuroscience Institute, Berkeley, California, United States
| | | | | | - U. Valentin Nägerl
- Interdisciplinary Institute for Neuroscience University of Bordeaux & CNRS, Bordeaux, France
| | - Yusuke Nasu
- University of Tokyo, Department of Chemistry, Tokyo, Japan
| | - Axel Nimmerjahn
- Salk Institute for Biological Studies, Waitt Advanced Biophotonics Center, La Jolla, California, United States
| | - Petra Ondráčková
- Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
| | - Francesco S. Pavone
- National Institute of Optics, National Research Council, Rome, Italy
- University of Florence, European Laboratory for Non-Linear Spectroscopy, Department of Physics, Florence, Italy
| | - Citlali Perez Campos
- Columbia University, Zuckerman Mind Brain Behavior Institute, New York, United States
| | - Darcy S. Peterka
- Columbia University, Zuckerman Mind Brain Behavior Institute, New York, United States
| | - Filippo Pisano
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
| | - Ferruccio Pisanello
- Istituto Italiano di Tecnologia, Center for Biomolecular Nanotechnologies, Arnesano, Italy
| | - Francesca Puppo
- University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
| | - Bernardo L. Sabatini
- Harvard Medical School, Howard Hughes Medical Institute, Department of Neurobiology, Boston, Massachusetts, United States
| | - Sanaz Sadegh
- University of California San Diego, Departments of Neurosciences, La Jolla, California, United States
| | - Sava Sakadzic
- Massachusetts General Hospital, Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts, United States
| | - Shy Shoham
- New York University Grossman School of Medicine, Tech4Health and Neuroscience Institutes, New York, New York, United States
| | - Sanaya N. Shroff
- Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
| | - R. Angus Silver
- University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
| | - Ruth R. Sims
- Sorbonne University, INSERM, CNRS, Institut de la Vision, Paris, France
- Spencer L. Smith, University of California Santa Barbara, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
- Vivek J. Srinivasan, New York University Langone Health, Departments of Ophthalmology and Radiology, New York, New York, United States
- Martin Thunemann, Boston University, Department of Biomedical Engineering, Boston, Massachusetts, United States
- Lei Tian, Boston University, Departments of Electrical Engineering and Biomedical Engineering, Boston, Massachusetts, United States
- Lin Tian, University of California Davis, Department of Biochemistry and Molecular Medicine, Davis, California, United States
- Thomas Troxler, University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Antoine Valera, University College London, Department of Neuroscience, Physiology and Pharmacology, London, United Kingdom
- Alipasha Vaziri, Rockefeller University, Laboratory of Neurotechnology and Biophysics, New York, New York, United States; The Rockefeller University, The Kavli Neural Systems Institute, New York, New York, United States
- Sergei A. Vinogradov, University of Pennsylvania, Perelman School of Medicine, Department of Biochemistry and Biophysics, Philadelphia, Pennsylvania, United States; University of Pennsylvania, School of Arts and Sciences, Department of Chemistry, Philadelphia, Pennsylvania, United States
- Flavia Vitale, Center for Neuroengineering and Therapeutics, Departments of Neurology, Bioengineering, Physical Medicine and Rehabilitation, Philadelphia, Pennsylvania, United States
- Lihong V. Wang, California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
- Hana Uhlířová, Institute of Scientific Instruments of the Czech Academy of Sciences, Brno, Czech Republic
- Chris Xu, Cornell University, School of Applied and Engineering Physics, Ithaca, New York, United States
- Changhuei Yang, California Institute of Technology, Departments of Electrical Engineering, Bioengineering and Medical Engineering, Pasadena, California, United States
- Mu-Han Yang, University of California San Diego, Department of Electrical and Computer Engineering, La Jolla, California, United States
- Gary Yellen, Harvard Medical School, Department of Neurobiology, Boston, Massachusetts, United States
- Ofer Yizhar, Weizmann Institute of Science, Department of Brain Sciences, Rehovot, Israel
- Yongxin Zhao, Carnegie Mellon University, Department of Biological Sciences, Pittsburgh, Pennsylvania, United States
42. Wen C, Kimura K. Tracking Moving Cells in 3D Time Lapse Images Using 3DeeCellTracker. Bio Protoc 2022; 12:e4319. [DOI: 10.21769/bioprotoc.4319]
43. Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. [DOI: 10.1016/j.tcb.2021.12.004]
44. Naert T, Çiçek Ö, Ogar P, Bürgi M, Shaidani NI, Kaminski MM, Xu Y, Grand K, Vujanovic M, Prata D, Hildebrandt F, Brox T, Ronneberger O, Voigt FF, Helmchen F, Loffing J, Horb ME, Willsey HR, Lienkamp SS. Deep learning is widely applicable to phenotyping embryonic development and disease. Development 2021; 148:273338. [PMID: 34739029 PMCID: PMC8602947 DOI: 10.1242/dev.199664]
Abstract
Genome editing simplifies the generation of new animal models for congenital disorders. However, the detailed and unbiased phenotypic assessment of altered embryonic development remains a challenge. Here, we explore how deep learning (U-Net) can automate segmentation tasks in various imaging modalities, and we quantify phenotypes of altered renal, neural and craniofacial development in Xenopus embryos in comparison with normal variability. We demonstrate the utility of this approach in embryos with polycystic kidneys (pkd1 and pkd2) and craniofacial dysmorphia (six1). We highlight how in toto light-sheet microscopy facilitates accurate reconstruction of brain and craniofacial structures within X. tropicalis embryos upon dyrk1a and six1 loss of function or treatment with retinoic acid inhibitors. These tools increase the sensitivity and throughput of evaluating developmental malformations caused by chemical or genetic disruption. Furthermore, we provide a library of pre-trained networks and detailed instructions for applying deep learning to the reader's own datasets. We demonstrate the versatility, precision and scalability of deep neural network phenotyping on embryonic disease models. By combining light-sheet microscopy and deep learning, we provide a framework for higher-throughput characterization of embryonic model organisms. This article has an associated 'The people behind the papers' interview.
Affiliation(s)
- Thomas Naert, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Özgün Çiçek, Department of Computer Science, Albert-Ludwigs-University, Freiburg 79100, Germany
- Paulina Ogar, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Max Bürgi, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Nikko-Ideen Shaidani, National Xenopus Resource and Eugene Bell Center for Regenerative Biology and Tissue Engineering, Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Michael M Kaminski, Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin 10115, Germany; Department of Nephrology and Medical Intensive Care, Charité Universitätsmedizin Berlin, Berlin 10117, Germany
- Yuxiao Xu, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Kelli Grand, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Marko Vujanovic, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Daniel Prata, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Friedhelm Hildebrandt, Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Thomas Brox, Department of Computer Science, Albert-Ludwigs-University, Freiburg 79100, Germany
- Olaf Ronneberger, Department of Computer Science, Albert-Ludwigs-University, Freiburg 79100, Germany; BIOSS Centre for Biological Signalling Studies, Albert-Ludwigs-University, Freiburg, Germany; DeepMind, London WC2H 8AG, UK
- Fabian F Voigt, Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich 8057, Switzerland; Neuroscience Center Zurich, Zurich 8057, Switzerland
- Fritjof Helmchen, Laboratory of Neural Circuit Dynamics, Brain Research Institute, University of Zurich, Zurich 8057, Switzerland; Neuroscience Center Zurich, Zurich 8057, Switzerland
- Johannes Loffing, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
- Marko E Horb, National Xenopus Resource and Eugene Bell Center for Regenerative Biology and Tissue Engineering, Marine Biological Laboratory, Woods Hole, MA 02543, USA
- Helen Rankin Willsey, Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Soeren S Lienkamp, Institute of Anatomy, University of Zurich, Zurich 8057, Switzerland; Swiss National Centre of Competence in Research (NCCR) Kidney Control of Homeostasis (Kidney.CH), Zurich 8057, Switzerland
45. Laine RF, Arganda-Carreras I, Henriques R, Jacquemet G. Avoiding a replication crisis in deep-learning-based bioimage analysis. Nat Methods 2021; 18:1136-1144. [PMID: 34608322 PMCID: PMC7611896 DOI: 10.1038/s41592-021-01284-3]
Abstract
Deep learning algorithms are powerful tools to analyse, restore and transform bioimaging data, increasingly used in life sciences research. These approaches now outperform most other algorithms for a broad range of image analysis tasks. In particular, one of the promises of deep learning is the possibility to provide parameter-free, one-click data analysis achieving expert-level performances in a fraction of the time previously required. However, as with most new and upcoming technologies, the potential for inappropriate use is raising concerns among the biomedical research community. This perspective aims to provide a short overview of key concepts that we believe are important for researchers to consider when using deep learning for their microscopy studies. These comments are based on our own experience gained while optimising various deep learning tools for bioimage analysis and discussions with colleagues from both the developer and user community. In particular, we focus on describing how results obtained using deep learning can be validated and discuss what should, in our views, be considered when choosing a suitable tool. We also suggest what aspects of a deep learning analysis would need to be reported in publications to describe the use of such tools to guarantee that the work can be reproduced. We hope this perspective will foster further discussion between developers, image analysis specialists, users and journal editors to define adequate guidelines and ensure that this transformative technology is used appropriately.
Affiliation(s)
- Romain F Laine, MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK; Micrographia Bio, Translation and Innovation Hub, London, UK
- Ignacio Arganda-Carreras, Computer Science and Artificial Intelligence Department, University of the Basque Country (UPV/EHU), San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; Donostia International Physics Center (DIPC), San Sebastian, Spain
- Ricardo Henriques, MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK; Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Guillaume Jacquemet, Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland; Faculty of Science and Engineering, Biosciences, Åbo Akademi University, Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
46. Hallou A, Yevick HG, Dumitrascu B, Uhlmann V. Deep learning for bioimage analysis in developmental biology. Development 2021; 148:dev199616. [PMID: 34490888 PMCID: PMC8451066 DOI: 10.1242/dev.199616]
Abstract
Deep learning has transformed the way large and complex image datasets can be processed, reshaping what is possible in bioimage analysis. As the complexity and size of bioimage data continues to grow, this new analysis paradigm is becoming increasingly ubiquitous. In this Review, we begin by introducing the concepts needed for beginners to understand deep learning. We then review how deep learning has impacted bioimage analysis and explore the open-source resources available to integrate it into a research project. Finally, we discuss the future of deep learning applied to cell and developmental biology. We analyze how state-of-the-art methodologies have the potential to transform our understanding of biological systems through new image-based analysis and modelling that integrate multimodal inputs in space and time.
Affiliation(s)
- Adrien Hallou, Cavendish Laboratory, Department of Physics, University of Cambridge, Cambridge, CB3 0HE, UK; Wellcome Trust/Cancer Research UK Gurdon Institute, University of Cambridge, Cambridge, CB2 1QN, UK; Wellcome Trust/Medical Research Council Stem Cell Institute, University of Cambridge, Cambridge, CB2 1QR, UK
- Hannah G. Yevick, Department of Biology, Massachusetts Institute of Technology, Cambridge, MA 02142, USA
- Bianca Dumitrascu, Computer Laboratory, University of Cambridge, Cambridge, CB3 0FD, UK
- Virginie Uhlmann, European Bioinformatics Institute, European Molecular Biology Laboratory, Cambridge, CB10 1SD, UK