1. Katoh TA, Fukai YT, Ishibashi T. Optical microscopic imaging, manipulation, and analysis methods for morphogenesis research. Microscopy (Oxf) 2024; 73:226-242. PMID: 38102756; PMCID: PMC11154147; DOI: 10.1093/jmicro/dfad059. Open access.
Abstract
Morphogenesis is a developmental process of organisms being shaped through complex and cooperative cellular movements. To understand the interplay between genetic programs and the resulting multicellular morphogenesis, it is essential to characterize the morphologies and dynamics at the single-cell level and to understand how physical forces serve as both signaling components and driving forces of tissue deformations. In recent years, advances in microscopy techniques have led to improvements in imaging speed, resolution and depth. Concurrently, the development of various software packages has supported large-scale analyses of challenging images at single-cell resolution. While these tools have enhanced our ability to examine the dynamics of cells and mechanical processes during morphogenesis, their effective integration requires specialized expertise. Against this background, this review provides a practical overview of those techniques. First, we introduce microscopic techniques for multicellular imaging and image analysis software tools with a focus on cell segmentation and tracking. Second, we provide an overview of cutting-edge techniques for mechanical manipulation of cells and tissues. Finally, we introduce recent findings on morphogenetic mechanisms and mechanosensations that have been achieved by effectively combining microscopy, image analysis tools and mechanical manipulation techniques.
Affiliation(s)
- Takanobu A Katoh
- Department of Cell Biology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Yohsuke T Fukai
- Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
- Tomoki Ishibashi
- Laboratory for Physical Biology, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
2. Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. PMID: 38378991; DOI: 10.1038/s41580-024-00702-6.
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intracellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what it is possible to capture in living systems. In this Review, we discuss these computational methods, focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies, as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
- Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
- Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
- Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland
3. Zhou FY, Yapp C, Shang Z, Daetwyler S, Marin Z, Islam MT, Nanes B, Jenkins E, Gihana GM, Chang BJ, Weems A, Dustin M, Morrison S, Fiolka R, Dean K, Jamieson A, Sorger PK, Danuser G. A general algorithm for consensus 3D cell segmentation from 2D segmented stacks. bioRxiv [Preprint] 2024:2024.05.03.592249. PMID: 38766074; PMCID: PMC11100681; DOI: 10.1101/2024.05.03.592249.
Abstract
Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive; moreover, it is ambiguous, necessitating cross-referencing with other orthoviews; and there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets comprising >70,000 cells and spanning single cells, cell aggregates and tissue.
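To make the 2D-to-3D setting concrete, the sketch below greedily stitches per-slice 2D label masks into 3D objects by consecutive-slice overlap. This is only a toy illustration of the problem the paper addresses; u-Segment3D itself uses a different, consensus-based algorithm, and the IoU threshold here is a hypothetical parameter.

```python
import numpy as np

def stack_2d_labels(slices, iou_thresh=0.5):
    """Toy 2D-to-3D aggregation: link per-slice 2D labels into 3D objects
    whenever masks in consecutive z-slices overlap with IoU >= iou_thresh.
    Illustrates the problem setting only, not u-Segment3D's algorithm."""
    volume = np.zeros((len(slices),) + slices[0].shape, dtype=int)
    next_id = 1
    prev_map = {}                          # 2D label in slice z-1 -> 3D object id
    for z, sl in enumerate(slices):
        cur_map = {}
        labs = np.unique(sl)
        for lab in labs[labs > 0]:
            mask = sl == lab
            assigned = 0
            if z > 0:
                prev = slices[z - 1]
                for plab in np.unique(prev[mask]):   # previous labels touching this mask
                    if plab == 0:
                        continue
                    pmask = prev == plab
                    iou = (np.logical_and(mask, pmask).sum()
                           / np.logical_or(mask, pmask).sum())
                    if iou >= iou_thresh:
                        assigned = prev_map[plab]    # continue the 3D object
                        break
            if assigned == 0:                        # no match: start a new object
                assigned = next_id
                next_id += 1
            cur_map[lab] = assigned
            volume[z][mask] = assigned
        prev_map = cur_map
    return volume
```

Greedy stitching like this fails on splits, merges and anisotropic data, which is precisely why a principled consensus method is needed.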
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Zhiguo Shang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Md Torikul Islam
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Benjamin Nanes
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY, UK
- Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Michael Dustin
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY, UK
- Sean Morrison
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kevin Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
- Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
4. Zhao Y, Chen KL, Shen XY, Li MK, Wan YJ, Yang C, Yu RJ, Long YT, Yan F, Ying YL. HFM-Tracker: a cell tracking algorithm based on hybrid feature matching. Analyst 2024; 149:2629-2636. PMID: 38563459; DOI: 10.1039/d4an00199k.
Abstract
Cell migration is a fundamental biological process, playing an essential role in development, homeostasis and disease. This paper introduces a cell tracking algorithm named HFM-Tracker (Hybrid Feature Matching Tracker) that automatically identifies cell migration behaviours in consecutive images. It combines Contour Attention (CA) and Adaptive Confusion Matrix (ACM) modules to accurately capture cell contours in each image and track the dynamic behaviours of migrating cells in the field of view. Cells are first located and identified via the CA module-based cell detection network, and then associated and tracked via a cell tracking algorithm employing a hybrid feature-matching strategy. HFM-Tracker performs strongly in both cell detection and tracking, achieving 75% MOTA (Multiple Object Tracking Accuracy) and 65% IDF1 (ID F1 score). It provides quantitative analysis of cell morphology and migration features, which could further help in understanding complicated and diverse cell migration processes.
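The two tracking scores quoted in the abstract reduce to simple error counts under the standard CLEAR-MOT and identity-metric definitions. A minimal sketch; the example counts below are hypothetical, chosen only to reproduce the 75%/65% figures:

```python
def mota(num_misses, num_false_positives, num_id_switches, num_gt_objects):
    """Multiple Object Tracking Accuracy (CLEAR MOT):
    1 - (FN + FP + IDSW) / GT, with counts summed over all frames."""
    return 1.0 - (num_misses + num_false_positives + num_id_switches) / num_gt_objects

def idf1(idtp, idfp, idfn):
    """ID F1 score: harmonic mean of identity precision and identity recall,
    computed from identity-matched true/false positives and false negatives."""
    return 2 * idtp / (2 * idtp + idfp + idfn)

# Hypothetical sequence: 1000 ground-truth detections, 180 misses,
# 60 false positives, 10 identity switches -> MOTA = 0.75.
print(mota(180, 60, 10, 1000))   # 0.75
# Hypothetical identity matching: 650 IDTP, 350 IDFP, 350 IDFN -> IDF1 = 0.65.
print(idf1(650, 350, 350))       # 0.65
```

Note that MOTA can be negative when errors exceed ground-truth objects, while IDF1 is bounded in [0, 1]; the two metrics penalize identity switches very differently, which is why both are usually reported.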
Affiliation(s)
- Yan Zhao
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China
- Ke-Le Chen
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China
- Xin-Yu Shen
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, P. R. China
- Ming-Kang Li
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China
- Yong-Jing Wan
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China
- Cheng Yang
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, P. R. China
- Ru-Jia Yu
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China
- Yi-Tao Long
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China
- Feng Yan
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, P. R. China
- Yi-Lun Ying
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China
- Chemistry and Biomedicine Innovation Center, Nanjing University, Nanjing 210023, P. R. China
5. Quinsgaard EMB, Korsnes MS, Korsnes R, Moestue SA. Single-cell tracking as a tool for studying EMT-phenotypes. Exp Cell Res 2024; 437:113993. PMID: 38485079; DOI: 10.1016/j.yexcr.2024.113993.
Abstract
This article demonstrates that label-free single-cell video tracking is a useful approach for in vitro studies of epithelial-mesenchymal transition (EMT). EMT is a highly heterogeneous process involved in wound healing, embryogenesis and cancer. The process promotes metastasis, and increased understanding can aid the development of novel therapeutic strategies. Because the role of EMT-associated biomarkers depends on biological context, it is challenging to compare and interpret data from different studies. Here, we performed single-cell video tracking on 72-h recordings to quantify several behaviours at the single-cell level during induced EMT in MDA-MB-468 cells. This revealed notable variations in migration speed, with different dose-response patterns and varying speed distributions. By registering cell morphologies during the recording, we determined preferred paths of morphological transitions and found a clear association between migration speed and cell morphology. We also observed elevated rates of cell death, diminished proliferation, and an increase in mitotic failures followed by re-fusion of sister cells. The method allows tracking of phenotypes in cell lineages, which can be particularly useful in epigenetic studies; sister cells showed significant similarities in their speeds and morphologies, illustrating the heritability of these traits.
Affiliation(s)
- Ellen Marie Botne Quinsgaard
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway
- Mónica Suárez Korsnes
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway; Korsnes Biocomputing (KoBio), Trondheim, Norway
- Siver Andreas Moestue
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway; Department of Pharmacy, Nord University, Bodø, Norway
6. Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024. PMID: 38532015; DOI: 10.1038/s41592-024-02233-6.
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis of microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual intervention to specify hyperparameters for different experimental settings. Here, we present a multimodality cell segmentation benchmark comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustment. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
Affiliation(s)
- Jun Ma
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Ronald Xie
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Shamini Ayyadhury
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Cheng Ge
- School of Medicine and Pharmacy, Ocean University of China, Qingdao, China
- Anubha Gupta
- Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India
- Ritu Gupta
- Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India
- Song Gu
- Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China
- Yao Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Gihun Lee
- Graduate School of AI, KAIST, Seoul, South Korea
- Joonkee Kim
- Graduate School of AI, KAIST, Seoul, South Korea
- Wei Lou
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Chinese University of Hong Kong (Shenzhen), Shenzhen, China
- Haofeng Li
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Eric Upschulte
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- José Guilherme de Almeida
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
- Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal
- Yixin Wang
- Department of Bioengineering, Stanford University, Palo Alto, CA, USA
- Lin Han
- Tandon School of Engineering, New York University, New York, NY, USA
- Xin Yang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Marco Labagnara
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Vojislav Gligorovski
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Maxime Scheder
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Sahand Jamal Rahi
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Carly Kempster
- School of Biological Sciences, University of Reading, Reading, UK
- Alice Pollitt
- School of Biological Sciences, University of Reading, Reading, UK
- Leon Espinosa
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Tâm Mignot
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
- Jan Moritz Middeke
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Jan-Niklas Eckardt
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Wangkai Li
- Department of Automation, University of Science and Technology of China, Hefei, China
- Zhaoyang Li
- Institute of Advanced Technology, University of Science and Technology of China, Hefei, China
- Xiaochen Cai
- Department of Computer Science and Technology, Nanjing University, Nanjing, China
- Bizhe Bai
- School of EECS, The University of Queensland, Brisbane, Queensland, Australia
- David Van Valen
- Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Erin Weisbart
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Trevor Cheung
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
- Oscar Brück
- Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland
- Department of Oncology, University of Helsinki, Helsinki, Finland
- Gary D Bader
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada
- CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada
- Bo Wang
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- UHN AI Hub, University Health Network, Toronto, Ontario, Canada
7. Jose A, Roy R, Moreno-Andrés D, Stegmaier J. Automatic detection of cell-cycle stages using recurrent neural networks. PLoS One 2024; 19:e0297356. PMID: 38466708; PMCID: PMC10927108; DOI: 10.1371/journal.pone.0297356. Open access.
Abstract
Mitosis is the process by which eukaryotic cells divide to produce two similar daughter cells with identical genetic material. Research into mitosis is therefore of critical importance both for the basic understanding of cell biology and for the clinical approach to the manifold pathologies resulting from its malfunctioning, including cancer. In this paper, we propose an approach to study mitotic progression automatically using deep learning. We extracted video sequences of cells undergoing division and trained a recurrent neural network (RNN) to extract image features and predict the different mitosis stages. Because it exploits temporal information, the RNN-based approach outperformed classifier-based feature-extraction methods that do not: evaluation of precision, recall and F-score indicates the superiority of the proposed model over the baseline. To study the loss in performance due to confusion between adjacent classes, we also plotted the confusion matrix. In addition, we visualized the feature space to understand why RNNs classify the mitosis stages better than other classifier models; the features formed strong clusters for the different classes, clearly confirming the advantage of the proposed RNN-based approach.
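The core idea, folding per-frame image features through a recurrent state before classifying the stage, can be sketched minimally. The toy Elman RNN below uses hypothetical dimensions and random weights; it illustrates the mechanism only, not the paper's trained architecture.

```python
import numpy as np

def rnn_classify(frames, Wxh, Whh, Why, bh, by):
    """Minimal Elman RNN: fold a sequence of per-frame feature vectors
    into a hidden state, then score mitosis stages from the final state."""
    h = np.zeros(Whh.shape[0])
    for x in frames:                        # one feature vector per video frame
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    return np.argmax(Why @ h + by)          # predicted stage index

# Toy usage: 8 frames, 16 features per frame, 32 hidden units, 5 stages
# (all sizes hypothetical; real features would come from a trained encoder).
rng = np.random.default_rng(0)
frames = rng.standard_normal((8, 16))
Wxh, Whh = rng.standard_normal((32, 16)), rng.standard_normal((32, 32))
Why, bh, by = rng.standard_normal((5, 32)), np.zeros(32), np.zeros(5)
stage = rnn_classify(frames, Wxh, Whh, Why, bh, by)
```

Because the hidden state accumulates evidence across frames, adjacent stages that look identical in a single image (e.g. early vs late anaphase) can still be separated by their temporal context.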
Affiliation(s)
- Abin Jose
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Rijo Roy
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Daniel Moreno-Andrés
- Institute of Biochemistry and Molecular Cell Biology, Medical School, RWTH Aachen University, Aachen, Germany
- Johannes Stegmaier
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
8. Liu P, Li J, Chang J, Hu P, Sun Y, Jiang Y, Zhang F, Shao H. Software Tools for 2D Cell Segmentation. Cells 2024; 13:352. PMID: 38391965; PMCID: PMC10886800; DOI: 10.3390/cells13040352. Open access.
Abstract
Cell segmentation is an important task in image processing, widely used in the life sciences and medical fields. Traditional methods are based mainly on pixel intensity and spatial relationships, but they have limitations. In recent years, machine learning and deep learning methods have been widely adopted, providing more accurate and efficient solutions for cell segmentation. Developing efficient and accurate segmentation software has been a major focus of the field for years; however, each tool has unique characteristics and adaptations, and no universal cell-segmentation software achieves perfect results. In this review, we used three publicly available datasets containing multiple 2D cell-imaging modalities and evaluated eight segmentation tools with common segmentation metrics to compare their generality and thereby identify the best-performing tool.
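Typical "common segmentation metrics" in such comparisons include intersection over union (IoU, the Jaccard index) and the Dice coefficient. The review does not specify its exact implementation, so the following is an illustrative sketch for binary masks:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over union (Jaccard index) of two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def dice(mask_a, mask_b):
    """Dice coefficient; related to IoU by dice = 2*IoU / (1 + IoU)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2 * inter / total if total else 1.0

# Two overlapping 2x3 rectangles: intersection 3 px, union 9 px.
a = np.zeros((4, 4), bool); a[:2, :3] = True
b = np.zeros((4, 4), bool); b[1:3, :3] = True
print(iou(a, b), dice(a, b))   # 0.333..., 0.5
```

For instance segmentation, these pixel-level scores are usually combined with an instance-matching step (e.g. counting predicted cells that match a ground-truth cell above an IoU threshold), since pixel overlap alone cannot penalize merged or split cells.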
Affiliation(s)
- Ping Liu
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Jun Li
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Jiaxing Chang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Pinli Hu
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yue Sun
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yanan Jiang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Fan Zhang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Haojing Shao
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
9. Eschweiler D, Yilmaz R, Baumann M, Laube I, Roy R, Jose A, Brückner D, Stegmaier J. Denoising diffusion probabilistic models for generation of realistic fully-annotated microscopy image datasets. PLoS Comput Biol 2024; 20:e1011890. PMID: 38377165; PMCID: PMC10906858; DOI: 10.1371/journal.pcbi.1011890. Open access.
Abstract
Recent advances in computer vision have led to significant progress in the generation of realistic image data, with denoising diffusion probabilistic models proving to be a particularly effective method. In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image datasets through an unsupervised and intuitive approach, using rough sketches of desired structures as the starting point. The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches and enables the segmentation of diverse datasets without the need for human annotations. We demonstrate that segmentation models trained with a small set of synthetic image data reach accuracy levels comparable to those of generalist models trained with a large and diverse collection of manually annotated image data, thereby offering a streamlined and specialized application of segmentation models.
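The forward (noising) half of a denoising diffusion probabilistic model has a simple closed form, which is what lets such pipelines start from a rough structure sketch and corrupt it to any noise level in one step. A minimal sketch under the standard linear beta schedule (array sizes are illustrative, not the paper's settings):

```python
import numpy as np

def diffuse(x0, t, alpha_bar, rng):
    """Closed-form DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# Linear beta schedule; alpha_bar is the cumulative product of (1 - beta_t),
# so it decays from ~1 (almost no noise) toward 0 (pure noise).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
sketch = np.zeros((32, 32))              # stand-in for a rough structure sketch
noisy = diffuse(sketch, 500, alpha_bar, rng)
```

The reverse (denoising) model is then trained to undo this corruption step by step; conditioning that reverse process on the sketch is what yields an image whose annotation is known by construction.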
Affiliation(s)
- Dennis Eschweiler, Rüveyda Yilmaz, Matisse Baumann, Ina Laube, Rijo Roy, Abin Jose, Daniel Brückner, Johannes Stegmaier: RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
10
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are well suited to accurately predicting intermediate images between image pairs, thereby improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
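To see why content awareness matters, compare with the standard baseline the abstract mentions: plain linear interpolation blends the two bracketing frames and cannot move structures, so a translating particle becomes two half-intensity ghosts. The toy below (illustrative only, not CAFI) quantifies that failure; a motion-aware interpolator would instead place one full-intensity particle at the intermediate position.

```python
import numpy as np

def frame(pos, size=32):
    """Image with a single bright particle at integer position `pos`."""
    img = np.zeros((size, size))
    img[pos] = 1.0
    return img

# Particle moves 6 px to the right between the two acquired frames.
f0, f2 = frame((10, 10)), frame((10, 16))
f1_true = frame((10, 13))          # ground-truth intermediate frame
f1_linear = 0.5 * (f0 + f2)        # standard interpolation: blend, no motion

# Blending leaves two half-intensity ghosts instead of one moved particle.
err = np.abs(f1_linear - f1_true).sum()
```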
Affiliation(s)
- Martin Priessner: Department of Chemistry, Imperial College London, London, UK; Centre of Excellence in Neurotechnology, Imperial College London, London, UK
- David C A Gaboriau: Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
- Arlo Sheridan: Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Tchern Lenn: CRUK City of London Centre, UCL Cancer Institute, London, UK
- Carlos Garzon-Coral: Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA; Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
- Alexander R Dunn: Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Jonathan R Chubb: Laboratory for Molecular Cell Biology, University College London, London, UK
- Aidan M Tousley: Department of Chemical Engineering, Stanford University, Stanford, CA, USA
- Robbie G Majzner: Department of Chemical Engineering, Stanford University, Stanford, CA, USA
- Uri Manor: Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA; Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
- Ramon Vilar: Department of Chemistry, Imperial College London, London, UK
- Romain F Laine: Micrographia Bio, Translation and Innovation Hub, London, UK
11
Maier-Hein L, Reinke A, Godau P, Tizabi MD, Buettner F, Christodoulou E, Glocker B, Isensee F, Kleesiek J, Kozubek M, Reyes M, Riegler MA, Wiesenfarth M, Kavur AE, Sudre CH, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Rädsch T, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Blaschko MB, Cardoso MJ, Cheplygina V, Cimini BA, Collins GS, Farahani K, Ferrer L, Galdran A, van Ginneken B, Haase R, Hashimoto DA, Hoffman MM, Huisman M, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Karthikesalingam A, Kofler F, Kopp-Schneider A, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Mattson P, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rajpoot N, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, van Smeden M, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Jäger PF. Metrics reloaded: recommendations for image analysis validation. Nat Methods 2024; 21:195-212. [PMID: 38347141 DOI: 10.1038/s41592-023-02151-z]
Abstract
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, chosen performance metrics often do not reflect the domain interest, and thus fail to adequately measure scientific progress and hinder translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects that are relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
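The fingerprint-driven selection can be pictured as a lookup from problem properties to candidate metrics. The rules below are a deliberately simplified invention for illustration; the real Metrics Reloaded decision logic is far more detailed and lives in the online tool.

```python
def recommend_metrics(fingerprint):
    """Toy problem-fingerprint lookup: map task properties to metrics."""
    task = fingerprint["task"]
    if task == "image-level classification":
        # Plain accuracy is misleading under class imbalance.
        if fingerprint.get("class_imbalance"):
            return ["Balanced Accuracy", "MCC"]
        return ["Accuracy"]
    if task == "semantic segmentation":
        metrics = ["Dice Similarity Coefficient"]
        if fingerprint.get("boundary_critical"):
            metrics.append("Normalized Surface Distance")
        return metrics
    if task == "instance segmentation":
        return ["Panoptic Quality"]
    if task == "object detection":
        return ["Average Precision"]
    raise ValueError(f"unknown task: {task}")
```

The point of the real framework is exactly this kind of explicit, property-driven routing, extended with pitfall warnings at each branch.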
Affiliation(s)
- Lena Maier-Hein: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; Medical Faculty, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Annika Reinke: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Patrick Godau: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Minu D Tizabi: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Florian Buettner: German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany; German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany; Department of Medicine, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Informatics, Goethe University Frankfurt, Frankfurt am Main, Germany; Frankfurt Cancer Institute, Frankfurt am Main, Germany
- Evangelia Christodoulou: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Ben Glocker: Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Fabian Isensee: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Jens Kleesiek: Institute for AI in Medicine, University Medicine Essen, Essen, Germany
- Michal Kozubek: Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Mauricio Reyes: ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler: Simula Metropolitan Center for Digital Engineering, Oslo, Norway; Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Manuel Wiesenfarth: German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- A Emre Kavur: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Carole H Sudre: MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Michael Baumgartner: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Matthias Eisenmann: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Doreen Heckmann-Nötzel: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Tim Rädsch: German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
- Laura Acion: Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli: School of Biomedical Engineering and Imaging Science, King's College London, London, UK; Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel: Centre for Intelligent Machines and MILA (Québec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas: Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, IU Health Information and Translational Sciences Building, Indianapolis, IN, USA; Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis: Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel; European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Matthew B Blaschko: Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Leuven, Belgium
- M Jorge Cardoso: School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina: Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Beth A Cimini: Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Gary S Collins: Centre for Statistics in Medicine, University of Oxford, Nuffield Orthopaedic Centre, Oxford, UK
- Keyvan Farahani: Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer: Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran: BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain; Australian Institute for Machine Learning AIML, University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken: Fraunhofer MEVIS, Bremen, Germany; Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Robert Haase: Technische Universität (TU) Dresden, DFG Cluster of Excellence 'Physics of Life', Dresden, Germany; Center for Systems Biology, Dresden, Germany; Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig University, Leipzig, Germany
- Daniel A Hashimoto: Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA; General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman: Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada; Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Merel Huisman: Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Pierre Jannin: Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France; INSERM, Paris, France
- Charles E Kahn: Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller: Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany; Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
- Bernhard Kainz: Department of Computing, Faculty of Engineering, Imperial College London, London, UK; Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Annette Kopp-Schneider: German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Anna Kreshuk: Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani: Department of Surgery, University Health Network, Philadelphia, PA, USA
- Klaus Maier-Hein: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel: Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Peter Mattson: Google, 1600 Amphitheatre Pkwy, Mountain View, CA, USA
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze: Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons: Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller: Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland; Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk: MILA (Québec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel: Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen: German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot: Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Julio Saez-Rodriguez: Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany; Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez: Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Maarten van Smeden: Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Ronald M Summers: National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha: Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin: Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster: Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium; Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux: Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Paul F Jäger: German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany
12
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhao D, Zhou D, Tong G, Liu D, Ji Z. A systematic evaluation of computation methods for cell segmentation. bioRxiv 2024:2024.01.28.577670. [PMID: 38352578 PMCID: PMC10862744 DOI: 10.1101/2024.01.28.577670]
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including training data and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models pre-trained on various tissue and cell types, which substantially reduces the time and effort for training cell segmentation models.
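Evaluations like this one typically score predicted masks against reference masks with overlap metrics. The snippet below shows the two most common ones, pixel-level IoU and Dice, on a small synthetic pair; it is a generic illustration, not the paper's exact evaluation protocol.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def dice(a, b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16 px square
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True  # 16 px, 9 px overlap
```

Instance-level benchmarks additionally match predicted and reference cells (e.g. at IoU above 0.5) before averaging such scores.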
Affiliation(s)
- Yuxing Wang: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA; Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
- Junhan Zhao: Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Hongye Xu: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Cheng Han: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Zhiqiang Tao: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Dongfang Zhao: Department of Computer Science & eScience Institute, University of Washington, Seattle, WA, USA
- Dawei Zhou: Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Gang Tong: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Dongfang Liu: Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Zhicheng Ji: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
13
Qian K, Friedman B, Takatoh J, Wang F, Kleinfeld D, Freund Y. CellBoost: A pipeline for machine assisted annotation in Neuroanatomy. bioRxiv 2024:2023.09.13.557658. [PMID: 38293051 PMCID: PMC10827062 DOI: 10.1101/2023.09.13.557658]
Abstract
One of the important yet labor-intensive tasks in neuroanatomy is the identification of select populations of cells. Current high-throughput techniques enable marking cells with histochemical fluorescent molecules as well as through the genetic expression of fluorescent proteins. Modern scanning microscopes allow high-resolution multi-channel imaging of the mechanically or optically sectioned brain with thousands of marked cells per square millimeter. Manual identification of all marked cells is prohibitively time consuming. At the same time, simple segmentation algorithms suffer from high error rates and sensitivity to variation in fluorescent intensity and spatial distribution. We present a methodology that combines human judgment and machine learning to significantly reduce the labor of the anatomist while improving the consistency of the annotation. As a demonstration, we analyzed murine brains with marked premotor neurons in the brainstem and compared the error rate of our method to the disagreement rate among human anatomists. This comparison shows that our method can reduce annotation time by as much as ten-fold without significantly increasing the rate of errors, achieving an accuracy similar to the level of agreement between different anatomists.
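One way to picture the human-machine division of labor described above is confidence triage: the model auto-resolves candidates it is sure about and forwards only the ambiguous ones to the anatomist. The sketch below, with invented thresholds, illustrates that workflow only; it is not the actual CellBoost boosting pipeline.

```python
def triage(scores, lo=0.2, hi=0.8):
    """Split candidate cells by classifier confidence.

    Confident positives and negatives are resolved automatically;
    only the uncertain middle band goes to a human annotator.
    """
    auto_pos = [i for i, s in enumerate(scores) if s >= hi]
    auto_neg = [i for i, s in enumerate(scores) if s <= lo]
    to_review = [i for i, s in enumerate(scores) if lo < s < hi]
    return auto_pos, auto_neg, to_review

pos, neg, review = triage([0.95, 0.05, 0.5, 0.85, 0.3])
```

The labor saving comes from the review list being much shorter than the full candidate list while the auto-resolved bands stay near human accuracy.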
Affiliation(s)
- Kui Qian: Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA
- Beth Friedman: Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA
- Jun Takatoh: Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Fan Wang: Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA; McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- David Kleinfeld: Department of Physics, University of California, San Diego, La Jolla, CA 92093, USA; Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA
- Yoav Freund: Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA; Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA 92093, USA
14
Wen C. Deep Learning-Based Cell Tracking in Deforming Organs and Moving Animals. Methods Mol Biol 2024; 2800:203-215. [PMID: 38709486 DOI: 10.1007/978-1-0716-3834-7_14]
Abstract
Cell tracking is an essential step in extracting cellular signals from moving cells, which is vital for understanding the mechanisms underlying various biological functions and processes, particularly in organs such as the brain and heart. However, cells in living organisms often exhibit extensive and complex movements caused by organ deformation and whole-body motion. These movements pose a challenge in obtaining high-quality time-lapse cell images and tracking the intricate cell movements in the captured images. Recent advances in deep learning techniques provide powerful tools for detecting cells in low-quality images with densely packed cell populations, as well as estimating cell positions for cells undergoing large nonrigid movements. This chapter introduces the challenges of cell tracking in deforming organs and moving animals, outlines the solutions to these challenges, and presents a detailed protocol for data preparation, as well as for performing cell segmentation and tracking using the latest version of 3DeeCellTracker. This protocol is expected to enable researchers to gain deeper insights into organ dynamics and biological processes.
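The tracking problem described above, following the same cells across frames despite large nonrigid movement, is commonly decomposed into a coarse global alignment followed by per-cell matching. The stand-in below uses a mean-shift alignment and nearest-neighbour assignment on point sets; 3DeeCellTracker itself uses deep networks and point-set registration for these stages, so this is only a minimal illustration of the two-stage idea.

```python
import numpy as np

def track(prev_pts, curr_pts):
    """Match cells between frames: coarse alignment, then nearest neighbour.

    Returns, for each cell in the previous frame, the index of its
    matched cell in the current frame.
    """
    shift = curr_pts.mean(axis=0) - prev_pts.mean(axis=0)  # global motion
    aligned = prev_pts + shift
    dists = np.linalg.norm(aligned[:, None, :] - curr_pts[None, :, :], axis=2)
    return dists.argmin(axis=1)

prev = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
# Same three cells after a (10, 2) whole-body shift, stored in a new order.
curr = np.array([[10.0, 7.0], [10.0, 2.0], [15.0, 2.0]])
matches = track(prev, curr)
```

Real pipelines replace both stages with learned components precisely because nearest-neighbour matching breaks down once deformation exceeds the inter-cell spacing.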
Affiliation(s)
- Chentao Wen: RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
15
Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822 DOI: 10.1016/j.media.2023.103000]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in Computer Vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in Computer Vision problems so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, if applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided cited papers with their corresponding implementations in https://github.com/mindflow-institue/Awesome-Transformer.
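The long-range-dependency claim comes down to the attention operation: every token (or image patch) is compared with every other token in a single step, rather than through a stack of local receptive fields. A minimal single-head scaled dot-product self-attention in NumPy, for illustration only:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token sequence.

    Every token attends to every other token, so pairwise interactions
    are global rather than limited to a local neighbourhood.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))  # row softmax
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 4))                 # 6 tokens (patches), dim 4
Wq, Wk, Wv = (rng.standard_normal((4, 4)) for _ in range(3))
out, attn = self_attention(x, Wq, Wk, Wv)
```

In a vision Transformer the tokens are flattened image patches plus positional embeddings, and this block is stacked with multiple heads.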
Affiliation(s)
- Reza Azad: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy: Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof: Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
16
Bondoc-Naumovitz KG, Laeverenz-Schlogelhofer H, Poon RN, Boggon AK, Bentley SA, Cortese D, Wan KY. Methods and Measures for Investigating Microscale Motility. Integr Comp Biol 2023; 63:1485-1508. [PMID: 37336589 PMCID: PMC10755196 DOI: 10.1093/icb/icad075]
Abstract
Motility is an essential factor for an organism's survival and diversification. With the advent of novel single-cell technologies, analytical frameworks, and theoretical methods, we can begin to probe the complex lives of microscopic motile organisms and answer the intertwining biological and physical questions of how these diverse lifeforms navigate their surroundings. Herein, we summarize the main mechanisms of microscale motility and give an overview of different experimental, analytical, and mathematical methods used to study them across scales, from the molecular and individual to the population level. We identify transferable techniques, pressing challenges, and future directions in the field. This review can serve as a starting point for researchers who are interested in exploring and quantifying the movements of organisms in the microscale world.
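A workhorse quantity in such motility analyses, from individual tracks to populations, is the mean squared displacement (MSD), whose scaling with lag time distinguishes diffusive from ballistic motion. A minimal time-averaged MSD for a single 2D track (a generic measure, not specific to this review's datasets):

```python
import numpy as np

def msd(traj):
    """Time-averaged mean squared displacement vs. lag for one 2D track."""
    n = len(traj)
    return np.array([
        np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
        for lag in range(1, n)
    ])

# Ballistic motion (constant velocity): MSD grows as the lag squared.
traj = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
curve = msd(traj)
```

For pure diffusion the same curve would instead grow linearly in lag, which is why the MSD exponent is used to classify swimming behaviour.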
Affiliation(s)
- Rebecca N Poon, Alexander K Boggon, Samuel A Bentley, Dario Cortese, Kirsty Y Wan: Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
17
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (e.g., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
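Of the key tasks listed, drift correction has a compact classical baseline: estimate the shift between a reference frame and a drifted frame from the peak of their cross-correlation, computed via the FFT. The sketch below handles integer-pixel shifts only; deep-learning and subpixel registration methods refine this considerably.

```python
import numpy as np

def drift(ref, img):
    """Integer-pixel shift that aligns `img` back onto `ref`.

    The cross-correlation of the two frames is computed via the FFT;
    its peak position gives the displacement (wrap-around handled).
    """
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(cc.argmax(), cc.shape)
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, cc.shape))

ref = np.zeros((16, 16)); ref[4, 6] = 1.0     # one bright landmark
img = np.roll(ref, (3, -2), axis=(0, 1))      # drifted frame
d = drift(ref, img)                           # shift that undoes the drift
```

Applying the estimated shift with `np.roll` restores alignment, which is exactly what drift-correction plugins do frame by frame over a time series.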
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland
18
Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217 PMCID: PMC10670924 DOI: 10.3390/ijms242216028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 08/18/2023] [Accepted: 09/06/2023] [Indexed: 11/26/2023] Open
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, cells in routine microscopy images, captured while they continually divide and differentiate, are notoriously difficult to detect because their appearance and number keep changing. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require large amounts of manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate such tiresome and labor-intensive costs, we propose a novel weakly supervised cell detection and tracking framework that trains a deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the induced pluripotent stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results to obtain a more robust model. Our method was evaluated on two fields of the iPS cell dataset using the cell detection accuracy (DET) metric from the Cell Tracking Challenge (CTC) initiative, achieving 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested on the public Fluo-N2DH-GOWT1 dataset from the CTC, which contains two sequences with reference annotations. We randomly removed parts of the annotations in each labeled sequence to simulate incomplete initial annotations. After training on the two sequences with labels comprising only 10% of the cell markers, the DET improved from 0.130 to 0.903 and from 0.116 to 0.877. When trained with labels comprising 60% of the cell markers, the performance exceeded that of the model trained with fully supervised learning. This outcome indicates that the model's performance improved as the quality of the training labels increased.
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
19
Aleksandrovych M, Strassberg M, Melamed J, Xu M. Polarization differential interference contrast microscopy with physics-inspired plug-and-play denoiser for single-shot high-performance quantitative phase imaging. Biomed Opt Express 2023; 14:5833-5850. [PMID: 38021115 PMCID: PMC10659786 DOI: 10.1364/boe.499316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 08/31/2023] [Accepted: 09/15/2023] [Indexed: 12/01/2023]
Abstract
We present single-shot high-performance quantitative phase imaging with a physics-inspired plug-and-play denoiser for polarization differential interference contrast (PDIC) microscopy. The quantitative phase is recovered by the alternating direction method of multipliers (ADMM), balancing total variation regularization and a pre-trained dense residual U-net (DRUNet) denoiser. The custom DRUNet uses the Tanh activation function to guarantee the symmetry requirement for phase retrieval. In addition, we introduce an adaptive strategy that accelerates convergence and explicitly incorporates measurement noise. After validating this deep denoiser-enhanced PDIC microscopy on simulated data and phantom experiments, we demonstrate high-performance phase imaging of histological tissue sections. The phase retrieval by denoiser-enhanced PDIC microscopy achieves significantly higher quality and accuracy than solutions based on Fourier transforms or on iterative total variation regularization alone.
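The plug-and-play ADMM scheme described in this abstract alternates a data-fidelity proximal step with a denoiser standing in for the prior's proximal operator. Below is a minimal sketch of that idea for the simplest case (identity forward model), with a box filter as a toy stand-in for a learned denoiser such as DRUNet; all function names and parameters here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pnp_admm_denoise(y, denoiser, rho=1.0, n_iter=30):
    """Plug-and-play ADMM for min_x 0.5||x - y||^2 + R(x), where R is
    defined implicitly by the plugged-in denoiser. For this identity
    forward model the x-update has a closed form."""
    x = y.copy()
    v = y.copy()
    u = np.zeros_like(y)
    for _ in range(n_iter):
        x = (y + rho * (v - u)) / (1.0 + rho)  # data-fidelity proximal step
        v = denoiser(x + u)                    # prior step = the denoiser
        u = u + x - v                          # scaled dual update
    return v

def box_denoiser(img, k=3):
    """Toy stand-in for a learned denoiser: a k-by-k moving average."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))   # smooth phantom
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = pnp_admm_denoise(noisy, box_denoiser)
```

Swapping `box_denoiser` for a trained network is what turns this sketch into the "plug-and-play" method the abstract refers to; the ADMM skeleton stays unchanged.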
Affiliation(s)
- Mariia Aleksandrovych
- Department of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
- Mark Strassberg
- Department of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
- Jonathan Melamed
- Department of Pathology, New York University Langone School of Medicine, New York, NY 10016, USA
- Min Xu
- Department of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
20
Lindwall G, Gerlee P. Bayesian inference on the Allee effect in cancer cell line populations using time-lapse microscopy images. J Theor Biol 2023; 574:111624. [PMID: 37769802 DOI: 10.1016/j.jtbi.2023.111624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 09/08/2023] [Accepted: 09/13/2023] [Indexed: 10/03/2023]
Abstract
The Allee effect describes the phenomenon that the per capita reproduction rate increases with population density at low densities. Allee effects have been observed at all scales, including in microscopic environments where individual cells are taken into account. This is of great interest to cancer research, as understanding critical tumour density thresholds can inform treatment plans for patients. In this paper, we introduce a simple model for cell division in which the cancer cell population is modelled as an interacting particle system. The cell division rate depends on the local cell density, introducing an Allee effect. We infer the key model parameters through Markov chain Monte Carlo, and apply our procedure to two image sequences from a cervical cancer cell line. The inference method is verified on in silico data, where it accurately identifies the key parameters, and results on the in vitro data strongly suggest an Allee effect.
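A stripped-down version of this kind of inference can be sketched in a few lines: assume a per-capita division rate that rises with density (an Allee effect), simulate division counts, and fit the parameters with random-walk Metropolis-Hastings. The rate function, parameter names, and data below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-capita division rate with an Allee effect: the rate
# rises with density n before saturating (parameters r, A are assumed).
def division_rate(n, r, A):
    return r * n / (A + n)

# Simulate division counts over one time step for populations of size n
# (Poisson approximation; purely illustrative synthetic data).
r_true, A_true = 0.8, 20.0
n_obs = np.array([10, 20, 40, 80, 160, 320, 640])
counts = rng.poisson(n_obs * division_rate(n_obs, r_true, A_true))

def log_post(theta):
    """Poisson log-likelihood (up to a constant) with a flat prior."""
    r, A = theta
    if r <= 0 or A <= 0:
        return -np.inf
    lam = n_obs * division_rate(n_obs, r, A)
    return float(np.sum(counts * np.log(lam) - lam))

# Random-walk Metropolis-Hastings over (r, A)
theta = np.array([0.5, 10.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.05, 2.0])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)[1000:]            # discard burn-in
r_hat = samples[:, 0].mean()
```

The paper works with spatially explicit particle-system likelihoods rather than this aggregate Poisson shortcut, but the Metropolis-Hastings skeleton is the same.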
21
Antonelli L, Polverino F, Albu A, Hada A, Asteriti IA, Degrassi F, Guarguaglini G, Maddalena L, Guarracino MR. ALFI: Cell cycle phenotype annotations of label-free time-lapse imaging data from cultured human cells. Sci Data 2023; 10:677. [PMID: 37794110 PMCID: PMC10551030 DOI: 10.1038/s41597-023-02540-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 09/05/2023] [Indexed: 10/06/2023] Open
Abstract
Detecting and tracking multiple moving objects in a video is a challenging task. For living cells, the task becomes even more arduous as cells change their morphology over time, can partially overlap, and mitosis leads to new cells. Differently from fluorescence microscopy, label-free techniques can be easily applied to almost all cell lines, reducing sample preparation complexity and phototoxicity. In this study, we present ALFI, a dataset of images and annotations for label-free microscopy, made publicly available to the scientific community, that notably extends the current panorama of expertly labeled data for detection and tracking of cultured living nontransformed and cancer human cells. It consists of 29 time-lapse image sequences from HeLa, U2OS, and hTERT RPE-1 cells under different experimental conditions, acquired by differential interference contrast microscopy, for a total of 237.9 hours. It contains various annotations (pixel-wise segmentation masks, object-wise bounding boxes, tracking information). The dataset is useful for testing and comparing methods for identifying interphase and mitotic events and reconstructing their lineage, and for discriminating different cellular phenotypes.
Affiliation(s)
- Laura Antonelli
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
- Federica Polverino
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Alexandra Albu
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Aroj Hada
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Italia A Asteriti
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Francesca Degrassi
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Giulia Guarguaglini
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
- Lucia Maddalena
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
- Mario R Guarracino
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Laboratory of Algorithms and Technologies for Networks Analysis, National Research University Higher School of Economics, Moscow, Russia
22
Bouchard C, Bernatchez R, Lavoie-Cardinal F. Addressing annotation and data scarcity when designing machine learning strategies for neurophotonics. Neurophotonics 2023; 10:044405. [PMID: 37636490 PMCID: PMC10447257 DOI: 10.1117/1.nph.10.4.044405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 08/29/2023]
Abstract
Machine learning has revolutionized the way data are processed, allowing information to be extracted in a fraction of the time it would take an expert. In the field of neurophotonics, machine learning approaches are used to automatically detect and classify features of interest in complex images. One of the key challenges in applying machine learning methods to neurophotonics is the scarcity of available data and the complexity associated with labeling them, which can limit the performance of data-driven algorithms. We present an overview of various strategies, such as weakly supervised learning, active learning, and domain adaptation, that can be used to address the problem of labeled data scarcity in neurophotonics. We provide a comprehensive overview of the strengths and limitations of each approach and discuss their potential applications to bioimaging datasets. In addition, we highlight how different strategies can be combined to increase model performance on those datasets. The approaches we describe can help improve the accessibility of machine learning-based analysis when only a limited number of annotated images are available for training, and can enable researchers to extract more meaningful insights from small datasets.
Affiliation(s)
- Catherine Bouchard
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
- Renaud Bernatchez
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
- Flavie Lavoie-Cardinal
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
- Université Laval, Département de psychiatrie et de neurosciences, Québec, Québec, Canada
23
Petkidis A, Andriasyan V, Greber UF. Machine learning for cross-scale microscopy of viruses. Cell Rep Methods 2023; 3:100557. [PMID: 37751685 PMCID: PMC10545915 DOI: 10.1016/j.crmeth.2023.100557] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 06/05/2023] [Accepted: 07/20/2023] [Indexed: 09/28/2023]
Abstract
Despite advances in virological sciences and antiviral research, viruses continue to emerge, circulate, and threaten public health. We still lack a comprehensive understanding of how cells and individuals remain susceptible to infectious agents. This deficiency is in part due to the complexity of viruses, including the cell states controlling virus-host interactions. Microscopy samples distinct cellular infection stages in a multi-parametric, time-resolved manner at molecular resolution and is increasingly enhanced by machine learning and deep learning. Here we discuss how state-of-the-art artificial intelligence (AI) augments light and electron microscopy and advances virological research of cells. We describe current procedures for image denoising, object segmentation, tracking, classification, and super-resolution and showcase examples of how AI has improved the acquisition and analyses of microscopy data. The power of AI-enhanced microscopy will continue to help unravel virus infection mechanisms, develop antiviral agents, and improve viral vectors.
Affiliation(s)
- Anthony Petkidis
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Vardan Andriasyan
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
- Urs F Greber
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
24
Gao G, Walter NG. Critical Assessment of Condensate Boundaries in Dual-Color Single Particle Tracking. J Phys Chem B 2023; 127:7694-7707. [PMID: 37669232 DOI: 10.1021/acs.jpcb.3c03776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/07/2023]
Abstract
Biomolecular condensates are membraneless cellular compartments generated by phase separation that regulate a broad variety of cellular functions by enriching some biomolecules while excluding others. Live-cell single particle tracking of individual fluorophore-labeled condensate components has provided insights into a condensate's mesoscopic organization and biological functions, such as revealing the recruitment, translation, and decay of RNAs within ribonucleoprotein (RNP) granules. Specifically, during dual-color tracking, one imaging channel provides a time series of individual biomolecule locations, while the other channel monitors the location of the condensate relative to these molecules. Therefore, an accurate assessment of a condensate's boundary is critical for combined live-cell single particle-condensate tracking. Despite its importance, quantitative benchmarking and objective comparison of the various available boundary detection methods have been missing due to the lack of an absolute ground truth for condensate images. Here, we use synthetic data of defined ground truth to generate noise-overlaid images of condensates with realistic phase separation parameters to benchmark the most commonly used methods for condensate boundary detection, including an emerging machine-learning method. We find that it is critical to carefully choose an optimal boundary detection method for a given dataset to obtain accurate measurements of single particle-condensate interactions. The criteria proposed in this study to guide the selection of an optimal boundary detection method can be broadly applied to imaging-based studies of condensates.
Affiliation(s)
- Guoming Gao
- Biophysics Graduate Program, University of Michigan, Ann Arbor, Michigan 48109, United States
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
- Nils G Walter
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
- Department of Chemistry, University of Michigan, Ann Arbor, Michigan 48109, United States
25
Cohen AR, Vitanyi PMB. The Cluster Structure Function. IEEE Trans Pattern Anal Mach Intell 2023; 45:11309-11320. [PMID: 37018105 PMCID: PMC10525042 DOI: 10.1109/tpami.2023.3264690] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
For each number of parts there is a partition of the data set such that every part is as much as possible a good model (an "algorithmic sufficient statistic") for the data in that part. Since this can be done for every number between one and the number of data, the result is a function, the cluster structure function. It maps the number of parts of a partition to values related to the deficiencies of the parts as good models. Such a function starts with a value at least zero when the data set is not partitioned and descends to zero for the partition of the data set into singleton parts. The optimal clustering is the one selected by analyzing the cluster structure function. The theory behind the method is expressed in algorithmic information theory (Kolmogorov complexity). In practice, the Kolmogorov complexities involved are approximated by a concrete compressor. We give examples using real data sets: the MNIST handwritten digits and the segmentation of real cells as used in stem cell research.
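Approximating Kolmogorov complexity by a concrete compressor, as the abstract describes, is the same idea that underlies the normalized compression distance (NCD). A minimal sketch with zlib follows; the sequences are made-up illustrations, not the authors' data or pipeline:

```python
import zlib

def approx_K(x: bytes) -> int:
    """Approximate the Kolmogorov complexity of x by its compressed length."""
    return len(zlib.compress(x, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for very similar inputs,
    near 1 for unrelated ones."""
    kx, ky, kxy = approx_K(x), approx_K(y), approx_K(x + y)
    return (kxy - min(kx, ky)) / max(kx, ky)

seq_a = b"ACGT" * 200
seq_b = b"ACGT" * 199 + b"TTTT"   # near-duplicate of seq_a
seq_c = bytes(range(256)) * 3     # unrelated, incompressible-looking content

# Similar strings compress well together, so their NCD is smaller.
assert ncd(seq_a, seq_b) < ncd(seq_a, seq_c)
```

A compressor-based distance matrix like this is one way to feed the kind of partition-quality analysis the cluster structure function formalizes.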
26
Wagner R, Lopez CF, Stiller C. Self-supervised pseudo-colorizing of masked cells. PLoS One 2023; 18:e0290561. [PMID: 37616272 PMCID: PMC10449109 DOI: 10.1371/journal.pone.0290561] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2023] [Accepted: 08/09/2023] [Indexed: 08/26/2023] Open
Abstract
Self-supervised learning, which is strikingly referred to as the dark matter of intelligence, is gaining more attention in biomedical applications of deep learning. In this work, we introduce a novel self-supervision objective for the analysis of cells in biomedical microscopy images. We propose training deep learning models to pseudo-colorize masked cells. We use a physics-informed pseudo-spectral colormap that is well suited for colorizing cell topology. Our experiments reveal that approximating semantic segmentation by pseudo-colorization is beneficial for subsequent fine-tuning on cell detection. Inspired by the recent success of masked image modeling, we additionally mask out cell parts and train to reconstruct these parts to further enrich the learned representations. We compare our pre-training method with self-supervised frameworks including contrastive learning (SimCLR), masked autoencoders (MAEs), and edge-based self-supervision. We build upon our previous work and train hybrid models for cell detection, which contain both convolutional and vision transformer modules. Our pre-training method can outperform SimCLR, MAE-like masked image modeling, and edge-based self-supervision when pre-training on a diverse set of six fluorescence microscopy datasets. Code is available at: https://github.com/roydenwa/pseudo-colorize-masked-cells.
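The training objective described above can be sketched simply: apply a fixed colormap to the image to obtain a dense regression target, mask random patches of the input, and train a network to predict the colorized, unmasked image. The colormap and masking scheme below are simplified placeholders for the paper's physics-informed pseudo-spectral map, and all names are illustrative:

```python
import numpy as np

def toy_spectral_colormap(gray):
    """Toy stand-in for a pseudo-spectral colormap: sweep normalized
    intensity from blue through green to red (illustrative only)."""
    t = np.clip(gray, 0.0, 1.0)
    r = np.clip(1.5 - np.abs(2.0 * t - 1.5), 0.0, 1.0)
    g = np.clip(1.5 - np.abs(2.0 * t - 1.0), 0.0, 1.0)
    b = np.clip(1.5 - np.abs(2.0 * t - 0.5), 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def make_training_pair(img, rng, mask_frac=0.3, patch=4):
    """Mask random patches of the input; the target is the pseudo-colorized
    full image, so a model must both colorize and inpaint masked cells."""
    target = toy_spectral_colormap(img)
    masked = img.copy()
    h, w = img.shape
    n_patches = int(mask_frac * (h * w) / patch ** 2)
    for _ in range(n_patches):
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        masked[y:y + patch, x:x + patch] = 0.0
    return masked, target

rng = np.random.default_rng(0)
img = rng.random((32, 32))        # stand-in for a microscopy frame
masked, target = make_training_pair(img, rng)
```

In the paper this pair would feed a hybrid CNN/transformer model; here the point is only the shape of the self-supervision signal.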
Affiliation(s)
- Royden Wagner
- Karlsruhe Institute of Technology (KIT), Karlsruhe, BW, Germany
27
Körber N. MIA is an open-source standalone deep learning application for microscopic image analysis. Cell Rep Methods 2023; 3:100517. [PMID: 37533647 PMCID: PMC10391334 DOI: 10.1016/j.crmeth.2023.100517] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 02/10/2023] [Accepted: 06/02/2023] [Indexed: 08/04/2023]
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep-learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats, which are compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets.
Affiliation(s)
- Nils Körber
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
28
Soelistyo CJ, Ulicna K, Lowe AR. Machine learning enhanced cell tracking. Front Bioinform 2023; 3:1228989. [PMID: 37521315 PMCID: PMC10380934 DOI: 10.3389/fbinf.2023.1228989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 07/03/2023] [Indexed: 08/01/2023] Open
Abstract
Quantifying cell biology in space and time requires computational methods to detect cells, measure their properties, and assemble these into meaningful trajectories. In this regard, machine learning (ML) is having a transformational effect on bioimage analysis, now enabling robust cell detection in multidimensional image data. However, the task of cell tracking, or constructing accurate multi-generational lineages from imaging data, remains an open challenge. Most cell tracking algorithms are largely based on prior knowledge of cell behaviors and, as such, are difficult to generalize to new and unseen cell types or datasets. Here, we propose that ML provides the framework to learn aspects of cell behavior using cell tracking as the task to be learned. We suggest that advances in representation learning, cell tracking datasets, metrics, and methods for constructing and evaluating tracking solutions can all form part of an end-to-end ML-enhanced pipeline. These developments will lead the way to new computational methods that can be used to understand complex, time-evolving biological systems.
Affiliation(s)
- Christopher J. Soelistyo
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
- Kristina Ulicna
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
- Alan R. Lowe
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
- Alan Turing Institute, London, United Kingdom
29
Cimini BA, Eliceiri KW. The Twenty Questions of bioimage object analysis. Nat Methods 2023; 20:976-978. [PMID: 37434006 PMCID: PMC10561713 DOI: 10.1038/s41592-023-01919-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/13/2023]
Abstract
The language used by microscopists who wish to find and measure objects in an image often differs in critical ways from that used by computer scientists who create tools to help them do this, making communication hard across disciplines. This work proposes a set of standardized questions that can guide analyses and shows how it can improve the future of bioimage analysis as a whole by making image analysis workflows and tools more FAIR (findable, accessible, interoperable and reusable).
Affiliation(s)
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Kevin W Eliceiri
- Center for Quantitative Cell Imaging, University of Wisconsin-Madison and Morgridge Institute for Research, Madison, WI, USA
30
Zhou T, Wu W, Zhang J, Yu S, Fang L. Ultrafast dynamic machine vision with spatiotemporal photonic computing. Sci Adv 2023; 9:eadg4391. [PMID: 37285419 DOI: 10.1126/sciadv.adg4391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 05/02/2023] [Indexed: 06/09/2023]
Abstract
Ultrafast dynamic machine vision in the optical domain can provide unprecedented perspectives for high-performance computing. However, owing to the limited degrees of freedom, existing photonic computing approaches rely on the memory's slow read/write operations to implement dynamic processing. Here, we propose a spatiotemporal photonic computing architecture to match the highly parallel spatial computing with high-speed temporal computing and achieve a three-dimensional spatiotemporal plane. A unified training framework is devised to optimize the physical system and the network model. The photonic processing speed of the benchmark video dataset is increased by 40-fold on a space-multiplexed system with 35-fold fewer parameters. A wavelength-multiplexed system realizes all-optical nonlinear computing of dynamic light field with a frame time of 3.57 nanoseconds. The proposed architecture paves the way for ultrafast advanced machine vision free from the limits of memory wall and will find applications in unmanned systems, autonomous driving, ultrafast science, etc.
Affiliation(s)
- Tiankuang Zhou
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Department of Automation, Tsinghua University, Beijing 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Wei Wu
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Jinzhi Zhang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Shaoliang Yu
- Research Center for Intelligent Optoelectronic Computing, Zhejiang Laboratory, Hangzhou 311100, China
- Lu Fang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China
31
Jang J, Lee K, Kim TK. Unsupervised Contour Tracking of Live Cells by Mechanical and Cycle Consistency Losses. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2023; 2023:227-236. [PMID: 38250674 PMCID: PMC10798679 DOI: 10.1109/cvpr52729.2023.00030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/23/2024]
Abstract
Analyzing the dynamic changes of cellular morphology is important for understanding the various functions and characteristics of live cells, including stem cells and metastatic cancer cells. To this end, we need to track all points on the highly deformable cellular contour in every frame of live cell video. Local shapes and textures on the contour are not evident, and their motions are complex, often involving expansion and contraction of local contour features. Prior art in optical flow or deep point-set tracking is unsuited due to the fluidity of cells, and previous deep contour tracking does not consider point correspondence. We propose the first deep learning-based tracking of cellular (or, more generally, viscoelastic material) contours with point correspondence, fusing dense representations between two contours with cross attention. Since it is impractical to manually label dense tracking points on the contour, we propose unsupervised learning comprised of mechanical and cyclical consistency losses to train our contour tracker. The mechanical loss, which forces the points to move perpendicular to the contour, proves particularly effective. For quantitative evaluation, we labeled sparse tracking points along the contours of live cells from two live-cell datasets taken with phase-contrast and confocal fluorescence microscopes. Our contour tracker quantitatively outperforms the compared methods and produces qualitatively more favorable results. Our code and data are publicly available at https://github.com/JunbongJang/contour-tracking/.
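The cycle-consistency idea can be illustrated in a few lines: map contour point indices forward one frame and back, and penalize any point that fails to return to itself, with wraparound since the contour is closed. This is a toy sketch under assumed index-map inputs, not the authors' implementation, which learns dense correspondences with cross attention:

```python
import numpy as np

def cycle_consistency_loss(fw, bw):
    """fw[i]: index in frame t+1 matched to contour point i of frame t;
    bw[j]: index in frame t matched back to contour point j of frame t+1.
    A consistent tracker satisfies bw[fw[i]] == i for every i; penalize
    the signed circular (wraparound) deviation along the closed contour."""
    n = len(fw)
    i = np.arange(n)
    diff = (bw[fw[i]] - i + n // 2) % n - n // 2  # signed circular distance
    return float(np.mean(diff.astype(float) ** 2))

n = 100
identity = np.arange(n)
drift3 = (identity + 3) % n   # forward map that drifts by 3 contour points

perfect = cycle_consistency_loss(identity, identity)  # returns 0.0
drifted = cycle_consistency_loss(drift3, identity)    # returns 9.0
```

In training, a differentiable analogue of this penalty (on soft correspondences rather than hard indices) would be combined with the mechanical loss that keeps point motion perpendicular to the contour.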
Affiliation(s)
- Kwonmoo Lee
- Boston Children’s Hospital, Harvard Medical School
32
Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023:10.1038/s41592-023-01879-y. [PMID: 37202537 PMCID: PMC10333123 DOI: 10.1038/s41592-023-01879-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 04/13/2023] [Indexed: 05/20/2023]
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
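For readers unfamiliar with the challenge's evaluation, its SEG measure can be sketched roughly as follows: each reference object is matched to the predicted object covering the majority of its pixels, and the score is the mean Jaccard (IoU) index over reference objects. This is a simplified sketch from the published definition, not the official evaluation code.

```python
import numpy as np

def seg_score(gt, pred):
    """Simplified sketch of the SEG measure: a reference object matches the
    predicted object covering more than half of its pixels; the score is the
    mean Jaccard index over reference objects, with 0 when no predicted
    object qualifies (label 0 = background)."""
    scores = []
    for g in np.unique(gt):
        if g == 0:
            continue
        gt_mask = gt == g
        labels, counts = np.unique(pred[gt_mask], return_counts=True)
        p = labels[np.argmax(counts)]            # dominant predicted label
        if p == 0 or 2 * counts.max() <= gt_mask.sum():
            scores.append(0.0)                   # no majority match
            continue
        pred_mask = pred == p
        inter = np.logical_and(gt_mask, pred_mask).sum()
        union = np.logical_or(gt_mask, pred_mask).sum()
        scores.append(inter / union)
    return float(np.mean(scores)) if scores else 0.0
```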
Affiliation(s)
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Estibaliz Gómez-de-Mariscal
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Fidel A Guerrero Peña
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Tsang Ing Ren
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Elliot M Meyerowitz
- Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Tianqi Guo
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Yin Wang
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Jan P Allebach
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Rina Bao
- Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Noor M Al-Shakarji
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Gani Rahmon
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Imad Eddine Toubal
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Kannappan Palaniappan
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Layton Aho
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Andrew R Cohen
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Assaf Arbelle
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tal Ben-Haim
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tammy Riklin Raviv
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Paul F Jäger
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Griffith University, Nathan, Queensland, Australia
- Cristina Ederra
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Ainhoa Urbiola
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Alexandre Cunha
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solórzano
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
33
Czolbe S, Pegios P, Krause O, Feragen A. Semantic similarity metrics for image registration. Med Image Anal 2023; 87:102830. [PMID: 37172390 DOI: 10.1016/j.media.2023.102830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 01/19/2023] [Accepted: 04/20/2023] [Indexed: 05/15/2023]
Abstract
Image registration aims to find geometric transformations that align images. Most algorithmic and deep learning-based methods solve the registration problem by minimizing a loss function consisting of a similarity metric, which compares the aligned images, and a regularization term, which ensures smoothness of the transformation. Existing similarity metrics such as Euclidean distance or normalized cross-correlation focus on aligning pixel intensity values or correlations and therefore struggle with low intensity contrast, noise, and ambiguous matching. We propose a semantic similarity metric for image registration that instead focuses on aligning image areas based on semantic correspondence. Our approach learns dataset-specific features that drive the optimization of a learning-based registration model. We train both an unsupervised approach extracting features with an auto-encoder and a semi-supervised approach using supplemental segmentation data. We validate the semantic similarity metric using both deep-learning-based and algorithmic image registration methods. Compared to existing methods across four different image modalities and applications, the method achieves consistently high registration accuracy and smooth transformation fields.
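The core idea, comparing images in a learned feature space instead of in pixel space, can be sketched as below. The `toy_features` gradient "encoder" is a stand-in assumption for the trained auto-encoder; it illustrates why a feature-space metric can ignore intensity offsets that defeat a plain pixel-wise MSE.

```python
import numpy as np

def semantic_similarity(moving, fixed, feature_fn):
    """Feature-space similarity sketch: mean squared error between feature
    maps rather than raw pixel intensities. `feature_fn` stands in for a
    trained encoder (any callable image -> feature map)."""
    return float(np.mean((feature_fn(moving) - feature_fn(fixed)) ** 2))

def toy_features(img):
    """Toy 'encoder': local gradient magnitude, invariant to additive
    intensity offsets (a trained network would learn richer features)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)
```

With these toy features, an image and its intensity-shifted copy compare as identical, whereas a pixel-space MSE would report a large mismatch.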
Affiliation(s)
- Steffen Czolbe
- Department of Computer Science, University of Copenhagen, Denmark
- Oswin Krause
- Department of Computer Science, University of Copenhagen, Denmark
- Aasa Feragen
- DTU Compute, Technical University of Denmark, Denmark
34
Zhu Y, Yin X, Meijering E. A Compound Loss Function With Shape Aware Weight Map for Microscopy Cell Segmentation. IEEE Trans Med Imaging 2023; 42:1278-1288. [PMID: 36455082 DOI: 10.1109/tmi.2022.3226226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Microscopy cell segmentation is a crucial step in biological image analysis and a challenging task. In recent years, deep learning has been widely used to tackle this task, with promising results. A critical aspect of training complex neural networks for this purpose is the selection of the loss function, as it affects the learning process. In the field of cell segmentation, most recent research on improving the loss function focuses on addressing the problem of inter-class imbalance. Despite promising achievements, more work is needed, as the challenge of cell segmentation lies not only in the inter-class imbalance but also in the intra-class imbalance (the cost imbalance between the false positives and false negatives of the inference model), the segmentation of cell minutiae, and missing annotations. To deal with these challenges, in this paper we propose a new compound loss function employing a shape-aware weight map. The proposed loss function is inspired by Youden's J index to handle the problem of inter-class imbalance and uses a focal cross-entropy term to penalize the intra-class imbalance and to weight easy and hard samples. The proposed shape-aware weight map can handle the problem of missing annotations and facilitate valid segmentation of cell minutiae. Results of evaluations on all ten 2D+time datasets from the public Cell Tracking Challenge demonstrate 1) the superiority of the proposed loss function with the shape-aware weight map, and 2) that the performance of recent deep learning-based cell segmentation methods can be improved by using the proposed compound loss function.
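The focal cross-entropy term mentioned above can be sketched as standard binary cross-entropy scaled by (1 − p_t)^γ, so that confidently correct (easy) pixels contribute little to the loss. This is a minimal sketch; the parameter values and the function name are assumptions, and the full compound loss additionally includes the Youden's J-inspired term and the shape-aware weight map.

```python
import numpy as np

def focal_bce(p, y, gamma=2.0, eps=1e-7):
    """Focal binary cross-entropy sketch: BCE scaled by (1 - p_t)^gamma,
    where p_t is the predicted probability of the true class, so that
    easy (confident, correct) samples are down-weighted."""
    p = np.clip(p, eps, 1 - eps)          # avoid log(0)
    p_t = np.where(y == 1, p, 1 - p)      # probability of the true class
    return float(np.mean(-((1 - p_t) ** gamma) * np.log(p_t)))
```

With gamma = 0 the term reduces to plain cross-entropy; gamma > 0 shrinks the contribution of well-classified pixels.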
35
Nunley H, Shao B, Grover P, Singh J, Joyce B, Kim-Yip R, Kohrman A, Watters A, Gal Z, Kickuth A, Chalifoux M, Shvartsman S, Posfai E, Brown LM. A novel ground truth dataset enables robust 3D nuclear instance segmentation in early mouse embryos. bioRxiv 2023:2023.03.14.532646. [PMID: 36993260 PMCID: PMC10055179 DOI: 10.1101/2023.03.14.532646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
For investigations into fate specification and cell rearrangements in live images of preimplantation embryos, automated and accurate 3D instance segmentation of nuclei is invaluable; however, the performance of segmentation methods is limited by the images' low signal-to-noise ratio and high voxel anisotropy and by the nuclei's dense packing and variable shapes. Supervised machine learning approaches have the potential to radically improve segmentation accuracy but are hampered by a lack of fully annotated 3D data. In this work, we first establish a novel mouse line expressing the near-infrared nuclear reporter H2B-miRFP720. H2B-miRFP720 is the longest-wavelength nuclear reporter in mice and can be imaged simultaneously with other reporters with minimal overlap. We then generate a dataset, which we call BlastoSPIM, of 3D microscopy images of H2B-miRFP720-expressing embryos with ground truth for nuclear instance segmentation. Using BlastoSPIM, we benchmark the performance of five convolutional neural networks and identify Stardist-3D as the most accurate instance segmentation method across preimplantation development. Stardist-3D, trained on BlastoSPIM, performs robustly up to the end of preimplantation development (> 100 nuclei) and enables studies of fate patterning in the late blastocyst. We then demonstrate BlastoSPIM's usefulness as pre-training data for related problems. BlastoSPIM and its corresponding Stardist-3D models are available at: blastospim.flatironinstitute.org.
Affiliation(s)
- Hayden Nunley
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Binglun Shao
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey, United States of America
- Prateek Grover
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Jaspreet Singh
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Bradley Joyce
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Rebecca Kim-Yip
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Abraham Kohrman
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Aaron Watters
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Zsombor Gal
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Alison Kickuth
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Madeleine Chalifoux
- Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey, United States of America
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Stanislav Shvartsman
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
- Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey, United States of America
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- The Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, New Jersey, United States of America
- Eszter Posfai
- Department of Molecular Biology, Princeton University, Princeton, New Jersey, United States of America
- Lisa M. Brown
- Center for Computational Biology, Flatiron Institute - Simons Foundation, New York, United States of America
36
Jun BH, Ahmadzadegan A, Ardekani AM, Solorio L, Vlachos PP. Multi-feature-Based Robust Cell Tracking. Ann Biomed Eng 2023; 51:604-617. [PMID: 36103061 DOI: 10.1007/s10439-022-03073-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 09/02/2022] [Indexed: 11/29/2022]
Abstract
Cell tracking algorithms have been used to extract cell counts and motility information from time-lapse images of migrating cells. However, these algorithms often fail when the collected images have cells with spatially and temporally varying features, such as morphology, position, and signal-to-noise ratio. Consequently, state-of-the-art algorithms are not robust or reliable because they require manual inputs to overcome these cell feature changes. To address these issues, we present a fully automated, adaptive, and robust feature-based cell tracking algorithm for the accurate detection and tracking of cells in time-lapse images. Our algorithm tackles measurement limitations in two ways. First, we use Hessian filtering and adaptive thresholding to detect the cells in images, overcoming spatial feature variations among the existing cells without manually changing the input thresholds. Second, cell feature parameters are measured, including position, diameter, mean intensity, area, and orientation, and these parameters are used simultaneously to accurately track the cells between subsequent frames, even under poor temporal resolution. Our technique achieved a minimum of 92% detection and tracking accuracy, compared to 16% from Mosaic and TrackMate. Our improved method allows for the extended tracking and characterization of heterogeneous cell behaviors that are of particular interest to intravital imaging users.
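Frame-to-frame linking with multiple simultaneous features, as described above, might be sketched like this: build a cost matrix combining position, diameter, and intensity differences, then assign matches greedily. The weights and the greedy scheme are illustrative assumptions, not the authors' exact algorithm; a Hungarian solver would be the usual upgrade over greedy assignment.

```python
import numpy as np

def link_cells(prev, curr, w=(1.0, 0.5, 0.5)):
    """Greedy frame-to-frame linking on a combined multi-feature cost.
    Each row of prev/curr is (x, y, diameter, mean_intensity); the
    weights w are illustrative assumptions."""
    # pairwise cost: position distance + weighted size and intensity gaps
    cost = (w[0] * np.linalg.norm(prev[:, None, :2] - curr[None, :, :2], axis=2)
            + w[1] * np.abs(prev[:, None, 2] - curr[None, :, 2])
            + w[2] * np.abs(prev[:, None, 3] - curr[None, :, 3]))
    links, used = {}, set()
    for i in np.argsort(cost.min(axis=1)):       # most confident rows first
        j = next((j for j in np.argsort(cost[i]) if j not in used), None)
        if j is not None:
            links[int(i)] = int(j)
            used.add(j)
    return links
```

Because size and intensity enter the cost, two cells that swap positions between frames can still be linked correctly.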
Affiliation(s)
- Brian H Jun
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Adib Ahmadzadegan
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Arezoo M Ardekani
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Luis Solorio
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Purdue Center for Cancer Research, Purdue University, West Lafayette, IN, USA
- Pavlos P Vlachos
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
37
Jiang J, Khan A, Shailja S, Belteton SA, Goebel M, Szymanski DB, Manjunath BS. Segmentation, tracking, and sub-cellular feature extraction in 3D time-lapse images. Sci Rep 2023; 13:3483. [PMID: 36859457 PMCID: PMC9977871 DOI: 10.1038/s41598-023-29149-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Accepted: 01/31/2023] [Indexed: 03/03/2023] Open
Abstract
This paper presents a method for time-lapse 3D cell analysis. Specifically, we consider the problem of accurately localizing and quantitatively analyzing sub-cellular features and of tracking individual cells from time-lapse 3D confocal cell image stacks. The heterogeneity of cells and the volume of multi-dimensional images present a major challenge for fully automated analysis of the morphogenesis and development of cells. This paper is motivated by the pavement cell growth process and the goal of building a quantitative morphogenesis model. We propose a deep-feature-based segmentation method to accurately detect and label each cell region. An adjacency-graph-based method is used to extract sub-cellular features of the segmented cells. Finally, a robust graph-based tracking algorithm using multiple cell features is proposed for associating cells at different time instances. We also demonstrate the generality of our tracking method on C. elegans fluorescent nuclei imagery. Extensive experimental results demonstrate the robustness of the proposed method. The code is available on GitHub, and the method is available as a service through the BisQue portal.
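The adjacency-graph step can be illustrated on a 2D label image: two segmented cells are neighbors wherever their labels touch, and the shared walls between neighbors are where junction features live. A minimal sketch under the assumption of 4-connectivity; the paper itself works on 3D stacks.

```python
import numpy as np

def cell_adjacency(labels):
    """Build the set of neighboring cell pairs from a 2D label image
    (0 = background), using 4-connectivity: an edge (a, b) means cells
    a and b share at least one pixel wall."""
    edges = set()
    for a, b in ((labels[:-1, :], labels[1:, :]),    # vertical neighbors
                 (labels[:, :-1], labels[:, 1:])):   # horizontal neighbors
        touching = (a != b) & (a > 0) & (b > 0)
        for u, v in zip(a[touching], b[touching]):
            edges.add((int(min(u, v)), int(max(u, v))))
    return edges
```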
Affiliation(s)
- Jiaxiang Jiang
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
- Amil Khan
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
- S. Shailja
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
- Samuel A. Belteton
- Department of Botany and Plant Pathology, Purdue University, West Lafayette, USA
- Molecular Biology Program, New Mexico State University, Las Cruces, USA
- Michael Goebel
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
- Daniel B. Szymanski
- Department of Botany and Plant Pathology, Purdue University, West Lafayette, USA
- B. S. Manjunath
- Department of Electrical and Computer Engineering, University of California, Santa Barbara, USA
38
Yuan LX, Xu HM, Zhang ZY, Liu XW, Li JX, Wang JH, Cui HB, Huang HR, Zheng Y, Ma D. High precision tracking analysis of cell position and motion fields using 3D U-net network models. Comput Biol Med 2023; 154:106577. [PMID: 36753978 DOI: 10.1016/j.compbiomed.2023.106577] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Revised: 01/09/2023] [Accepted: 01/22/2023] [Indexed: 01/27/2023]
Abstract
Cells are the basic units of biological organization, and the quantitative analysis of cellular states is an important topic in medicine, valuable in revealing the complex mechanisms of organisms in the microscopic world. To better understand cell cycle changes as well as drug actions, we need to track cell migration and division. In this paper, we propose a novel engineering model for tracking cells using cell position and motion fields (CPMF). The training samples do not need to be manually annotated; instead, they are modified and edited against the ground truth using auxiliary tools. The core idea of the project is to combine detection and correlation: cell image sequences are used to train a U-Net network model composed of 3D CNNs, which can track the migration, division, and entry and exit of cells in the field of view with high accuracy in all directions. The average detection accuracy of the cell coordinates is 98.38% and the average tracking accuracy is 98.70%.
Affiliation(s)
- Li-Xin Yuan
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Hong-Mei Xu
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Zi-Yu Zhang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Xu-Wei Liu
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Jing-Xin Li
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Jia-He Wang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Hao-Bo Cui
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Hao-Ran Huang
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Yue Zheng
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
- Da Ma
- International Research Centre for Nano Handling and Manufacturing of China, Changchun University of Science and Technology, Changchun, 130022, China; Ministry of Education Key Laboratory for Cross-Scale Micro and Nano Manufacturing, Changchun University of Science and Technology, Changchun, 130022, China
39
Chen MJ, Pappas GA, Massella D, Schlothauer A, Motta SE, Falk V, Cesarovic N, Ermanni P. Tailoring crystallinity for hemocompatible and durable PEEK cardiovascular implants. Biomater Adv 2023; 146:213288. [PMID: 36731379 DOI: 10.1016/j.bioadv.2023.213288] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 01/09/2023] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Polymers have the potential to replace metallic or bioprosthetic heart valve components due to superior durability and inertness while allowing for native tissue-like flexibility. Despite these appealing properties, certain polymers such as polyetheretherketone (PEEK) have issues with hemocompatibility, which have previously been addressed through assorted complex processes. In this paper, we explore the enhancement of PEEK hemocompatibility through polymer crystallinity. Amorphous, semi-crystalline, and crystalline PEEK are investigated, in addition to a highly crystalline carbon fiber (CF)/PEEK composite material (CFPEEK). The functional group density of the PEEK samples is determined, showing that higher crystallinity results in an increased amount of surface carbonyl functional groups. The increase of crystallinity (and negatively charged groups) appears to cause significant reductions in platelet adhesion (33 vs. 1.5% surface coverage), hemolysis (1.55 vs. 0.75%·cm⁻²), and thrombin generation rate (4840 vs. 1585 mU/mL/min/cm²). In combination with the hemocompatibility study, mechanical characterization demonstrates that tailoring crystallinity is a simple and effective method to control both the hemocompatibility and the mechanical performance of PEEK. Furthermore, the results show that the CFPEEK composite performed very well in all categories due to its enhanced crystallinity and complete carbon encapsulation, allowing the unique properties of CFPEEK to enable new concepts in cardiovascular device design.
Affiliation(s)
- Mary Jialu Chen
- Laboratory of Composite Materials and Adaptive Structures, ETH Zürich, Switzerland
- Georgios A Pappas
- Laboratory of Composite Materials and Adaptive Structures, ETH Zürich, Switzerland
- Daniele Massella
- Laboratory of Composite Materials and Adaptive Structures, ETH Zürich, Switzerland
- Arthur Schlothauer
- Laboratory of Composite Materials and Adaptive Structures, ETH Zürich, Switzerland
- Sarah E Motta
- Institute for Regenerative Medicine, University of Zürich, Switzerland
- Volkmar Falk
- Translational Cardiovascular Technologies, ETH Zürich, Switzerland; Klinik für Herz-, Thorax- und Gefäßchirurgie, Deutsches Herzzentrum Berlin, Germany; Klinik für Kardiovaskuläre Chirurgie, Charité Universitätsmedizin Berlin, Germany
- Nikola Cesarovic
- Translational Cardiovascular Technologies, ETH Zürich, Switzerland; Klinik für Herz-, Thorax- und Gefäßchirurgie, Deutsches Herzzentrum Berlin, Germany
- Paolo Ermanni
- Laboratory of Composite Materials and Adaptive Structures, ETH Zürich, Switzerland
40
Park SA, Sipka T, Krivá Z, Lutfalla G, Nguyen-Chi M, Mikula K. Segmentation-based tracking of macrophages in 2D+time microscopy movies inside a living animal. Comput Biol Med 2023; 153:106499. [PMID: 36599208 DOI: 10.1016/j.compbiomed.2022.106499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 12/19/2022] [Accepted: 12/27/2022] [Indexed: 12/31/2022]
Abstract
The automated segmentation and tracking of macrophages during their migration are challenging tasks due to their dynamically changing shapes and motions. This paper proposes a new algorithm to achieve automatic cell tracking in time-lapse microscopy macrophage data. First, we design a segmentation method employing space-time filtering, local Otsu's thresholding, and the SUBSURF (subjective surface segmentation) method. Next, partial trajectories for cells overlapping in the temporal direction are extracted from the segmented images. Finally, the extracted trajectories are linked by considering their direction of movement. The segmented images and the trajectories obtained with the proposed method are compared with those of semi-automatic segmentation and manual tracking. The proposed tracking achieved 97.4% accuracy for macrophage data under challenging conditions: feeble fluorescence intensity and irregular shapes and motions of macrophages. We expect that the automatically extracted trajectories of macrophages can provide evidence of how macrophages migrate depending on their polarization modes in situations such as wound healing.
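Otsu's thresholding, used above as the local thresholding step, picks the histogram cut that maximizes the between-class variance of the intensity distribution. Below is a global-image sketch (the paper applies it in local windows):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Global Otsu sketch: choose the histogram cut maximizing the
    between-class variance of the intensity distribution."""
    hist, edges = np.histogram(img, bins=bins)
    hist = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # background class weight
    w1 = 1.0 - w0                        # foreground class weight
    mu = np.cumsum(hist * centers)       # cumulative class-0 intensity mass
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = np.nan_to_num((mu[-1] * w0 - mu) ** 2 / (w0 * w1))
    return centers[np.argmax(var_between)]
```

On a bimodal image the returned threshold falls between the two modes, cleanly separating foreground from background.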
Affiliation(s)
- Seol Ah Park
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia
- Tamara Sipka
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France
- Zuzana Krivá
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia
- Georges Lutfalla
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France
- Mai Nguyen-Chi
- LPHI Laboratory of Pathogen Host Interaction, CNRS, Univ. Montpellier, Place E.Bataillon-Building 24, 34095, Montpellier Cedex 05, France
- Karol Mikula
- Department of Mathematics and Descriptive Geometry, Slovak University of Technology in Bratislava, Radlinskeho 11, Bratislava, 810 05, Slovakia
| |
41
Antonello P, Morone D, Pirani E, Uguccioni M, Thelen M, Krause R, Pizzagalli DU. Tracking unlabeled cancer cells imaged with low resolution in wide migration chambers via U-NET class-1 probability (pseudofluorescence). J Biol Eng 2023; 17:5. [PMID: 36694208] [PMCID: PMC9872392] [DOI: 10.1186/s13036-022-00321-9]
Abstract
Cell migration is a pivotal biological process whose dysregulation is found in many diseases, including inflammation and cancer. Advances in microscopy technologies now make it possible to study cell migration in vitro, within engineered microenvironments that resemble in vivo conditions. However, to capture an entire 3D migration chamber for extended periods of time and with high temporal resolution, images are generally acquired at low resolution, which poses a challenge for data analysis. Indeed, cell detection and tracking are hampered by the large pixel size (i.e., cell diameter down to 2 pixels), the possibly low signal-to-noise ratio, and distortions of the cell shape due to changes in z-axis position. Although fluorescent staining can be used to facilitate cell detection, it may alter cell behavior and may suffer from fluorescence loss over time (photobleaching). Here we describe a protocol that employs an established deep learning method (U-NET) to convert the transmitted light (TL) signal from unlabeled cells imaged at low resolution into a fluorescence-like signal (class-1 probability). We demonstrate its application to the study of cancer cell migration, obtaining a significant improvement in tracking accuracy while not suffering from photobleaching, reflected in the possibility of tracking cells for three-fold longer periods of time. To facilitate application of the protocol, we provide WID-U, an open-source plugin for FIJI and Imaris imaging software, the training dataset used in this paper, and the code to train the network for custom experimental settings.
Affiliation(s)
- Paola Antonello: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland; Graduate School of Cellular and Molecular Sciences, University of Bern, CH-3012 Bern, Switzerland
- Diego Morone: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland; Graduate School of Cellular and Molecular Sciences, University of Bern, CH-3012 Bern, Switzerland
- Edisa Pirani: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Mariagrazia Uguccioni: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Marcus Thelen: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland
- Rolf Krause: Università della Svizzera italiana, Euler Institute, CH-6962 Lugano-Viganello, Switzerland; FernUni, Faculty of Mathematics and Informatics, Brig, Switzerland
- Diego Ulisse Pizzagalli: Università della Svizzera italiana, Faculty of Biomedical Sciences, Institute for Research in Biomedicine, CH-6500 Bellinzona, Switzerland; Università della Svizzera italiana, Euler Institute, CH-6962 Lugano-Viganello, Switzerland
42
Geometric deep learning reveals the spatiotemporal features of microscopic motion. Nat Mach Intell 2023. [DOI: 10.1038/s42256-022-00595-0]
Abstract
The characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Owing to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
43
Hu S, Zhao X, Huang L, Huang K. Global Instance Tracking: Locating Target More Like Humans. IEEE Trans Pattern Anal Mach Intell 2023; 45:576-592. [PMID: 35196228] [DOI: 10.1109/tpami.2022.3153312]
Abstract
Target tracking, an essential ability of the human visual system, has been simulated by computer vision tasks. However, existing trackers perform well in austere experimental environments but fail under challenges such as occlusion and fast motion. This large gap indicates that existing research measures only tracking performance rather than intelligence. How can the intelligence level of trackers be judged scientifically? Distinct from decision-making problems, the lack of three requirements (a challenging task, a fair environment, and a scientific evaluation procedure) makes this question strenuous to answer. In this article, we first propose the global instance tracking (GIT) task, which is designed to search for an arbitrary user-specified instance in a video without any assumptions about camera or motion consistency, to model the human visual tracking ability. We then construct VideoCube, a high-quality and large-scale benchmark, to create a challenging environment. Finally, we design a scientific evaluation procedure that uses human capabilities as the baseline to judge tracking intelligence. Additionally, we provide an online platform with a toolkit and an updated leaderboard. Although the experimental results indicate a definite gap between trackers and humans, we hope this work takes a step toward generating authentic human-like trackers. The database, toolkit, evaluation server, and baseline results are available at http://videocube.aitestunion.com.
44
Hradecka L, Wiesner D, Sumbal J, Koledova ZS, Maska M. Segmentation and Tracking of Mammary Epithelial Organoids in Brightfield Microscopy. IEEE Trans Med Imaging 2023; 42:281-290. [PMID: 36170389] [DOI: 10.1109/tmi.2022.3210714]
Abstract
We present an automated, deep-learning-based workflow to quantitatively analyze the spatiotemporal development of mammary epithelial organoids in two-dimensional time-lapse (2D+t) sequences acquired with a brightfield microscope at high resolution. It combines a convolutional neural network (U-Net), purposely trained on computer-generated bioimage data created by a conditional generative adversarial network (pix2pixHD), to infer semantic segmentation; adaptive morphological filtering to identify organoid instances; and a shape-similarity-constrained, instance-segmentation-correcting tracking procedure to reliably follow the organoid instances of interest over time. By validating it on real 2D+t sequences of mouse mammary epithelial organoids of morphologically different phenotypes, we demonstrate that the workflow achieves reliable segmentation and tracking performance, providing a reproducible and labor-free alternative to manual analyses of the acquired bioimage data.
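The overlap-based linking idea underlying such instance-tracking procedures can be sketched in a few lines (a generic greedy IoU matcher, not the paper's shape-similarity-constrained algorithm; the masks are hypothetical pixel-coordinate sets):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks represented as
    sets of (row, col) pixel coordinates."""
    union = len(mask_a | mask_b)
    return len(mask_a & mask_b) / union if union else 0.0

def link_by_overlap(masks_prev, masks_curr, min_iou=0.5):
    """Greedily match each current-frame instance to the previous-frame
    instance with the highest IoU, keeping matches above `min_iou`.
    Returns {current_index: previous_index}."""
    links = {}
    for j, mb in enumerate(masks_curr):
        score, i = max((iou(ma, mb), i) for i, ma in enumerate(masks_prev))
        if score >= min_iou:
            links[j] = i
    return links

# Two organoid-like instances; the second frame lists them in a new order.
prev = [{(0, 0), (0, 1), (1, 0), (1, 1)}, {(5, 5), (5, 6)}]
curr = [{(5, 6), (5, 7)}, {(0, 1), (1, 0), (1, 1), (2, 1)}]
print(link_by_overlap(prev, curr))  # -> {1: 0}; curr instance 0 has no confident match
```

Because organoids move slowly relative to the frame interval, consecutive masks of the same instance overlap heavily, which is why IoU-style linking works; the published workflow additionally constrains matches by shape similarity and corrects segmentation errors along the track.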
45
Malin-Mayor C, Hirsch P, Guignard L, McDole K, Wan Y, Lemon WC, Kainmueller D, Keller PJ, Preibisch S, Funke J. Automated reconstruction of whole-embryo cell lineages by learning from sparse annotations. Nat Biotechnol 2023; 41:44-49. [PMID: 36065022] [PMCID: PMC7614077] [DOI: 10.1038/s41587-022-01427-7]
Abstract
We present a method to automatically identify and track nuclei in time-lapse microscopy recordings of entire developing embryos. The method combines deep learning and global optimization. On a mouse dataset, it reconstructs 75.8% of cell lineages spanning 1 h, as compared to 31.8% for the competing method. Our approach improves understanding of where and when cell fate decisions are made in developing embryos, tissues, and organs.
Affiliation(s)
- Peter Hirsch: Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Faculty of Mathematics and Natural Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
- Leo Guignard: HHMI Janelia, Ashburn, VA, USA; CNRS, UTLN, LIS 7020, Turing Centre for Living Systems, Aix Marseille University, Marseille, France
- Katie McDole: HHMI Janelia, Ashburn, VA, USA; MRC Laboratory of Molecular Biology, Cambridge, UK
- Yinan Wan: HHMI Janelia, Ashburn, VA, USA; Biozentrum, University of Basel, Basel, Switzerland
- Dagmar Kainmueller: Max Delbrück Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany; Faculty of Mathematics and Natural Sciences, Humboldt-Universität zu Berlin, Berlin, Germany
46
BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022; 8:99. [PMID: 36529755] [PMCID: PMC9760640] [DOI: 10.1038/s41522-022-00362-4]
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
47
Alabdaly AA, El-Sayed WG, Hassan YF. RAMRU-CAM: Residual-Atrous MultiResUnet with Channel Attention Mechanism for cell segmentation. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-222631]
Abstract
Cell segmentation in microscope images is a difficult and widely studied task. In recent years, deep-learning-based techniques have made remarkable progress in medical and microscopy image segmentation applications. In this paper, we propose a novel deep learning approach called Residual-Atrous MultiResUnet with Channel Attention Mechanism (RAMRU-CAM) for cell segmentation, which combines the MultiResUnet architecture with a Channel Attention Mechanism (CAM) and Residual-Atrous connections. The Residual-Atrous path mitigates the semantic gap between the encoder and decoder stages and manages the spatial dimension of feature maps. Furthermore, CAM blocks are used in the decoder stages to better preserve spatial details before the feature maps from the encoder stages are concatenated with those of the decoder stages. We evaluated our proposed model on the PhC-C2DH-U373 and Fluo-N2DH-GOWT1 datasets. The experimental results show that our proposed model outperforms recent variants of the U-Net model and the state-of-the-art approaches. We demonstrate that our model segments cells precisely while using fewer parameters and lower computational complexity.
Affiliation(s)
- Ammar A. Alabdaly: Department of Mathematics and Computer Science, Alexandria University, Alexandria, Egypt
- Wagdy G. El-Sayed: Department of Mathematics and Computer Science, Alexandria University, Alexandria, Egypt
- Yasser F. Hassan: Faculty of Computer and Data Science, Alexandria University, Alexandria, Egypt
48
Kenneweg P, Stallmann D, Hammer B. Novel transfer learning schemes based on Siamese networks and synthetic data. Neural Comput Appl 2022; 35:8423-8436. [PMID: 36568475] [PMCID: PMC9757634] [DOI: 10.1007/s00521-022-08115-2]
Abstract
Transfer learning schemes based on deep networks trained on huge image corpora offer state-of-the-art technologies in computer vision. Here, supervised and semi-supervised approaches constitute efficient technologies that work well with comparatively small data sets. Yet, such applications are currently restricted to domains where suitable deep network models are readily available. In this contribution, we address an important application area in biotechnology, the automatic analysis of CHO-K1 suspension growth in microfluidic single-cell cultivation, where data characteristics are very dissimilar to existing domains and trained deep networks cannot easily be adapted by classical transfer learning. We propose a novel transfer learning scheme that expands the recently introduced Twin-VAE architecture, which is trained on realistic and synthetic data, and we adapt its specialized training procedure for the transfer-learning domain. In this domain, often few or no labels exist and annotations are costly. We investigate a novel transfer learning strategy that incorporates simultaneous retraining on natural and synthetic data using an invariant shared representation as well as suitable target variables, while learning to handle unseen data from a different microscopy technology. We show the superiority of this variation of our Twin-VAE architecture over the state-of-the-art transfer learning methodology in image processing as well as over classical image processing technologies, which persists even with strongly shortened training times and leads to satisfactory results in this domain. The source code is available at https://github.com/dstallmann/transfer_learning_twinvae, works cross-platform, and is open-source and free (MIT licensed) software. We make the data sets available at https://pub.uni-bielefeld.de/record/2960030.
Affiliation(s)
- Philip Kenneweg: Machine Learning Group, Bielefeld University, Bielefeld, Germany
- Dominik Stallmann: Machine Learning Group, Bielefeld University, Bielefeld, Germany
- Barbara Hammer: Machine Learning Group, Bielefeld University, Bielefeld, Germany
49
Fukai YT, Kawaguchi K. LapTrack: linear assignment particle tracking with tunable metrics. Bioinformatics 2022; 39:6887138. [PMID: 36495181] [PMCID: PMC9825786] [DOI: 10.1093/bioinformatics/btac799]
Abstract
MOTIVATION: Particle tracking is an important analysis step in a variety of scientific fields and is particularly indispensable for the construction of cellular lineages from live images. Although various supervised machine learning methods have been developed for cell tracking, the diversity of the data still necessitates heuristic methods that require parameter estimation from small amounts of data. For this, solving tracking as a linear assignment problem (LAP) has been widely applied and demonstrated to be efficient. However, there has been no implementation that allows custom connection costs, parallel parameter tuning with ground-truth annotations, and the functionality to preserve ground-truth connections, limiting the application to datasets with partial annotations.
RESULTS: We developed LapTrack, a LAP-based tracker that allows arbitrary cost functions and inputs, parallel parameter tuning, and ground-truth track preservation. Analysis of real and artificial datasets demonstrates the advantage of custom metric functions for improving tracking scores over distance-only cases. The tracker can easily be combined with other Python-based tools for particle detection, segmentation, and visualization.
AVAILABILITY AND IMPLEMENTATION: LapTrack is available as a Python package on PyPI, and notebook examples are shared at https://github.com/yfukai/laptrack. The data and code for this publication are hosted at https://github.com/NoneqPhysLivingMatterLab/laptrack-optimisation.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
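The frame-to-frame linking that LAP-based trackers perform can be illustrated with a stdlib-only sketch (brute-force assignment for a handful of detections, not LapTrack's actual implementation; the point coordinates and the cutoff value are hypothetical):

```python
from itertools import permutations
from math import dist

def link_frames(pts_a, pts_b, cost_cutoff=15.0):
    """Link detections between two consecutive frames by brute-force
    linear assignment minimizing total Euclidean distance.
    Assumes equal detection counts; links longer than `cost_cutoff`
    are dropped, i.e. treated as a track end plus a track start."""
    assert len(pts_a) == len(pts_b), "sketch assumes equal counts"
    best_perm, best_total = None, float("inf")
    for perm in permutations(range(len(pts_b))):  # O(n!) -- toy sizes only
        total = sum(dist(pts_a[i], pts_b[j]) for i, j in enumerate(perm))
        if total < best_total:
            best_total, best_perm = total, perm
    return [(i, j) for i, j in enumerate(best_perm)
            if dist(pts_a[i], pts_b[j]) <= cost_cutoff]

frame0 = [(0.0, 0.0), (10.0, 0.0)]
frame1 = [(9.5, 1.0), (0.5, -0.5)]  # detections arrive in shuffled order
print(link_frames(frame0, frame1))  # -> [(0, 1), (1, 0)]
```

Production LAP trackers solve the assignment with the Hungarian or Jonker-Volgenant algorithm and add dummy birth/death nodes instead of the equal-count assumption; swapping the Euclidean cost for an arbitrary metric is the "tunable metrics" idea in the title.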
Affiliation(s)
- Kyogo Kawaguchi: Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, Kobe 650-0047, Japan; RIKEN Cluster for Pioneering Research, Kobe 650-0047, Japan; Universal Biology Institute, The University of Tokyo, Tokyo 113-0033, Japan
50
Midtvedt B, Pineda J, Skärberg F, Olsén E, Bachimanchi H, Wesén E, Esbjörner EK, Selander E, Höök F, Midtvedt D, Volpe G. Single-shot self-supervised object detection in microscopy. Nat Commun 2022; 13:7492. [PMID: 36470883] [PMCID: PMC9722899] [DOI: 10.1038/s41467-022-35004-y]
Abstract
Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
Affiliation(s)
- Benjamin Midtvedt: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Jesús Pineda: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Fredrik Skärberg: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Erik Olsén: Department of Physics, Chalmers University of Technology, Gothenburg, Sweden
- Harshith Bachimanchi: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Emelie Wesén: Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Elin K. Esbjörner: Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden
- Erik Selander: Department of Marine Sciences, University of Gothenburg, Gothenburg, Sweden
- Fredrik Höök: Department of Physics, Chalmers University of Technology, Gothenburg, Sweden
- Daniel Midtvedt: Department of Physics, University of Gothenburg, Gothenburg, Sweden
- Giovanni Volpe: Department of Physics, University of Gothenburg, Gothenburg, Sweden