1
Robert F, Calovoulos A, Facq L, Decoeur F, Gontier E, Grosset CF, Denis de Senneville B. Enhancing cell instance segmentation in scanning electron microscopy images via a deep contour closing operator. Comput Biol Med 2025; 190:109972. PMID: 40174501. DOI: 10.1016/j.compbiomed.2025.109972.
Abstract
Accurately segmenting and individualizing cells in scanning electron microscopy (SEM) images is a highly promising technique for elucidating tissue architecture in oncology. While current artificial intelligence (AI)-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, while also reducing the need for residual manual correction. A convolutional neural network (CNN) Closing Operator (COp-Net) is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored partial differential equation (PDE). To ensure reproducibility, COp-Net weights and the source code for solving the PDE are publicly available at https://github.com/Florian-40/CellSegm. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from patient-derived xenograft (PDX) hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator yields a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Our work should therefore ultimately facilitate the study of tumour tissue bioarchitecture in the field of onconanotomy.
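The training-data trick above degrades clean contour maps with a tailored PDE; the actual PDE is available in the authors' repository, but the general idea can be illustrated with a plain isotropic diffusion (heat-equation) step applied to a rasterized contour mask. The sketch below is a generic stand-in under that assumption, not the paper's PDE, and the function name and parameters are illustrative.

```python
import numpy as np

def degrade_contour_map(contour_map, n_steps=60, dt=0.2):
    """Blur and weaken a cell-contour probability map with isotropic diffusion.

    contour_map: 2D float array in [0, 1] (e.g. a rasterized contour mask).
    Repeated explicit heat-equation steps erode thin contour segments first,
    loosely mimicking regions of deficient or missing contour signal.
    NOTE: generic heat equation, not the tailored PDE used by COp-Net.
    """
    u = contour_map.astype(float).copy()
    for _ in range(n_steps):
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
               + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u += dt * lap  # explicit Euler step; stable for dt <= 0.25
    return np.clip(u, 0.0, 1.0)

# Example: degrade a synthetic square contour.
mask = np.zeros((128, 128))
mask[32:96, 32] = mask[32:96, 95] = mask[32, 32:96] = mask[95, 32:96] = 1.0
degraded = degrade_contour_map(mask)
```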
Affiliation(s)
- Florian Robert: Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France; INRIA Bordeaux, MONC team, 200 avenue de la Vieille Tour, Talence, F-33400, France; Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Alexia Calovoulos: Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France; Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Laurent Facq: Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France.
- Fanny Decoeur: Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Etienne Gontier: Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Christophe F Grosset: Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Baudouin Denis de Senneville: Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France; INRIA Bordeaux, MONC team, 200 avenue de la Vieille Tour, Talence, F-33400, France.
2
Annasamudram N, Zhao J, Oluwadare O, Prashanth A, Makrogiannis S. Scale selection and machine learning based cell segmentation and tracking in time lapse microscopy. Sci Rep 2025; 15:11717. PMID: 40188205. PMCID: PMC11972337. DOI: 10.1038/s41598-025-95993-w.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to top-performing techniques from the CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes and multiple imaging techniques. The code of our method is publicly available at https://github.com/smakrogi/CSTQ_Pub/ (release v.3.2).
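The track-generation step links detected cell regions between consecutive frames. As a minimal illustration of such frame-to-frame association (not the paper's graph-based framework), detections can be matched by solving a linear assignment problem on centroid distances; the function name, the distance threshold, and the choice of the Hungarian algorithm below are assumptions for the sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(centroids_t, centroids_t1, max_dist=25.0):
    """Associate cell detections in frame t with detections in frame t+1.

    centroids_t, centroids_t1: (N, 2) and (M, 2) arrays of centroid coordinates.
    Solves a linear assignment on pairwise distances and discards links whose
    distance exceeds max_dist (unmatched cells become track ends or starts).
    """
    cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

# Example with three cells drifting slightly between frames.
frame_t = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 90.0]])
frame_t1 = np.array([[12.0, 11.0], [49.0, 43.0], [82.0, 88.0]])
print(link_detections(frame_t, frame_t1))  # [(0, 0), (1, 1), (2, 2)]
```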
Affiliation(s)
- Nagasoujanya Annasamudram: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Jian Zhao: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Olaitan Oluwadare: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Aashish Prashanth: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Sokratis Makrogiannis: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA.
3
Stamatov R, Uzunova S, Kicheva Y, Karaboeva M, Blagoev T, Stoynov S. Supra-second tracking and live-cell karyotyping reveal principles of mitotic chromosome dynamics. Nat Cell Biol 2025; 27:654-667. PMID: 40185948. PMCID: PMC11991918. DOI: 10.1038/s41556-025-01637-6.
Abstract
Mitotic chromosome dynamics are essential for the three-dimensional organization of the genome during the cell cycle, but the spatiotemporal characteristics of this process remain unclear due to methodological challenges. While Hi-C methods capture interchromosomal contacts, they lack single-cell temporal dynamics, whereas microscopy struggles with bleaching and phototoxicity. Here, to overcome these limitations, we introduce Facilitated Segmentation and Tracking of Chromosomes in Mitosis Pipeline (FAST CHIMP), pairing time-lapse super-resolution microscopy with deep learning. FAST CHIMP tracked all human chromosomes with 8-s resolution from prophase to telophase, identified 15 out of 23 homologue pairs in single cells and compared chromosomal positioning between mother and daughter cells. It revealed a centrosome-motion-dependent flow that governs the mapping between chromosome locations at prophase and their metaphase plate position. In addition, FAST CHIMP measured supra-second dynamics of intra- and interchromosomal contacts. This tool adds a dynamic dimension to the study of chromatin behaviour in live cells, promising advances beyond the scope of existing methods.
Affiliation(s)
- Rumen Stamatov: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria.
- Sonya Uzunova: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Yoana Kicheva: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Maria Karaboeva: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Tavian Blagoev: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Stoyno Stoynov: Institute of Molecular Biology, Bulgarian Academy of Sciences, Sofia, Bulgaria.
4
Melnikova A, Maška M, Matula P. Topology-preserving contourwise shape fusion. Sci Rep 2025; 15:10713. PMID: 40155428. PMCID: PMC11953431. DOI: 10.1038/s41598-025-94977-0.
Abstract
The preservation of morphological features, such as protrusions and concavities, and of the topology of input shapes is important when establishing reference data for benchmarking segmentation algorithms or when constructing a mean or median shape. We present a contourwise topology-preserving fusion method, called shape-aware topology-preserving means (SATM), for merging complex simply connected shapes. The method is based on key point matching and piecewise contour averaging. Unlike existing pixelwise and contourwise fusion methods, SATM preserves topology and does not smooth morphological features. We also present a detailed comparison of SATM with state-of-the-art fusion techniques for the purpose of benchmarking and median shape construction. Our experiments show that SATM outperforms these techniques in terms of shape-related measures that reflect shape complexity, establishing it as a reliable method both for building a consensus of segmentation annotations and for computing mean shapes.
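As a point of contrast with SATM, the sketch below shows what a naive contourwise average looks like: resample two closed contours by arc length, crudely align their starting points with a circular shift, and average corresponding points. It has none of SATM's key-point matching or topology guarantees and assumes both contours share the same orientation; it is included only to make the notion of piecewise contour averaging concrete, and all names are illustrative.

```python
import numpy as np

def resample_contour(contour, n_points=200):
    """Resample a closed 2D contour (M, 2) to n_points equally spaced by arc length."""
    closed = np.vstack([contour, contour[:1]])            # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
    t = np.linspace(0.0, s[-1], n_points, endpoint=False)
    return np.column_stack([np.interp(t, s, closed[:, 0]),
                            np.interp(t, s, closed[:, 1])])

def naive_contour_mean(c1, c2, n_points=200):
    """Average two closed contours after a crude starting-point alignment."""
    a = resample_contour(c1, n_points)
    b = resample_contour(c2, n_points)
    # Pick the circular shift of b that minimizes total point-to-point distance.
    shifts = [np.sum(np.linalg.norm(a - np.roll(b, k, axis=0), axis=1))
              for k in range(n_points)]
    b_aligned = np.roll(b, int(np.argmin(shifts)), axis=0)
    return 0.5 * (a + b_aligned)

# Example: average a circle and a slightly elongated ellipse.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([1.3 * np.cos(theta), 0.8 * np.sin(theta)])
mean_shape = naive_contour_mean(circle, ellipse)
```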
5
Kaondal S, Taassob A, Jeon S, Lee SH, Nuñez HL, Akindipe BA, Lee H, Joo SY, Oliveira SM, Argüello-Miranda O. Generative frame interpolation enhances tracking of biological objects in time-lapse microscopy. bioRxiv [Preprint] 2025: 2025.03.23.644838. PMID: 40196554. PMCID: PMC11974701. DOI: 10.1101/2025.03.23.644838.
Abstract
Object tracking in microscopy videos is crucial for understanding biological processes. While existing methods often require fine-tuning tracking algorithms to fit the image dataset, here we explored an alternative paradigm: augmenting the image time-lapse dataset to fit the tracking algorithm. To test this approach, we evaluated whether generative video frame interpolation can augment the temporal resolution of time-lapse microscopy and facilitate object tracking in multiple biological contexts. We systematically compared the capacity of Latent Diffusion Model for Video Frame Interpolation (LDMVFI), Real-time Intermediate Flow Estimation (RIFE), Compression-Driven Frame Interpolation (CDFI), and Frame Interpolation for Large Motion (FILM) to generate synthetic microscopy images derived from interpolating real images. Our testing image time series ranged from fluorescently labeled nuclei to bacteria, yeast, cancer cells, and organoids. We showed that the off-the-shelf frame interpolation algorithms produced bio-realistic image interpolation even without dataset-specific retraining, as judged by high structural image similarity and the capacity to produce segmentations that closely resemble results from real images. Using a simple tracking algorithm based on mask overlap, we confirmed that frame interpolation significantly improved tracking across several datasets without requiring extensive parameter tuning, while capturing complex trajectories that were difficult to resolve in the original image time series. Taken together, our findings highlight the potential of generative frame interpolation to improve tracking in time-lapse microscopy across diverse scenarios, suggesting that a generalist tracking algorithm for microscopy could be developed by combining deep learning segmentation models with generative frame interpolation.
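The tracking rule used in the study is mask overlap between consecutive frames. A minimal sketch of such an overlap (IoU) linker follows; frame interpolation helps precisely because intermediate frames increase the overlap between successive masks of the same object. The function names and the IoU threshold are illustrative, and this is not the authors' exact implementation.

```python
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def link_by_overlap(labels_t, labels_t1, min_iou=0.3):
    """Link instance labels between consecutive (possibly interpolated) frames.

    labels_t, labels_t1: 2D integer label images (0 = background).
    Returns {label_in_t: label_in_t1} for pairs whose IoU exceeds min_iou.
    """
    links = {}
    for a in np.unique(labels_t)[1:]:          # skip background label 0
        best, best_iou = None, min_iou
        for b in np.unique(labels_t1)[1:]:
            score = iou(labels_t == a, labels_t1 == b)
            if score > best_iou:
                best, best_iou = b, score
        if best is not None:
            links[int(a)] = int(best)
    return links
```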
Affiliation(s)
- Swaraj Kaondal: Department of Plant and Microbial Biology, North Carolina State University, Raleigh, USA
- Arsalan Taassob: Department of Plant and Microbial Biology, North Carolina State University, Raleigh, USA
- Sara Jeon: Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Su Hyun Lee: Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Henrique L. Nuñez: Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
- Bukola A. Akindipe: Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
- Hyunsook Lee: Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- So Young Joo: Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Samuel M.D. Oliveira: Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
6
Zhou FY, Marin Z, Yapp C, Zou Q, Nanes BA, Daetwyler S, Jamieson AR, Islam MT, Jenkins E, Gihana GM, Lin J, Borges HM, Chang BJ, Weems A, Morrison SJ, Sorger PK, Fiolka R, Dean KM, Danuser G. Universal consensus 3D segmentation of cells from 2D segmented stacks. bioRxiv [Preprint] 2025: 2024.05.03.592249. PMID: 38766074. PMCID: PMC11100681. DOI: 10.1101/2024.05.03.592249.
Abstract
Cell segmentation is the foundation of a wide range of microscopy-based biological studies. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities. This has been driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Manual labeling of 3D cells to train broadly applicable segmentation models is prohibitive, and even in high-contrast images annotation is ambiguous and time-consuming. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D method generating pixel-based instance cell masks. u-Segment3D translates and enhances 2D instance segmentations to a 3D consensus instance segmentation without training data, as demonstrated on 11 real-life datasets, >70,000 cells, spanning single cells, cell aggregates, and tissue. Moreover, u-Segment3D is competitive with native 3D segmentation, even exceeding it when cells are crowded and have complex morphologies.
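For readers unfamiliar with the 2D-to-3D problem, the naive baseline is to stitch per-slice 2D instance masks into 3D labels by propagating IDs between adjacent slices. The sketch below implements only that greedy baseline; it is not u-Segment3D's consensus aggregation, and it breaks down exactly in the crowded, complex-morphology cases the paper targets. All names and the IoU threshold are assumptions.

```python
import numpy as np

def stitch_2d_to_3d(slices, min_iou=0.5):
    """Naively stitch a z-stack of 2D instance label images into 3D labels.

    slices: list of 2D int arrays (0 = background). Each object in slice z
    inherits the ID of the previous-slice object it overlaps best, provided
    the IoU passes min_iou; otherwise it starts a new 3D object.
    """
    out = np.zeros((len(slices),) + slices[0].shape, dtype=np.int32)
    next_id = 1
    for z, lab in enumerate(slices):
        for a in np.unique(lab)[1:]:                 # skip background 0
            mask = lab == a
            gid = 0
            if z > 0:
                ids, counts = np.unique(out[z - 1][mask], return_counts=True)
                counts, ids = counts[ids != 0], ids[ids != 0]
                if ids.size:
                    k = int(np.argmax(counts))
                    inter, cand = counts[k], int(ids[k])
                    union = mask.sum() + (out[z - 1] == cand).sum() - inter
                    if inter / union >= min_iou:
                        gid = cand
            if gid == 0:
                gid, next_id = next_id, next_id + 1
            out[z][mask] = gid
    return out
```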
Affiliation(s)
- Felix Y. Zhou: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zach Marin: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA; Max Perutz Labs, Department of Structural and Computational Biology, University of Vienna, Vienna, Austria
- Clarence Yapp: Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA; Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Qiongjing Zou: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Benjamin A. Nanes: Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Dermatology, UT Southwestern Medical Center, Dallas, TX, USA
- Stephan Daetwyler: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew R. Jamieson: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Md Torikul Islam: Children's Research Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Edward Jenkins: Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
- Gabriel M. Gihana: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jinlong Lin: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Hazel M. Borges: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bo-Jui Chang: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Weems: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Sean J. Morrison: Children's Research Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA; Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Peter K. Sorger: Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA; Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA; Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
- Reto Fiolka: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Cell Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kevin M. Dean: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Gaudenz Danuser: Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA; Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
7
Nöltner L, Engeland K, Kohler R. CeDaD - a novel assay for simultaneous tracking of cell death and division in a single population. Cell Death Discov 2025; 11:86. PMID: 40038265. PMCID: PMC11880512. DOI: 10.1038/s41420-025-02370-7.
Abstract
The cell division cycle and the various forms of programmed cell death are interconnected. A prominent example is the tumor suppressor p53, which not only induces apoptosis but also plays an important role in the arrest of the cell cycle. Consequently, simultaneous analysis of cell division and cell death is frequently of significant interest in cell biology research. Traditionally, these processes require distinct assays, making concurrent analysis challenging. To address this, we present a novel combined assay, called the CeDaD (Cell Death and Division) assay, which allows for the simultaneous quantification of cell division and cell death within a single-cell population. This assay utilizes a straightforward flow cytometric approach, combining a carboxyfluorescein succinimidyl ester (CFSE)-based staining to monitor cell division with an annexin V-derived staining to assess the extent of cell death.
Affiliation(s)
- Lukas Nöltner: Molecular Oncology, Faculty of Medicine, University of Leipzig, Leipzig, Germany
- Kurt Engeland: Molecular Oncology, Faculty of Medicine, University of Leipzig, Leipzig, Germany
- Robin Kohler: Molecular Oncology, Faculty of Medicine, University of Leipzig, Leipzig, Germany.
8
Archit A, Freckmann L, Nair S, Khalid N, Hilt P, Rajashekar V, Freitag M, Teuber C, Buckley G, von Haaren S, Gupta S, Dengel A, Ahmed S, Pape C. Segment Anything for Microscopy. Nat Methods 2025; 22:579-591. PMID: 39939717. PMCID: PMC11903314. DOI: 10.1038/s41592-024-02580-4.
Abstract
Accurate segmentation of objects in microscopy images remains a bottleneck for many researchers despite the number of tools developed for this purpose. Here, we present Segment Anything for Microscopy (μSAM), a tool for segmentation and tracking in multidimensional microscopy data. It is based on Segment Anything, a vision foundation model for image segmentation. We extend it by fine-tuning generalist models for light and electron microscopy that clearly improve segmentation quality for a wide range of imaging conditions. We also implement interactive and automatic segmentation in a napari plugin that can speed up diverse segmentation tasks and provides a unified solution for microscopy annotation across different microscopy modalities. Our work constitutes the application of vision foundation models in microscopy, laying the groundwork for solving image analysis tasks in this domain with a small set of powerful deep learning models.
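μSAM builds on the Segment Anything model family. The sketch below shows the upstream segment_anything prompting interface (a single point prompt returning candidate masks); μSAM fine-tunes such backbones for light and electron microscopy and ships its own napari plugin and Python API, so the checkpoint path and prompt coordinates here are placeholders rather than μSAM's own interface.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a SAM backbone; "sam_vit_b.pth" is a placeholder checkpoint path.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# Stand-in RGB image; in practice this would be a microscopy frame converted to RGB.
image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# Prompt with one foreground point; SAM returns candidate masks with quality scores.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]
```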
Affiliation(s)
- Anwai Archit: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Luca Freckmann: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Sushmita Nair: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Nabeel Khalid: German Research Center for Artificial Intelligence, Kaiserslautern, Germany; RPTU Kaiserslautern-Landau, Kaiserslautern, Germany
- Vikas Rajashekar: German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Marei Freitag: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Carolin Teuber: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Genevieve Buckley: Ramaciotti Centre for Cryo-Electron Microscopy, Monash University, Melbourne, Victoria, Australia
- Sebastian von Haaren: Georg-August-University Göttingen, Campus Institute Data Science, Goettingen, Germany
- Sagnik Gupta: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany
- Andreas Dengel: German Research Center for Artificial Intelligence, Kaiserslautern, Germany; RPTU Kaiserslautern-Landau, Kaiserslautern, Germany
- Sheraz Ahmed: German Research Center for Artificial Intelligence, Kaiserslautern, Germany
- Constantin Pape: Georg-August-University Göttingen, Institute of Computer Science, Goettingen, Germany.
9
Fedorchuk K, Russell SM, Zibaei K, Yassin M, Hicks DG. DeepKymoTracker: A tool for accurate construction of cell lineage trees for highly motile cells. PLoS One 2025; 20:e0315947. PMID: 39928591. PMCID: PMC11809811. DOI: 10.1371/journal.pone.0315947.
Abstract
Time-lapse microscopy has long been used to record cell lineage trees. Successful construction of a lineage tree requires tracking and preserving the identity of multiple cells across many images. If a single cell is misidentified, the identity of all its progeny will be corrupted and inferences about heritability may be incorrect. Successfully avoiding such identity errors is challenging, however, when studying highly motile cells such as T lymphocytes, which readily change shape from one image to the next. To address this problem, we developed DeepKymoTracker, a pipeline for combined tracking and segmentation. Central to DeepKymoTracker is the use of a seed, a marker for each cell which transmits information about cell position and identity between sets of images during tracking, as well as between tracking and segmentation steps. The seed allows a 3D convolutional neural network (CNN) to detect and associate cells across several consecutive images in an integrated way, reducing the risk of a single poor image corrupting cell identity. DeepKymoTracker was trained extensively on synthetic and experimental T lymphocyte images. It was benchmarked against five publicly available, automatic analysis tools and outperformed them in almost all respects. The software is written in pure Python and is freely available. We suggest this tool is particularly suited to the tracking of cells in suspension, whose fast motion makes lineage assembly particularly difficult.
Affiliation(s)
- Khelina Fedorchuk: Optical Sciences Centre, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Sarah M. Russell: Optical Sciences Centre, Swinburne University of Technology, Hawthorn, Victoria, Australia; Immune Signalling Laboratory, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, Victoria, Australia
- Kajal Zibaei: Optical Sciences Centre, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Mohammed Yassin: Optical Sciences Centre, Swinburne University of Technology, Hawthorn, Victoria, Australia
- Damien G. Hicks: Optical Sciences Centre, Swinburne University of Technology, Hawthorn, Victoria, Australia
10
Zhou Y, Li L, Wang C, Song L, Yang G. GobletNet: Wavelet-Based High-Frequency Fusion Network for Semantic Segmentation of Electron Microscopy Images. IEEE Trans Med Imaging 2025; 44:1058-1069. PMID: 39365717. DOI: 10.1109/tmi.2024.3474028.
Abstract
Semantic segmentation of electron microscopy (EM) images is crucial for nanoscale analysis. With the development of deep neural networks (DNNs), semantic segmentation of EM images has achieved remarkable success. However, current EM image segmentation models are usually extensions or adaptations of natural or biomedical models. They lack the full exploration and utilization of the intrinsic characteristics of EM images. Furthermore, they are often designed only for several specific segmentation objects and lack versatility. In this study, we quantitatively analyze the characteristics of EM images compared with those of natural and other biomedical images via the wavelet transform. To better utilize these characteristics, we design a high-frequency (HF) fusion network, GobletNet, which outperforms state-of-the-art models by a large margin in the semantic segmentation of EM images. We use the wavelet transform to generate HF images as extra inputs and use an extra encoding branch to extract HF information. Furthermore, we introduce a fusion-attention module (FAM) into GobletNet to facilitate better absorption and fusion of information from raw images and HF images. Extensive benchmarking on seven public EM datasets (EPFL, CREMI, SNEMI3D, UroCell, MitoEM, Nanowire and BetaSeg) demonstrates the effectiveness of our model. The code is available at https://github.com/Yanfeng-Zhou/GobletNet.
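GobletNet feeds a high-frequency (HF) companion image into an extra encoder branch. A simple way to obtain such an HF image is a one-level 2D wavelet decomposition that keeps only the detail sub-bands; the sketch below (using PyWavelets) is illustrative and may differ from GobletNet's exact HF construction, and the wavelet choice and function name are assumptions.

```python
import numpy as np
import pywt

def high_frequency_image(img, wavelet="haar"):
    """Build a high-frequency companion image from a single 2D EM image.

    A one-level 2D DWT splits the image into an approximation (low-frequency)
    band and three detail bands; summing the detail magnitudes and upsampling
    back to the input size gives a crude HF map usable as an extra input.
    """
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), wavelet)
    hf = np.abs(cH) + np.abs(cV) + np.abs(cD)
    # Nearest-neighbour upsampling by 2, then crop to the original shape.
    return np.kron(hf, np.ones((2, 2)))[: img.shape[0], : img.shape[1]]

# Example on a random stand-in image.
em_image = np.random.rand(256, 256)
hf_image = high_frequency_image(em_image)
```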
11
Bruch R, Vitacolonna M, Nürnberg E, Sauer S, Rudolf R, Reischl M. Improving 3D deep learning segmentation with biophysically motivated cell synthesis. Commun Biol 2025; 8:43. PMID: 39799275. PMCID: PMC11724918. DOI: 10.1038/s42003-025-07469-2.
Abstract
Biomedical research increasingly relies on three-dimensional (3D) cell culture models, and artificial-intelligence-based analysis can potentially facilitate detailed and accurate feature extraction at the single-cell level. However, this requires precise segmentation of 3D cell datasets, which in turn demands high-quality ground truth for training. Manual annotation, the gold standard for ground truth data, is too time-consuming and thus not feasible for the generation of large 3D training datasets. To address this, we present a framework for generating 3D training data which integrates biophysical modeling for realistic cell shape and alignment. Our approach allows the in silico generation of coherent membrane and nuclei signals that enable the training of segmentation models utilizing both channels for improved performance. Furthermore, we present a generative adversarial network (GAN) training scheme that generates not only image data but also matching labels. Quantitative evaluation shows superior performance of biophysically motivated synthetic training data, even outperforming manual annotation and pretrained models. This underscores the potential of incorporating biophysical modeling for enhancing synthetic training data quality.
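To make the idea of generating matched image and label volumes concrete, the sketch below produces a toy 3D dataset of spherical "nuclei" with a paired label map. It is deliberately simplistic: the published framework adds biophysical shape and alignment modeling plus GAN-based appearance synthesis on top of this kind of idea, and every name and parameter here is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_nuclei_volume(shape=(64, 128, 128), n_cells=30, radius=8, seed=0):
    """Create a matched (image, label) pair of spherical 'nuclei' in 3D."""
    rng = np.random.default_rng(seed)
    labels = np.zeros(shape, dtype=np.int32)
    zz, yy, xx = np.indices(shape)
    for i in range(1, n_cells + 1):
        cz = rng.uniform(radius, shape[0] - radius)
        cy = rng.uniform(radius, shape[1] - radius)
        cx = rng.uniform(radius, shape[2] - radius)
        r = radius * rng.uniform(0.7, 1.3)
        sphere = (zz - cz) ** 2 + (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        labels[sphere & (labels == 0)] = i          # do not overwrite neighbours
    image = gaussian_filter((labels > 0).astype(float), sigma=1.5)  # fake PSF blur
    image += rng.normal(scale=0.05, size=shape)                     # detector noise
    return image, labels

image, labels = synthetic_nuclei_volume()
```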
Affiliation(s)
- Roman Bruch: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany.
- Mario Vitacolonna: Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany; CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Elina Nürnberg: Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany; CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Simeon Sauer: Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany; CHARISMA, Mannheim University of Applied Sciences, Mannheim, Germany
- Rüdiger Rudolf: Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany; CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Markus Reischl: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
12
Kesapragada M, Sun YH, Zhu K, Recendez C, Fregoso D, Yang HY, Rolandi M, Isseroff R, Zhao M, Gomez M. A data-driven approach to establishing cell motility patterns as predictors of macrophage subtypes and their relation to cell morphology. PLoS One 2024; 19:e0315023. PMID: 39739899. DOI: 10.1371/journal.pone.0315023.
Abstract
The motility of macrophages in response to microenvironment stimuli is a hallmark of innate immunity, where macrophages play pro-inflammatory or pro-reparatory roles depending on their activation status during wound healing. Cell size and shape have been informative in defining macrophage subtypes. Studies show that pro- and anti-inflammatory macrophages exhibit distinct migratory behaviors in vitro, in 3D, and in vivo, but this link has not been rigorously studied. We apply both morphology- and motility-based image processing approaches to analyze live-cell images consisting of macrophage phenotypes. Macrophage subtypes are differentiated from primary murine bone-marrow-derived macrophages using a potent lipopolysaccharide (LPS) or the cytokine interleukin-4 (IL-4). We show that morphology is tightly linked to motility, which leads to our hypothesis that motility analysis could be used alone or in conjunction with morphological features for improved prediction of macrophage subtypes. We train a support vector machine (SVM) classifier to predict macrophage subtypes based on morphology alone, motility alone, and both morphology and motility combined. We show that motility has predictive capability comparable to morphology; however, using both measures can enhance prediction. While motility and morphological features can be individually ambiguous identifiers, together they provide significantly improved prediction accuracies (75%) from a training dataset of 1000 cells tracked over time using only phase-contrast time-lapse microscopy. Thus, the approach combining cell motility and cell morphology information can lead to methods that accurately assess functionally diverse macrophage phenotypes quickly and efficiently. This can support the development of cost-efficient and high-throughput methods for screening biochemicals targeting macrophage polarization.
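The classification step can be reproduced in outline with scikit-learn: standardize per-cell morphology and motility features, fit an SVM (an RBF kernel is assumed here; the paper does not dictate one), and estimate accuracy by cross-validation. The feature layout and labels below are stand-ins, not the study's actual measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per tracked cell; columns stand in for morphology features
# (area, eccentricity, ...) concatenated with motility features (mean speed,
# net displacement, turning angle, ...). y: 0 = LPS-treated, 1 = IL-4-treated.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))          # placeholder for real measurements
y = rng.integers(0, 2, size=1000)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```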
Affiliation(s)
- Manasa Kesapragada: Department of Applied Mathematics, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Yao-Hui Sun: Department of Ophthalmology & Vision Science, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Kan Zhu: Department of Ophthalmology & Vision Science, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Cynthia Recendez: Department of Ophthalmology & Vision Science, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Daniel Fregoso: Department of Dermatology, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Hsin-Ya Yang: Department of Dermatology, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Marco Rolandi: Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, United States of America
- Rivkah Isseroff: Department of Dermatology, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Min Zhao: Department of Ophthalmology & Vision Science, School of Medicine, University of California, Davis, Sacramento, CA, United States of America; Department of Dermatology, School of Medicine, University of California, Davis, Sacramento, CA, United States of America
- Marcella Gomez: Department of Applied Mathematics, University of California, Santa Cruz, Santa Cruz, CA, United States of America
13
Vitacolonna M, Bruch R, Schneider R, Jabs J, Hafner M, Reischl M, Rudolf R. A spheroid whole mount drug testing pipeline with machine-learning based image analysis identifies cell-type specific differences in drug efficacy on a single-cell level. BMC Cancer 2024; 24:1542. PMID: 39696122. DOI: 10.1186/s12885-024-13329-9.
Abstract
BACKGROUND The growth and drug response of tumors are influenced by their stromal composition, both in vivo and in 3D cell culture models. Cell-type-inherent features, as well as mutual relationships between the different cell types in a tumor, might affect the drug susceptibility of the tumor as a whole and/or of its cell populations. However, a lack of single-cell procedures with sufficient detail has hampered the automated observation of cell-type-specific effects in three-dimensional stroma-tumor cell co-cultures. METHODS Here, we developed a high-content pipeline ranging from the setup of novel tumor-fibroblast spheroid co-cultures, through optical tissue clearing, whole-mount staining, and 3D confocal microscopy, to optimized 3D-image segmentation and a 3D deep-learning model to automate the analysis of a range of cell-type-specific processes, such as cell proliferation, apoptosis, necrosis, drug susceptibility, nuclear morphology, and cell density. RESULTS This demonstrated that co-cultures of KP-4 tumor cells with CCD-1137Sk fibroblasts exhibited a growth advantage compared to tumor cell mono-cultures, resulting in higher cell counts following cytostatic treatments with paclitaxel and doxorubicin. However, cell-type-specific single-cell analysis revealed that this apparent benefit of co-cultures was due to a higher resilience of fibroblasts against the drugs and did not indicate a higher drug resistance of the KP-4 cancer cells during co-culture. Conversely, cancer cells were in part even more susceptible in the presence of fibroblasts than in mono-cultures. CONCLUSION In summary, this underlines that a novel cell-type-specific single-cell analysis method can reveal critical insights regarding the mechanism of action of drug substances in three-dimensional cell culture models.
Affiliation(s)
- Mario Vitacolonna: CeMOS, Mannheim University of Applied Sciences, 68163, Mannheim, Germany; Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, 68163, Mannheim, Germany.
- Roman Bruch: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Julia Jabs: Merck Healthcare KGaA, 64293, Darmstadt, Germany
- Mathias Hafner: Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, 68163, Mannheim, Germany; Institute of Medical Technology, Medical Faculty Mannheim of Heidelberg University, Mannheim University of Applied Sciences, 68167, Mannheim, Germany
- Markus Reischl: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany
- Rüdiger Rudolf: CeMOS, Mannheim University of Applied Sciences, 68163, Mannheim, Germany; Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, 68163, Mannheim, Germany
14
Wang H, Li X, You X, Zhao G. Harnessing the power of artificial intelligence for human living organoid research. Bioact Mater 2024; 42:140-164. PMID: 39280585. PMCID: PMC11402070. DOI: 10.1016/j.bioactmat.2024.08.027.
Abstract
As a powerful paradigm, artificial intelligence (AI) is rapidly impacting every aspect of our day-to-day life and scientific research through interdisciplinary transformations. Living human organoids (LOs) have great potential for reshaping in vitro many aspects of true human organs in vivo, including organ development, disease occurrence, and drug responses. To date, AI has driven revolutionary advances of human organoids in life science, precision medicine and pharmaceutical science in an unprecedented way. Herein, we provide a forward-looking review of the frontiers of LOs, covering the engineered construction strategies and multidisciplinary technologies for developing LOs, highlighting the cutting-edge achievements and the prospective applications of AI in LOs, particularly in biological study, disease occurrence, disease diagnosis and prediction, and drug screening in preclinical assays. Moreover, we shed light on the new research trends harnessing the power of AI for LO research in the context of multidisciplinary technologies. The aim of this paper is to motivate researchers to explore organ function throughout the human life cycle, narrow the gap between in vitro microphysiological models and the real human body, accurately predict human-related responses to external stimuli (cues and drugs), accelerate the preclinical-to-clinical transformation, and ultimately enhance the health and well-being of patients.
Affiliation(s)
- Hui Wang: Master Lab for Innovative Application of Nature Products, National Center of Technology Innovation for Synthetic Biology, Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences (CAS), Tianjin, 300308, PR China
- Xiangyang Li: Henan Engineering Research Center of Food Microbiology, College of food and bioengineering, Henan University of Science and Technology, Luoyang, 471023, PR China; Haihe Laboratory of Synthetic Biology, Tianjin, 300308, PR China
- Xiaoyan You: Master Lab for Innovative Application of Nature Products, National Center of Technology Innovation for Synthetic Biology, Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences (CAS), Tianjin, 300308, PR China; Henan Engineering Research Center of Food Microbiology, College of food and bioengineering, Henan University of Science and Technology, Luoyang, 471023, PR China
- Guoping Zhao: Master Lab for Innovative Application of Nature Products, National Center of Technology Innovation for Synthetic Biology, Tianjin Institute of Industrial Biotechnology, Chinese Academy of Sciences (CAS), Tianjin, 300308, PR China; CAS-Key Laboratory of Synthetic Biology, CAS Center for Excellence in Molecular Plant Sciences, Institute of Plant Physiology and Ecology, Chinese Academy of Sciences, Shanghai, 200032, PR China; CAS Key Laboratory of Quantitative Engineering Biology, Shenzhen Institute of Synthetic Biology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, PR China; Engineering Laboratory for Nutrition, Shanghai Institute of Nutrition and Health, Chinese Academy of Sciences, Shanghai, 200031, PR China
15
Verma A, Yu C, Bachl S, Lopez I, Schwartz M, Moen E, Kale N, Ching C, Miller G, Dougherty T, Pao E, Graf W, Ward C, Jena S, Marson A, Carnevale J, Van Valen D, Engelhardt BE. Cellular behavior analysis from live-cell imaging of TCR T cell-cancer cell interactions. bioRxiv [Preprint] 2024: 2024.11.19.624390. PMID: 39605616. PMCID: PMC11601648. DOI: 10.1101/2024.11.19.624390.
Abstract
T cell therapies, such as chimeric antigen receptor (CAR) T cells and T cell receptor (TCR) T cells, are a growing class of anti-cancer treatments. However, expansion to novel indications and beyond last-line treatment requires engineering cells' dynamic population behaviors. Here we develop the tools for cellular behavior analysis of T cells from live-cell imaging, a common and inexpensive experimental setup used to evaluate engineered T cells. We first develop a state-of-the-art segmentation and tracking pipeline, Caliban, based on human-in-the-loop deep learning. We then build the Occident pipeline to collect a catalog of phenotypes that characterize cell populations, morphology, movement, and interactions in co-cultures of modified T cells and antigen-presenting tumor cells. We use Caliban and Occident to interrogate how interactions between T cells and cancer cells differ when beneficial knock-outs of RASA2 and CUL5 are introduced into TCR T cells. We apply spatiotemporal models to quantify T cell recruitment and proliferation after interactions with cancer cells. We discover that, compared to a safe harbor knockout control, RASA2 knockout T cells have longer interaction times with cancer cells leading to greater T cell activation and killing efficacy, while CUL5 knockout T cells have increased proliferation rates leading to greater numbers of T cells for hunting. Together, segmentation and tracking from Caliban and phenotype quantification from Occident enable cellular behavior analysis to better engineer T cell therapies for improved cancer treatment.
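One of the phenotypes highlighted above is the duration of T cell-cancer cell interactions. A minimal way to quantify it from per-frame segmentation masks is to flag frames in which a slightly dilated T-cell mask touches tumour pixels and then measure the longest run of contact frames; the sketch below is illustrative only and far simpler than the Occident pipeline's phenotype definitions, with all names and the dilation radius chosen as assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def longest_contact_run(tcell_masks, tumor_masks, dilate_px=2):
    """Longest run of consecutive frames in which a tracked T cell touches tumour.

    tcell_masks / tumor_masks: lists of 2D boolean masks (one per frame) for a
    single tracked T cell and the tumour-cell foreground, respectively.
    """
    contact = [
        bool(np.logical_and(binary_dilation(t, iterations=dilate_px), c).any())
        for t, c in zip(tcell_masks, tumor_masks)
    ]
    best = run = 0
    for flag in contact:
        run = run + 1 if flag else 0   # reset the run when contact is lost
        best = max(best, run)
    return best
```

Multiplying the returned frame count by the imaging interval gives an interaction time in minutes.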
Affiliation(s)
- Archit Verma: Institute of Data Science and Biotechnology, Gladstone Institutes, San Francisco, CA, USA
- Changhua Yu: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Stefanie Bachl: School of Medicine, University of California, San Francisco, San Francisco, CA, USA; Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- Ivan Lopez: School of Medicine, Stanford University, Stanford, California, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Morgan Schwartz: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Erick Moen: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Nupura Kale: School of Medicine, University of California, San Francisco, San Francisco, CA, USA; Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- Carter Ching: School of Medicine, University of California, San Francisco, San Francisco, CA, USA; Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- Geneva Miller: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Tom Dougherty: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Ed Pao: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- William Graf: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Carl Ward: Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- Siddhartha Jena: Stem Cell and Regenerative Biology, Harvard University, Cambridge, Massachusetts, USA
- Alex Marson: School of Medicine, University of California, San Francisco, San Francisco, CA, USA; Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- Julia Carnevale: School of Medicine, University of California, San Francisco, San Francisco, CA, USA; Gladstone-UCSF Institute of Genomic Immunology, San Francisco, CA, USA
- David Van Valen: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Barbara E Engelhardt: Institute of Data Science and Biotechnology, Gladstone Institutes, San Francisco, CA, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
16
Annasamudram N, Zhao J, Prashanth A, Makrogiannis S. Scale Selection and Machine Learning-based Cell Segmentation and Tracking in Time Lapse Microscopy. Research Square [Preprint] 2024: rs.3.rs-5228158. PMID: 39574900. PMCID: PMC11581055. DOI: 10.21203/rs.3.rs-5228158/v1.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to top-performing techniques from the CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes and multiple imaging techniques.
Affiliation(s)
- Nagasoujanya Annasamudram: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Jian Zhao: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Aashish Prashanth: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Sokratis Makrogiannis: Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
17
Cimini BA, Bankhead P, D'Antuono R, Fazeli E, Fernandez-Rodriguez J, Fuster-Barceló C, Haase R, Jambor HK, Jones ML, Jug F, Klemm AH, Kreshuk A, Marcotti S, Martins GG, McArdle S, Miura K, Muñoz-Barrutia A, Murphy LC, Nelson MS, Nørrelykke SF, Paul-Gilloteaux P, Pengo T, Pylvänäinen JW, Pytowski L, Ravera A, Reinke A, Rekik Y, Strambio-De-Castillia C, Thédié D, Uhlmann V, Umney O, Wiggins L, Eliceiri KW. The crucial role of bioimage analysts in scientific research and publication. J Cell Sci 2024; 137:jcs262322. PMID: 39475207. PMCID: PMC11698046. DOI: 10.1242/jcs.262322.
Abstract
Bioimage analysis (BIA), a crucial discipline in biological research, overcomes the limitations of subjective analysis in microscopy through the creation and application of quantitative and reproducible methods. The establishment of dedicated BIA support within academic institutions is vital to improving research quality and efficiency and can significantly advance scientific discovery. However, a lack of training resources, limited career paths and insufficient recognition of the contributions made by bioimage analysts prevent the full realization of this potential. This Perspective - the result of the recent The Company of Biologists Workshop 'Effectively Communicating Bioimage Analysis', which aimed to summarize the global BIA landscape, categorize obstacles and offer possible solutions - proposes strategies to bring about a cultural shift towards recognizing the value of BIA by standardizing tools, improving training and encouraging formal credit for contributions. We also advocate for increased funding, standardized practices and enhanced collaboration, and we conclude with a call to action for all stakeholders to join efforts in advancing BIA.
Affiliation(s)
- Beth A. Cimini: Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Peter Bankhead: Edinburgh Pathology, Centre for Genomic & Experimental Medicine and CRUK Scotland Centre, Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
- Rocco D'Antuono: Crick Advanced Light Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK; Department of Biomedical Engineering, School of Biological Sciences, University of Reading, Reading RG6 6AY, UK
- Elnaz Fazeli: Biomedicum Imaging Unit, Faculty of Medicine and HiLIFE, University of Helsinki, FI-00014 Helsinki, Finland
- Julia Fernandez-Rodriguez: Centre for Cellular Imaging, Sahlgrenska Academy, University of Gothenburg, SE-405 30 Gothenburg, Sweden
- Robert Haase: Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Universität Leipzig, 04105 Leipzig, Germany
- Helena Klara Jambor: DAViS, University of Applied Sciences of the Grisons, 7000 Chur, Switzerland
- Martin L. Jones: Electron Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK
- Florian Jug: Fondazione Human Technopole, 20157 Milan, Italy
- Anna H. Klemm: Science for Life Laboratory BioImage Informatics Facility and Department of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden
- Anna Kreshuk: Cell Biology and Biophysics, European Molecular Biology Laboratory, 69115 Heidelberg, Germany
- Stefania Marcotti: Randall Centre for Cell and Molecular Biophysics and Research Management & Innovation Directorate, King's College London, London SE1 1UL, UK
- Gabriel G. Martins: GIMM - Gulbenkian Institute for Molecular Medicine, R. Quinta Grande 6, 2780-156 Oeiras, Portugal
- Sara McArdle: La Jolla Institute for Immunology, Microscopy Core Facility, San Diego, CA 92037, USA
- Kota Miura: Bioimage Analysis & Research, BIO-Plaza 1062, Nishi-Furumatsu 2-26-22 Kita-ku, Okayama, 700-0927, Japan
- Laura C. Murphy: Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
- Michael S. Nelson: University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA
- Thomas Pengo: Minnesota Supercomputing Institute, University of Minnesota Twin Cities, Minneapolis, MN 55005, USA
- Joanna W. Pylvänäinen: Åbo Akademi University, Faculty of Science and Engineering, Biosciences, 20520 Turku, Finland
- Lior Pytowski: Pixel Biology Ltd, 9 South Park Court, East Avenue, Oxford OX4 1YZ, UK
- Arianna Ravera: Scientific Computing and Research Support Unit, University of Lausanne, 1005 Lausanne, Switzerland
- Annika Reinke: Division of Intelligent Medical Systems and Helmholtz Imaging, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
- Yousr Rekik: Université Grenoble Alpes, CNRS, CEA, IRIG, Laboratoire de chimie et de biologie des métaux, F-38000 Grenoble, France; Université Grenoble Alpes, CEA, IRIG, Laboratoire Modélisation et Exploration des Matériaux, F-38000 Grenoble, France
- Daniel Thédié: Institute of Cell Biology, The University of Edinburgh, Edinburgh EH9 3FF, UK
- Oliver Umney: School of Computing, University of Leeds, Leeds LS2 9JT, UK
- Laura Wiggins: University of Sheffield, Department of Materials Science and Engineering, Sheffield S10 2TN, UK
- Kevin W. Eliceiri: University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA
18
Wang Y, Zhou S, Quan Y, Liu Y, Zhou B, Chen X, Ma Z, Zhou Y. Label-free spatiotemporal decoding of single-cell fate via acoustic driven 3D tomography. Mater Today Bio 2024; 28:101201. PMID: 39221213. PMCID: PMC11364901. DOI: 10.1016/j.mtbio.2024.101201.
Abstract
Label-free three-dimensional imaging plays a crucial role in unraveling the complexities of cellular functions and interactions in biomedical research. Conventional single-cell optical tomography techniques offer affordability and the convenience of bypassing laborious cell labelling protocols. However, these methods are encumbered by restricted illumination scanning ranges on the abaxial plane, resulting in the loss of intricate cellular imaging details. The ability to fully control cellular rotation across all angles has emerged as an optimal solution for capturing comprehensive structural details of cells. Here, we introduce a label-free, cost-effective, and readily fabricated contactless acoustic-induced vibration system, specifically designed to enable multi-degree-of-freedom rotation of cells, ultimately attaining stable in-situ rotation. Furthermore, by integrating this system with advanced deep learning technologies, we perform 3D reconstruction and morphological analysis on diverse cell types, thereby validating high-precision cell identification. Notably, long-term observation of cells reveals distinct features associated with drug-induced apoptosis in both cancerous and normal cell populations. This methodology, based on deep learning-enabled 3D cell reconstruction, charts a novel trajectory for real-time cellular visualization, offering promising advancements in the realms of drug screening and post-single-cell analysis, thereby addressing potential clinical requisites.
Affiliation(s)
- Yuxin Wang
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Shizheng Zhou
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Yue Quan
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Yu Liu
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Bingpu Zhou
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Xiuping Chen
- State Key Laboratory of Quality Research in Chinese Medicine, Institute of Chinese Medical Sciences, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
| | - Zhichao Ma
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai, 200240, China
| | - Yinning Zhou
- Joint Key Laboratory of the Ministry of Education, Institute of Applied Physics and Materials Engineering, University of Macau, Avenida da Universidade, Taipa, Macau, 999078, China
19
Chen L, Fu S, Zhang Z. CMTT-JTracker: a fully test-time adaptive framework serving automated cell lineage construction. Brief Bioinform 2024; 25:bbae591. [PMID: 39552066 PMCID: PMC11570544 DOI: 10.1093/bib/bbae591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2024] [Revised: 10/14/2024] [Accepted: 10/31/2024] [Indexed: 11/19/2024] Open
Abstract
Cell tracking is an essential function needed in automated cellular activity monitoring. In practice, processing methods that strike a balance between computational efficiency and accuracy, while demonstrating robust generalizability across diverse cell datasets, are highly desired. This paper develops a central-metric fully test-time adaptive framework for cell tracking (CMTT-JTracker). First, a CMTT mechanism is designed for the pre-segmentation of cell images, which enables extracting target information at different resolutions without additional training. Next, a multi-task learning network with a spatial attention scheme is developed to simultaneously realize detection and re-identification tasks based on features extracted by CMTT. Experimental results demonstrate that CMTT-JTracker exhibits remarkable biological and tracking performance compared with benchmark tracking methods. It achieves a multiple object tracking accuracy (MOTA) of 0.894 on Fluo-N2DH-SIM+ and a MOTA of 0.850 on PhC-C2DL-PSC. Experimental results further confirm that CMTT applied solely as a segmentation unit outperforms SOTA segmentation benchmarks on various datasets, particularly excelling in scenarios with dense cells. The Dice coefficients of CMTT range from a high of 0.928 to a low of 0.758 across different datasets.
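For readers unfamiliar with the metrics quoted above: MOTA aggregates misses, false positives and identity switches over all frames (MOTA = 1 - (FN + FP + IDSW) / GT), while the Dice coefficient measures overlap between a predicted and a reference mask. A minimal NumPy sketch of the Dice computation (illustrative only, not code from the CMTT-JTracker paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two partially overlapping 30x30 squares on a 64x64 grid.
a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
print(round(dice_coefficient(a, b), 3))  # 0.444
```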
Affiliation(s)
- Liuyin Chen
- Department of Data Science, College of Computing, City University of Hong Kong, Hong Kong SAR, China
| | - Sanyuan Fu
- Hefei National Laboratory for Physical Sciences at the Microscale and Department of Physics, University of Science and Technology of China, Hefei, Anhui, China
| | - Zijun Zhang
- Department of Data Science, College of Computing, City University of Hong Kong, Hong Kong SAR, China
20
Bragantini J, Theodoro I, Zhao X, Huijben TAPM, Hirata-Miyasaki E, VijayKumar S, Balasubramanian A, Lao T, Agrawal R, Xiao S, Lammerding J, Mehta S, Falcão AX, Jacobo A, Lange M, Royer LA. Ultrack: pushing the limits of cell tracking across biological scales. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.09.02.610652. [PMID: 39282368 PMCID: PMC11398427 DOI: 10.1101/2024.09.02.610652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
Tracking live cells across 2D, 3D, and multi-channel time-lapse recordings is crucial for understanding tissue-scale biological processes. Despite advancements in imaging technology, achieving accurate cell tracking remains challenging, particularly in complex and crowded tissues where cell segmentation is often ambiguous. We present Ultrack, a versatile and scalable cell-tracking method that tackles this challenge by considering candidate segmentations derived from multiple algorithms and parameter sets. Ultrack employs temporal consistency to select optimal segments, ensuring robust performance even under segmentation uncertainty. We validate our method on diverse datasets, including terabyte-scale developmental time-lapses of zebrafish, fruit fly, and nematode embryos, as well as multi-color and label-free cellular imaging. We show that Ultrack achieves state-of-the-art performance on the Cell Tracking Challenge and demonstrates superior accuracy in tracking densely packed embryonic cells over extended periods. Moreover, we propose an approach to tracking validation via dual-channel sparse labeling that enables high-fidelity ground truth generation, pushing the boundaries of long-term cell tracking assessment. Our method is freely available as a Python package with Fiji and napari plugins and can be deployed in a high-performance computing environment, facilitating widespread adoption by the research community.
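Ultrack itself casts segment selection as an integer linear program over hierarchies of candidate segments; purely as a hedged illustration of the temporal-consistency idea (a greedy toy heuristic, not the authors' algorithm), one can keep, frame by frame, the candidate mask that best overlaps the mask selected in the previous frame:

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def greedy_temporal_selection(candidates_per_frame):
    """candidates_per_frame: list over frames, each a list of candidate binary masks.
    Seed with the largest candidate in frame 0, then pick the most temporally
    consistent candidate (highest IoU with the previous pick) in each later frame."""
    selected = [max(candidates_per_frame[0], key=lambda m: m.sum())]
    for candidates in candidates_per_frame[1:]:
        selected.append(max(candidates, key=lambda m: iou(m, selected[-1])))
    return selected
```

The actual method optimizes such consistency jointly over all cells and frames rather than greedily per cell, which is what makes it robust under segmentation uncertainty.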
Affiliation(s)
| | - Ilan Theodoro
- Chan Zuckerberg Biohub, San Francisco, United States
- Institute of Computing - State University of Campinas, Campinas, Brazil
| | - Xiang Zhao
- Chan Zuckerberg Biohub, San Francisco, United States
| | | | | | | | | | - Tiger Lao
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Richa Agrawal
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
| | - Sheng Xiao
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Jan Lammerding
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
- Meinig School of Biomedical Engineering - Cornell University, Ithaca, United States
| | - Shalin Mehta
- Chan Zuckerberg Biohub, San Francisco, United States
| | | | - Adrian Jacobo
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Merlin Lange
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Loïc A Royer
- Chan Zuckerberg Biohub, San Francisco, United States
21
Engbrecht M, Grundei D, Dilger A, Wiedemann H, Aust AK, Baumgärtner S, Helfrich S, Kergl-Räpple F, Bürkle A, Mangerich A. Monitoring nucleolar-nucleoplasmic protein shuttling in living cells by high-content microscopy and automated image analysis. Nucleic Acids Res 2024; 52:e72. [PMID: 39036969 PMCID: PMC11347172 DOI: 10.1093/nar/gkae598] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2023] [Revised: 05/25/2024] [Accepted: 06/26/2024] [Indexed: 07/23/2024] Open
Abstract
The nucleolus has core functions in ribosome biosynthesis, but also acts as a regulatory hub in a plethora of non-canonical processes, including cellular stress. Upon DNA damage, several DNA repair factors shuttle between the nucleolus and the nucleoplasm. Yet, the molecular mechanisms underlying such spatio-temporal protein dynamics remain to be deciphered. Here, we present a novel imaging platform to investigate nucleolar-nucleoplasmic protein shuttling in living cells. For image acquisition, we used a commercially available automated fluorescence microscope and for image analysis, we developed a KNIME workflow with implementation of machine learning-based tools. We validated the method with different nucleolar proteins, i.e., PARP1, TARG1 and APE1, by monitoring their shuttling dynamics upon oxidative stress. As a paradigm, we analyzed PARP1 shuttling upon H2O2 treatment in combination with a range of pharmacological inhibitors in a novel reporter cell line. These experiments revealed that inhibition of SIRT7 results in a loss of nucleolar PARP1 localization. Finally, we unraveled specific differences in PARP1 shuttling dynamics after co-treatment with H2O2 and different clinical PARP inhibitors. Collectively, this work delineates a highly sensitive and versatile bioimaging platform to investigate swift nucleolar-nucleoplasmic protein shuttling in living cells, which can be employed for pharmacological screening and in-depth mechanistic analyses.
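The central readout of such a pipeline is a compartment intensity ratio per cell and time point; a minimal sketch with hypothetical inputs (a fluorescence image plus nucleus and nucleolus masks from any segmentation step, not the published KNIME workflow):

```python
import numpy as np

def nucleolar_nucleoplasmic_ratio(image: np.ndarray,
                                  nucleus_mask: np.ndarray,
                                  nucleolus_mask: np.ndarray) -> float:
    """Mean nucleolar intensity divided by mean nucleoplasmic intensity,
    where the nucleoplasm is the nucleus minus its nucleoli."""
    nucleus = nucleus_mask.astype(bool)
    nucleolus = np.logical_and(nucleolus_mask.astype(bool), nucleus)
    nucleoplasm = np.logical_and(nucleus, ~nucleolus)
    return float(image[nucleolus].mean() / image[nucleoplasm].mean())

# Tracking this ratio over time for a PARP1 reporter would show a drop when the
# protein shuttles out of the nucleolus into the nucleoplasm after H2O2 treatment.
```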
Affiliation(s)
- Marina Engbrecht
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | - David Grundei
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | - Asisa M Dilger
- Nutritional Toxicology, Institute of Nutritional Science, University of Potsdam, 14469 Potsdam, Germany
| | - Hannah Wiedemann
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | - Ann-Kristin Aust
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | - Sarah Baumgärtner
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | | | | | - Alexander Bürkle
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
| | - Aswin Mangerich
- Molecular Toxicology, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
- Nutritional Toxicology, Institute of Nutritional Science, University of Potsdam, 14469 Potsdam, Germany
22
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.28.577670. [PMID: 38352578 PMCID: PMC10862744 DOI: 10.1101/2024.01.28.577670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
| | - Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
| | - Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
| | - Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
| | - Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
23
Vitacolonna M, Bruch R, Agaçi A, Nürnberg E, Cesetti T, Keller F, Padovani F, Sauer S, Schmoller KM, Reischl M, Hafner M, Rudolf R. A multiparametric analysis including single-cell and subcellular feature assessment reveals differential behavior of spheroid cultures on distinct ultra-low attachment plate types. Front Bioeng Biotechnol 2024; 12:1422235. [PMID: 39157442 PMCID: PMC11327450 DOI: 10.3389/fbioe.2024.1422235] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2024] [Accepted: 07/19/2024] [Indexed: 08/20/2024] Open
Abstract
Spheroids have become principal three-dimensional models to study cancer, developmental processes, and drug efficacy. Single-cell analysis techniques have emerged as ideal tools to gauge the complexity of cellular responses in these models. However, the single-cell quantitative assessment based on 3D-microscopic data of the subcellular distribution of fluorescence markers, such as the nuclear/cytoplasm ratio of transcription factors, has largely remained elusive. For spheroid generation, ultra-low attachment plates are noteworthy due to their simplicity, compatibility with automation, and experimental and commercial accessibility. However, it is unknown whether and to what degree the plate type impacts spheroid formation and biology. This study developed a novel AI-based pipeline for the analysis of 3D-confocal data of optically cleared large spheroids at the wholemount, single-cell, and sub-cellular levels. To identify relevant samples for the pipeline, automated brightfield microscopy was employed to systematically compare the size and eccentricity of spheroids formed in six different plate types using four distinct human cell lines. This showed that all plate types exhibited similar spheroid-forming capabilities and the gross patterns of growth or shrinkage during 4 days after seeding were comparable. Yet, size and eccentricity varied systematically among specific cell lines and plate types. Based on this prescreen, spheroids of HaCaT keratinocytes and HT-29 cancer cells were further assessed. In HaCaT spheroids, the in-depth analysis revealed a correlation between spheroid size, cell proliferation, and the nuclear/cytoplasm ratio of the transcriptional coactivator, YAP1, as well as an inverse correlation with respect to cell differentiation. These findings, yielded with a spheroid model and at a single-cell level, corroborate earlier concepts of the role of YAP1 in cell proliferation and differentiation of keratinocytes in human skin. Further, the results show that the plate type may influence the outcome of experimental campaigns and that it is advisable to scan different plate types for the optimal configuration during a specific investigation.
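The size and eccentricity comparisons across plate types boil down to simple shape descriptors of a segmented spheroid; a small sketch using scikit-image regionprops (assuming a pre-computed binary spheroid mask, not the paper's AI pipeline):

```python
import numpy as np
from skimage.measure import label, regionprops

def spheroid_size_and_eccentricity(mask: np.ndarray, pixel_size_um: float = 1.0):
    """Return (area in um^2, eccentricity in [0, 1)) of the largest object in a binary mask."""
    props = regionprops(label(mask.astype(bool)))
    if not props:
        return 0.0, 0.0
    spheroid = max(props, key=lambda p: p.area)  # assume the spheroid is the largest object
    return spheroid.area * pixel_size_um ** 2, spheroid.eccentricity
```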
Affiliation(s)
- Mario Vitacolonna
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Roman Bruch
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Ane Agaçi
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Elina Nürnberg
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
- Faculty of Biotechnology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Tiziana Cesetti
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Florian Keller
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Francesco Padovani
- Institute of Functional Epigenetics (IFE), Molecular Targets and Therapeutics Center (MTTC), Helmholtz Center München, München-Neuherberg, Germany
| | - Simeon Sauer
- Faculty of Biotechnology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Kurt M. Schmoller
- Institute of Functional Epigenetics (IFE), Molecular Targets and Therapeutics Center (MTTC), Helmholtz Center München, München-Neuherberg, Germany
| | - Markus Reischl
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
| | - Mathias Hafner
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Medical Technology, Medical Faculty Mannheim of Heidelberg University and Mannheim University of Applied Sciences, Mannheim, Germany
| | - Rüdiger Rudolf
- CeMOS, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
- Institute of Medical Technology, Medical Faculty Mannheim of Heidelberg University and Mannheim University of Applied Sciences, Mannheim, Germany
24
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. Brief Bioinform 2024; 25:bbae407. [PMID: 39154193 PMCID: PMC11330341 DOI: 10.1093/bib/bbae407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2024] [Revised: 06/28/2024] [Accepted: 08/01/2024] [Indexed: 08/19/2024] Open
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
| | - Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
| | - Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
| | - Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States
| | - Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
25
Ertürk A. Deep 3D histology powered by tissue clearing, omics and AI. Nat Methods 2024; 21:1153-1165. [PMID: 38997593 DOI: 10.1038/s41592-024-02327-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 05/28/2024] [Indexed: 07/14/2024]
Abstract
To comprehensively understand tissue and organism physiology and pathophysiology, it is essential to create complete three-dimensional (3D) cellular maps. These maps require structural data, such as the 3D configuration and positioning of tissues and cells, and molecular data on the constitution of each cell, spanning from the DNA sequence to protein expression. While single-cell transcriptomics is illuminating the cellular and molecular diversity across species and tissues, the 3D spatial context of these molecular data is often overlooked. Here, I discuss emerging 3D tissue histology techniques that add the missing third spatial dimension to biomedical research. Through innovations in tissue-clearing chemistry, labeling and volumetric imaging that enhance 3D reconstructions and their synergy with molecular techniques, these technologies will provide detailed blueprints of entire organs or organisms at the cellular level. Machine learning, especially deep learning, will be essential for extracting meaningful insights from the vast data. Further development of integrated structural, molecular and computational methods will unlock the full potential of next-generation 3D histology.
Affiliation(s)
- Ali Ertürk
- Institute for Tissue Engineering and Regenerative Medicine, Helmholtz Zentrum München, Neuherberg, Germany.
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians University, Munich, Germany.
- School of Medicine, Koç University, İstanbul, Turkey.
- Deep Piction GmbH, Munich, Germany.
26
Xu B, Wu D, Shi J, Cong J, Lu M, Yang F, Nener B. Isolated Random Forest Assisted Spatio-Temporal Ant Colony Evolutionary Algorithm for Cell Tracking in Time-Lapse Sequences. IEEE J Biomed Health Inform 2024; 28:4157-4169. [PMID: 38662560 DOI: 10.1109/jbhi.2024.3393493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/03/2024]
Abstract
Multi-object tracking in real-world environments is a challenging problem, especially for cell morphogenesis with division. Most cell tracking methods struggle to achieve reliable mitosis detection, efficient inter-frame matching, and accurate state estimation simultaneously within a unified tracking framework. In this paper, we propose a novel unified framework that leverages a spatio-temporal ant colony evolutionary algorithm to track cells amidst mitosis under measurement uncertainty. Each Bernoulli ant colony representing a migrating cell is able to capture the occurrence of mitosis through the proposed Isolation Random Forest (IRF)-assisted temporal mitosis detection algorithm, under the assumption that mitotic cells exhibit unique spatio-temporal features different from non-mitotic ones. Guided by the prediction of a division event, multiple ant colonies evolve between consecutive frames according to an augmented assignment matrix solved by the extended Hungarian method. To handle dense cell populations, an efficient group partition between cells and measurements is exploited, which enables multiple assignment tasks to be executed in parallel with a reduction in matrix dimension. After inter-frame traversing, the ant colony transitions to a foraging stage in which it begins approximating the Bernoulli parameter to estimate cell state by iteratively updating its pheromone field. Experiments on multi-cell tracking in the presence of cell mitosis and morphological changes are conducted, and the results demonstrate that the proposed method outperforms state-of-the-art approaches, striking a balance between accuracy and computational efficiency.
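As a hedged sketch of just the anomaly-detection ingredient (scikit-learn's IsolationForest applied to hypothetical per-cell spatio-temporal feature vectors, not the paper's full IRF-assisted ant colony framework):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-cell features over a short temporal window, e.g.
# [area change, eccentricity, intensity change, displacement].
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 4))       # mostly ordinary, non-mitotic dynamics
features[:10] += 4.0                       # a handful of cells with atypical dynamics

detector = IsolationForest(contamination=0.02, random_state=0).fit(features)
is_outlier = detector.predict(features) == -1      # -1 marks isolation-forest outliers
print(np.where(is_outlier)[0])                     # indices of candidate mitotic cells
```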
27
Dillavou S, Hanlan JM, Chieco AT, Xiao H, Fulco S, Turner KT, Durian DJ. Bellybutton: accessible and customizable deep-learning image segmentation. Sci Rep 2024; 14:14281. [PMID: 38902315 PMCID: PMC11189893 DOI: 10.1038/s41598-024-63906-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2024] [Accepted: 06/03/2024] [Indexed: 06/22/2024] Open
Abstract
The conversion of raw images into quantifiable data can be a major hurdle and time-sink in experimental research, and typically involves identifying region(s) of interest, a process known as segmentation. Machine learning tools for image segmentation are often specific to a set of tasks, such as tracking cells, or require substantial compute or coding knowledge to train and use. Here we introduce an easy-to-use (no coding required) image segmentation method, using a 15-layer convolutional neural network that can be trained on a laptop: Bellybutton. The algorithm trains on user-provided segmentation of example images, but, as we show, just one or even a sub-selection of one training image can be sufficient in some cases. We detail the machine learning method and give three use cases where Bellybutton correctly segments images despite substantial lighting, shape, size, focus, and/or structure variation across the region(s) of interest. Instructions for easy download and use, further details, and the datasets used in this paper are available at pypi.org/project/Bellybuttonseg.
Affiliation(s)
- Sam Dillavou
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, 19104, USA.
| | - Jesse M Hanlan
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Anthony T Chieco
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Hongyi Xiao
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, 48109, USA
| | - Sage Fulco
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Kevin T Turner
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Douglas J Durian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Department of Mechanical Engineering and Applied Mechanics, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Center for Computational Biology, Flatiron Institute, Simons Foundation, New York, NY, 10010, USA
28
Delgado-Rodriguez P, Sánchez RM, Rouméas-Noël E, Paris F, Munoz-Barrutia A. Automatic classification of normal and abnormal cell division using deep learning. Sci Rep 2024; 14:14241. [PMID: 38902496 PMCID: PMC11189926 DOI: 10.1038/s41598-024-64834-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Accepted: 06/13/2024] [Indexed: 06/22/2024] Open
Abstract
In recent years, there has been a surge in the development of methods for cell segmentation and tracking, with initiatives like the Cell Tracking Challenge driving progress in the field. Most studies focus on regular cell population videos in which cells are segmented and followed, and parental relationships annotated. However, DNA damage induced by genotoxic drugs or ionizing radiation produces additional abnormal events since it leads to behaviors like abnormal cell divisions (resulting in a number of daughters different from two) and cell death. With this in mind, we developed an automatic mitosis classifier to categorize small mitosis image sequences centered around one cell as "Normal" or "Abnormal." These mitosis sequences were extracted from videos of cell populations exposed to varying levels of radiation that affect the cell cycle's development. We explored several deep-learning architectures and found that a network with a ResNet50 backbone and including a Long Short-Term Memory (LSTM) layer produced the best results (mean F1-score: 0.93 ± 0.06). In the future, we plan to integrate this classifier with cell segmentation and tracking to build phylogenetic trees of the population after genomic stress.
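A minimal PyTorch sketch of the general ResNet50-plus-LSTM pattern described above (hidden size, two-class head and input shape are illustrative assumptions, not the authors' released model):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class MitosisSequenceClassifier(nn.Module):
    """Per-frame ResNet50 features fed to an LSTM; last hidden state -> Normal/Abnormal logits."""
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        backbone = resnet50(weights=None)   # pretrained weights could be loaded instead
        backbone.fc = nn.Identity()         # expose the 2048-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t = frames.shape[:2]                      # frames: (batch, time, 3, H, W)
        feats = self.backbone(frames.flatten(0, 1))  # (batch*time, 2048)
        _, (h_n, _) = self.lstm(feats.view(b, t, -1))
        return self.head(h_n[-1])                    # (batch, num_classes)

logits = MitosisSequenceClassifier()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```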
Affiliation(s)
| | | | - Elouan Rouméas-Noël
- Centre Régional de Recherche en Cancérologie et Immunologie Intégré Nantes-Angers, Nantes, France
| | - François Paris
- Centre Régional de Recherche en Cancérologie et Immunologie Intégré Nantes-Angers, Nantes, France
- Institut de Cancérologie de L'Ouest, Saint-Herblain, France
29
Katoh TA, Fukai YT, Ishibashi T. Optical microscopic imaging, manipulation, and analysis methods for morphogenesis research. Microscopy (Oxf) 2024; 73:226-242. [PMID: 38102756 PMCID: PMC11154147 DOI: 10.1093/jmicro/dfad059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 11/20/2023] [Accepted: 03/22/2024] [Indexed: 12/17/2023] Open
Abstract
Morphogenesis is a developmental process of organisms being shaped through complex and cooperative cellular movements. To understand the interplay between genetic programs and the resulting multicellular morphogenesis, it is essential to characterize the morphologies and dynamics at the single-cell level and to understand how physical forces serve as both signaling components and driving forces of tissue deformations. In recent years, advances in microscopy techniques have led to improvements in imaging speed, resolution and depth. Concurrently, the development of various software packages has supported large-scale analyses of challenging images at single-cell resolution. While these tools have enhanced our ability to examine dynamics of cells and mechanical processes during morphogenesis, their effective integration requires specialized expertise. With this background, this review provides a practical overview of those techniques. First, we introduce microscopic techniques for multicellular imaging and image analysis software tools with a focus on cell segmentation and tracking. Second, we provide an overview of cutting-edge techniques for mechanical manipulation of cells and tissues. Finally, we introduce recent findings on morphogenetic mechanisms and mechanosensations that have been achieved by effectively combining microscopy, image analysis tools and mechanical manipulation techniques.
Affiliation(s)
- Takanobu A Katoh
- Department of Cell Biology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
| | - Yohsuke T Fukai
- Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
| | - Tomoki Ishibashi
- Laboratory for Physical Biology, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
30
Shroff H, Testa I, Jug F, Manley S. Live-cell imaging powered by computation. Nat Rev Mol Cell Biol 2024; 25:443-463. [PMID: 38378991 DOI: 10.1038/s41580-024-00702-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/10/2024] [Indexed: 02/22/2024]
Abstract
The proliferation of microscopy methods for live-cell imaging offers many new possibilities for users but can also be challenging to navigate. The prevailing challenge in live-cell fluorescence microscopy is capturing intra-cellular dynamics while preserving cell viability. Computational methods can help to address this challenge and are now shifting the boundaries of what is possible to capture in living systems. In this Review, we discuss these computational methods focusing on artificial intelligence-based approaches that can be layered on top of commonly used existing microscopies as well as hybrid methods that integrate computation and microscope hardware. We specifically discuss how computational approaches can improve the signal-to-noise ratio, spatial resolution, temporal resolution and multi-colour capacity of live-cell imaging.
Affiliation(s)
- Hari Shroff
- Janelia Research Campus, Howard Hughes Medical Institute (HHMI), Ashburn, VA, USA
| | - Ilaria Testa
- Department of Applied Physics and Science for Life Laboratory, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Florian Jug
- Fondazione Human Technopole (HT), Milan, Italy
| | - Suliana Manley
- Institute of Physics, School of Basic Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne, Switzerland.
31
Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024; 21:1103-1113. [PMID: 38532015 PMCID: PMC11210294 DOI: 10.1038/s41592-024-02233-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2023] [Accepted: 03/04/2024] [Indexed: 03/28/2024]
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyper-parameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark, comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only exceeds existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
Affiliation(s)
- Jun Ma
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Ronald Xie
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
| | - Shamini Ayyadhury
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
| | - Cheng Ge
- School of Medicine and Pharmacy, Ocean University of China, Qingdao, China
| | - Anubha Gupta
- Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India
| | - Ritu Gupta
- Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India
| | - Song Gu
- Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China
| | - Yao Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
| | - Gihun Lee
- Graduate School of AI, KAIST, Seoul, South Korea
| | - Joonkee Kim
- Graduate School of AI, KAIST, Seoul, South Korea
| | - Wei Lou
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Chinese University of Hong Kong (Shenzhen), Shenzhen, China
| | - Haofeng Li
- Shenzhen Research Institute of Big Data, Shenzhen, China
| | - Eric Upschulte
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
| | - Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
| | - José Guilherme de Almeida
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
- Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal
| | - Yixin Wang
- Department of Bioengineering, Stanford University, Palo Alto, CA, USA
| | - Lin Han
- Tandon School of Engineering, New York University, New York, NY, USA
| | - Xin Yang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Marco Labagnara
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Vojislav Gligorovski
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Maxime Scheder
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Sahand Jamal Rahi
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Carly Kempster
- School of Biological Sciences, University of Reading, Reading, UK
| | - Alice Pollitt
- School of Biological Sciences, University of Reading, Reading, UK
| | - Leon Espinosa
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
| | - Tâm Mignot
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France
| | - Jan Moritz Middeke
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
| | - Jan-Niklas Eckardt
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
| | - Wangkai Li
- Department of Automation, University of Science and Technology of China, Hefei, China
| | - Zhaoyang Li
- Institute of Advanced Technology, University of Science and Technology of China, Hefei, China
| | - Xiaochen Cai
- Department of Computer Science and Technology, Nanjing University, Nanjing, China
| | - Bizhe Bai
- School of EECS, The University of Queensland, Brisbane, Queensland, Australia
| | | | - David Van Valen
- Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
| | - Erin Weisbart
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
| | - Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
| | - Trevor Cheung
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
| | - Oscar Brück
- Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland
- Department of Oncology, University of Helsinki, Helsinki, Finland
| | - Gary D Bader
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada
- CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada
| | - Bo Wang
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada.
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada.
- Vector Institute, Toronto, Ontario, Canada.
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada.
- UHN AI Hub, University Health Network, Toronto, Ontario, Canada.
32
Zhao Y, Chen KL, Shen XY, Li MK, Wan YJ, Yang C, Yu RJ, Long YT, Yan F, Ying YL. HFM-Tracker: a cell tracking algorithm based on hybrid feature matching. Analyst 2024; 149:2629-2636. [PMID: 38563459 DOI: 10.1039/d4an00199k] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
Cell migration is known to be a fundamental biological process, playing an essential role in development, homeostasis, and disease. This paper introduces a cell tracking algorithm named HFM-Tracker (Hybrid Feature Matching Tracker) that automatically identifies cell migration behaviours in consecutive images. It combines Contour Attention (CA) and Adaptive Confusion Matrix (ACM) modules to accurately capture cell contours in each image and track the dynamic behaviours of migrating cells in the field of view. Cells are first located and identified via the CA module-based cell detection network, and then associated and tracked via a cell tracking algorithm employing a hybrid feature-matching strategy. The proposed HFM-Tracker shows superior performance in cell detection and tracking, achieving 75% in MOTA (Multiple Object Tracking Accuracy) and 65% in IDF1 (ID F1 score). It provides quantitative analysis of cell morphology and migration features, which can further help in understanding complicated and diverse cell migration processes.
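Frame-to-frame association of detections is commonly posed as a linear assignment over a cost matrix; a simplified sketch using centroid distance only (the paper's hybrid appearance/motion features are not reproduced here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(prev_xy: np.ndarray, curr_xy: np.ndarray, max_dist: float = 30.0):
    """Link detections between consecutive frames; returns (prev_index, curr_index) pairs."""
    cost = np.linalg.norm(prev_xy[:, None, :] - curr_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)           # Hungarian-style optimal assignment
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

prev = np.array([[10.0, 10.0], [50.0, 52.0]])
curr = np.array([[52.0, 50.0], [12.0, 11.0], [200.0, 200.0]])
print(match_detections(prev, curr))  # [(0, 1), (1, 0)]
```

Unmatched detections in the current frame would then start new tracks, while unmatched previous detections are candidates for track termination or, in the mitosis case, for splitting.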
Affiliation(s)
- Yan Zhao
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China.
| | - Ke-Le Chen
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
| | - Xin-Yu Shen
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing, 210023, China
| | - Ming-Kang Li
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
| | - Yong-Jing Wan
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China.
| | - Cheng Yang
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing, 210023, China
| | - Ru-Jia Yu
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
| | - Yi-Tao Long
- School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
| | - Feng Yan
- School of Electronic Sciences and Engineering, Nanjing University, Nanjing, 210023, China
| | - Yi-Lun Ying
- School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China.
- Chemistry and Biomedicine Innovation Center, Nanjing University, Nanjing 210023, P. R. China
33
Quinsgaard EMB, Korsnes MS, Korsnes R, Moestue SA. Single-cell tracking as a tool for studying EMT-phenotypes. Exp Cell Res 2024; 437:113993. [PMID: 38485079 DOI: 10.1016/j.yexcr.2024.113993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2023] [Revised: 02/28/2024] [Accepted: 03/06/2024] [Indexed: 03/24/2024]
Abstract
This article demonstrates that label-free single-cell video tracking is a useful approach for in vitro studies of Epithelial-Mesenchymal Transition (EMT). EMT is a highly heterogeneous process, involved in wound healing, embryogenesis and cancer. The process promotes metastasis, and increased understanding can aid development of novel therapeutic strategies. The role of EMT-associated biomarkers depends on biological context, making it challenging to compare and interpret data from different studies. We demonstrate single-cell video tracking for comprehensive phenotype analysis. In this study we performed single-cell video tracking on 72-h long recordings. We quantified several behaviours at a single-cell level during induced EMT in MDA-MB-468 cells. This revealed notable variations in migration speed, with different dose-response patterns and varying distributions of speed. By registering cell morphologies during the recording, we determined preferred paths of morphological transitions. We also found a clear association between migration speed and cell morphology. We found elevated rates of cell death, diminished proliferation, and an increase in mitotic failures followed by re-fusion of sister-cells. The method allows tracking of phenotypes in cell lineages, which can be particularly useful in epigenetic studies. Sister-cells were found to have significant similarities in their speeds and morphologies, illustrating the heritability of these traits.
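The migration-speed readout reduces to frame-to-frame displacements along each track; a small sketch with a hypothetical track format (one (T, 2) array of x/y pixel positions per cell):

```python
import numpy as np

def instantaneous_speeds(track_xy: np.ndarray,
                         frame_interval_min: float,
                         pixel_size_um: float) -> np.ndarray:
    """Speeds in um/min between consecutive time points of one single-cell track."""
    steps = np.diff(track_xy, axis=0)                 # (T-1, 2) displacements in pixels
    return np.linalg.norm(steps, axis=1) * pixel_size_um / frame_interval_min

track = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])   # toy 3-frame track
print(instantaneous_speeds(track, frame_interval_min=10.0, pixel_size_um=0.65))
# [0.325 0.325]  (5 px per frame * 0.65 um/px over 10 min)
```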
Affiliation(s)
- Ellen Marie Botne Quinsgaard
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway.
| | - Mónica Suárez Korsnes
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway; Korsnes Biocomputing (KoBio), Trondheim, Norway
| | | | - Siver Andreas Moestue
- Norwegian University of Science and Technology (NTNU), Department of Clinical and Molecular Medicine, NO-7491 Trondheim, Norway; Department of Pharmacy, Nord University, Bodø, Norway
34
Jose A, Roy R, Moreno-Andrés D, Stegmaier J. Automatic detection of cell-cycle stages using recurrent neural networks. PLoS One 2024; 19:e0297356. [PMID: 38466708 PMCID: PMC10927108 DOI: 10.1371/journal.pone.0297356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Accepted: 01/02/2024] [Indexed: 03/13/2024] Open
Abstract
Mitosis is the process by which eukaryotic cells divide to produce two similar daughter cells with identical genetic material. Research into the process of mitosis is therefore of critical importance both for the basic understanding of cell biology and for the clinical approach to manifold pathologies resulting from its malfunctioning, including cancer. In this paper, we propose an approach to study mitotic progression automatically using deep learning. We used neural networks to predict the different mitosis stages. We extracted video sequences of cells undergoing division and trained a Recurrent Neural Network (RNN) to extract image features. The use of an RNN enabled better extraction of features, and the RNN-based approach gave better performance compared to classifier-based feature extraction methods that do not use time information. Evaluation of precision, recall, and F-score indicates the superiority of the proposed model compared to the baseline. To study the loss in performance due to confusion between adjacent classes, we also plotted the confusion matrix. In addition, we visualized the feature space to understand why RNNs are better at classifying the mitosis stages than other classifier models; the visualization indicated the formation of strong clusters for the different classes, clearly confirming the advantage of the proposed RNN-based approach.
Affiliation(s)
- Abin Jose
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Rijo Roy
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Daniel Moreno-Andrés
- Institute of Biochemistry and Molecular Cell Biology, Medical School, RWTH Aachen University, Aachen, Germany
| | - Johannes Stegmaier
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
35
Liu P, Li J, Chang J, Hu P, Sun Y, Jiang Y, Zhang F, Shao H. Software Tools for 2D Cell Segmentation. Cells 2024; 13:352. [PMID: 38391965 PMCID: PMC10886800 DOI: 10.3390/cells13040352] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2023] [Revised: 01/29/2024] [Accepted: 02/04/2024] [Indexed: 02/24/2024] Open
Abstract
Cell segmentation is an important task in the field of image processing, widely used in the life sciences and medical fields. Traditional methods are mainly based on pixel intensity and spatial relationships, but have limitations. In recent years, machine learning and deep learning methods have been widely used, providing more accurate and efficient solutions for cell segmentation. The effort to develop efficient and accurate segmentation software tools has been one of the major focal points in the field for years. However, each software tool has unique characteristics and adaptations, and no universal cell-segmentation software can achieve perfect results. In this review, we used three publicly available datasets containing multiple 2D cell-imaging modalities. Common segmentation metrics were used to evaluate the performance of eight segmentation tools, in order to compare their generality and, thus, find the best-performing tool.
Affiliation(s)
- Ping Liu
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China; (P.L.); (J.L.); (J.C.)
| | - Jun Li
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China; (P.L.); (J.L.); (J.C.)
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Jiaxing Chang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China; (P.L.); (J.L.); (J.C.)
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Pinli Hu
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Yue Sun
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Yanan Jiang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Fan Zhang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
| | - Haojing Shao
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China; (P.H.); (Y.S.); (Y.J.); (F.Z.)
36
Eschweiler D, Yilmaz R, Baumann M, Laube I, Roy R, Jose A, Brückner D, Stegmaier J. Denoising diffusion probabilistic models for generation of realistic fully-annotated microscopy image datasets. PLoS Comput Biol 2024; 20:e1011890. [PMID: 38377165 PMCID: PMC10906858 DOI: 10.1371/journal.pcbi.1011890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 03/01/2024] [Accepted: 02/05/2024] [Indexed: 02/22/2024] Open
Abstract
Recent advances in computer vision have led to significant progress in the generation of realistic image data, with denoising diffusion probabilistic models proving to be a particularly effective method. In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets through an unsupervised and intuitive approach, using rough sketches of desired structures as the starting point. The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches and enables the segmentation of diverse datasets without the need for human annotations. We demonstrate that segmentation models trained with a small set of synthetic image data reach accuracy levels comparable to those of generalist models trained with a large and diverse collection of manually annotated image data, thereby offering a streamlined and specialized application of segmentation models.
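As a rough illustration of the underlying technique, the sketch below shows the standard denoising-diffusion training step (noise prediction with an MSE loss); the dummy model and the linear noise schedule are placeholders for illustration only, not the paper's actual network or schedule.

import torch
import torch.nn.functional as F

def ddpm_training_loss(model, x0, alphas_cumprod):
    """One denoising-diffusion training step: corrupt clean images x0 at a
    random timestep and train the network to predict the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise  # forward (noising) process
    return F.mse_loss(model(x_t, t), noise)

# Placeholder noise predictor and schedule, for illustration only
alphas_cumprod = torch.linspace(0.9999, 0.01, 1000)
dummy_model = lambda x, t: torch.zeros_like(x)
loss = ddpm_training_loss(dummy_model, torch.randn(2, 1, 32, 32), alphas_cumprod)
print(float(loss))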
Collapse
Affiliation(s)
- Dennis Eschweiler
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Rüveyda Yilmaz
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Matisse Baumann
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Ina Laube
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Rijo Roy
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Abin Jose
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Daniel Brückner
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| | - Johannes Stegmaier
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
| |
Collapse
|
37
|
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
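To make the baseline concrete, here is a minimal sketch of naive frame blending and the PSNR score often used to judge interpolated frames against a withheld real frame; it is meant only to contrast with the learned, motion-aware interpolation described above.

import numpy as np

def blend_midpoint(frame_a, frame_b):
    """Naive temporal interpolation: average two frames. Motion-aware methods
    displace moving structures instead of ghosting them."""
    return 0.5 * (frame_a.astype(float) + frame_b.astype(float))

def psnr(pred, target, data_range=255.0):
    """Peak signal-to-noise ratio of an interpolated frame vs. a withheld real frame."""
    mse = np.mean((pred.astype(float) - target.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / (mse + 1e-12))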
Collapse
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
| |
Collapse
|
38
|
Maier-Hein L, Reinke A, Godau P, Tizabi MD, Buettner F, Christodoulou E, Glocker B, Isensee F, Kleesiek J, Kozubek M, Reyes M, Riegler MA, Wiesenfarth M, Kavur AE, Sudre CH, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Rädsch T, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Blaschko MB, Cardoso MJ, Cheplygina V, Cimini BA, Collins GS, Farahani K, Ferrer L, Galdran A, van Ginneken B, Haase R, Hashimoto DA, Hoffman MM, Huisman M, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Karthikesalingam A, Kofler F, Kopp-Schneider A, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Mattson P, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rajpoot N, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, van Smeden M, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Jäger PF. Metrics reloaded: recommendations for image analysis validation. Nat Methods 2024; 21:195-212. [PMID: 38347141 PMCID: PMC11182665 DOI: 10.1038/s41592-023-02151-z] [Citation(s) in RCA: 55] [Impact Index Per Article: 55.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 12/12/2023] [Indexed: 02/15/2024]
Abstract
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. In biomedical image analysis, the chosen performance metrics often do not reflect the domain interest and thus fail to adequately measure scientific progress, hindering the translation of ML techniques into practice. To overcome this, we created Metrics Reloaded, a comprehensive framework guiding researchers in the problem-aware selection of metrics. Developed by a large international consortium in a multistage Delphi process, it is based on the novel concept of a problem fingerprint: a structured representation of the given problem that captures all aspects relevant for metric selection, from the domain interest to the properties of the target structure(s), dataset and algorithm output. On the basis of the problem fingerprint, users are guided through the process of choosing and applying appropriate validation metrics while being made aware of potential pitfalls. Metrics Reloaded targets image analysis problems that can be interpreted as classification tasks at image, object or pixel level, namely image-level classification, object detection, semantic segmentation and instance segmentation tasks. To improve the user experience, we implemented the framework in the Metrics Reloaded online tool. Following the convergence of ML methodology across application domains, Metrics Reloaded fosters the convergence of validation methodology. Its applicability is demonstrated for various biomedical use cases.
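A small, generic illustration of the kind of pitfall such problem-aware metric selection is meant to avoid (this snippet is not part of the Metrics Reloaded tool): pixel accuracy can look excellent on a class-imbalanced mask even when the target structure is missed entirely, whereas Dice exposes the failure.

import numpy as np

gt = np.zeros((256, 256), dtype=bool)
gt[100:110, 100:110] = True          # tiny target structure
pred = np.zeros_like(gt)             # a model that predicts nothing at all

accuracy = (pred == gt).mean()
dice = 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + 1e-9)
print(f"pixel accuracy = {accuracy:.3f}, Dice = {dice:.3f}")   # ~0.998 vs 0.000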
Collapse
Affiliation(s)
- Lena Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- Medical Faculty, Heidelberg University, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
| | - Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
| | - Patrick Godau
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
| | - Minu D Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
| | - Florian Buettner
- German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Department of Medicine, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Informatics, Goethe University Frankfurt, Frankfurt am Main, Germany
- Frankfurt Cancer Institute, Frankfurt am Main, Germany
| | - Evangelia Christodoulou
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
| | - Ben Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
| | - Fabian Isensee
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
| | - Jens Kleesiek
- Institute for AI in Medicine, University Medicine Essen, Essen, Germany
| | - Michal Kozubek
- Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
| | - Michael A Riegler
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
| | - Manuel Wiesenfarth
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
| | - A Emre Kavur
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
| | - Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
| | - Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
| | - Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
| | - Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
| | - Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
| | - Laura Acion
- Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
| | - Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
| | - Tal Arbel
- Centre for Intelligent Machines and MILA (Québec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
| | - Spyridon Bakas
- Division of Computational Pathology, Department of Pathology & Laboratory Medicine, Indiana University School of Medicine, IU Health Information and Translational Sciences Building, Indianapolis, IN, USA
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
| | - Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel
- European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
| | - Matthew B Blaschko
- Center for Processing Speech and Images, Department of Electrical Engineering, KU Leuven, Leuven, Belgium
| | - M Jorge Cardoso
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
| | - Veronika Cheplygina
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
| | - Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
| | - Gary S Collins
- Centre for Statistics in Medicine, University of Oxford, Nuffield Orthopaedic Centre, Oxford, UK
| | - Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
| | - Luciana Ferrer
- Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
| | - Adrian Galdran
- BCN Medtech, Universitat Pompeu Fabra, Barcelona, Spain
- Australian Institute for Machine Learning AIML, University of Adelaide, Adelaide, South Australia, Australia
| | - Bram van Ginneken
- Fraunhofer MEVIS, Bremen, Germany
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Robert Haase
- Technische Universität (TU) Dresden, DFG Cluster of Excellence 'Physics of Life', Dresden, Germany
- Center for Systems Biology, Dresden, Germany
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI), Leipzig University, Leipzig, Germany
| | - Daniel A Hashimoto
- Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA
- General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
| | - Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
| | - Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France
- INSERM, Paris, France
| | - Charles E Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
| | - Dagmar Kainmueller
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany
- Digital Engineering Faculty, University of Potsdam, Potsdam, Germany
| | - Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
| | | | | | | | - Annette Kopp-Schneider
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
| | - Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
| | | | - Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Amin Madani
- Department of Surgery, University Health Network, Philadelphia, PA, USA
| | - Klaus Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
| | - Peter Mattson
- Google, 1600 Amphitheatre Pkwy, Mountain View, CA, USA
| | - Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
| | - Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
| | - Karel G M Moons
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
| | - Brennan Nichyporuk
- MILA (Québec Artificial Intelligence Institute), Montréal, Quebec, Canada
| | - Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Jens Petersen
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
| | - Nasir Rajpoot
- Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
| | | | - Julio Saez-Rodriguez
- Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
| | - Clara I Sánchez
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
| | | | - Maarten van Smeden
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
| | - Ronald M Summers
- National Institutes of Health Clinical Center, Bethesda, MD, USA
| | - Abdel A Taha
- Institute of Information Systems Engineering, TU Wien, Vienna, Austria
| | - Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
| | | | - Ben Van Calster
- Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
| | - Gaël Varoquaux
- Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
| | - Paul F Jäger
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany.
| |
Collapse
|
39
|
Qian K, Friedman B, Takatoh J, Wang F, Kleinfeld D, Freund Y. CellBoost: A pipeline for machine assisted annotation in Neuroanatomy. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.09.13.557658. [PMID: 38293051 PMCID: PMC10827062 DOI: 10.1101/2023.09.13.557658] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2024]
Abstract
One of the important yet labor-intensive tasks in neuroanatomy is the identification of select populations of cells. Current high-throughput techniques enable marking cells with histochemical fluorescent molecules as well as through the genetic expression of fluorescent proteins. Modern scanning microscopes allow high-resolution multi-channel imaging of the mechanically or optically sectioned brain with thousands of marked cells per square millimeter. Manual identification of all marked cells is prohibitively time consuming, while simple segmentation algorithms suffer from high error rates and sensitivity to variation in fluorescent intensity and spatial distribution. We present a methodology that combines human judgement and machine learning to significantly reduce the labor of the anatomist while improving the consistency of the annotation. As a demonstration, we analyzed murine brains with marked premotor neurons in the brainstem and compared the error rate of our method to the disagreement rate among human anatomists. This comparison shows that our method can reduce annotation time by as much as ten-fold without significantly increasing the rate of errors, achieving an accuracy similar to the level of agreement between different anatomists.
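A generic sketch of a machine-assisted annotation loop in this spirit, assuming hypothetical classifier and review_fn callables; it is not the authors' CellBoost pipeline, only an illustration of auto-accepting confident detections and routing ambiguous ones to the anatomist.

def machine_assisted_annotation(candidates, classifier, review_fn, low=0.2, high=0.9):
    """Auto-accept confident detections, auto-reject confident negatives, and
    send only the ambiguous candidates to a human expert."""
    accepted, rejected = [], []
    for c in candidates:
        p = classifier(c)                      # probability that c is a marked cell
        if p >= high:
            accepted.append(c)
        elif p <= low:
            rejected.append(c)
        else:
            (accepted if review_fn(c) else rejected).append(c)   # human decision
    return accepted, rejected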
Collapse
Affiliation(s)
- Kui Qian
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA
| | - Beth Friedman
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA
| | - Jun Takatoh
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Fan Wang
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- McGovern Institute, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - David Kleinfeld
- Department of Physics, University of California, San Diego, La Jolla, CA 92093, USA
- Department of Neurobiology, University of California, San Diego, La Jolla, CA 92093, USA
| | - Yoav Freund
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA
- Halıcıoğlu Data Science Institute, University of California, San Diego, La Jolla, CA 92093, USA
| |
Collapse
|
40
|
Wen C. Deep Learning-Based Cell Tracking in Deforming Organs and Moving Animals. Methods Mol Biol 2024; 2800:203-215. [PMID: 38709486 DOI: 10.1007/978-1-0716-3834-7_14] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/07/2024]
Abstract
Cell tracking is an essential step in extracting cellular signals from moving cells, which is vital for understanding the mechanisms underlying various biological functions and processes, particularly in organs such as the brain and heart. However, cells in living organisms often exhibit extensive and complex movements caused by organ deformation and whole-body motion. These movements pose a challenge in obtaining high-quality time-lapse cell images and tracking the intricate cell movements in the captured images. Recent advances in deep learning techniques provide powerful tools for detecting cells in low-quality images with densely packed cell populations, as well as estimating cell positions for cells undergoing large nonrigid movements. This chapter introduces the challenges of cell tracking in deforming organs and moving animals, outlines the solutions to these challenges, and presents a detailed protocol for data preparation, as well as for performing cell segmentation and tracking using the latest version of 3DeeCellTracker. This protocol is expected to enable researchers to gain deeper insights into organ dynamics and biological processes.
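As a simplified illustration of the tracking step, the sketch below links detected cell centroids between consecutive frames by global nearest-neighbour assignment; it is a generic baseline, not the 3DeeCellTracker API, which additionally relies on deep learning-based position estimates to cope with large nonrigid motion.

import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t, centroids_t1, max_dist=10.0):
    """Link cell centroids detected in two consecutive frames by minimizing the
    total matching distance; pairs farther apart than max_dist are discarded."""
    cost = np.linalg.norm(centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]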
Collapse
Affiliation(s)
- Chentao Wen
- RIKEN Center for Biodynamic Research, Kobe, Japan.
| |
Collapse
|
41
|
Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822 DOI: 10.1016/j.media.2023.103000] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 09/30/2023] [Accepted: 10/11/2023] [Indexed: 10/28/2023]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in computer vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, a clear advantage over convolutional neural networks (CNNs), which have been the de facto standard in computer vision so far. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths and weaknesses of the different proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss different future research directions. In addition, we have provided the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
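For readers unfamiliar with the architecture, a minimal sketch of the Transformer core the review builds on (patch embedding followed by self-attention over all patches); the module name and sizes are arbitrary choices for illustration.

import torch
import torch.nn as nn

class TinyViTBlock(nn.Module):
    """Patchify an image into tokens and let self-attention relate every patch
    to every other patch, the mechanism behind long-range dependency modeling."""
    def __init__(self, patch=16, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patch embedding
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)   # (B, num_patches, dim)
        h = self.norm(tokens)
        out, _ = self.attn(h, h, h)                         # global self-attention
        return tokens + out                                 # residual connection

print(TinyViTBlock()(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 196, 128])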
Collapse
Affiliation(s)
- Reza Azad
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Amirhossein Kazerouni
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Moein Heidari
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
| | | | - Amirali Molaei
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
| | - Yiwei Jia
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Abin Jose
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Rijo Roy
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
| | - Dorit Merhof
- Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
| |
Collapse
|
42
|
Bondoc-Naumovitz KG, Laeverenz-Schlogelhofer H, Poon RN, Boggon AK, Bentley SA, Cortese D, Wan KY. Methods and Measures for Investigating Microscale Motility. Integr Comp Biol 2023; 63:1485-1508. [PMID: 37336589 PMCID: PMC10755196 DOI: 10.1093/icb/icad075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 05/31/2023] [Accepted: 06/06/2023] [Indexed: 06/21/2023] Open
Abstract
Motility is an essential factor for an organism's survival and diversification. With the advent of novel single-cell technologies, analytical frameworks, and theoretical methods, we can begin to probe the complex lives of microscopic motile organisms and answer the intertwining biological and physical questions of how these diverse lifeforms navigate their surroundings. Herein, we summarize the main mechanisms of microscale motility and give an overview of the experimental, analytical, and mathematical methods used to study them across scales, from the molecular to the individual and population level. We identify transferable techniques, pressing challenges, and future directions in the field. This review can serve as a starting point for researchers interested in exploring and quantifying the movements of organisms in the microscale world.
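One of the standard individual-level measures discussed in such overviews is the mean squared displacement of a trajectory; a minimal sketch is given below, assuming a trajectory array of shape (T, d) as documented.

import numpy as np

def mean_squared_displacement(track):
    """MSD over lag time for a single trajectory of shape (T, d); its scaling
    with lag distinguishes diffusive, ballistic, and confined motion."""
    T = len(track)
    msd = np.empty(T - 1)
    for lag in range(1, T):
        disp = track[lag:] - track[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd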
Collapse
Affiliation(s)
| | | | - Rebecca N Poon
- Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
| | - Alexander K Boggon
- Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
| | - Samuel A Bentley
- Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
| | - Dario Cortese
- Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
| | - Kirsty Y Wan
- Living Systems Institute, University of Exeter, Stocker Road, EX4 4QD, Exeter, UK
| |
Collapse
|
43
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, owing to critical challenges (e.g., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, has been changing how we perform live imaging. Here we briefly cover important computational methods that aid live imaging and carry out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
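As a concrete example of one routine task mentioned above, the following sketch performs classical drift correction by phase cross-correlation against the first frame; it is the non-learned counterpart of the deep learning-based approaches covered in the review.

import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift

def correct_drift(frames):
    """Register every frame to the first one using the shift estimated by
    phase cross-correlation, then apply that shift."""
    ref = frames[0]
    corrected = [ref]
    for f in frames[1:]:
        offset, _, _ = phase_cross_correlation(ref, f)   # (row, col) shift estimate
        corrected.append(shift(f, offset))
    return np.stack(corrected)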
Collapse
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
| | | | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland.
| |
Collapse
|
44
|
Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217 PMCID: PMC10670924 DOI: 10.3390/ijms242216028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 08/18/2023] [Accepted: 09/06/2023] [Indexed: 11/26/2023] Open
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, cells in routine microscopy images, which are acquired while the cells continuously divide and differentiate, are notoriously difficult to detect because their appearance and number keep changing. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking, but these approaches require many manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate this labor-intensive annotation cost, we propose a novel weakly supervised cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels are updated iteratively by combining detection and tracking results to obtain a model with better robustness. Our method was evaluated on two fields of the iPS cell dataset using the cell detection accuracy (DET) metric from the Cell Tracking Challenge (CTC) initiative, achieving 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested using the public dataset Fluo-N2DH-GOWT1 from the CTC, which contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled dataset to simulate incomplete initial annotations. After training the model on the two datasets with labels comprising 10% of the cell markers, the DET improved from 0.130 to 0.903 and from 0.116 to 0.877. When trained with labels comprising 60% of the cell markers, the performance exceeded that of the model trained with fully supervised learning. This outcome indicates that the model's performance improved as the quality of the training labels increased.
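A schematic self-training loop in the same spirit, assuming hypothetical train_fn and detect_fn callables; it illustrates the idea of iteratively growing incomplete labels with confident detections, not the authors' exact update rule, which also folds in tracking results.

def iterative_label_update(train_fn, detect_fn, images, labels, rounds=3, conf=0.9):
    """Train on incomplete labels, add back confident new detections, retrain."""
    model = None
    for _ in range(rounds):
        model = train_fn(images, labels)
        for i, img in enumerate(images):
            for det, score in detect_fn(model, img):
                if score >= conf and det not in labels[i]:
                    labels[i].append(det)        # grow the incomplete label set
    return model, labels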
Collapse
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China;
| | - Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China;
| | - Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| |
Collapse
|
45
|
Aleksandrovych M, Strassberg M, Melamed J, Xu M. Polarization differential interference contrast microscopy with physics-inspired plug-and-play denoiser for single-shot high-performance quantitative phase imaging. BIOMEDICAL OPTICS EXPRESS 2023; 14:5833-5850. [PMID: 38021115 PMCID: PMC10659786 DOI: 10.1364/boe.499316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 08/31/2023] [Accepted: 09/15/2023] [Indexed: 12/01/2023]
Abstract
We present single-shot high-performance quantitative phase imaging with a physics-inspired plug-and-play denoiser for polarization differential interference contrast (PDIC) microscopy. The quantitative phase is recovered by the alternating direction method of multipliers (ADMM), balancing total variation regularization and a pre-trained dense residual U-net (DRUNet) denoiser. The custom DRUNet uses the Tanh activation function to guarantee the symmetry requirement for phase retrieval. In addition, we introduce an adaptive strategy that accelerates convergence and explicitly incorporates measurement noise. After validating this deep denoiser-enhanced PDIC microscopy on simulated data and phantom experiments, we demonstrate high-performance phase imaging of histological tissue sections. The phase retrieval by the denoiser-enhanced PDIC microscopy achieves significantly higher quality and accuracy than the solution based on Fourier transforms or the iterative solution with total variation regularization alone.
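For reference, the generic plug-and-play ADMM iteration that such methods build on can be written as follows (a sketch; the paper's exact splitting additionally balances a total variation term and an adaptive noise model):

\begin{aligned}
x^{k+1} &= \arg\min_{x}\ \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 + \tfrac{\rho}{2}\,\lVert x - \big(z^{k} - u^{k}\big)\rVert_2^2,\\
z^{k+1} &= \mathcal{D}_{\sigma}\!\big(x^{k+1} + u^{k}\big),\\
u^{k+1} &= u^{k} + x^{k+1} - z^{k+1},
\end{aligned}

where A is the forward model relating the phase x to the PDIC measurements y, and D_sigma is the pre-trained denoiser acting as an implicit prior.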
Collapse
Affiliation(s)
- Mariia Aleksandrovych
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| | - Mark Strassberg
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| | - Jonathan Melamed
- Department of Pathology, New York University Langone School of Medicine, New York, NY 10016, USA
| | - Min Xu
- Dept. of Physics and Astronomy, Hunter College and the Graduate Center, The City University of New York, 695 Park Ave, New York, NY 10065, USA
| |
Collapse
|
46
|
Lindwall G, Gerlee P. Bayesian inference on the Allee effect in cancer cell line populations using time-lapse microscopy images. J Theor Biol 2023; 574:111624. [PMID: 37769802 DOI: 10.1016/j.jtbi.2023.111624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2023] [Revised: 09/08/2023] [Accepted: 09/13/2023] [Indexed: 10/03/2023]
Abstract
The Allee effect describes the phenomenon whereby the per capita reproduction rate increases with population density at low densities. Allee effects have been observed at all scales, including in microscopic environments where individual cells are taken into account. This is of great interest to cancer research, as understanding critical tumour density thresholds can inform treatment plans for patients. In this paper, we introduce a simple model for cell division in which the cancer cell population is modelled as an interacting particle system and the rate of cell division depends on the local cell density, introducing an Allee effect. We infer the key model parameters through Markov chain Monte Carlo and apply our procedure to two image sequences from a cervical cancer cell line. The inference method is verified on in silico data, where it accurately identifies the key parameters, and the results on the in vitro data strongly suggest an Allee effect.
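For orientation, one standard textbook form of a strong Allee effect is the modified logistic equation below; the paper itself encodes the effect through a density-dependent division rate in an interacting particle system rather than this ODE.

\frac{dN}{dt} = r\,N\left(1 - \frac{N}{K}\right)\left(\frac{N}{A} - 1\right)

with growth rate r, carrying capacity K, and Allee threshold A: the per capita rate is negative below A and increases with density at low densities.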
Collapse
|
47
|
Antonelli L, Polverino F, Albu A, Hada A, Asteriti IA, Degrassi F, Guarguaglini G, Maddalena L, Guarracino MR. ALFI: Cell cycle phenotype annotations of label-free time-lapse imaging data from cultured human cells. Sci Data 2023; 10:677. [PMID: 37794110 PMCID: PMC10551030 DOI: 10.1038/s41597-023-02540-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 09/05/2023] [Indexed: 10/06/2023] Open
Abstract
Detecting and tracking multiple moving objects in a video is a challenging task. For living cells, the task becomes even more arduous as cells change their morphology over time, can partially overlap, and mitosis leads to new cells. Unlike fluorescence microscopy, label-free techniques can be applied to almost all cell lines, reducing sample preparation complexity and phototoxicity. In this study, we present ALFI, a dataset of images and annotations for label-free microscopy, made publicly available to the scientific community, which notably extends the current panorama of expertly labeled data for the detection and tracking of cultured living nontransformed and cancerous human cells. It consists of 29 time-lapse image sequences from HeLa, U2OS, and hTERT RPE-1 cells under different experimental conditions, acquired by differential interference contrast microscopy, for a total of 237.9 hours. It contains various annotations (pixel-wise segmentation masks, object-wise bounding boxes, and tracking information). The dataset is useful for testing and comparing methods for identifying interphase and mitotic events, reconstructing cell lineages, and discriminating different cellular phenotypes.
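As a small example of working with such annotations, the sketch below derives object-wise bounding boxes from a pixel-wise instance mask; it is a generic utility and makes no assumption about the ALFI file layout.

import numpy as np
from skimage.measure import regionprops

def masks_to_boxes(instance_mask):
    """Return (label, min_row, min_col, max_row, max_col) for every cell in an
    instance mask where each cell carries a distinct positive integer label."""
    return [(r.label, *r.bbox) for r in regionprops(instance_mask.astype(int))]

demo = np.zeros((32, 32), dtype=int); demo[5:12, 5:12] = 1; demo[20:28, 18:30] = 2
print(masks_to_boxes(demo))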
Collapse
Affiliation(s)
- Laura Antonelli
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy
| | - Federica Polverino
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Alexandra Albu
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
| | - Aroj Hada
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
| | - Italia A Asteriti
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Francesca Degrassi
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy
| | - Giulia Guarguaglini
- IBPM, Institute of Molecular Biology and Pathology, National Research Council, Rome, Italy.
| | - Lucia Maddalena
- ICAR, Institute for High-Performance Computing and Networking, National Research Council, Naples, Italy.
| | - Mario R Guarracino
- Department of Economics and Law, University of Cassino and Southern Lazio, Cassino, Italy
- Laboratory of Algorithms and Technologies for Networks Analysis, National Research University Higher School of Economics, Moscow, Russia
| |
Collapse
|
48
|
Bouchard C, Bernatchez R, Lavoie-Cardinal F. Addressing annotation and data scarcity when designing machine learning strategies for neurophotonics. NEUROPHOTONICS 2023; 10:044405. [PMID: 37636490 PMCID: PMC10447257 DOI: 10.1117/1.nph.10.4.044405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 08/29/2023]
Abstract
Machine learning has revolutionized the way data are processed, allowing information to be extracted in a fraction of the time it would take an expert. In the field of neurophotonics, machine learning approaches are used to automatically detect and classify features of interest in complex images. One of the key challenges in applying machine learning methods to neurophotonics is the scarcity of available data and the complexity associated with labeling them, which can limit the performance of data-driven algorithms. We present an overview of various strategies, such as weakly supervised learning, active learning, and domain adaptation, that can be used to address the problem of labeled data scarcity in neurophotonics. We discuss the strengths and limitations of each approach and their potential applications to bioimaging datasets. In addition, we highlight how different strategies can be combined to increase model performance on those datasets. The approaches we describe can help improve the accessibility of machine learning-based analysis when only a limited number of annotated images is available for training, enabling researchers to extract more meaningful insights from small datasets.
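A minimal sketch of the simplest of these strategies, uncertainty-based active learning (send the samples whose predicted probabilities are closest to 0.5 to the expert for annotation); the threshold logic and array shapes are illustrative only.

import numpy as np

def select_for_annotation(probabilities, budget):
    """Return indices of the `budget` samples whose predicted probability is
    closest to 0.5, i.e. those the current model is least sure about."""
    uncertainty = 1.0 - 2.0 * np.abs(np.asarray(probabilities) - 0.5)
    return np.argsort(-uncertainty)[:budget]

print(select_for_annotation([0.98, 0.51, 0.03, 0.40, 0.88], budget=2))   # -> [1 3]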
Collapse
Affiliation(s)
- Catherine Bouchard
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
| | - Renaud Bernatchez
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
| | - Flavie Lavoie-Cardinal
- CERVO Brain Research Centre, Québec, Québec, Canada
- Université Laval, Institute Intelligence and Data, Québec, Québec, Canada
- Université Laval, Département de psychiatrie et de neurosciences, Québec, Québec, Canada
| |
Collapse
|
49
|
Petkidis A, Andriasyan V, Greber UF. Machine learning for cross-scale microscopy of viruses. CELL REPORTS METHODS 2023; 3:100557. [PMID: 37751685 PMCID: PMC10545915 DOI: 10.1016/j.crmeth.2023.100557] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 06/05/2023] [Accepted: 07/20/2023] [Indexed: 09/28/2023]
Abstract
Despite advances in virological sciences and antiviral research, viruses continue to emerge, circulate, and threaten public health. We still lack a comprehensive understanding of how cells and individuals remain susceptible to infectious agents. This deficiency is in part due to the complexity of viruses, including the cell states controlling virus-host interactions. Microscopy samples distinct cellular infection stages in a multi-parametric, time-resolved manner at molecular resolution and is increasingly enhanced by machine learning and deep learning. Here we discuss how state-of-the-art artificial intelligence (AI) augments light and electron microscopy and advances virological research of cells. We describe current procedures for image denoising, object segmentation, tracking, classification, and super-resolution and showcase examples of how AI has improved the acquisition and analyses of microscopy data. The power of AI-enhanced microscopy will continue to help unravel virus infection mechanisms, develop antiviral agents, and improve viral vectors.
Collapse
Affiliation(s)
- Anthony Petkidis
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
| | - Vardan Andriasyan
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
| | - Urs F Greber
- Department of Molecular Life Sciences, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland.
| |
Collapse
|
50
|
Gao G, Walter NG. Critical Assessment of Condensate Boundaries in Dual-Color Single Particle Tracking. J Phys Chem B 2023; 127:7694-7707. [PMID: 37669232 DOI: 10.1021/acs.jpcb.3c03776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/07/2023]
Abstract
Biomolecular condensates are membraneless cellular compartments generated by phase separation that regulate a broad variety of cellular functions by enriching some biomolecules while excluding others. Live-cell single particle tracking of individual fluorophore-labeled condensate components has provided insights into a condensate's mesoscopic organization and biological functions, such as revealing the recruitment, translation, and decay of RNAs within ribonucleoprotein (RNP) granules. Specifically, during dual-color tracking, one imaging channel provides a time series of individual biomolecule locations, while the other channel monitors the location of the condensate relative to these molecules. Therefore, an accurate assessment of a condensate's boundary is critical for combined live-cell single particle-condensate tracking. Despite its importance, a quantitative benchmarking and objective comparison of the various available boundary detection methods is missing due to the lack of an absolute ground truth for condensate images. Here, we use synthetic data of defined ground truth to generate noise-overlaid images of condensates with realistic phase separation parameters to benchmark the most commonly used methods for condensate boundary detection, including an emerging machine-learning method. We find that it is critical to carefully choose an optimal boundary detection method for a given dataset to obtain accurate measurements of single particle-condensate interactions. The criteria proposed in this study to guide the selection of an optimal boundary detection method can be broadly applied to imaging-based studies of condensates.
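To make the benchmarking idea concrete, the sketch below scores one commonly used boundary-detection method (Otsu thresholding) against a known ground-truth mask via IoU; it illustrates the evaluation principle only, not the paper's full benchmark or its machine-learning method.

import numpy as np
from skimage.filters import threshold_otsu

def otsu_boundary_iou(image, ground_truth):
    """Segment condensates by Otsu thresholding and score the resulting mask
    against the known ground truth with intersection-over-union."""
    mask = image > threshold_otsu(image)
    inter = np.logical_and(mask, ground_truth).sum()
    union = np.logical_or(mask, ground_truth).sum()
    return mask, inter / (union + 1e-9)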
Collapse
Affiliation(s)
- Guoming Gao
- Biophysics Graduate Program, University of Michigan, Ann Arbor, Michigan 48109, United States
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
| | - Nils G Walter
- Center for RNA Biomedicine, University of Michigan, Ann Arbor, Michigan 48109, United States
- Department of Chemistry, University of Michigan, Ann Arbor, Michigan 48109, United States
| |
Collapse
|