1. Eschweiler D, Yilmaz R, Baumann M, Laube I, Roy R, Jose A, Brückner D, Stegmaier J. Denoising diffusion probabilistic models for generation of realistic fully-annotated microscopy image datasets. PLoS Comput Biol 2024; 20:e1011890. [PMID: 38377165] [PMCID: PMC10906858] [DOI: 10.1371/journal.pcbi.1011890]
Abstract
Recent advances in computer vision have led to significant progress in the generation of realistic image data, with denoising diffusion probabilistic models proving to be a particularly effective method. In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets through an unsupervised and intuitive approach, using rough sketches of desired structures as the starting point. The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches and enables the segmentation of diverse datasets without the need for human annotations. We demonstrate that segmentation models trained with a small set of synthetic image data reach accuracy levels comparable to those of generalist models trained with a large and diverse collection of manually annotated image data, thereby offering a streamlined and specialized application of segmentation models.
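The denoising objective behind such diffusion models is compact enough to sketch. The following PyTorch fragment is an editor's illustration, not the authors' released code; `eps_model` stands in for any noise-prediction network conditioned on the structural sketch, and the linear schedule parameters are conventional defaults:

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative signal fraction

def ddpm_loss(eps_model, x0, sketch):
    """Noise a clean image x0 to a random timestep and train the
    sketch-conditioned network to predict the injected noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    eps = torch.randn_like(x0)
    a_bar = alphas_bar[t].view(b, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps   # q(x_t | x_0)
    return F.mse_loss(eps_model(x_t, t, sketch), eps)
```

Sampling then runs the learned denoiser backwards from pure noise, with the sketch steering which structures appear, which is what makes rough sketches a usable starting point for fully-annotated synthetic data.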
Affiliation(s)
- Dennis Eschweiler
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rüveyda Yilmaz
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Matisse Baumann
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Ina Laube
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rijo Roy
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Abin Jose
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Daniel Brückner
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Johannes Stegmaier
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
2. Wiesner D, Suk J, Dummer S, Nečasová T, Ulman V, Svoboda D, Wolterink JM. Generative modeling of living cells with SO(3)-equivariant implicit neural representations. Med Image Anal 2024; 91:102991. [PMID: 37839341] [DOI: 10.1016/j.media.2023.102991]
Abstract
Data-driven cell tracking and segmentation methods in biomedical imaging require diverse and information-rich training data. In cases where the number of training samples is limited, synthetic computer-generated data sets can be used to improve these methods. This requires the synthesis of cell shapes as well as corresponding microscopy images using generative models. To synthesize realistic living cell shapes, the shape representation used by the generative model should be able to accurately represent fine details and changes in topology, which are common in cells. These requirements are not met by 3D voxel masks, which are restricted in resolution, and polygon meshes, which do not easily model processes like cell growth and mitosis. In this work, we propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks. We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain, conditioned on a learned latent code that is disentangled from the rotation of the cell shape. We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells). A quantitative evaluation using shape features and Dice similarity coefficients of real and synthetic cell shapes shows that our model can generate topologically plausible complex cell shapes in 3D+time with high similarity to real living cell shapes. Finally, we show how microscopy images of living cells that correspond to our generated cell shapes can be synthesized using an image-to-image model.
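As a rough illustration of the representation described here (an editor's sketch, not the released model), a coordinate network for the SDF might look as follows; the class name, layer sizes, and activation choice are assumptions:

```python
import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    """Sketch of a coordinate network: (x, y, z, t) plus a latent shape
    code -> signed distance; the zero level set of the output is the
    cell surface at time t."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + latent_dim, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),                # signed distance value
        )

    def forward(self, xyzt, z):
        # xyzt: (N, 4) sample points in the 3D+time domain,
        # z: (N, latent_dim) latent codes (rotation-disentangled in the paper)
        return self.net(torch.cat([xyzt, z], dim=-1))
```

A mesh at any time point can then be extracted by evaluating the network on a dense grid and running marching cubes on the zero crossing, which is what frees the representation from a fixed voxel resolution.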
Affiliation(s)
- David Wiesner
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Julian Suk
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Sven Dummer
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- IT4Innovations, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Jelmer M Wolterink
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
3. Wagner R, Lopez CF, Stiller C. Self-supervised pseudo-colorizing of masked cells. PLoS One 2023; 18:e0290561. [PMID: 37616272] [PMCID: PMC10449109] [DOI: 10.1371/journal.pone.0290561]
Abstract
Self-supervised learning, which is strikingly referred to as the dark matter of intelligence, is gaining more attention in biomedical applications of deep learning. In this work, we introduce a novel self-supervision objective for the analysis of cells in biomedical microscopy images. We propose training deep learning models to pseudo-colorize masked cells. We use a physics-informed pseudo-spectral colormap that is well suited for colorizing cell topology. Our experiments reveal that approximating semantic segmentation by pseudo-colorization is beneficial for subsequent fine-tuning on cell detection. Inspired by the recent success of masked image modeling, we additionally mask out cell parts and train to reconstruct these parts to further enrich the learned representations. We compare our pre-training method with self-supervised frameworks including contrastive learning (SimCLR), masked autoencoders (MAEs), and edge-based self-supervision. We build upon our previous work and train hybrid models for cell detection, which contain both convolutional and vision transformer modules. Our pre-training method can outperform SimCLR, MAE-like masked image modeling, and edge-based self-supervision when pre-training on a diverse set of six fluorescence microscopy datasets. Code is available at: https://github.com/roydenwa/pseudo-colorize-masked-cells.
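To make the training target concrete, here is a hedged sketch of the two ingredients; matplotlib's `nipy_spectral` colormap is a stand-in for the paper's physics-informed pseudo-spectral map, and the patch-dropping scheme and its parameters are assumptions:

```python
import numpy as np
from matplotlib import colormaps

CMAP = colormaps["nipy_spectral"]   # stand-in for the paper's colormap

def pseudo_color_target(gray):
    """gray: (H, W) image scaled to [0, 1] -> (H, W, 3) colorized target."""
    return CMAP(gray)[..., :3]

def mask_patches(img, patch=16, drop=0.3, seed=0):
    """Zero out a random fraction of non-overlapping patches; the model
    must reconstruct the colorized image from the remaining context."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if rng.random() < drop:
                out[y:y + patch, x:x + patch] = 0.0
    return out
```

Training a network to map `mask_patches(gray)` to `pseudo_color_target(gray)` combines the colorization objective with masked image modeling, which is the pairing the abstract describes.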
Affiliation(s)
- Royden Wagner
- Karlsruhe Institute of Technology (KIT), Karlsruhe, BW, Germany
4. Körber N. MIA is an open-source standalone deep learning application for microscopic image analysis. Cell Rep Methods 2023; 3:100517. [PMID: 37533647] [PMCID: PMC10391334] [DOI: 10.1016/j.crmeth.2023.100517]
Abstract
In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, the Microscopic Image Analyzer (MIA) was developed. MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep-learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and uses open data formats, which are compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets.
Affiliation(s)
- Nils Körber
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
5. Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023. [PMID: 37202537] [PMCID: PMC10333123] [DOI: 10.1038/s41592-023-01879-y]
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
Affiliation(s)
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Estibaliz Gómez-de-Mariscal
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Fidel A Guerrero Peña
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Tsang Ing Ren
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Elliot M Meyerowitz
- Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Tianqi Guo
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Yin Wang
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Jan P Allebach
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Rina Bao
- Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Noor M Al-Shakarji
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Gani Rahmon
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Imad Eddine Toubal
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Kannappan Palaniappan
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Layton Aho
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Andrew R Cohen
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Assaf Arbelle
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tal Ben-Haim
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tammy Riklin Raviv
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Paul F Jäger
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Griffith University, Nathan, Queensland, Australia
- Cristina Ederra
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Ainhoa Urbiola
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Alexandre Cunha
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solórzano
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
6. Jun BH, Ahmadzadegan A, Ardekani AM, Solorio L, Vlachos PP. Multi-feature-Based Robust Cell Tracking. Ann Biomed Eng 2023; 51:604-617. [PMID: 36103061] [DOI: 10.1007/s10439-022-03073-1]
Abstract
Cell tracking algorithms have been used to extract cell counts and motility information from time-lapse images of migrating cells. However, these algorithms often fail when the collected images have cells with spatially and temporally varying features, such as morphology, position, and signal-to-noise ratio. Consequently, state-of-the-art algorithms are not robust or reliable because they require manual inputs to overcome the cell feature changes. To address these issues, we present a fully automated, adaptive, and robust feature-based cell tracking algorithm for the accurate detection and tracking of cells in time-lapse images. Our algorithm tackles measurement limitations in two ways. First, we use Hessian filtering and adaptive thresholding to detect the cells in images, overcoming spatial feature variations among the existing cells without manually changing the input thresholds. Second, cell feature parameters are measured, including position, diameter, mean intensity, area, and orientation, and these parameters are used simultaneously to accurately track the cells between subsequent frames, even under poor temporal resolution. Our technique achieved a minimum of 92% detection and tracking accuracy, compared to 16% from Mosaic and TrackMate. Our improved method allows for extended tracking and characterization of heterogeneous cell behaviors that are of particular interest to intravital imaging users.
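The simultaneous use of several feature parameters for linking can be sketched as a weighted assignment problem. This is an editor's illustration, not the authors' implementation; the feature list follows the abstract, while the weights and the L1 cost are assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(feats_a, feats_b, weights=(1.0, 1.0, 0.5, 0.5, 0.5)):
    """feats_*: (N, 5) arrays of [x, y, diameter, mean_intensity, area]
    per cell in two consecutive frames. Linking minimizes the weighted
    sum of feature differences so that position alone does not decide."""
    w = np.asarray(weights)
    diff = np.abs(feats_a[:, None, :] - feats_b[None, :, :])  # (Na, Nb, 5)
    cost = (diff * w).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)       # optimal one-to-one links
    return list(zip(rows.tolist(), cols.tolist()))
```

Folding appearance features into the cost is what keeps links stable when frame-to-frame displacements are large relative to cell spacing, the failure mode the abstract highlights for poor temporal resolution.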
Affiliation(s)
- Brian H Jun
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Adib Ahmadzadegan
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Arezoo M Ardekani
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Luis Solorio
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Purdue Center for Cancer Research, Purdue University, West Lafayette, IN, USA
- Pavlos P Vlachos
- School of Mechanical Engineering, Purdue University, West Lafayette, IN, 47907, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, 47907, USA
7. Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. [PMID: 36857945] [DOI: 10.1016/j.media.2023.102768]
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
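The Gramian statistics underlying such a texture discriminator are straightforward to compute. The sketch below is an editor's illustration of the standard Gram-matrix construction on arbitrary convolutional feature maps, not the paper's discriminator itself:

```python
import torch

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps -> (B, C, C) Gram matrices whose
    entries are channel-wise correlations, capturing texture statistics
    independently of spatial layout."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)
```

Because the Gram matrix discards spatial arrangement, a discriminator comparing these statistics judges whether two images share a texture "style", which suits the thin, stochastic textures of fluorescent microscopy better than pixel-level comparisons.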
Affiliation(s)
- Mihael Cudic
- National Institutes of Health Oxford-Cambridge Scholars Program, USA
- National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond
- National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
8. Hradecka L, Wiesner D, Sumbal J, Koledova ZS, Maska M. Segmentation and Tracking of Mammary Epithelial Organoids in Brightfield Microscopy. IEEE Trans Med Imaging 2023; 42:281-290. [PMID: 36170389] [DOI: 10.1109/tmi.2022.3210714]
Abstract
We present an automated and deep-learning-based workflow to quantitatively analyze the spatiotemporal development of mammary epithelial organoids in two-dimensional time-lapse (2D+t) sequences acquired using a brightfield microscope at high resolution. It involves a convolutional neural network (U-Net), purposely trained using computer-generated bioimage data created by a conditional generative adversarial network (pix2pixHD), to infer semantic segmentation, adaptive morphological filtering to identify organoid instances, and a shape-similarity-constrained, instance-segmentation-correcting tracking procedure to reliably cherry-pick the organoid instances of interest in time. By validating it using real 2D+t sequences of mouse mammary epithelial organoids of morphologically different phenotypes, we clearly demonstrate that the workflow achieves reliable segmentation and tracking performance, providing a reproducible and laborless alternative to manual analyses of the acquired bioimage data.
9. Qureshi MH, Ozlu N, Bayraktar H. Adaptive tracking algorithm for trajectory analysis of cells and layer-by-layer assessment of motility dynamics. Comput Biol Med 2022; 150:106193. [PMID: 37859286] [DOI: 10.1016/j.compbiomed.2022.106193]
Abstract
Tracking biological objects such as cells or subcellular components imaged with time-lapse microscopy enables us to understand the molecular principles underlying the dynamics of cell behaviors. However, automatic object detection, segmentation and trajectory extraction remain a rate-limiting step due to the intrinsic challenges of video processing. This paper presents an adaptive tracking algorithm (Adtari) that automatically finds the optimum search radius and cell linkages to determine trajectories in consecutive frames. A critical assumption in most tracking studies is that displacement remains unchanged throughout the movie, and cells in a few frames are usually analyzed to determine its magnitude. Tracking errors and inaccurate association of cells may occur if the user does not evaluate this value correctly or lacks prior knowledge of cell movement. The key novelty of our method is that the minimum intercellular distance and the maximum displacement of cells between frames are dynamically computed and used to determine the threshold distance. Since the space between cells is highly variable in a given frame, our software recursively alters the magnitude to determine all plausible matches in the trajectory analysis. Our method therefore eliminates a major preprocessing step in which a constant distance was used to determine neighboring cells in tracking methods. Cells having multiple overlaps and splitting events were further evaluated by using shape attributes including perimeter, area, ellipticity and distance. These features were applied to determine the closest matches by minimizing the difference in their magnitudes. Finally, the reporting section of our software was used to generate instant maps by overlaying cell features and trajectories. Adtari was validated using videos with variable signal-to-noise ratio, contrast ratio and cell density. We compared adaptive tracking with constant-distance and other methods to evaluate its performance and efficiency. Our algorithm yields a reduced mismatch ratio, an increased ratio of whole-cell tracks and higher frame-tracking efficiency, and allows layer-by-layer assessment of motility to characterize single cells. Adaptive tracking provides a reliable, accurate, time-efficient and user-friendly open-source software that is well suited for the analysis of 2D fluorescence microscopy video datasets.
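The core adaptive idea can be sketched in a few lines. This is an editor's illustration, not Adtari itself: the two quantities follow the abstract, but how they are combined into a single radius here is an assumption:

```python
import numpy as np
from scipy.spatial.distance import cdist

def adaptive_radius(prev_pts, curr_pts):
    """prev_pts, curr_pts: (N, 2) centroids of two consecutive frames.
    Recompute the linking radius per frame from the data instead of
    using one fixed, user-supplied distance."""
    spacing = cdist(curr_pts, curr_pts)
    np.fill_diagonal(spacing, np.inf)
    min_spacing = spacing.min()            # minimum intercellular distance
    max_disp = cdist(prev_pts, curr_pts).min(axis=1).max()  # largest nearest-neighbour jump
    # Taking the smaller of the two bounds is the illustrative choice here.
    return min(min_spacing, max_disp)
```

Recomputing the radius every frame is what removes the constant-distance preprocessing step the abstract criticizes.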
Affiliation(s)
- Mohammad Haroon Qureshi
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Center for Translational Research, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Nurhan Ozlu
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Halil Bayraktar
- Department of Molecular Biology and Genetics, Istanbul Technical University, Maslak, Sariyer, 34467, Istanbul, Turkey
10. Sachs CC, Ruzaeva K, Seiffarth J, Wiechert W, Berkels B, Nöh K. CellSium: versatile cell simulator for microcolony ground truth generation. Bioinform Adv 2022; 2:vbac053. [PMID: 36699390] [PMCID: PMC9710621] [DOI: 10.1093/bioadv/vbac053]
Abstract
SUMMARY To train deep learning-based segmentation models, large ground truth datasets are needed. To address this need in microfluidic live-cell imaging, we present CellSium, a flexibly configurable cell simulator built to synthesize realistic image sequences of bacterial microcolonies growing in monolayers. We illustrate that the simulated images are suitable for training neural networks. Synthetic time-lapse videos with and without fluorescence, using programmable cell growth models, and simulation-ready 3D colony geometries for computational fluid dynamics are also supported. AVAILABILITY AND IMPLEMENTATION CellSium is free and open source software under the BSD license, implemented in Python, available at github.com/modsim/cellsium (DOI: 10.5281/zenodo.6193033), along with documentation, usage examples and Docker images. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics Advances online.
Affiliation(s)
- Christian Carsten Sachs
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Wolfgang Wiechert
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
- Computational Systems Biotechnology (AVT.CSB), RWTH Aachen University, 52074 Aachen, Germany
- Benjamin Berkels
- Aachen Institute for Advanced Study in Computational Engineering Science (AICES), RWTH Aachen University, 52062 Aachen, Germany
11. Eschweiler D, Rethwisch M, Jarchow M, Koppers S, Stegmaier J. 3D fluorescence microscopy data synthesis for segmentation and benchmarking. PLoS One 2021; 16:e0260509. [PMID: 34855812] [PMCID: PMC8639001] [DOI: 10.1371/journal.pone.0260509]
Abstract
Automated image processing approaches are indispensable for many biomedical experiments and help to cope with the increasing amount of microscopy image data in a fast and reproducible way. State-of-the-art deep learning-based approaches in particular most often require large amounts of annotated training data to produce accurate and generalist outputs, but they are often compromised by the general scarcity of such annotated data sets. In this work, we propose how conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy from annotation masks of 3D cellular structures. In combination with mask simulation approaches, we demonstrate the generation of fully-annotated 3D microscopy data sets that we make publicly available for training or benchmarking. An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics and allows generating image data of different quality levels. A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size and different organisms. We present this as a proof of concept for the automated generation of fully-annotated training data sets requiring only a minimum of manual interaction to alleviate the need for manual annotations.
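A pix2pix-style conditional objective of the kind described here can be sketched as follows. This is an editor's illustration, not the paper's code; `G` and `D` stand for any mask-conditioned generator and discriminator, and the L1 weight is a conventional default rather than the authors' setting:

```python
import torch
import torch.nn.functional as F

def generator_step(G, D, mask, real, l1_weight=100.0):
    """One generator update of a mask-conditioned GAN: fool D on the
    (mask, fake) pair while staying close to the real image in L1."""
    fake = G(mask)
    logits = D(torch.cat([mask, fake], dim=1))   # condition D on the mask
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # The L1 term ties synthesized intensities to the annotation layout.
    return adv + l1_weight * F.l1_loss(fake, real)
```

Because the generator is driven by the mask, every synthetic image comes with its annotation for free, which is what makes the pipeline yield fully-annotated training sets.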
Affiliation(s)
- Dennis Eschweiler
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Malte Rethwisch
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Mareike Jarchow
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Simon Koppers
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Johannes Stegmaier
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
12. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021. [DOI: 10.1038/s41592-020-01008-z]
13. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2020; 18:203-211. [PMID: 33288961] [DOI: 10.1038/s41592-020-01008-z]
Abstract
Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
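As a flavor of what "fixed parameters and interdependent rules" can mean in practice, here is a deliberately simplified, hypothetical planning rule in the same spirit; nnU-Net's actual rules differ and live in its repository, so `plan_patch_size` and the voxel budget are illustrative only:

```python
import numpy as np

def plan_patch_size(median_shape, max_voxels=128 ** 3):
    """Hypothetical rule: start from the dataset's median image shape
    and halve the largest axis until the patch fits a fixed voxel
    budget, deriving a training parameter from the data fingerprint
    instead of manual tuning."""
    patch = np.array(median_shape, dtype=int)
    while patch.prod() > max_voxels:
        patch[patch.argmax()] //= 2
    return tuple(int(p) for p in patch)

# e.g. plan_patch_size((512, 512, 200)) -> a patch shape within budget
```

Chaining many such data-derived decisions (preprocessing, architecture, training, post-processing) is what lets the method configure itself for a new task without manual intervention.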
Affiliation(s)
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Faculty of Biosciences, University of Heidelberg, Heidelberg, Germany
- Paul F Jaeger
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Simon A A Kohl
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- DeepMind, London, UK
- Jens Petersen
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Faculty of Physics & Astronomy, University of Heidelberg, Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
14. Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890] [PMCID: PMC7494605] [DOI: 10.1016/j.csbj.2020.08.003]
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view at the past, present, and future developments of deep learning, starting from science at large, to biomedical imaging, and bioimage analysis in particular.
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
15. Kozubek M. When Deep Learning Meets Cell Image Synthesis. Cytometry A 2019; 97:222-225. [PMID: 31889406] [DOI: 10.1002/cyto.a.23957]
Affiliation(s)
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Czech Republic
16. Baniukiewicz P, Lutton EJ, Collier S, Bretschneider T. Generative Adversarial Networks for Augmenting Training Data of Microscopic Cell Images. Front Comput Sci 2019. [DOI: 10.3389/fcomp.2019.00010]
17. Wiesner D, Svoboda D, Maška M, Kozubek M. CytoPacq: a web-interface for simulating multi-dimensional cell imaging. Bioinformatics 2019; 35:4531-4533. [PMID: 31114843] [PMCID: PMC6821329] [DOI: 10.1093/bioinformatics/btz417]
Abstract
MOTIVATION Objective assessment of bioimage analysis methods is an essential step towards understanding their robustness and parameter sensitivity, calling for the availability of heterogeneous bioimage datasets accompanied by their reference annotations. Because manual annotations are known to be arduous, highly subjective and barely reproducible, numerous simulators have emerged over past decades, generating synthetic bioimage datasets complemented with inherent reference annotations. However, the installation and configuration of these tools generally constitutes a barrier to their widespread use. RESULTS We present a modern, modular web-interface, CytoPacq, to facilitate the generation of synthetic benchmark datasets relevant for multi-dimensional cell imaging. CytoPacq poses a user-friendly graphical interface with contextual tooltips and currently allows a comfortable access to various cell simulation systems of fluorescence microscopy, which have already been recognized and used by the scientific community, in a straightforward and self-contained form. AVAILABILITY AND IMPLEMENTATION CytoPacq is a publicly available online service running at https://cbia.fi.muni.cz/simulator. More information about it as well as examples of generated bioimage datasets are available directly through the web-interface. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- David Wiesner
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- Martin Maška
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
18. Gilad T, Reyes J, Chen JY, Lahav G, Riklin Raviv T. Fully unsupervised symmetry-based mitosis detection in time-lapse cell microscopy. Bioinformatics 2019; 35:2644-2653. [PMID: 30590471] [PMCID: PMC6662301] [DOI: 10.1093/bioinformatics/bty1034]
Abstract
MOTIVATION Cell microscopy datasets have great diversity due to variability in cell types, imaging techniques and protocols. Existing methods are either tailored to specific datasets or are based on supervised learning, which requires comprehensive manual annotations. Using the latter approach, however, poses a significant difficulty due to the imbalance between the number of mitotic cells with respect to the entire cell population in a time-lapse microscopy sequence. RESULTS We present a fully unsupervised framework for both mitosis detection and mother-daughters association in fluorescence microscopy data. The proposed method accommodates the difficulty of the different cell appearances and dynamics. To address symmetric cell divisions, a key concept is exploiting the similarity between daughter cells. Association is accomplished by defining cell neighborhood via a stochastic version of the Delaunay triangulation and optimization by dynamic programming. Our framework presents promising detection results for a variety of fluorescence microscopy datasets of different sources, including 2D and 3D sequences from the Cell Tracking Challenge. AVAILABILITY AND IMPLEMENTATION Code is available on GitHub (github.com/topazgl/mitodix). SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
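Defining a cell neighborhood via Delaunay triangulation is easy to sketch with SciPy. The fragment below is a deterministic editor's illustration; the paper uses a stochastic variant of the triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    """points: (N, 2) cell centroids. Returns {cell index: set of cells
    sharing a triangulation edge}, i.e. the neighborhood graph used to
    restrict which cells can be associated as mother and daughters."""
    tri = Delaunay(points)
    nbrs = {i: set() for i in range(len(points))}
    for simplex in tri.simplices:          # each simplex is a triangle
        for a in simplex:
            for b in simplex:
                if a != b:
                    nbrs[int(a)].add(int(b))
    return nbrs
```

Restricting candidate mother-daughter pairs to triangulation neighbors keeps the subsequent dynamic-programming optimization tractable even in dense populations.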
Affiliation(s)
- Topaz Gilad
- Department of Electrical and Computer Engineering and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
- Jose Reyes
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Jia-Yun Chen
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Galit Lahav
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Tammy Riklin Raviv
- Department of Electrical and Computer Engineering and the Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beersheba, Israel
19. Blin G, Sadurska D, Portero Migueles R, Chen N, Watson JA, Lowell S. Nessys: A new set of tools for the automated detection of nuclei within intact tissues and dense 3D cultures. PLoS Biol 2019; 17:e3000388. [PMID: 31398189] [PMCID: PMC6703695] [DOI: 10.1371/journal.pbio.3000388]
Abstract
Methods for measuring the properties of individual cells within their native 3D environment will enable a deeper understanding of embryonic development, tissue regeneration, and tumorigenesis. However, current methods for segmenting nuclei in 3D tissues are not designed for situations in which nuclei are densely packed, nonspherical, or heterogeneous in shape, size, or texture, all of which are true of many embryonic and adult tissue types as well as in many cases for cells differentiating in culture. Here, we overcome this bottleneck by devising a novel method based on labelling the nuclear envelope (NE) and automatically distinguishing individual nuclei using a tree-structured ridge-tracing method followed by shape ranking according to a trained classifier. The method is fast and makes it possible to process images that are larger than the computer's memory. We consistently obtain accurate segmentation rates of >90%, even for challenging images such as mid-gestation embryos or 3D cultures. We provide a 3D editor and inspector for the manual curation of the segmentation results as well as a program to assess the accuracy of the segmentation. We have also generated a live reporter of the NE that can be used to track live cells in 3 dimensions over time. We use this to monitor the history of cell interactions and occurrences of neighbour exchange within cultures of pluripotent cells during differentiation. We provide these tools in an open-access user-friendly format.
Affiliation(s)
- Guillaume Blin
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Daina Sadurska
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Rosa Portero Migueles
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Naiming Chen
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Julia A. Watson
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Sally Lowell
- MRC Centre for Regenerative Medicine, Institute for Stem Cell Research, School of Biological Sciences, University of Edinburgh, Edinburgh, United Kingdom
20. Sorokin DV, Peterlik I, Ulman V, Svoboda D, Necasova T, Morgaenko K, Eiselleova L, Tesarova L, Maska M. FiloGen: A Model-Based Generator of Synthetic 3-D Time-Lapse Sequences of Single Motile Cells With Growing and Branching Filopodia. IEEE Trans Med Imaging 2018; 37:2630-2641. [PMID: 29994200] [DOI: 10.1109/tmi.2018.2845884]
Abstract
The existence of diverse image datasets accompanied by reference annotations is a crucial prerequisite for an objective benchmarking of bioimage analysis methods. Nevertheless, such a prerequisite is hard to satisfy for time lapse, multidimensional fluorescence microscopy image data, manual annotations of which are laborious and often impracticable. In this paper, we present a simulation system capable of generating 3-D time-lapse sequences of single motile cells with filopodial protrusions of user-controlled structural and temporal attributes, such as the number, thickness, length, level of branching, and lifetime of filopodia, accompanied by inherently generated reference annotations. The proposed simulation system involves three globally synchronized modules, each being responsible for a separate task: the evolution of filopodia on a molecular level, linear elastic deformation of the entire cell with filopodia, and the synthesis of realistic, time-coherent cell texture. Its flexibility is demonstrated by generating multiple synthetic 3-D time-lapse sequences of single lung cancer cells of two different phenotypes, qualitatively and quantitatively resembling their real counterparts acquired using a confocal fluorescence microscope.
21. McQuin C, Goodman A, Chernyshev V, Kamentsky L, Cimini BA, Karhohs KW, Doan M, Ding L, Rafelski SM, Thirstrup D, Wiegraebe W, Singh S, Becker T, Caicedo JC, Carpenter AE. CellProfiler 3.0: Next-generation image processing for biology. PLoS Biol 2018; 16:e2005970. [PMID: 29969450] [PMCID: PMC6029841] [DOI: 10.1371/journal.pbio.2005970]
Abstract
CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler's infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.
Affiliation(s)
- Claire McQuin
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Allen Goodman
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Vasiliy Chernyshev
- Skolkovo Institute of Science and Technology, Skolkovo, Moscow Region, Russia
- Moscow Institute of Physics and Technology, Dolgoprudny, Moscow Region, Russia
- Lee Kamentsky
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Beth A. Cimini
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Kyle W. Karhohs
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Minh Doan
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Liya Ding
- Allen Institute for Cell Science, Seattle, Washington, United States of America
- Susanne M. Rafelski
- Allen Institute for Cell Science, Seattle, Washington, United States of America
- Derek Thirstrup
- Allen Institute for Cell Science, Seattle, Washington, United States of America
- Winfried Wiegraebe
- Allen Institute for Cell Science, Seattle, Washington, United States of America
- Shantanu Singh
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Tim Becker
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Juan C. Caicedo
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
- Anne E. Carpenter
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, Massachusetts, United States of America
22. Ulman V, Maška M, Magnusson KEG, Ronneberger O, Haubold C, Harder N, Matula P, Matula P, Svoboda D, Radojevic M, Smal I, Rohr K, Jaldén J, Blau HM, Dzyubachyk O, Lelieveldt B, Xiao P, Li Y, Cho SY, Dufour AC, Olivo-Marin JC, Reyes-Aldasoro CC, Solis-Lemus JA, Bensch R, Brox T, Stegmaier J, Mikut R, Wolf S, Hamprecht FA, Esteves T, Quelhas P, Demirel Ö, Malmström L, Jug F, Tomancak P, Meijering E, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solorzano C. An objective comparison of cell-tracking algorithms. Nat Methods 2017; 14:1141-1152. [PMID: 29083403] [PMCID: PMC5777536] [DOI: 10.1038/nmeth.4473]
Abstract
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
Affiliation(s)
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Martin Maška
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Klas E G Magnusson
- ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden
- Olaf Ronneberger
- Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Carsten Haubold
- Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Nathalie Harder
- Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, BIOQUANT, IPMB, University of Heidelberg and DKFZ, Heidelberg, Germany
- Pavel Matula
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Petr Matula
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Miroslav Radojevic
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Ihor Smal
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Karl Rohr
- Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, BIOQUANT, IPMB, University of Heidelberg and DKFZ, Heidelberg, Germany
- Joakim Jaldén
- ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden
- Helen M Blau
- Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, and Institute for Stem Cell Biology and Regenerative Medicine, Stanford University School of Medicine, Stanford, California, USA
- Oleh Dzyubachyk
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Boudewijn Lelieveldt
- Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Intelligent Systems Department, Delft University of Technology, Delft, the Netherlands
- Pengdong Xiao
- Institute of Molecular and Cell Biology, A*Star, Singapore
- Yuexiang Li
- Department of Engineering, University of Nottingham, Nottingham, UK
- Siu-Yeung Cho
- Faculty of Engineering, University of Nottingham, Ningbo, China
- Constantino C Reyes-Aldasoro
- Research Centre in Biomedical Engineering, School of Mathematics, Computer Science and Engineering, City University of London, London, UK
- Jose A Solis-Lemus
- Research Centre in Biomedical Engineering, School of Mathematics, Computer Science and Engineering, City University of London, London, UK
- Robert Bensch
- Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Thomas Brox
- Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Johannes Stegmaier
- Group for Automated Image and Data Analysis, Institute for Applied Computer Science, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Group for Automated Image and Data Analysis, Institute for Applied Computer Science, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Steffen Wolf
- Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Fred A Hamprecht
- Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Tiago Esteves
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal
- Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
- Pedro Quelhas
- i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal
- Florian Jug
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Pavel Tomancak
- Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Erik Meijering
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Arrate Muñoz-Barrutia
- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Getafe, Spain
- Instituto de Investigación Sanitaria Gregorio Marañon, Madrid, Spain
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solorzano
- CIBERONC, IDISNA and Program of Solid Tumors and Biomarkers, Center for Applied Medical Research, University of Navarra, Pamplona, Spain
- Bioengineering Department, TECNUN School of Engineering, University of Navarra, San Sebastián, Spain