1. Huang Z, Wu Z, Yan H. A convex-hull based method with manifold projections for detecting cell protrusions. Comput Biol Med 2024; 173:108350. [PMID: 38555705] [DOI: 10.1016/j.compbiomed.2024.108350]
Abstract
Cell protrusions play an important role in a variety of cell physiological processes. In this paper, we propose a convex-hull based method, combined with manifold projections, to detect cell protrusions. A convex hull is generated based on the cell surface. We consider the cell surface and the boundary of its convex hull as two diffeomorphic manifolds, and define a depth function based on the distance between the cell surface and its convex hull boundary. The extreme points of the depth function represent the positions of cell protrusions. To find the extreme points easily, we project the points on the cell surface onto the boundary of the convex hull and expand them in spherical polar coordinates. We conducted experiments on three types of cell protrusions. The proposed method achieved average precisions of 98.9%, 95.6%, and 94.7% on blebs, filopodia, and lamellipodia, respectively. Experiments on three datasets show that the proposed method performs robustly.
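A minimal sketch of the core depth construction, assuming SciPy and approximating the surface-to-hull distance by the distance to the nearest hull facet plane (an editor's illustration, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_depth(points):
    """For each surface point, distance to the nearest convex-hull facet plane."""
    hull = ConvexHull(points)
    # Each facet is stored as A @ x + b = 0 with A @ x + b <= 0 inside the hull,
    # and A is a unit normal, so -(A @ p + b) is the distance of p to that plane.
    A, b = hull.equations[:, :-1], hull.equations[:, -1]
    d = -(points @ A.T + b)        # (n_points, n_facets) plane distances
    return d.min(axis=1)           # depth is ~0 for points lying on the hull

# Protrusion tips touch the hull, so they show up as near-zero depth extrema.
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))    # stand-in for a sampled cell surface
depth = hull_depth(pts)
tip_candidates = np.argsort(depth)[:10]
```

The paper additionally projects the surface points onto the hull and expands them in spherical polar coordinates to locate the extrema robustly; that step is omitted here.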
Affiliation(s)
- Zhaoke Huang
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China
- Zihan Wu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China
- Hong Yan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong Special Administrative Region of China
2. Wiesner D, Suk J, Dummer S, Nečasová T, Ulman V, Svoboda D, Wolterink JM. Generative modeling of living cells with SO(3)-equivariant implicit neural representations. Med Image Anal 2024; 91:102991. [PMID: 37839341] [DOI: 10.1016/j.media.2023.102991]
Abstract
Data-driven cell tracking and segmentation methods in biomedical imaging require diverse and information-rich training data. In cases where the number of training samples is limited, synthetic computer-generated data sets can be used to improve these methods. This requires the synthesis of cell shapes as well as corresponding microscopy images using generative models. To synthesize realistic living cell shapes, the shape representation used by the generative model should be able to accurately represent fine details and changes in topology, which are common in cells. These requirements are not met by 3D voxel masks, which are restricted in resolution, and polygon meshes, which do not easily model processes like cell growth and mitosis. In this work, we propose to represent living cell shapes as level sets of signed distance functions (SDFs) which are estimated by neural networks. We optimize a fully-connected neural network to provide an implicit representation of the SDF value at any point in a 3D+time domain, conditioned on a learned latent code that is disentangled from the rotation of the cell shape. We demonstrate the effectiveness of this approach on cells that exhibit rapid deformations (Platynereis dumerilii), cells that grow and divide (C. elegans), and cells that have growing and branching filopodial protrusions (A549 human lung carcinoma cells). A quantitative evaluation using shape features and Dice similarity coefficients of real and synthetic cell shapes shows that our model can generate topologically plausible complex cell shapes in 3D+time with high similarity to real living cell shapes. Finally, we show how microscopy images of living cells that correspond to our generated cell shapes can be synthesized using an image-to-image model.
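A minimal sketch of the representation (layer sizes assumed; the paper's model additionally disentangles rotation, which is omitted here): a coordinate MLP maps a space-time point and a latent shape code to an SDF value whose zero level set is the cell surface.

```python
import torch
import torch.nn as nn

class ImplicitSDF(nn.Module):
    """f(x, y, z, t, latent) -> signed distance; zero level set = cell surface."""
    def __init__(self, latent_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyzt, latent):
        # xyzt: (N, 4) space-time samples; latent: (latent_dim,) shape code
        z = latent.expand(xyzt.shape[0], -1)
        return self.net(torch.cat([xyzt, z], dim=-1)).squeeze(-1)

model = ImplicitSDF()
code = torch.randn(64)                 # latent code (learned, in the paper)
coords = torch.rand(1024, 4) * 2 - 1   # samples from a normalized 3D+time domain
sdf = model(coords, code)              # negative inside, positive outside
```

A mesh at any fixed time point can then be extracted by evaluating the network on a grid and running marching cubes on the zero level set.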
Affiliation(s)
- David Wiesner
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Julian Suk
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Sven Dummer
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- IT4Innovations, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Jelmer M Wolterink
- Department of Applied Mathematics & Technical Medical Centre, University of Twente, Enschede, The Netherlands
3. Wagner R, Lopez CF, Stiller C. Self-supervised pseudo-colorizing of masked cells. PLoS One 2023; 18:e0290561. [PMID: 37616272] [PMCID: PMC10449109] [DOI: 10.1371/journal.pone.0290561]
Abstract
Self-supervised learning, which is strikingly referred to as the dark matter of intelligence, is gaining more attention in biomedical applications of deep learning. In this work, we introduce a novel self-supervision objective for the analysis of cells in biomedical microscopy images. We propose training deep learning models to pseudo-colorize masked cells. We use a physics-informed pseudo-spectral colormap that is well suited for colorizing cell topology. Our experiments reveal that approximating semantic segmentation by pseudo-colorization is beneficial for subsequent fine-tuning on cell detection. Inspired by the recent success of masked image modeling, we additionally mask out cell parts and train to reconstruct these parts to further enrich the learned representations. We compare our pre-training method with self-supervised frameworks including contrastive learning (SimCLR), masked autoencoders (MAEs), and edge-based self-supervision. We build upon our previous work and train hybrid models for cell detection, which contain both convolutional and vision transformer modules. Our pre-training method can outperform SimCLR, MAE-like masked image modeling, and edge-based self-supervision when pre-training on a diverse set of six fluorescence microscopy datasets. Code is available at: https://github.com/roydenwa/pseudo-colorize-masked-cells.
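A minimal sketch of how such a pretext pair can be built (a stock spectral colormap stands in for the paper's physics-informed map; patch size and masking ratio are assumed values):

```python
import numpy as np
from matplotlib import colormaps

def make_pretext_pair(gray, patch=16, mask_frac=0.3, rng=None):
    """gray: (H, W) image in [0, 1] -> (masked input, pseudo-color RGB target)."""
    rng = rng or np.random.default_rng()
    cmap = colormaps["nipy_spectral"]      # stand-in pseudo-spectral colormap
    target = cmap(gray)[..., :3]           # (H, W, 3) colorization target
    masked = gray.copy()
    h, w = gray.shape
    n_patches = int(mask_frac * (h // patch) * (w // patch))
    for _ in range(n_patches):             # zero out random square patches
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        masked[y:y + patch, x:x + patch] = 0.0
    return masked, target
```

A model trained to predict `target` from `masked` must both colorize cell topology and in-paint the hidden cell parts, which is the combined objective described above.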
Affiliation(s)
- Royden Wagner
- Karlsruhe Institute of Technology (KIT), Karlsruhe, BW, Germany
4. Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023. [PMID: 37202537] [PMCID: PMC10333123] [DOI: 10.1038/s41592-023-01879-y]
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
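For orientation, the challenge's segmentation score (SEG) is commonly described as the mean Jaccard index over reference objects, each matched to the predicted object covering more than half of it; a minimal sketch of that rule (an illustration, not the official evaluation code):

```python
import numpy as np

def seg_score(ref, pred):
    """ref, pred: integer label masks (0 = background) of equal shape."""
    scores = []
    for r in np.unique(ref):
        if r == 0:
            continue
        rmask = ref == r
        labels, counts = np.unique(pred[rmask], return_counts=True)
        best = labels[np.argmax(counts)]
        # a match requires the predicted object to cover > 50% of the reference
        if best == 0 or counts.max() <= rmask.sum() / 2:
            scores.append(0.0)
            continue
        pmask = pred == best
        scores.append((rmask & pmask).sum() / (rmask | pmask).sum())
    return float(np.mean(scores)) if scores else 0.0
```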
Affiliation(s)
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Estibaliz Gómez-de-Mariscal
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Tereza Nečasová
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Fidel A Guerrero Peña
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Tsang Ing Ren
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Elliot M Meyerowitz
- Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Tianqi Guo
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Yin Wang
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Jan P Allebach
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Rina Bao
- Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Noor M Al-Shakarji
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Gani Rahmon
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Imad Eddine Toubal
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Kannappan Palaniappan
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
- Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Layton Aho
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Andrew R Cohen
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
- Assaf Arbelle
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tal Ben-Haim
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Tammy Riklin Raviv
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Paul F Jäger
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Griffith University, Nathan, Queensland, Australia
- Cristina Ederra
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Ainhoa Urbiola
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Alexandre Cunha
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solórzano
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
5. Jun BH, Ahmadzadegan A, Ardekani AM, Solorio L, Vlachos PP. Multi-feature-based robust cell tracking. Ann Biomed Eng 2023; 51:604-617. [PMID: 36103061] [DOI: 10.1007/s10439-022-03073-1]
Abstract
Cell tracking algorithms have been used to extract cell counts and motility information from time-lapse images of migrating cells. However, these algorithms often fail when the collected images contain cells with spatially and temporally varying features, such as morphology, position, and signal-to-noise ratio. Consequently, state-of-the-art algorithms are not robust or reliable because they require manual inputs to cope with changing cell features. To address these issues, we present a fully automated, adaptive, and robust feature-based cell tracking algorithm for the accurate detection and tracking of cells in time-lapse images. Our algorithm tackles measurement limitations in two ways. First, we use Hessian filtering and adaptive thresholding to detect the cells in images, overcoming spatial feature variations among the existing cells without manually changing the input thresholds. Second, cell feature parameters are measured, including position, diameter, mean intensity, area, and orientation, and these parameters are used simultaneously to track the cells accurately between subsequent frames, even under poor temporal resolution. Our technique achieved a minimum of 92% detection and tracking accuracy, compared to 16% for Mosaic and TrackMate. Our improved method allows for extended tracking and characterization of heterogeneous cell behavior, which is of particular interest to intravital imaging users.
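A minimal sketch of the two stages as described (filter scale, block size, and cost weights are assumed, and the feature set here is reduced; this is not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from skimage.feature import hessian_matrix, hessian_matrix_eigvals
from skimage.filters import threshold_local
from skimage.measure import label, regionprops

def detect(frame, sigma=3.0, block=51):
    """Hessian-based blob response with a local (adaptive) threshold."""
    response = -hessian_matrix_eigvals(hessian_matrix(frame, sigma=sigma))[-1]
    mask = response > threshold_local(response, block)
    return regionprops(label(mask), intensity_image=frame)

def link(props_a, props_b, w=(1.0, 0.05, 0.01)):
    """Hungarian matching over a combined position/size/intensity cost."""
    fa = np.array([(*p.centroid, p.equivalent_diameter, p.mean_intensity)
                   for p in props_a])
    fb = np.array([(*p.centroid, p.equivalent_diameter, p.mean_intensity)
                   for p in props_b])
    cost = (w[0] * np.linalg.norm(fa[:, None, :2] - fb[None, :, :2], axis=-1)
            + w[1] * np.abs(fa[:, None, 2] - fb[None, :, 2])
            + w[2] * np.abs(fa[:, None, 3] - fb[None, :, 3]))
    return linear_sum_assignment(cost)   # index pairs between the two frames
```

Using several features jointly in the cost is what keeps the assignment stable when any single feature (e.g., position under poor temporal resolution) becomes unreliable.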
Affiliation(s)
- Brian H Jun
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Adib Ahmadzadegan
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Arezoo M Ardekani
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Luis Solorio
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Purdue Center for Cancer Research, Purdue University, West Lafayette, IN, USA
- Pavlos P Vlachos
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
6. Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. [PMID: 36857945] [DOI: 10.1016/j.media.2023.102768]
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model the thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience, where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons that come paired with their ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use two synthetic FM datasets and two newly acquired FM datasets of retinal neurons.
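A minimal sketch of the texture statistic such a discriminator can operate on: the Gram matrix of feature maps, which captures channel correlations (style) rather than spatial layout (the feature extractor below is a stand-in, not the paper's network):

```python
import torch
import torch.nn as nn

def gram_matrix(feat):
    """feat: (B, C, H, W) feature maps -> (B, C, C) channel correlations."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # normalized Gram matrix

conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)  # stand-in feature extractor
slices = torch.randn(2, 1, 64, 64)                 # batch of FM image slices
style = gram_matrix(conv(slices))                  # (2, 32, 32) style statistic
```

Scoring these statistics instead of raw pixels is what makes a discriminator sensitive to thin, stochastic textures.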
Affiliation(s)
- Mihael Cudic
- National Institutes of Health Oxford-Cambridge Scholars Program, USA
- National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond
- National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble
- Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
7. Hradecka L, Wiesner D, Sumbal J, Koledova ZS, Maska M. Segmentation and tracking of mammary epithelial organoids in brightfield microscopy. IEEE Trans Med Imaging 2023; 42:281-290. [PMID: 36170389] [DOI: 10.1109/tmi.2022.3210714]
Abstract
We present an automated, deep-learning-based workflow to quantitatively analyze the spatiotemporal development of mammary epithelial organoids in two-dimensional time-lapse (2D+t) sequences acquired at high resolution using a brightfield microscope. It involves a convolutional neural network (U-Net), purposely trained on computer-generated bioimage data created by a conditional generative adversarial network (pix2pixHD), to infer semantic segmentation; adaptive morphological filtering to identify organoid instances; and a shape-similarity-constrained, instance-segmentation-correcting tracking procedure to reliably follow the organoid instances of interest over time. By validating the workflow on real 2D+t sequences of mouse mammary epithelial organoids of morphologically different phenotypes, we clearly demonstrate that it achieves reliable segmentation and tracking performance, providing a reproducible and labor-saving alternative to manual analysis of the acquired bioimage data.
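A minimal sketch of shape-similarity-constrained linking in the spirit described above (Hu moments stand in for the unspecified shape descriptor, and both gating thresholds are assumed values):

```python
import numpy as np
from skimage.measure import label, regionprops

def link_by_shape(mask_a, mask_b, max_dist=30.0, max_shape_dist=0.5):
    """Link instances across frames; accept the nearest match only if shapes agree."""
    pa, pb = regionprops(label(mask_a)), regionprops(label(mask_b))
    pairs = []
    for a in pa:
        if not pb:
            break
        d = [np.linalg.norm(np.subtract(a.centroid, b.centroid)) for b in pb]
        j = int(np.argmin(d))
        # log-scaled Hu moments give a scale/rotation-invariant shape distance
        sd = np.linalg.norm(np.log(np.abs(a.moments_hu) + 1e-12)
                            - np.log(np.abs(pb[j].moments_hu) + 1e-12))
        if d[j] < max_dist and sd < max_shape_dist:
            pairs.append((a.label, pb[j].label))
    return pairs
```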
8. Benfenati A. upU-Net approaches for background emission removal in fluorescence microscopy. J Imaging 2022; 8:142. [PMID: 35621906] [PMCID: PMC9146274] [DOI: 10.3390/jimaging8050142]
Abstract
The physical process underlying microscopy imaging suffers from several issues: among them are the blurring effect due to the point spread function and the presence of Gaussian or Poisson noise, or even a mixture of the two. In addition, auto-fluorescence introduces further artifacts into the recorded image, and such fluorescence can be a serious obstacle to correctly recognizing objects and organisms; particle tracking, for example, may suffer from this kind of perturbation. The objective of this work is to employ deep learning techniques, in the form of U-Net-like architectures, for background emission removal. The fluorescence is modeled by Perlin noise, which proves to be a suitable candidate for simulating this phenomenon. The proposed architecture succeeds in removing the fluorescence and, at the same time, acts as a denoiser for both Gaussian and Poisson noise. The performance of this approach is further assessed on actual microscopy images and by employing the restored images for particle recognition.
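A minimal sketch of 2D Perlin-style gradient noise, usable for simulating such a smooth background field (grid resolution and amplitude are editor's choices, not the paper's values):

```python
import numpy as np

def perlin2d(shape, res=8, rng=None):
    """shape: output (H, W); res: number of gradient cells per axis."""
    rng = rng or np.random.default_rng()
    h, w = shape
    ang = rng.uniform(0, 2 * np.pi, (res + 1, res + 1))
    grad = np.stack([np.cos(ang), np.sin(ang)], axis=-1)  # lattice gradients
    y, x = np.mgrid[0:res:h * 1j, 0:res:w * 1j]
    yi = y.astype(int).clip(0, res - 1)
    xi = x.astype(int).clip(0, res - 1)
    yf, xf = y - yi, x - xi

    def corner(dy, dx):  # dot(gradient at corner, offset from that corner)
        g = grad[yi + dy, xi + dx]
        return g[..., 0] * (xf - dx) + g[..., 1] * (yf - dy)

    fade = lambda t: 6 * t**5 - 15 * t**4 + 10 * t**3     # smooth interpolant
    u, v = fade(xf), fade(yf)
    top = corner(0, 0) * (1 - u) + corner(0, 1) * u
    bot = corner(1, 0) * (1 - u) + corner(1, 1) * u
    return top * (1 - v) + bot * v

# A corrupted observation can then be simulated as
# image + amplitude * perlin2d(image.shape) + Gaussian/Poisson noise.
```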
Affiliation(s)
- Alessandro Benfenati
- Environmental Science and Policy Department, Università degli Studi di Milano, Via Celoria 2, 20133 Milan, Italy
- Gruppo Nazionale Calcolo Scientifico, Istituto Nazionale di Alta Matematica, P.le Aldo Moro 5, 00185 Rome, Italy
9. Lutton EJ, Collier S, Bretschneider T. A curvature-enhanced random walker segmentation method for detailed capture of 3D cell surface membranes. IEEE Trans Med Imaging 2021; 40:514-526. [PMID: 33052849] [DOI: 10.1109/tmi.2020.3031029]
Abstract
High-resolution 3D microscopy is a fast-advancing field and requires new image analysis techniques to handle these new datasets. In this work, we focus on detailed 3D segmentation of Dictyostelium cells undergoing macropinocytosis captured on an iSPIM microscope. We propose a novel random walker-based method with a curvature-based enhancement term, with the aim of capturing fine protrusions, such as filopodia, and deep invaginations, such as macropinocytotic cups, on the cell surface. We tested our method on both real and synthetic 3D image volumes, demonstrating that the inclusion of the curvature enhancement term can improve the segmentation of the aforementioned features. We show that our method performs better than other state-of-the-art segmentation methods on 3D images of Dictyostelium cells, and performs competitively against CNN-based methods on two Cell Tracking Challenge datasets, demonstrating the ability to obtain accurate segmentations without requiring large training datasets. We also present an automated seeding method for microscopy data which, combined with the curvature-enhanced random walker method, enables the segmentation of large time series with minimal input from the experimenter.
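For reference, the baseline random walker (without the paper's curvature term, which is not in the library) is available in scikit-image; a minimal usage sketch on a stand-in volume:

```python
import numpy as np
from skimage.segmentation import random_walker

volume = np.random.rand(64, 64, 64)           # stand-in for a 3D microscopy stack
seeds = np.zeros_like(volume, dtype=np.uint8)
seeds[32, 32, 32] = 1                          # label 1: cell-interior seed
seeds[0, 0, 0] = 2                             # label 2: background seed
labels = random_walker(volume, seeds, beta=130)   # zero entries are solved for
```

The paper's contributions are an extra curvature-based enhancement term in this formulation plus automated seeding, so real use would replace both the stand-in data and the manual seeds.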
10. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods 2021; 18:203-211. [PMID: 33288961] [DOI: 10.1038/s41592-020-01008-z]
Abstract
Biomedical imaging is a driver of scientific discovery and a core component of medical care and is being stimulated by the field of deep learning. While semantic segmentation algorithms enable image analysis and quantification in many applications, the design of respective specialized solutions is non-trivial and highly dependent on dataset properties and hardware conditions. We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training and post-processing for any new task. The key design choices in this process are modeled as a set of fixed parameters, interdependent rules and empirical decisions. Without manual intervention, nnU-Net surpasses most existing approaches, including highly specialized solutions on 23 public datasets used in international biomedical segmentation competitions. We make nnU-Net publicly available as an out-of-the-box tool, rendering state-of-the-art segmentation accessible to a broad audience by requiring neither expert knowledge nor computing resources beyond standard network training.
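Purely as an illustration of the flavor of such self-configuration (this is not nnU-Net's actual rule set; the rules and budget below are invented for the sketch), dataset properties can deterministically fix hyperparameters under a hardware budget:

```python
import numpy as np

def configure(median_shape, spacing, gpu_voxel_budget=128**3):
    """Toy fingerprint-driven configuration: spacing, patch and batch size."""
    target_spacing = np.full(3, np.median(spacing))   # resample target (toy rule)
    patch = np.minimum(np.array(median_shape), 16)
    # grow the patch toward the median image shape until the budget is reached
    while patch.prod() * 2 <= gpu_voxel_budget and (patch < median_shape).any():
        patch = np.minimum(patch + 16, np.array(median_shape))
    batch = max(2, int(gpu_voxel_budget // patch.prod()))
    return {"target_spacing": target_spacing.tolist(),
            "patch_size": patch.tolist(),
            "batch_size": batch}

print(configure(median_shape=(155, 240, 240), spacing=(1.0, 1.0, 1.0)))
```

nnU-Net itself chains many such interdependent rules, covering preprocessing, architecture, training, and post-processing, and is distributed as an out-of-the-box tool.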
Affiliation(s)
- Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Faculty of Biosciences, University of Heidelberg, Heidelberg, Germany
- Paul F Jaeger
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Simon A A Kohl
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- DeepMind, London, UK
- Jens Petersen
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Faculty of Physics & Astronomy, University of Heidelberg, Heidelberg, Germany
- Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
Collapse
|
11. Baniukiewicz P, Lutton EJ, Collier S, Bretschneider T. Generative adversarial networks for augmenting training data of microscopic cell images. Front Comput Sci 2019. [DOI: 10.3389/fcomp.2019.00010]
12. Wiesner D, Svoboda D, Maška M, Kozubek M. CytoPacq: a web-interface for simulating multi-dimensional cell imaging. Bioinformatics 2019; 35:4531-4533. [PMID: 31114843] [PMCID: PMC6821329] [DOI: 10.1093/bioinformatics/btz417]
Abstract
Motivation: Objective assessment of bioimage analysis methods is an essential step towards understanding their robustness and parameter sensitivity, calling for the availability of heterogeneous bioimage datasets accompanied by their reference annotations. Because manual annotations are known to be arduous, highly subjective, and barely reproducible, numerous simulators have emerged over past decades, generating synthetic bioimage datasets complemented with inherent reference annotations. However, the installation and configuration of these tools generally constitutes a barrier to their widespread use.
Results: We present a modern, modular web-interface, CytoPacq, to facilitate the generation of synthetic benchmark datasets relevant for multi-dimensional cell imaging. CytoPacq provides a user-friendly graphical interface with contextual tooltips and currently offers straightforward, self-contained access to various cell simulation systems for fluorescence microscopy that have already been recognized and used by the scientific community.
Availability and implementation: CytoPacq is a publicly available online service running at https://cbia.fi.muni.cz/simulator. More information about it, as well as examples of generated bioimage datasets, is available directly through the web-interface.
Supplementary information: Supplementary data are available at Bioinformatics online.
Affiliation(s)
- David Wiesner
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- David Svoboda
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- Martin Maška
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
- Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, Brno CZ-60200, Czech Republic
13. Krewcun C, Sarry L, Combaret N, Pery E. Fast simulation of stent deployment with plastic beam elements. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:6968-6974. [PMID: 31947442] [DOI: 10.1109/embc.2019.8857179]
Abstract
Coronary stent deployment is a standard cardiology intervention, used to treat atherosclerosis and prevent heart attacks. The outcome of the intervention depends strongly on the accuracy of stent apposition, which could benefit from per-operative prediction tools. In this paper, we propose a fast and mechanically realistic 3D simulation of coronary stent expansion. Our simulation relies on the finite element method and involves serially linked beam elements to model the slender geometry of a stent. The elements are implemented with a non-linear elasto-plastic behavior, realistically describing the complex deformation of a balloon-expandable stent. As a proof of concept, we simulated the free expansion of a coronary stent. The simulation output was compared with micro-CT data acquired experimentally during the device expansion. Results show that the plastic beam model successfully reproduces the final geometry of the stent. In addition, the use of 1D elements achieves a significantly lower computational time than equivalent simulations in the literature based on 3D elements. This preliminary work highlights the compatibility of our method with clinical routine in terms of execution time. Further developments include the application of the method to more advanced simulation scenarios, with the addition of a personalized artery model.
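The elasto-plastic ingredient can be illustrated with the classic 1D return-mapping update that such beam fibers use (material constants here are illustrative, not the stent alloy's):

```python
def stress_update(strain, eps_p, alpha, E=200e3, H=2e3, sigma_y=300.0):
    """One strain increment of 1D plasticity with linear isotropic hardening.

    Returns (stress, updated plastic strain, updated hardening variable);
    units are MPa-like and purely illustrative.
    """
    sigma_trial = E * (strain - eps_p)              # elastic predictor
    f = abs(sigma_trial) - (sigma_y + H * alpha)    # yield function
    if f <= 0:
        return sigma_trial, eps_p, alpha            # step stays elastic
    dgamma = f / (E + H)                            # plastic corrector
    sign = 1.0 if sigma_trial >= 0 else -1.0
    return (sigma_trial - E * dgamma * sign,
            eps_p + dgamma * sign,
            alpha + dgamma)

# Sweeping the strain past yield reproduces the elastic slope E followed by
# the hardening branch, the behavior the beam elements encode along the stent.
eps_p = alpha = 0.0
for step in range(1, 6):
    sigma, eps_p, alpha = stress_update(0.001 * step, eps_p, alpha)
```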
14. Castilla C, Maska M, Sorokin DV, Meijering E, Ortiz-de-Solorzano C. 3D quantification of filopodia in motile cancer cells. IEEE Trans Med Imaging 2019; 38:862-872. [PMID: 30296215] [DOI: 10.1109/tmi.2018.2873842]
Abstract
We present a 3D bioimage analysis workflow to quantitatively analyze single, actin-stained cells with filopodial protrusions of diverse structural and temporal attributes, such as number, length, thickness, level of branching, and lifetime, in time-lapse confocal microscopy image data. Our workflow makes use of convolutional neural networks, trained using real as well as synthetic image data, to segment the cell volumes with highly heterogeneous fluorescence intensity levels and to detect individual filopodial protrusions, followed by a constrained nearest-neighbor tracking algorithm to obtain valuable information about the spatio-temporal evolution of individual filopodia. We validated the workflow using real and synthetic 3D time-lapse sequences of lung adenocarcinoma cells of three morphologically distinct filopodial phenotypes and show that it achieves reliable segmentation and tracking performance, providing a robust, reproducible, and less time-consuming alternative to manual analysis of the 3D+t image data.
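A minimal sketch of constrained nearest-neighbor linking of detected filopodium tips across frames (the gating radius is an assumed value, not the paper's):

```python
import numpy as np
from scipy.spatial import cKDTree

def link_tips(tips_prev, tips_next, max_dist=5.0):
    """tips_*: (N, 3) arrays of 3D tip coordinates -> list of index pairs."""
    tree = cKDTree(tips_next)
    dist, idx = tree.query(tips_prev, distance_upper_bound=max_dist)
    # query returns an infinite distance for tips with no neighbor in range
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dist, idx))
            if np.isfinite(d)]
```

Resolving many-to-one conflicts (two tips claiming the same successor) would need an extra step, e.g., keeping only the closest claimant.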