1. Zhao Y, Chen KL, Shen XY, Li MK, Wan YJ, Yang C, Yu RJ, Long YT, Yan F, Ying YL. HFM-Tracker: a cell tracking algorithm based on hybrid feature matching. Analyst 2024;149:2629-2636. doi: 10.1039/d4an00199k. PMID: 38563459.
Abstract
Cell migration is a fundamental biological process that plays an essential role in development, homeostasis, and disease. This paper introduces a cell tracking algorithm named HFM-Tracker (Hybrid Feature Matching Tracker) that automatically identifies cell migration behaviors in consecutive images. It combines Contour Attention (CA) and Adaptive Confusion Matrix (ACM) modules to accurately capture cell contours in each image and track the dynamic behaviors of migrating cells in the field of view. Cells are first located and identified by the CA module-based cell detection network, and then associated and tracked by a cell tracking algorithm employing a hybrid feature-matching strategy. The proposed HFM-Tracker performs strongly in cell detection and tracking, achieving 75% MOTA (Multiple Object Tracking Accuracy) and 65% IDF1 (ID F1 score). It provides quantitative analysis of cell morphology and migration features, which can further help in understanding complicated and diverse cell migration processes.
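
For readers unfamiliar with the two reported metrics, the sketch below shows how MOTA and IDF1 are typically computed from accumulated matching counts. It illustrates the standard metric definitions only; the counts are hypothetical and the code is not taken from the HFM-Tracker paper.

```python
# Generic MOTA / IDF1 computation from matching counts (illustrative only).

def mota(num_gt: int, false_negatives: int, false_positives: int, id_switches: int) -> float:
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / total ground-truth objects."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_gt

def idf1(idtp: int, idfp: int, idfn: int) -> float:
    """ID F1 score: harmonic mean of ID precision and ID recall."""
    return 2.0 * idtp / (2.0 * idtp + idfp + idfn)

# Hypothetical counts accumulated over a tracked image sequence.
print(f"MOTA = {mota(num_gt=400, false_negatives=60, false_positives=30, id_switches=10):.2f}")
print(f"IDF1 = {idf1(idtp=260, idfp=120, idfn=140):.2f}")
```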
Affiliation(s)
- Yan Zhao: School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China.
- Ke-Le Chen: School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
- Xin-Yu Shen: School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, China.
- Ming-Kang Li: School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
- Yong-Jing Wan: School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China.
- Cheng Yang: School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, China.
- Ru-Jia Yu: School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
- Yi-Tao Long: School of Chemistry and Chemical Engineering, Molecular Sensing and Imaging Center (MSIC), Nanjing University, Nanjing 210023, P. R. China.
- Feng Yan: School of Electronic Sciences and Engineering, Nanjing University, Nanjing 210023, China.
- Yi-Lun Ying: School of Information Science and Engineering, East China University of Science and Technology, 130 Meilong Road, 200237 Shanghai, P. R. China; Chemistry and Biomedicine Innovation Center, Nanjing University, Nanjing 210023, P. R. China.
2. Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024. doi: 10.1038/s41592-024-02233-6. PMID: 38532015.
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyper-parameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark, comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only exceeds existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
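
Benchmarks of this kind typically score a predicted label image against a reference by matching instances at an intersection-over-union (IoU) threshold and reporting an instance-level F1 score. The sketch below is a minimal, generic version of that idea on NumPy label masks; it is not the challenge's official evaluation code.

```python
import numpy as np

def instance_f1(gt: np.ndarray, pred: np.ndarray, iou_thr: float = 0.5) -> float:
    """Greedy one-to-one matching of labelled instances at an IoU threshold."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    matched_pred = set()
    tp = 0
    for g in gt_ids:
        g_mask = gt == g
        best_iou, best_p = 0.0, None
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred == p
            union = np.logical_or(g_mask, p_mask).sum()
            iou = np.logical_and(g_mask, p_mask).sum() / union if union else 0.0
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou >= iou_thr:
            tp += 1
            matched_pred.add(best_p)
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

# Toy two-cell example: one instance recovered, one missed.
gt = np.array([[1, 1, 0], [0, 0, 2], [0, 0, 2]])
pred = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
print(instance_f1(gt, pred))  # 0.667
```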
Affiliation(s)
- Jun Ma: Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada.
- Ronald Xie: Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada.
- Shamini Ayyadhury: Donnelly Centre, University of Toronto, Toronto, Ontario, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada.
- Cheng Ge: School of Medicine and Pharmacy, Ocean University of China, Qingdao, China.
- Anubha Gupta: Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India.
- Ritu Gupta: Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India.
- Song Gu: Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China.
- Yao Zhang: Shanghai Artificial Intelligence Laboratory, Shanghai, China.
- Gihun Lee, Joonkee Kim: Graduate School of AI, KAIST, Seoul, South Korea.
- Wei Lou: Shenzhen Research Institute of Big Data, Shenzhen, China; Chinese University of Hong Kong (Shenzhen), Shenzhen, China.
- Haofeng Li: Shenzhen Research Institute of Big Data, Shenzhen, China.
- Eric Upschulte: Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany.
- Timo Dickscheid: Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany; Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany.
- José Guilherme de Almeida: European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK; Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal.
- Yixin Wang: Department of Bioengineering, Stanford University, Palo Alto, CA, USA.
- Lin Han: Tandon School of Engineering, New York University, New York, NY, USA.
- Xin Yang: School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China.
- Marco Labagnara, Vojislav Gligorovski, Maxime Scheder, Sahand Jamal Rahi: Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.
- Carly Kempster, Alice Pollitt: School of Biological Sciences, University of Reading, Reading, UK.
- Leon Espinosa, Tâm Mignot: Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France.
- Jan Moritz Middeke, Jan-Niklas Eckardt: Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany; Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany.
- Wangkai Li: Department of Automation, University of Science and Technology of China, Hefei, China.
- Zhaoyang Li: Institute of Advanced Technology, University of Science and Technology of China, Hefei, China.
- Xiaochen Cai: Department of Computer Science and Technology, Nanjing University, Nanjing, China.
- Bizhe Bai: School of EECS, The University of Queensland, Brisbane, Queensland, Australia.
- David Van Valen: Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA; Howard Hughes Medical Institute, Chevy Chase, MD, USA.
- Erin Weisbart, Beth A Cimini: Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA.
- Trevor Cheung: Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada.
- Oscar Brück: Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland; Department of Oncology, University of Helsinki, Helsinki, Finland.
- Gary D Bader: Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada; Donnelly Centre, University of Toronto, Toronto, Ontario, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada; CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada.
- Bo Wang: Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; UHN AI Hub, University Health Network, Toronto, Ontario, Canada.
3. Duque-Vazquez EF, Sanchez-Yanez RE, Saldaña-Robles N, León-Galván MF, Cepeda-Negrete J. HeLa cell segmentation using digital image processing. Heliyon 2024;10:e26520. doi: 10.1016/j.heliyon.2024.e26520. PMID: 38434298; PMCID: PMC10907640.
Abstract
Computational cell segmentation is a vital area of research, particularly in the analysis of images of cancer cells. The use of cell lines, such as the widely utilized HeLa cell line, is crucial for studying cancer. While deep learning algorithms have been commonly employed for cell segmentation, their resource and data requirements can be impractical for many laboratories. In contrast, image processing algorithms provide a promising alternative due to their effectiveness and minimal resource demands. This article presents the development of an algorithm utilizing digital image processing to segment the nucleus and shape of HeLa cells. The research aims to segment the cell shape in the image center and accurately identify the nucleus. The study uses and processes 300 images obtained from Serial Block-Face Scanning Electron Microscopy (SBF-SEM). For cell segmentation, the morphological operation of erosion was used to separate the cells, and through distance calculation, the cell located at the center of the image was selected. Subsequently, the eroded shape was employed to restore the original cell shape. The nucleus segmentation uses parameters such as distances and sizes, along with the implementation of verification stages to ensure accurate detection. The accuracy of the algorithm is demonstrated by comparing it with another algorithm meeting the same conditions, using four segmentation similarity metrics. The evaluation results rank the proposed algorithm as the superior choice, highlighting significant outcomes. The algorithm developed represents a crucial initial step towards more accurate disease analysis. In addition, it enables the measurement of shapes and the identification of morphological alterations, damages, and changes in organelles within the cell, which can be vital for diagnostic purposes.
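
As a rough illustration of the classical operations described (erosion to separate touching cells, selection of the cell closest to the image centre, and morphological reconstruction of its original shape), the scikit-image sketch below operates on a binary cell mask. The erosion radius and the toy input are placeholder assumptions, not the published algorithm's settings.

```python
import numpy as np
from skimage import measure, morphology

def center_cell_mask(binary_cells: np.ndarray, erosion_radius: int = 5) -> np.ndarray:
    """Return the mask of the cell whose centroid lies closest to the image centre."""
    # Erode so that touching cells fall apart into separate components.
    eroded = morphology.binary_erosion(binary_cells, morphology.disk(erosion_radius))
    labels = measure.label(eroded)
    if labels.max() == 0:
        return np.zeros_like(binary_cells, dtype=bool)
    centre = np.array(binary_cells.shape) / 2.0
    props = measure.regionprops(labels)
    best = min(props, key=lambda r: np.linalg.norm(np.array(r.centroid) - centre))
    seed = (labels == best.label).astype(np.uint8)
    # Morphological reconstruction restores the original (pre-erosion) cell shape.
    restored = morphology.reconstruction(seed, binary_cells.astype(np.uint8), method='dilation')
    return restored.astype(bool)

# Toy mask with two separate blobs; the one nearer the centre is returned.
toy = np.zeros((64, 64), dtype=bool)
toy[20:44, 10:30] = True
toy[20:44, 34:54] = True
print(center_cell_mask(toy).sum())
```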
Affiliation(s)
- Edgar F Duque-Vazquez: Universidad de Guanajuato DICIVA, Ex Hacienda El Copal km 9, carretera Irapuato-Silao, A.P. 311, Irapuato, 36500 Guanajuato, Mexico.
- Raul E Sanchez-Yanez: Universidad de Guanajuato DICIS, Carretera Salamanca - Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, Salamanca, 36885 Guanajuato, Mexico.
- Noe Saldaña-Robles, Ma Fabiola León-Galván, Jonathan Cepeda-Negrete: Universidad de Guanajuato DICIVA, Ex Hacienda El Copal km 9, carretera Irapuato-Silao, A.P. 311, Irapuato, 36500 Guanajuato, Mexico.
4. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: volumetric segmentation of dense cell populations with a cascade of deep neural networks in bacterial biofilm applications. Expert Syst Appl 2024;238:122094. doi: 10.1016/j.eswa.2023.122094. PMID: 38646063; PMCID: PMC11027476.
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying the cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite the success of these techniques, many fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose a novel 3D cell segmentation approach DeepSeeded, a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances the cell interior and border information using Euclidean distance transforms and detects the cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances from a dense, intensity-inhomogeneous microscopy image volume. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population called bacterial biofilms. Experimental results on synthetic and two real biofilm datasets suggest that the proposed method leads to superior segmentation results when compared to state-of-the-art deep learning methods and a classical method.
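
The final stage described, a classical seeded watershed, can be sketched with scikit-image as follows. Here the seeds come from distance-transform maxima as a stand-in for the CNN-predicted seeds, so the snippet illustrates only the watershed step under assumptions that are not the paper's.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def seeded_watershed(binary_volume: np.ndarray, min_seed_distance: int = 5) -> np.ndarray:
    """Split touching foreground objects with a watershed grown from seed markers."""
    # Euclidean distance to the background; ridges between objects become watershed lines.
    distance = ndi.distance_transform_edt(binary_volume)
    # Stand-in seed detection: local maxima of the distance map
    # (DeepSeeded instead predicts seeds with a cascade of CNNs).
    peaks = peak_local_max(distance, min_distance=min_seed_distance)
    markers = np.zeros(binary_volume.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_volume)

# Toy 3D volume with two touching spheres.
zz, yy, xx = np.mgrid[:40, :40, :40]
vol = ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 14) ** 2 < 81) | \
      ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 26) ** 2 < 81)
labels = seeded_watershed(vol)
print(labels.max())  # typically 2 separated objects
```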
Affiliation(s)
- Tanjin Taher Toma: Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, 22904, Virginia, USA.
- Yibo Wang: Department of Chemistry, University of Virginia, Charlottesville, 22904, Virginia, USA.
- Andreas Gahlmann: Department of Chemistry, University of Virginia, Charlottesville, 22904, Virginia, USA; Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, 22903, Virginia, USA.
- Scott T. Acton: Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, 22904, Virginia, USA.
5. Wan Z, Li M, Wang Z, Tan H, Li W, Yu L, Samuel DJ. CellT-Net: a composite transformer method for 2-D cell instance segmentation. IEEE J Biomed Health Inform 2024;28:730-741. doi: 10.1109/jbhi.2023.3265006. PMID: 37023158.
Abstract
Cell instance segmentation (CIS) via light microscopy and artificial intelligence (AI) is essential to cell and gene therapy-based health care management, which offers the hope of revolutionary health care. An effective CIS method can help clinicians to diagnose neurological disorders and quantify how well these deadly disorders respond to treatment. To address the CIS task challenged by dataset characteristics such as irregular morphology, variation in sizes, cell adhesion, and obscure contours, we propose a novel deep learning model named CellT-Net to actualize effective cell instance segmentation. In particular, the Swin transformer (Swin-T) is used as the basic model to construct the CellT-Net backbone, as the self-attention mechanism can adaptively focus on useful image regions while suppressing irrelevant background information. Moreover, CellT-Net incorporating Swin-T constructs a hierarchical representation and generates multi-scale feature maps that are suitable for detecting and segmenting cells at different scales. A novel composite style named cross-level composition (CLC) is proposed to build composite connections between identical Swin-T models in the CellT-Net backbone and generate more representational features. The earth mover's distance (EMD) loss and binary cross entropy loss are used to train CellT-Net and actualize the precise segmentation of overlapped cells. The LiveCELL and Sartorius datasets are utilized to validate the model effectiveness, and the results demonstrate that CellT-Net can achieve better model performance for dealing with the challenges arising from the characteristics of cell datasets than state-of-the-art models.
6. Gogoberidze N, Cimini BA. Defining the boundaries: challenges and advances in identifying cells in microscopy images. Curr Opin Biotechnol 2024;85:103055. doi: 10.1016/j.copbio.2023.103055. PMID: 38142646.
Abstract
Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely varying test data as well as efficiency and usability. Increased attention on documentation, sharing, and evaluation standards is leading to increased user-friendliness and acceleration toward the goal of a truly universal method.
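
As a pointer for readers who want to try one of the specialist models mentioned, the snippet below shows a typical way to run Cellpose on a single 2D image. It assumes the Cellpose 2.x Python API (`pip install cellpose`); check the current documentation, since the interface may change between releases.

```python
import numpy as np
from cellpose import models  # assumes the Cellpose 2.x API

# Load or acquire a 2D single-channel image; random noise is used here as a stand-in.
img = np.random.rand(256, 256)

model = models.Cellpose(gpu=False, model_type="cyto")   # generalist cytoplasm model
masks, flows, styles, diams = model.eval(
    img,
    diameter=None,     # None lets Cellpose estimate the cell diameter
    channels=[0, 0],   # [cytoplasm, nucleus]; 0 = grayscale, no nuclear channel
)
print("detected instances:", masks.max())
```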
Affiliation(s)
- Beth A Cimini: Imaging Platform, Broad Institute, Cambridge, MA 02142, USA.
7. Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A weakly supervised learning method for cell detection and tracking using incomplete initial annotations. Int J Mol Sci 2023;24:16028. doi: 10.3390/ijms242216028. PMID: 38003217; PMCID: PMC10670924.
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, routine microscopy images with cells, which are taken during the process whereby constant division and differentiation occur, are notoriously difficult to detect due to changes in their appearance and number. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require many manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate such tiresome and labor-intensive costs, we propose a novel weakly supervised learning cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels were updated iteratively by combining detection and tracking results to obtain a model with better robustness. Our method was evaluated using two fields of the iPS cell dataset, along with the cell detection accuracy (DET) evaluation metric from the Cell Tracking Challenge (CTC) initiative, and it achieved 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested using the public dataset FluoN2DH-GOWT1, which was taken from CTC; this contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled data to simulate the initial annotations on the public dataset. After training the model on the two datasets, with labels that comprise 10% cell markers, the DET improved from 0.130 to 0.903 and 0.116 to 0.877. When trained with labels that comprise 60% cell markers, the performance was better than the model trained using the supervised learning method. This outcome indicates that the model's performance improved as the quality of the labels used for training increased.
Affiliation(s)
- Hao Wu: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
- Jovial Niyogisubizo, Keliang Zhao: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; University of Chinese Academy of Sciences, Beijing 100049, China.
- Jintao Meng, Wenhui Xi: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
- Hongchang Li: Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
- Yi Pan: College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
- Yanjie Wei: Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China.
8. Eddy CZ, Naylor A, Cunningham CT, Sun B. Facilitating cell segmentation with the projection-enhancement network. Phys Biol 2023;20. doi: 10.1088/1478-3975/acfe53. PMID: 37769666; PMCID: PMC10586931.
Abstract
Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations in microscopy systems or efforts to prevent phototoxicity commonly require recording sub-optimally sampled data that greatly reduces the utility of such 3D data, especially in crowded sample space with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for cell morphology and easier to annotate. In this work, we propose the projection enhancement network (PEN), a novel convolutional module which processes the sub-sampled 3D data and produces a 2D RGB semantic compression, and is trained in conjunction with an instance segmentation network of choice to produce 2D segmentations. Our approach combines augmentation to increase cell density using a low-density cell image dataset to train PEN, and curated datasets to evaluate PEN. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance in comparison to maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks like Mask-RCNN. Finally, we dissect the segmentation strength against cell density of PEN with CellPose on disseminated cells from side-by-side spheroids. We present PEN as a data-driven solution to form compressed representations of 3D data that improve 2D segmentations from instance segmentation networks.
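
For context, the baseline PEN is compared against, a maximum intensity projection (MIP), collapses the axial dimension by keeping the brightest voxel along z and therefore discards depth. The sketch below shows that baseline plus a naive fixed three-slab projection for contrast; it is an illustration of the comparison, not the PEN module, which learns its compression with convolutions.

```python
import numpy as np

def max_intensity_projection(volume: np.ndarray) -> np.ndarray:
    """Collapse a (z, y, x) stack to 2D by taking the brightest voxel along z."""
    return volume.max(axis=0)

def three_slab_projection(volume: np.ndarray) -> np.ndarray:
    """Naive hand-crafted depth encoding: MIP of the top, middle and bottom thirds
    stacked as an RGB-like (y, x, 3) image. PEN learns such a compression instead."""
    thirds = np.array_split(volume, 3, axis=0)
    return np.stack([t.max(axis=0) for t in thirds], axis=-1)

volume = np.random.rand(15, 128, 128)          # toy under-sampled z-stack
print(max_intensity_projection(volume).shape)  # (128, 128)
print(three_slab_projection(volume).shape)     # (128, 128, 3)
```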
Affiliation(s)
- Austin Naylor: Oregon State University, Department of Physics, Corvallis, 97331, USA.
- Bo Sun: Oregon State University, Department of Physics, Corvallis, 97331, USA.
9. Nebbioso G, Yosief R, Koshkin V, Qiu Y, Peng C, Elisseev V, Krylov SN. Automated identification and tracking of cells in Cytometry of Reaction Rate Constant (CRRC). PLoS One 2023;18:e0282990. doi: 10.1371/journal.pone.0282990. PMID: 37399195.
Abstract
Cytometry of Reaction Rate Constant (CRRC) is a method for studying cell-population heterogeneity using time-lapse fluorescence microscopy, which allows one to follow reaction kinetics in individual cells. The current and only CRRC workflow utilizes a single fluorescence image to manually identify cell contours which are then used to determine fluorescence intensity of individual cells in the entire time-stack of images. This workflow is only reliable if cells maintain their positions during the time-lapse measurements. If the cells move, the original cell contours become unsuitable for evaluating intracellular fluorescence and the CRRC experiment will be inaccurate. The requirement of invariant cell positions during a prolonged imaging is impossible to satisfy for motile cells. Here we report a CRRC workflow developed to be applicable to motile cells. The new workflow combines fluorescence microscopy with transmitted-light microscopy and utilizes a new automated tool for cell identification and tracking. A transmitted-light image is taken right before every fluorescence image to determine cell contours, and cell contours are tracked through the time-stack of transmitted-light images to account for cell movement. Each unique contour is used to determine fluorescence intensity of cells in the associated fluorescence image. Next, time dependencies of the intracellular fluorescence intensities are used to determine each cell's rate constant and construct a kinetic histogram "number of cells vs rate constant." The new workflow's robustness to cell movement was confirmed experimentally by conducting a CRRC study of cross-membrane transport in motile cells. The new workflow makes CRRC applicable to a wide range of cell types and eliminates the influence of cell motility on the accuracy of results. Additionally, the workflow could potentially monitor kinetics of varying biological processes at the single-cell level for sizable cell populations. Although our workflow was designed ad hoc for CRRC, this cell-segmentation/cell-tracking strategy also represents an entry-level, user-friendly option for a variety of biological assays (i.e., migration, proliferation assays, etc.). Importantly, no prior knowledge of informatics (i.e., training a model for deep learning) is required.
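
The last analysis step described, turning each cell's intensity-versus-time trace into a rate constant and pooling the constants into a histogram, can be sketched as a per-cell exponential fit. The first-order kinetic model, parameter values, and synthetic data below are illustrative assumptions, not the exact model used in the CRRC workflow.

```python
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def first_order(t, amplitude, k, offset):
    """Assumed first-order kinetics: I(t) = A * exp(-k t) + offset."""
    return amplitude * np.exp(-k * t) + offset

def rate_constants(times, traces):
    """Fit one rate constant per cell from its intensity-versus-time trace."""
    ks = []
    for trace in traces:
        p0 = (trace[0] - trace[-1], 0.01, trace[-1])  # crude initial guess
        (_, k, _), _ = curve_fit(first_order, times, trace, p0=p0, maxfev=5000)
        ks.append(k)
    return np.array(ks)

# Synthetic example: 50 cells with heterogeneous rate constants plus noise.
rng = np.random.default_rng(0)
times = np.linspace(0, 300, 60)  # seconds
true_k = rng.normal(0.02, 0.005, size=50).clip(min=1e-3)
traces = [100 * np.exp(-k * times) + 10 + rng.normal(0, 1, times.size) for k in true_k]

k_fit = rate_constants(times, traces)
plt.hist(k_fit, bins=15)  # "number of cells vs rate constant"
plt.xlabel("rate constant (1/s)")
plt.ylabel("number of cells")
plt.savefig("crrc_histogram.png")
```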
Affiliation(s)
- Giammarco Nebbioso, Robel Yosief, Vasilij Koshkin: Department of Chemistry, York University, Toronto, Ontario, Canada; Centre for Research on Biomolecular Interactions, York University, Toronto, Ontario, Canada.
- Yumin Qiu, Chun Peng: Centre for Research on Biomolecular Interactions, York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada.
- Vadim Elisseev: IBM Research Europe, The Hartree Centre, Daresbury Laboratory, Warrington, United Kingdom; Wrexham Glyndwr University, Wrexham, United Kingdom.
- Sergey N Krylov: Department of Chemistry, York University, Toronto, Ontario, Canada; Centre for Research on Biomolecular Interactions, York University, Toronto, Ontario, Canada.
10. Schilling MP, Klinger L, Schumacher U, Schmelzer S, Lopez MB, Nestler B, Reischl M. AI2Seg: a method and tool for AI-based annotation inspection of biomedical instance segmentation datasets. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-6. doi: 10.1109/embc40787.2023.10341074. PMID: 38083322.
Abstract
In biomedical engineering, deep neural networks are commonly used for the diagnosis and assessment of diseases through the interpretation of medical images. The effectiveness of these networks relies heavily on the availability of annotated datasets for training. However, obtaining noise-free and consistent annotations from experts, such as pathologists, radiologists, and biologists, remains a significant challenge. One common task in clinical practice and biological imaging applications is instance segmentation. Yet there is currently a lack of methods and open-source tools for the automated inspection of biomedical instance segmentation datasets with respect to noisy annotations. To address this issue, we propose a novel deep learning-based approach for inspecting noisy annotations and provide an accompanying software implementation, AI2Seg, to facilitate its use by domain experts. The performance of the proposed algorithm is demonstrated on the medical MoNuSeg dataset and the biological LIVECell dataset.
11. Wu L, Chen A, Salama P, Winfree S, Dunn KW, Delp EJ. NISNet3D: three-dimensional nuclear synthesis and instance segmentation for fluorescence microscopy images. Sci Rep 2023;13:9533. doi: 10.1038/s41598-023-36243-9. PMID: 37308499.
Abstract
The primary step in tissue cytometry is the automated distinction of individual cells (segmentation). Since cell borders are seldom labeled, cells are generally segmented by their nuclei. While tools have been developed for segmenting nuclei in two dimensions, segmentation of nuclei in three-dimensional volumes remains a challenging task. The lack of effective methods for three-dimensional segmentation represents a bottleneck in the realization of the potential of tissue cytometry, particularly as methods of tissue clearing present the opportunity to characterize entire organs. Methods based on deep learning have shown enormous promise, but their implementation is hampered by the need for large amounts of manually annotated training data. In this paper, we describe 3D Nuclei Instance Segmentation Network (NISNet3D) that directly segments 3D volumes through the use of a modified 3D U-Net, 3D marker-controlled watershed transform, and a nuclei instance segmentation system for separating touching nuclei. NISNet3D is unique in that it provides accurate segmentation of even challenging image volumes using a network trained on large amounts of synthetic nuclei derived from relatively few annotated volumes, or on synthetic data obtained without annotated volumes. We present a quantitative comparison of results obtained from NISNet3D with results obtained from a variety of existing nuclei segmentation techniques. We also examine the performance of the methods when no ground truth is available and only synthetic volumes were used for training.
Affiliation(s)
- Liming Wu, Alain Chen: Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA.
- Paul Salama: Department of Electrical and Computer Engineering, Indiana University-Purdue University Indianapolis, Indianapolis, IN, 46202, USA.
- Seth Winfree: Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, 68198, USA.
- Kenneth W Dunn: School of Medicine, Indiana University, Indianapolis, IN, 46202, USA.
- Edward J Delp: Video and Image Processing Laboratory, School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, 47907, USA.
12. Jang J, Lee K, Kim TK. Unsupervised contour tracking of live cells by mechanical and cycle consistency losses. Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit 2023;2023:227-236. doi: 10.1109/cvpr52729.2023.00030. PMID: 38250674; PMCID: PMC10798679.
Abstract
Analyzing the dynamic changes of cellular morphology is important for understanding the various functions and characteristics of live cells, including stem cells and metastatic cancer cells. To this end, we need to track all points on the highly deformable cellular contour in every frame of live cell video. Local shapes and textures on the contour are not evident, and their motions are complex, often with expansion and contraction of local contour features. The prior arts for optical flow or deep point set tracking are unsuited due to the fluidity of cells, and previous deep contour tracking does not consider point correspondence. We propose the first deep learning-based tracking of cellular (or more generally viscoelastic materials) contours with point correspondence by fusing dense representation between two contours with cross attention. Since it is impractical to manually label dense tracking points on the contour, unsupervised learning comprised of the mechanical and cyclical consistency losses is proposed to train our contour tracker. The mechanical loss forcing the points to move perpendicular to the contour effectively helps out. For quantitative evaluation, we labeled sparse tracking points along the contour of live cells from two live cell datasets taken with phase contrast and confocal fluorescence microscopes. Our contour tracker quantitatively outperforms compared methods and produces qualitatively more favorable results. Our code and data are publicly available at https://github.com/JunbongJang/contour-tracking/.
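
To make the two training signals concrete, the PyTorch sketch below writes down one plausible form of the losses named in the abstract: a cycle-consistency term (tracking forward and then backward should return each point to its start) and a mechanical term penalising displacement components tangential to the contour, so that points move along the contour normal. This is a schematic reading of the abstract, not the authors' implementation.

```python
import torch

def cycle_consistency_loss(points_t, forward_offsets, backward_offsets):
    """Points mapped t -> t+1 -> t should land back on themselves."""
    round_trip = points_t + forward_offsets + backward_offsets
    return ((round_trip - points_t) ** 2).sum(dim=-1).mean()

def mechanical_loss(forward_offsets, contour_normals):
    """Penalise motion tangential to the contour, encouraging displacement along the normal."""
    normals = torch.nn.functional.normalize(contour_normals, dim=-1)
    normal_component = (forward_offsets * normals).sum(dim=-1, keepdim=True) * normals
    tangential_component = forward_offsets - normal_component
    return (tangential_component ** 2).sum(dim=-1).mean()

# Toy tensors: 100 contour points in 2D with random predicted offsets.
points = torch.rand(100, 2)
fwd = 0.1 * torch.randn(100, 2, requires_grad=True)
bwd = -fwd.detach() + 0.01 * torch.randn(100, 2)
normals = torch.randn(100, 2)

loss = cycle_consistency_loss(points, fwd, bwd) + 0.1 * mechanical_loss(fwd, normals)
loss.backward()
print(float(loss))
```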
Affiliation(s)
- Kwonmoo Lee: Boston Children's Hospital, Harvard Medical School.
13. Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023. doi: 10.1038/s41592-023-01879-y. PMID: 37202537; PMCID: PMC10333123.
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
Affiliation(s)
- Martin Maška: Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic.
- Vladimír Ulman: Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic; IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic.
- Pablo Delgado-Rodriguez: Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain.
- Estibaliz Gómez-de-Mariscal: Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain; Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal.
- Tereza Nečasová: Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic.
- Fidel A Guerrero Peña: Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil; Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA.
- Tsang Ing Ren: Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil.
- Elliot M Meyerowitz: Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA.
- Tim Scherr, Katharina Löffler, Ralf Mikut: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany.
- Tianqi Guo, Yin Wang, Jan P Allebach: The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA.
- Rina Bao: Boston Children's Hospital and Harvard Medical School, Boston, MA, USA; CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA.
- Noor M Al-Shakarji, Gani Rahmon, Imad Eddine Toubal, Kannappan Palaniappan: CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA.
- Filip Lux, Petr Matula: Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic.
- Ko Sugawara: Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France; Centre National de la Recherche Scientifique (CNRS), Paris, France.
- Layton Aho, Andrew R Cohen: Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA.
- Assaf Arbelle, Tal Ben-Haim, Tammy Riklin Raviv: School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel.
- Fabian Isensee: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Paul F Jäger: Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Klaus H Maier-Hein: Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany.
- Yanming Zhu: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia; Griffith University, Nathan, Queensland, Australia.
- Cristina Ederra, Ainhoa Urbiola: Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain.
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia.
- Alexandre Cunha: Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA.
- Arrate Muñoz-Barrutia: Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain.
- Michal Kozubek: Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic.
- Carlos Ortiz-de-Solórzano: Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain.
14. Schilling MP, El Khaled El Faraj R, Urrutia Gómez JE, Sonnentag SJ, Wang F, Nestler B, Orian-Rousseau V, Popova AA, Levkin PA, Reischl M. Automated high-throughput image processing as part of the screening platform for personalized oncology. Sci Rep 2023;13:5107. doi: 10.1038/s41598-023-32144-z. PMID: 36991084; PMCID: PMC10060403.
Abstract
Cancer is a devastating disease and the second leading cause of death worldwide. However, the development of resistance to current therapies is making cancer treatment more difficult. Combining the multi-omics data of individual tumors with information on their in-vitro Drug Sensitivity and Resistance Test (DSRT) can help to determine the appropriate therapy for each patient. Miniaturized high-throughput technologies, such as the droplet microarray, enable personalized oncology. We are developing a platform that incorporates DSRT profiling workflows from minute amounts of cellular material and reagents. Experimental results often rely on image-based readout techniques, where images are often constructed in grid-like structures with heterogeneous image processing targets. However, manual image analysis is time-consuming, not reproducible, and impossible for high-throughput experiments due to the amount of data generated. Therefore, automated image processing solutions are an essential component of a screening platform for personalized oncology. We present our comprehensive concept that considers assisted image annotation, algorithms for image processing of grid-like high-throughput experiments, and enhanced learning processes. In addition, the concept includes the deployment of processing pipelines. Details of the computation and implementation are presented. In particular, we outline solutions for linking automated image processing for personalized oncology with high-performance computing. Finally, we demonstrate the advantages of our proposal, using image data from heterogeneous practical experiments and challenges.
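
One recurring step such a platform has to automate is splitting a grid-structured readout image (for example, a droplet-microarray scan) into per-spot regions of interest before any per-spot analysis. The sketch below shows the simplest regular-grid version of that cropping; the grid geometry and image size are hypothetical and would normally come from the experiment layout.

```python
import numpy as np

def split_into_grid(image: np.ndarray, n_rows: int, n_cols: int) -> list:
    """Crop a plate/array image into n_rows x n_cols equally sized regions of interest."""
    rois = []
    row_edges = np.linspace(0, image.shape[0], n_rows + 1, dtype=int)
    col_edges = np.linspace(0, image.shape[1], n_cols + 1, dtype=int)
    for r in range(n_rows):
        for c in range(n_cols):
            rois.append(image[row_edges[r]:row_edges[r + 1], col_edges[c]:col_edges[c + 1]])
    return rois

# Hypothetical 8 x 12 droplet-microarray scan, 2048 x 3072 pixels.
scan = np.zeros((2048, 3072), dtype=np.uint16)
spots = split_into_grid(scan, n_rows=8, n_cols=12)
print(len(spots), spots[0].shape)  # 96 (256, 256)
```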
Affiliation(s)
- Marcel P Schilling: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany.
- Razan El Khaled El Faraj, Joaquín Eduardo Urrutia Gómez, Steffen J Sonnentag: Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany.
- Fei Wang, Britta Nestler: Institute for Applied Materials, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany.
- Véronique Orian-Rousseau, Anna A Popova, Pavel A Levkin: Institute of Biological and Chemical Systems - Functional Molecular Systems, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany.
- Markus Reischl: Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, 76344, Eggenstein-Leopoldshafen, Germany.
15. Panconi L, Makarova M, Lambert ER, May RC, Owen DM. Topology-based fluorescence image analysis for automated cell identification and segmentation. J Biophotonics 2023;16:e202200199. doi: 10.1002/jbio.202200199. PMID: 36349740.
Abstract
Cell segmentation refers to the body of techniques used to identify cells in images and extract biologically relevant information from them; however, manual segmentation is laborious and subjective. We present Topological Boundary Line Estimation using Recurrence Of Neighbouring Emissions (TOBLERONE), a topological image analysis tool which identifies persistent homological image features as opposed to the geometric analysis commonly employed. We demonstrate that topological data analysis can provide accurate segmentation of arbitrarily-shaped cells, offering a means for automatic and objective data extraction. One cellular feature of particular interest in biology is the plasma membrane, which has been shown to present varying degrees of lipid packing, or membrane order, depending on the function and morphology of the cell type. With the use of environmentally-sensitive dyes, images derived from confocal microscopy can be used to quantify the degree of membrane order. We demonstrate that TOBLERONE is capable of automating this task.
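
The core topological idea, following how connected bright regions appear and merge as an intensity threshold is swept, can be illustrated with a small superlevel-set sweep: genuine cells persist across a wide range of thresholds, while noise components appear only briefly. The snippet below is a generic persistence-flavoured illustration, not the TOBLERONE algorithm.

```python
import numpy as np
from skimage import measure

def superlevel_component_counts(image: np.ndarray, thresholds: np.ndarray) -> list:
    """Number of connected bright regions in each superlevel set {image >= t}.
    Persistent homology formalises this sweep by tracking when each component
    is born and when it merges into another (its death)."""
    return [int(measure.label(image >= t).max()) for t in thresholds]

rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.15, (128, 128))
img[20:50, 20:50] += 1.0       # two genuinely bright "cells"
img[70:100, 70:110] += 1.0
counts = superlevel_component_counts(img, np.linspace(0.95, 0.3, 14))
print(counts)  # the two cells persist across the sweep; noise components appear only at low thresholds
```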
Affiliation(s)
- Luca Panconi: Institute of Immunology and Immunotherapy, School of Mathematics and Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK.
- Maria Makarova: Institute of Metabolism and Systems Research, University of Birmingham, Birmingham, UK.
- Eleanor R Lambert: Institute of Immunology and Immunotherapy, School of Mathematics and Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK.
- Robin C May: School of Biosciences and Institute of Microbiology and Infection, University of Birmingham, Birmingham, UK.
- Dylan M Owen: Institute of Immunology and Immunotherapy, School of Mathematics and Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK.
16. BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022;8:99. doi: 10.1038/s41522-022-00362-4. PMID: 36529755; PMCID: PMC9760640.
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
17. Schutera M, Rettenberger L, Reischl M. Automated zebrafish phenotype pattern recognition: 6 years ago, and now. Zebrafish 2022;19:213-217. doi: 10.1089/zeb.2022.0027. PMID: 36067119.
Abstract
The article assesses developments in automated phenotype pattern recognition: potential gains in classification performance, even when facing the small-scale data sets common in biomedical work, and changes in the development effort and complexity for researchers and practitioners. After reading, you will be aware of the benefits, the unreasonable effectiveness, and the ease of use of an automated end-to-end deep learning pipeline for classification tasks in biomedical perception systems.
Affiliation(s)
- Mark Schutera, Luca Rettenberger, Markus Reischl: Institute for Automation and Applied Informatics (IAI), Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, Germany.
18. Scherr T, Seiffarth J, Wollenhaupt B, Neumann O, Schilling MP, Kohlheyer D, Scharr H, Nöh K, Mikut R. microbeSEG: a deep learning software tool with OMERO data management for efficient and accurate cell segmentation. PLoS One 2022;17:e0277601. doi: 10.1371/journal.pone.0277601. PMID: 36445903; PMCID: PMC9707790.
Abstract
In biotechnology, cell growth is one of the most important properties for the characterization and optimization of microbial cultures. Novel live-cell imaging methods are leading to an ever better understanding of cell cultures and their development. The key to analyzing acquired data is accurate and automated cell segmentation at the single-cell level. Therefore, we present microbeSEG, a user-friendly Python-based cell segmentation tool with a graphical user interface and OMERO data management. microbeSEG utilizes a state-of-the-art deep learning-based segmentation method and can be used for instance segmentation of a wide range of cell morphologies and imaging techniques, e.g., phase contrast or fluorescence microscopy. The main focus of microbeSEG is a comprehensible, easy, efficient, and complete workflow from the creation of training data to the final application of the trained segmentation model. We demonstrate that accurate cell segmentation results can be obtained within 45 minutes of user time. Utilizing public segmentation datasets or pre-labeling further accelerates the microbeSEG workflow. This opens the door for accurate and efficient data analysis of microbial cultures.
Collapse
Affiliation(s)
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- * E-mail: (TS); (KN); (RM)
| | - Johannes Seiffarth
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, Jülich, Germany
- Computational Systems Biology (AVT.CSB), RWTH Aachen University, Aachen, Germany
| | - Bastian Wollenhaupt
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Oliver Neumann
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Marcel P. Schilling
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Dietrich Kohlheyer
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Hanno Scharr
- Institute of Bio- and Geosciences, IBG-2: Plant Sciences, Forschungszentrum Jülich GmbH, Jülich, Germany
- Institute for Advanced Simulation, IAS-8: Data Analytics and Machine Learning, Forschungszentrum Jülich GmbH, Jülich, Germany
| | - Katharina Nöh
- Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich GmbH, Jülich, Germany
- * E-mail: (TS); (KN); (RM)
| | - Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- * E-mail: (TS); (KN); (RM)
| |
Collapse
|
19
|
Gojić G, Petrović VB, Dragan D, Gajić DB, Mišković D, Džinić V, Grgić Z, Pantelić J, Oros A. Comparing the Clinical Viability of Automated Fundus Image Segmentation Methods. SENSORS (BASEL, SWITZERLAND) 2022; 22:9101. [PMID: 36501801 PMCID: PMC9735987 DOI: 10.3390/s22239101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 11/16/2022] [Accepted: 11/17/2022] [Indexed: 06/17/2023]
Abstract
Recent methods for automatic blood vessel segmentation from fundus images have been commonly implemented as convolutional neural networks. While these networks report high values for objective metrics, the clinical viability of recovered segmentation masks remains unexplored. In this paper, we perform a pilot study to assess the clinical viability of automatically generated segmentation masks in the diagnosis of diseases affecting retinal vascularization. Five ophthalmologists with clinical experience were asked to participate in the study. The results demonstrate low classification accuracy, inferring that generated segmentation masks cannot be used as a standalone resource in general clinical practice. The results also hint at possible clinical infeasibility of the experimental design. In the follow-up experiment, we evaluate the clinical quality of masks by having ophthalmologists rank generation methods. The ranking is established with high intra-observer consistency, indicating better subjective performance for a subset of the tested networks. The study also demonstrates that objective metrics are not correlated with subjective metrics in retinal segmentation tasks for the methods involved, suggesting that objective metrics commonly used in scientific papers to measure a method's performance are not plausible criteria for choosing clinically robust solutions.
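The reported lack of correlation between objective and subjective quality can be checked with a rank correlation; the sketch below uses made-up per-method scores (not the study's data) and scipy's spearmanr.

from scipy.stats import spearmanr

# Hypothetical per-method objective scores (e.g., Dice) and mean clinician
# ranks for the same five segmentation networks (1 = best, 5 = worst).
dice_scores = [0.82, 0.81, 0.79, 0.78, 0.74]
clinician_rank = [3.2, 1.8, 4.5, 2.1, 3.9]

rho, p_value = spearmanr(dice_scores, clinician_rank)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A rho close to 0 (or a non-significant p) would mirror the paper's finding
# that objective metrics do not track clinical judgement for these methods.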
Collapse
Affiliation(s)
- Gorana Gojić
- The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Veljko B. Petrović
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dinu Dragan
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dušan B. Gajić
- Faculty of Technical Sciences, University of Novi Sad, 21102 Novi Sad, Serbia
| | - Dragiša Mišković
- The Institute for Artificial Intelligence Research and Development of Serbia, 21102 Novi Sad, Serbia
| | | | | | - Jelica Pantelić
- Institute of Eye Diseases, University Clinical Center of Serbia, 11000 Belgrade, Serbia
| | - Ana Oros
- Eye Clinic Džinić, 21107 Novi Sad, Serbia
- Institute of Neonatology, 11000 Belgrade, Serbia
| |
Collapse
|
20
|
Devulapally A, Parekh V, Pazhayidam George C, Balakrishnan S. On the Variability in Cell and Nucleus Shapes. Cells Tissues Organs 2022; 213:96-107. [PMID: 36315993 DOI: 10.1159/000527825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 10/26/2022] [Indexed: 02/17/2024] Open
Abstract
Cell morphology is an important regulator of cell function. Many abnormalities in cellular behavior can be discerned from changes in the shape of the cell and its organelles, typically the nucleus. Two major challenges for developing such phenotypic assays are reconstructing 3D surfaces of individual cells and nuclei from confocal images and developing characterizations of these surfaces for comparison. We demonstrate two algorithms - 3D active contours and 3D condensed-attention UNet - to segment cells and nuclei from confocal images. The cell and nuclear surfaces are then converted into vectors using a reversible, spherical transform - i.e., shapes can be recovered from the vectors. Typical methods for characterizing shapes using size, shape, and image parameters such as area, volume, shape factor, solidity, and pixel intensities are not amenable to such reverse transformation. Principal component analysis of our vector representation shows that the significant modes of variability among cell and nucleus shapes are scaling and flattening. We benchmark these modes using a known mechanical model for nucleus morphology. Subsequent modes alter the eccentricity of the nucleus and translate and rotate it with respect to the cell. Our vector-space representation of cell and nucleus shape helps to physically interpret the sources of variability. It may further help to guide mechanical models and identify molecular mechanisms driving cell and nuclear shape changes.
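The modes-of-variability step amounts to a principal component analysis over per-cell shape vectors; in the sketch below, random vectors stand in for the paper's reversible spherical-transform coefficients, so it only illustrates how the leading modes and their explained variance would be extracted with scikit-learn.

import numpy as np
from sklearn.decomposition import PCA

# Stand-in data: 200 cells, each described by a 64-dimensional shape vector.
rng = np.random.default_rng(1)
shape_vectors = rng.normal(size=(200, 64))

pca = PCA(n_components=5)
scores = pca.fit_transform(shape_vectors)     # per-cell coordinates along each mode
print(pca.explained_variance_ratio_)          # fraction of variance explained per mode

# Because the paper's transform is reversible, a point along mode k could be
# mapped back to a surface; here we only reconstruct the shape vector itself.
mode0_extreme = pca.mean_ + 2 * np.sqrt(pca.explained_variance_[0]) * pca.components_[0]
print(mode0_extreme.shape)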
Collapse
Affiliation(s)
- Anusha Devulapally
- School of Mathematics and Computer Science, Indian Institute of Technology Goa, Veling, India
| | - Varun Parekh
- School of Mathematics and Computer Science, Indian Institute of Technology Goa, Veling, India
| | - Clint Pazhayidam George
- School of Mathematics and Computer Science, Indian Institute of Technology Goa, Veling, India
- School of Interdisciplinary Life Sciences, Indian Institute of Technology Goa, Veling, India
| | - Sreenath Balakrishnan
- School of Interdisciplinary Life Sciences, Indian Institute of Technology Goa, Veling, India
- School of Mechanical Sciences, Indian Institute of Technology Goa, Veling, India
| |
Collapse
|
21
|
EmbedSeg: Embedding-based Instance Segmentation for Biomedical Microscopy Data. Med Image Anal 2022; 81:102523. [DOI: 10.1016/j.media.2022.102523] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 05/02/2022] [Accepted: 06/24/2022] [Indexed: 11/20/2022]
|
22
|
Tahk MJ, Torp J, Ali MAS, Fishman D, Parts L, Grätz L, Müller C, Keller M, Veiksina S, Laasfeld T, Rinken A. Live-cell microscopy or fluorescence anisotropy with budded baculoviruses-which way to go with measuring ligand binding to M4 muscarinic receptors? Open Biol 2022; 12:220019. [PMID: 35674179 PMCID: PMC9175271 DOI: 10.1098/rsob.220019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 04/27/2022] [Indexed: 01/04/2023] Open
Abstract
The M4 muscarinic acetylcholine receptor is a G protein-coupled receptor (GPCR) that has been associated with alcohol and cocaine abuse, Alzheimer's disease, and schizophrenia, which makes it an interesting drug target. For many GPCRs, high-affinity fluorescence ligands have expanded the options for high-throughput screening of drug candidates and serve as useful tools in fundamental receptor research. Here, we explored two TAMRA-labelled fluorescence ligands, UR-MK342 and UR-CG072, for developing assays to study ligand binding to the M4 receptor. Using budded baculovirus particles as the M4 receptor preparation and the fluorescence anisotropy method, we measured the affinities and binding kinetics of both fluorescence ligands. With the fluorescence ligands as reporter probes, the binding affinities of unlabelled ligands could also be determined. Based on these results, we took a step towards a more natural system and developed a method using live CHO-K1-hM4R cells and automated fluorescence microscopy suitable for the routine determination of unlabelled ligand affinities. For quantitative image analysis, we developed random forest- and deep learning-based pipelines for cell segmentation. The pipelines were integrated into the user-friendly, open-source Aparecium software. Both image analysis methods proved suitable for measuring fluorescence ligand saturation binding and kinetics, as well as for screening the binding affinities of unlabelled ligands.
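For the saturation-binding part of such an assay, anisotropy data are commonly fitted with a one-site binding model; the sketch below uses synthetic data, ignores ligand depletion, and is therefore only an illustration of the kind of fit involved, not the paper's exact analysis.

import numpy as np
from scipy.optimize import curve_fit

def one_site(L, A_free, A_bound, Kd):
    # Anisotropy of a fluorescence ligand at free-ligand concentration L (nM).
    return A_free + (A_bound - A_free) * L / (Kd + L)

# Synthetic titration: ligand concentrations in nM and noisy anisotropy values.
L = np.array([0.3, 1, 3, 10, 30, 100, 300])
rng = np.random.default_rng(2)
A = one_site(L, 0.05, 0.25, 12.0) + rng.normal(scale=0.005, size=L.size)

popt, pcov = curve_fit(one_site, L, A, p0=[0.05, 0.25, 10.0])
print(f"fitted Kd = {popt[2]:.1f} nM")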
Collapse
Affiliation(s)
- Maris-Johanna Tahk
- Institute of Chemistry, University of Tartu, Ravila 14a, 50411 Tartu, Estonia
| | - Jane Torp
- Institute of Chemistry, University of Tartu, Ravila 14a, 50411 Tartu, Estonia
| | - Mohammed A. S. Ali
- Department of Computer Science, University of Tartu, Narva Street 20, 51009 Tartu, Estonia
| | - Dmytro Fishman
- Department of Computer Science, University of Tartu, Narva Street 20, 51009 Tartu, Estonia
| | - Leopold Parts
- Department of Computer Science, University of Tartu, Narva Street 20, 51009 Tartu, Estonia
- Wellcome Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridgeshire, UK
| | - Lukas Grätz
- Institute of Pharmacy, Faculty of Chemistry and Pharmacy, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
| | - Christoph Müller
- Institute of Pharmacy, Faculty of Chemistry and Pharmacy, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
| | - Max Keller
- Institute of Pharmacy, Faculty of Chemistry and Pharmacy, University of Regensburg, Universitätsstrasse 31, 93053 Regensburg, Germany
| | - Santa Veiksina
- Institute of Chemistry, University of Tartu, Ravila 14a, 50411 Tartu, Estonia
| | - Tõnis Laasfeld
- Institute of Chemistry, University of Tartu, Ravila 14a, 50411 Tartu, Estonia
- Department of Computer Science, University of Tartu, Narva Street 20, 51009 Tartu, Estonia
| | - Ago Rinken
- Institute of Chemistry, University of Tartu, Ravila 14a, 50411 Tartu, Estonia
| |
Collapse
|
23
|
Wang A, Zhang Q, Han Y, Megason S, Hormoz S, Mosaliganti KR, Lam JCK, Li VOK. A novel deep learning-based 3D cell segmentation framework for future image-based disease detection. Sci Rep 2022; 12:342. [PMID: 35013443 PMCID: PMC8748745 DOI: 10.1038/s41598-021-04048-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Accepted: 12/09/2021] [Indexed: 11/12/2022] Open
Abstract
Cell segmentation plays a crucial role in understanding, diagnosing, and treating diseases. Despite the recent success of deep learning-based cell segmentation methods, it remains challenging to accurately segment densely packed cells in 3D cell membrane images. Existing approaches also require fine-tuning multiple manually selected hyperparameters on new datasets. We develop a deep learning-based 3D cell segmentation pipeline, 3DCellSeg, to address these challenges. Compared to existing methods, our approach carries the following novelties: (1) a robust two-stage pipeline, requiring only one hyperparameter; (2) a lightweight deep convolutional neural network (3DCellSegNet) to efficiently output voxel-wise masks; (3) a custom loss function (3DCellSeg Loss) to tackle the clumped cell problem; and (4) an efficient touching area-based clustering algorithm (TASCAN) to separate 3D cells from the foreground masks. Cell segmentation experiments conducted on four different cell datasets show that 3DCellSeg outperforms the baseline models on the ATAS (plant), HMS (animal), and LRP (plant) datasets with an overall accuracy of 95.6%, 76.4%, and 74.7%, respectively, while achieving an accuracy comparable to the baselines on the Ovules (plant) dataset with an overall accuracy of 82.2%. Ablation studies show that the individual improvements in accuracy are attributable to 3DCellSegNet, 3DCellSeg Loss, and TASCAN, with 3DCellSeg demonstrating robustness across different datasets and cell shapes. Our results suggest that 3DCellSeg can serve as a powerful biomedical and clinical tool, for example in histopathological image analysis for cancer diagnosis and grading.
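The touching-area idea can be illustrated with a much-simplified stand-in for TASCAN (this is not the authors' algorithm): label the foreground mask, count the voxel contact between neighbouring fragments, and merge pairs whose contact area exceeds a threshold.

import numpy as np
from skimage.measure import label

def merge_by_contact(mask, min_contact=50):
    labels = label(mask)                        # initial 3D connected components
    contact = {}
    for axis in range(labels.ndim):             # count face-to-face voxel contacts
        sl_a = [slice(None)] * labels.ndim
        sl_b = [slice(None)] * labels.ndim
        sl_a[axis], sl_b[axis] = slice(None, -1), slice(1, None)
        a, b = labels[tuple(sl_a)], labels[tuple(sl_b)]
        touching = (a != b) & (a > 0) & (b > 0)
        for u, v in zip(a[touching], b[touching]):
            key = (int(min(u, v)), int(max(u, v)))
            contact[key] = contact.get(key, 0) + 1
    parent = {int(l): int(l) for l in np.unique(labels) if l > 0}
    def find(x):                                # tiny union-find for merging labels
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (u, v), n in contact.items():
        if n >= min_contact:
            parent[find(u)] = find(v)
    out = labels.copy()
    for l in list(parent):
        out[labels == l] = find(l)
    return out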
Collapse
Affiliation(s)
- Andong Wang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Qi Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Yang Han
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
| | - Sean Megason
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
| | - Sahand Hormoz
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
| | | | - Jacqueline C K Lam
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
| | - Victor O K Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China.
| |
Collapse
|
24
|
Sugawara K, Çevrim Ç, Averof M. Tracking cell lineages in 3D by incremental deep learning. eLife 2022; 11:e69380. [PMID: 34989675 PMCID: PMC8741210 DOI: 10.7554/elife.69380] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 12/07/2021] [Indexed: 11/13/2022] Open
Abstract
Deep learning is emerging as a powerful approach for bioimage analysis. Its use in cell tracking is limited by the scarcity of annotated data for the training of deep-learning models. Moreover, annotation, training, prediction, and proofreading currently lack a unified user interface. We present ELEPHANT, an interactive platform for 3D cell tracking that addresses these challenges by taking an incremental approach to deep learning. ELEPHANT provides an interface that seamlessly integrates cell track annotation, deep learning, prediction, and proofreading. This enables users to implement cycles of incremental learning starting from a few annotated nuclei. Successive prediction-validation cycles enrich the training data, leading to rapid improvements in tracking performance. We test the software's performance against state-of-the-art methods and track lineages spanning the entire course of leg regeneration in a crustacean over 1 week (504 timepoints). ELEPHANT yields accurate, fully-validated cell lineages with a modest investment in time and effort.
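The incremental workflow reduces to a loop over annotate, train, predict, and proofread steps; the schematic Python sketch below uses stub functions (train, predict, and proofread are placeholders, not ELEPHANT's API) purely to make the cycle explicit.

def train(model, volumes, annotations):
    # Stub: in ELEPHANT this would fine-tune the detection/linking networks.
    return {"trained_on": len(annotations)}

def predict(model, volumes):
    # Stub: would return candidate nuclei positions and links per timepoint.
    return [("nucleus", t) for t in range(len(volumes))]

def proofread(predictions):
    # Stub: would be the interactive validation/correction step.
    return predictions

def incremental_tracking(volumes, initial_annotations, n_cycles=3):
    annotations = list(initial_annotations)     # start from a few annotated nuclei
    model = None
    for _ in range(n_cycles):
        model = train(model, volumes, annotations)
        predictions = predict(model, volumes)
        annotations.extend(proofread(predictions))   # enrich the training data
    return model, annotations

model, annotations = incremental_tracking(volumes=[0, 1, 2],
                                          initial_annotations=[("nucleus", 0)])
print(len(annotations), "annotations after three cycles")
# Each pass through the loop should need less proofreading than the last,
# which is the point of the incremental approach described in the abstract.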
Collapse
Affiliation(s)
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de LyonLyonFrance
- Centre National de la Recherche Scientifique (CNRS)ParisFrance
| | - Çağrı Çevrim
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de LyonLyonFrance
- Centre National de la Recherche Scientifique (CNRS)ParisFrance
| | - Michalis Averof
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de LyonLyonFrance
- Centre National de la Recherche Scientifique (CNRS)ParisFrance
| |
Collapse
|
25
|
Bao R, Al-Shakarji NM, Bunyak F, Palaniappan K. DMNet: Dual-Stream Marker Guided Deep Network for Dense Cell Segmentation and Lineage Tracking. ... IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2021; 2021:3354-3363. [PMID: 35386855 DOI: 10.1109/iccvw54120.2021.00375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Accurate segmentation and tracking of cells in microscopy image sequences are extremely beneficial in clinical diagnostic applications and biomedical research. A continuing challenge is the segmentation of dense touching cells and deforming cells with indistinct boundaries in low signal-to-noise-ratio images. In this paper, we present a dual-stream marker-guided network (DMNet) for segmentation of touching cells in microscopy videos of many cell types. DMNet uses an explicit cell marker-detection stream alongside a separate mask-prediction stream with a distance map penalty function, which enables supervised training to focus attention on touching and nearby cells. For multi-object cell tracking, we use the M2Track tracking-by-detection approach with multi-step data association. Our M2Track with mask overlap includes short-term track-to-cell association followed by track-to-track association to re-link tracklets with missing segmentation masks over a short sequence of frames. Our combined detection, segmentation, and tracking algorithm has proven its potential on the IEEE ISBI 2021 6th Cell Tracking Challenge (CTC-6), where we achieved multiple top-three rankings for diverse cell types. Our team name is MU-Ba-US, and the implementation of DMNet is available at http://celltrackingchallenge.net/participants/MU-Ba-US/.
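The mask-overlap association step of a tracking-by-detection scheme can be sketched as an IoU cost matrix solved with the Hungarian algorithm; the code below is a generic illustration of that step, not the M2Track implementation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def associate(track_masks, detection_masks, min_iou=0.3):
    # Cost matrix: 1 - IoU between every existing track and every new detection.
    cost = np.array([[1.0 - iou(t, d) for d in detection_masks]
                     for t in track_masks])
    rows, cols = linear_sum_assignment(cost)
    # Keep only assignments whose overlap is good enough.
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

# Example with toy 2D masks:
m = np.zeros((8, 8), bool); m[2:5, 2:5] = True
n = np.zeros((8, 8), bool); n[3:6, 2:5] = True
print(associate([m], [n]))   # -> [(0, 0)]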
Collapse
Affiliation(s)
- Rina Bao
- University of Missouri-Columbia, MO 65211, USA
| | | | | | | |
Collapse
|
26
|
Vicar T, Chmelik J, Jakubicek R, Chmelikova L, Gumulec J, Balvan J, Provaznik I, Kolar R. Self-supervised pretraining for transferable quantitative phase image cell segmentation. BIOMEDICAL OPTICS EXPRESS 2021; 12:6514-6528. [PMID: 34745753 PMCID: PMC8547997 DOI: 10.1364/boe.433212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 08/03/2021] [Accepted: 08/24/2021] [Indexed: 06/13/2023]
Abstract
In this paper, a novel U-Net-based method for robust adherent cell segmentation in quantitative phase microscopy images is designed and optimised. We designed and evaluated four specific post-processing pipelines. To increase transferability to different cell types, a non-deep-learning transfer step with adjustable parameters is used in post-processing. Additionally, we propose a self-supervised pretraining technique on unlabelled data, in which the network is trained to reconstruct multiple image distortions; this improved segmentation performance from 0.67 to 0.70 in object-wise intersection over union. Moreover, we publish a new dataset of manually labelled images suitable for this task, together with the unlabelled data for self-supervised pretraining.
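The pretraining idea, training a network to undo synthetic distortions of unlabelled images before fine-tuning it for segmentation, can be sketched in a few lines of PyTorch; the tiny convolutional model and the single noise corruption below stand in for the paper's U-Net and its multiple distortions.

import torch
from torch import nn

def distort(x):
    # One simple corruption (additive noise) standing in for the paper's
    # multiple image distortions.
    return x + 0.1 * torch.randn_like(x)

model = nn.Sequential(                      # tiny stand-in for a U-Net
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

unlabelled = torch.rand(32, 1, 64, 64)      # stand-in for unlabelled QPI data
for step in range(200):
    batch = unlabelled[torch.randint(0, 32, (4,))]
    reconstruction = model(distort(batch))
    loss = loss_fn(reconstruction, batch)   # learn to restore the clean image
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
# The pretrained weights would then initialise the segmentation network
# before supervised fine-tuning on the labelled dataset.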
Collapse
Affiliation(s)
- Tomas Vicar
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
- Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Jiri Chmelik
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
| | - Roman Jakubicek
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
| | - Larisa Chmelikova
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
| | - Jaromir Gumulec
- Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Jan Balvan
- Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Brno, Czech Republic
| | - Ivo Provaznik
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
| | - Radim Kolar
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
| |
Collapse
|
27
|
Machine learning methods for automated classification of tumors with papillary thyroid carcinoma-like nuclei: A quantitative analysis. PLoS One 2021; 16:e0257635. [PMID: 34550999 PMCID: PMC8457451 DOI: 10.1371/journal.pone.0257635] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Accepted: 09/04/2021] [Indexed: 11/19/2022] Open
Abstract
When approaching thyroid gland tumor classification, the differentiation between samples with and without "papillary thyroid carcinoma-like" nuclei is a daunting task with high inter-observer variability among pathologists. Thus, there is increasing interest in the use of machine learning approaches to provide pathologists with real-time decision support. In this paper, we optimize and quantitatively compare two automated machine learning methods for thyroid gland tumor classification on two datasets to assist pathologists in decision-making regarding these methods and their parameters. The first method is a feature-based classification originating from common image processing and consists of cell nucleus segmentation, feature extraction, and subsequent thyroid gland tumor classification utilizing different classifiers. The second method is a deep learning-based classification which directly classifies the input images with a convolutional neural network without the need for cell nucleus segmentation. On the Tharun and Thompson dataset, the feature-based classification achieves an accuracy of 89.7% (Cohen's Kappa 0.79), compared to 89.1% (Cohen's Kappa 0.78) for the deep learning-based classification. On the Nikiforov dataset, the feature-based classification achieves an accuracy of 83.5% (Cohen's Kappa 0.46), compared to 77.4% (Cohen's Kappa 0.35) for the deep learning-based classification. Thus, both automated thyroid tumor classification methods can reach the classification level of an expert pathologist. To our knowledge, this is the first study comparing feature-based and deep learning-based classification regarding their ability to classify samples with and without papillary thyroid carcinoma-like nuclei on two large-scale datasets.
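Both reported figures, accuracy and Cohen's Kappa, are standard scikit-learn metrics; the sketch below uses toy labels (not the study's data) to show how the two pipelines' predictions would be scored against the pathologist ground truth.

from sklearn.metrics import accuracy_score, cohen_kappa_score

# Toy ground truth and predictions: 1 = PTC-like nuclei present, 0 = absent.
truth         = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
feature_based = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
deep_learning = [1, 0, 1, 1, 0, 1, 1, 0, 0, 0]

for name, pred in [("feature-based", feature_based), ("deep learning", deep_learning)]:
    acc = accuracy_score(truth, pred)
    kappa = cohen_kappa_score(truth, pred)
    print(f"{name}: accuracy {acc:.2f}, Cohen's kappa {kappa:.2f}")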
Collapse
|
28
|
Löffler K, Scherr T, Mikut R. A graph-based cell tracking algorithm with few manually tunable parameters and automated segmentation error correction. PLoS One 2021; 16:e0249257. [PMID: 34492015 PMCID: PMC8423278 DOI: 10.1371/journal.pone.0249257] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 08/03/2021] [Indexed: 11/29/2022] Open
Abstract
Automatic cell segmentation and tracking make it possible to gain quantitative insights into the processes driving cell migration. To investigate new data with minimal manual effort, cell tracking algorithms should be easy to apply and should reduce manual curation time by providing automatic correction of segmentation errors. Current cell tracking algorithms, however, are either easy to apply to new data sets but lack automatic segmentation error correction, or have a vast set of parameters that requires manual tuning or annotated data. In this work, we propose a tracking algorithm with only a few manually tunable parameters and automatic segmentation error correction. Moreover, no training data are needed. We compare the performance of our approach to three well-performing tracking algorithms from the Cell Tracking Challenge on data sets with simulated, degraded segmentation, including false negatives and over- and under-segmentation errors. Our tracking algorithm can correct false negatives, over-segmentation, and under-segmentation errors, as well as mixtures of these segmentation errors. On data sets with under-segmentation errors or a mixture of segmentation errors, our approach performs best. Moreover, without requiring additional manual tuning, our approach ranks several times in the top 3 on the 6th edition of the Cell Tracking Challenge.
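One of the corrections described, filling false negatives, can be illustrated independently of the graph formulation: if a cell is matched before and after a frame in which it is missing, a placeholder position can be interpolated. The helper below is a generic sketch, not the authors' implementation.

import numpy as np

def fill_false_negatives(track):
    """Linearly interpolate missing (None) centroids in a single cell track.

    `track` is a list with one (x, y) tuple or None per frame, e.g. the output
    of a hypothetical frame-to-frame matching step.
    """
    filled = list(track)
    known = [i for i, p in enumerate(filled) if p is not None]
    for i, p in enumerate(filled):
        if p is None:
            prev = max((k for k in known if k < i), default=None)
            nxt = min((k for k in known if k > i), default=None)
            if prev is not None and nxt is not None:
                w = (i - prev) / (nxt - prev)
                filled[i] = tuple((1 - w) * np.array(filled[prev]) +
                                  w * np.array(filled[nxt]))
    return filled

print(fill_false_negatives([(10, 10), None, (14, 16), (16, 18)]))
# -> the gap at frame 1 is replaced by the midpoint (12.0, 13.0)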
Collapse
Affiliation(s)
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Institute of Biological and Chemical Systems - Biological Information Processing, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- * E-mail:
| | - Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| |
Collapse
|
29
|
Bruch R, Scheikl PM, Mikut R, Loosli F, Reischl M. epiTracker: A Framework for Highly Reliable Particle Tracking for the Quantitative Analysis of Fish Movements in Tanks. SLAS Technol 2020; 26:367-376. [PMID: 33345677 DOI: 10.1177/2472630320977454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Behavioral analysis of moving animals relies on faithful recording and track analysis to extract relevant parameters of movement. To study group behavior and social interactions, simultaneous analysis of multiple individuals is often required. To detect social interactions, for example to identify the leader of a group as opposed to followers, one needs error-free segmentation of individual tracks over time. While automated tracking algorithms exist that are quick and easy to use, errors inevitably occur during tracking. To solve this problem, we introduce a robust algorithm called epiTracker for segmentation and tracking of multiple animals in two-dimensional (2D) videos, along with an easy-to-use correction method that allows error-free segmentation to be obtained. We have implemented two graphical user interfaces to allow user-friendly control of the functions. Using six labeled 2D datasets, we quantify the effort needed to obtain accurate labels and compare it to that of alternative available software solutions. Both the labeled datasets and the software are publicly available.
Collapse
Affiliation(s)
- Roman Bruch
- Institute of Molecular and Cell Biology, Mannheim University of Applied Sciences, Mannheim, Germany
| | - Paul M Scheikl
- Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Baden-Württemberg, Germany
| | - Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Baden-Württemberg, Germany
| | - Felix Loosli
- Institute for Toxicology and Genetics, Karlsruhe Institute of Technology, Baden-Württemberg, Germany
| | - Markus Reischl
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Baden-Württemberg, Germany
| |
Collapse
|