1. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Systems with Applications 2024; 238:122094. PMID: 38646063; PMCID: PMC11027476; DOI: 10.1016/j.eswa.2023.122094.
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step in quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose DeepSeeded, a novel 3D cell segmentation approach built on a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances cell interior and border information using Euclidean distance transforms and detects cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here makes it possible to segment touching cell instances in dense, intensity-inhomogeneous microscopy image volumes. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population: bacterial biofilms. Experimental results on synthetic data and two real biofilm datasets suggest that the proposed method yields superior segmentation results compared with state-of-the-art deep learning methods and a classical method.
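For readers who want to prototype the seed-then-watershed idea, a minimal sketch with off-the-shelf tools follows. It is not the authors' implementation; `seed_prob` and `cell_prob` are assumed voxel-wise outputs of the cascaded networks.

```python
# Sketch of seeded watershed from network-predicted seeds (illustrative only;
# `seed_prob` and `cell_prob` are assumed outputs of a cascaded network).
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.segmentation import watershed

def seeded_watershed(seed_prob, cell_prob, seed_thresh=0.5, fg_thresh=0.5):
    """Turn voxel-wise seed/foreground probabilities into instance labels."""
    markers = label(seed_prob > seed_thresh)   # one marker per detected seed
    foreground = cell_prob > fg_thresh        # restrict watershed to cell voxels
    # Flood from the seeds over the inverted distance transform of the
    # foreground, so basins meet at the narrow necks between touching cells.
    elevation = -ndi.distance_transform_edt(foreground)
    return watershed(elevation, markers=markers, mask=foreground)
```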
2. Computer vision meets microfluidics: a label-free method for high-throughput cell analysis. Microsystems & Nanoengineering 2023; 9:116. PMID: 37744264; PMCID: PMC10511704; DOI: 10.1038/s41378-023-00562-8.
Abstract
In this paper, we review the integration of microfluidic chips and computer vision, which has great potential to advance research in the life sciences and biology, particularly in the analysis of cell imaging data. Microfluidic chips enable the generation of large amounts of visual data at the single-cell level, while computer vision techniques can rapidly process and analyze these data to extract valuable information about cellular health and function. One of the key advantages of this integrative approach is that it allows for noninvasive and low-damage cellular characterization, which is important for studying delicate or fragile microbial cells. The use of microfluidic chips provides a highly controlled environment for cell growth and manipulation, minimizes experimental variability and improves the accuracy of data analysis. Computer vision can be used to recognize and analyze target species within heterogeneous microbial populations, which is important for understanding the physiological status of cells in complex biological systems. As hardware and artificial intelligence algorithms continue to improve, computer vision is expected to become an increasingly powerful tool for in situ cell analysis. The use of microelectromechanical devices in combination with microfluidic chips and computer vision could enable the development of label-free, automatic, low-cost, and fast cellular information recognition and the high-throughput analysis of cellular responses to different compounds, for broad applications in fields such as drug discovery, diagnostics, and personalized medicine.
3. Whole-Slide Imaging, Mutual Information Registration for Multiplex Immunohistochemistry and Immunofluorescence. J Transl Med 2023; 103:100175. PMID: 37196983; PMCID: PMC10527458; DOI: 10.1016/j.labinv.2023.100175.
Abstract
Multiplex immunohistochemistry/immunofluorescence (mIHC/mIF) is a developing technology that facilitates the evaluation of multiple, simultaneous protein expressions at single-cell resolution while preserving tissue architecture. These approaches have shown great potential for biomarker discovery, yet many challenges remain. Importantly, streamlined cross-registration of multiplex immunofluorescence images with additional imaging modalities and immunohistochemistry (IHC) can help increase the plex and/or improve the quality of the data generated by potentiating downstream processes such as cell segmentation. To address this problem, a fully automated process was designed to perform a hierarchical, parallelizable, and deformable registration of multiplexed digital whole-slide images (WSIs). We generalized the calculation of mutual information as a registration criterion to an arbitrary number of dimensions, making it well suited to multiplexed imaging, and used the self-information of a given IF channel as a criterion to select the optimal channels for registration. Additionally, as precise labeling of cellular membranes in situ is essential for robust cell segmentation, a pan-membrane immunohistochemical staining method was developed for incorporation into mIF panels or for use as an IHC followed by cross-registration. In this study, we demonstrate this process by registering whole-slide 6-plex/7-color mIF images with whole-slide brightfield mIHC images, including a CD3 and a pan-membrane stain. Our algorithm, whole-slide image mutual information registration (WSIMIR), performed highly accurate registration, allowing the retrospective generation of an 8-plex/9-color WSI, and outperformed two alternative automated methods for cross-registration by Jaccard index and Dice similarity coefficient (WSIMIR vs. automated WARPY, P < .01 and P < .01, respectively; vs. HALO + transformix, P = .083 and P = .049, respectively). Furthermore, the addition of a pan-membrane IHC stain cross-registered to an mIF panel facilitated improved automated cell segmentation across mIF WSIs, as measured by significantly increased correct detections, Jaccard index (0.78 vs. 0.65), and Dice similarity coefficient (0.88 vs. 0.79).
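Generalizing mutual information to an arbitrary number of channels reduces to estimating a joint histogram in N dimensions. The sketch below uses the total-correlation form (sum of marginal entropies minus the joint entropy), one common multivariate generalization; it is our illustration, not WSIMIR's estimator.

```python
# Histogram-based mutual information for an arbitrary number of channels
# (illustrative sketch; the paper's actual estimator may differ).
import numpy as np

def mutual_information(channels, bins=32):
    """Total correlation: sum of marginal entropies minus joint entropy."""
    data = np.stack([c.ravel() for c in channels], axis=1)
    joint, _ = np.histogramdd(data, bins=bins)
    joint = joint / joint.sum()
    nz = joint[joint > 0]
    joint_entropy = -np.sum(nz * np.log(nz))
    marginal_entropy = 0.0
    for axis in range(data.shape[1]):
        # Marginalize the joint distribution down to one channel.
        p = joint.sum(axis=tuple(i for i in range(data.shape[1]) if i != axis))
        p = p[p > 0]
        marginal_entropy += -np.sum(p * np.log(p))
    return marginal_entropy - joint_entropy
```

For two channels this reduces to the standard pairwise mutual information.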
4. Review of research on the instance segmentation of cell images. Computer Methods and Programs in Biomedicine 2022; 227:107211. PMID: 36356384; DOI: 10.1016/j.cmpb.2022.107211.
Abstract
The instance segmentation of cell images is the basis for conducting cell research and is of great importance for the study and diagnosis of pathologies. To analyze the current state and future development of cell image instance segmentation, this paper first systematically reviews both traditional and deep-learning-based image segmentation methods. Then, deep-learning-based cell image segmentation methods are analyzed and summarized from three aspects: weak-label extraction from cell images, cell image instance segmentation, and segmentation of internal cell structures. Finally, cell image instance segmentation is summarized, and challenges and future developments are discussed.
5. Marker-controlled watershed with deep edge emphasis and optimized H-minima transform for automatic segmentation of densely cultivated 3D cell nuclei. BMC Bioinformatics 2022; 23:289. PMID: 35864453; PMCID: PMC9306214; DOI: 10.1186/s12859-022-04827-3.
Abstract
BACKGROUND: The segmentation of 3D cell nuclei is essential in many tasks, such as targeted molecular radiotherapies (MRT) for metastatic tumours, toxicity screening, and the observation of proliferating cells. In recent years, a popular method for automatic segmentation of nuclei has been the deep-learning-enhanced marker-controlled watershed transform, in which convolutional neural networks (CNNs) create the nuclei masks and markers and the watershed algorithm performs the instance segmentation. We studied whether this method could be improved for the segmentation of densely cultivated 3D nuclei by developing multiple system configurations that use edge-emphasizing CNNs for mask generation and an optimized H-minima transform for marker generation. RESULTS: The dataset used for training and evaluation consisted of twelve in vitro cultivated, densely packed 3D human carcinoma cell spheroids imaged with a confocal microscope. With this dataset, evaluation was performed using a cross-validation scheme; four independent datasets were used for additional evaluation. All datasets were resampled to near-isotropic resolution for our experiments. The baseline deep-learning-enhanced marker-controlled watershed obtained an average Panoptic Quality (PQ) of 0.69 and an Aggregated Jaccard Index (AJI) of 0.66 over the twelve spheroids. With an otherwise identical configuration that used 3D edge-emphasizing CNNs and the optimized H-minima transform, the scores increased to 0.76 and 0.77, respectively. On the independent datasets, the best-performing configuration outperformed or equaled the baseline and a set of well-known cell segmentation approaches. CONCLUSIONS: The use of edge-emphasizing U-Nets and an optimized H-minima transform can improve the marker-controlled watershed transform for segmentation of densely cultivated 3D cell nuclei. A novel dataset of twelve spheroids was made publicly available.
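The mask-and-marker pipeline optimized in this paper can be sketched with scikit-image primitives. Here `mask_prob` and `edge_prob` stand in for CNN outputs, and `h` and `edge_weight` are free parameters; the paper optimizes the H-minima depth, while the exact elevation map below is our assumption.

```python
# Generic marker-controlled watershed with an H-minima transform (sketch;
# `mask_prob`/`edge_prob` are assumed CNN outputs, `h` is the minima depth).
from scipy import ndimage as ndi
from skimage.measure import label
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def hmin_watershed(mask_prob, edge_prob, h=2.0, edge_weight=5.0, fg_thresh=0.5):
    foreground = mask_prob > fg_thresh
    # Edge-emphasized elevation map: predicted borders raise the ridges
    # between touching nuclei.
    elevation = -ndi.distance_transform_edt(foreground) + edge_weight * edge_prob
    # Suppress minima shallower than h so each nucleus yields a single marker.
    markers = label(h_minima(elevation, h))
    return watershed(elevation, markers=markers, mask=foreground)
```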
6. An Efficient Galactic Swarm Optimization Based Fractal Neural Network Model with DWT for Malignant Melanoma Prediction. Neural Process Lett 2022. DOI: 10.1007/s11063-022-10847-0.
7. Robust Blood Cell Image Segmentation Method Based on Neural Ordinary Differential Equations. Computational and Mathematical Methods in Medicine 2021; 2021:5590180. PMID: 34413897; PMCID: PMC8369191; DOI: 10.1155/2021/5590180.
Abstract
For the analysis of medical images, one of the most basic methods is to diagnose diseases by examining blood smears through a microscope to check the morphology, number, and ratio of red blood cells and white blood cells. Accurate segmentation of blood cell images is therefore essential for cell counting and identification. The aim of this paper is to perform blood smear image segmentation by combining neural ordinary differential equations (NODEs) with a U-Net to improve segmentation accuracy. To study the effect of the ODE solver on network speed and accuracy, ODE-block modules were added at the nine convolutional layers of the U-Net. First, blood cell images were preprocessed to enhance the contrast between the regions to be segmented; second, the training and testing sets were drawn from the same dataset to evaluate segmentation results. Based on the experimental results, we selected where to insert the ordinary differential equation block (ODE-block), chose an appropriate error tolerance, and balanced computation time against segmentation accuracy to obtain the best performance; finally, the error tolerance of the ODE-block was adjusted to increase the effective network depth, and the trained NODEs-UNet model was used for cell image segmentation. On the testing set, the proposed model achieves 95.3% pixel accuracy and 90.61% mean intersection over union. Compared with the U-Net and ResNet networks, pixel accuracy increases by 0.88% and 0.46%, and mean intersection over union by 2.18% and 1.13%, respectively. Our proposed network model improves the accuracy of blood cell image segmentation while reducing the computational cost of the network.
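An ODE block of the kind described can be prototyped in PyTorch with the torchdiffeq package. This is a minimal illustration of the idea, not the paper's architecture; the derivative network, integration interval, and tolerance handling are our assumptions.

```python
# Minimal ODE block for a U-Net-style network (sketch using torchdiffeq;
# the paper's exact architecture and tolerance schedule are not reproduced).
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ConvODEFunc(nn.Module):
    """dh/dt = f(t, h): a small conv net defining the feature dynamics."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, t, h):
        return self.conv2(self.act(self.conv1(h)))

class ODEBlock(nn.Module):
    """Integrates the feature map from t=0 to t=1; `tol` trades speed for accuracy."""
    def __init__(self, channels, tol=1e-3):
        super().__init__()
        self.func = ConvODEFunc(channels)
        self.t = torch.tensor([0.0, 1.0])
        self.tol = tol

    def forward(self, x):
        # odeint returns states at every time point; keep the final one.
        return odeint(self.func, x, self.t.to(x.device),
                      rtol=self.tol, atol=self.tol)[-1]
```

Loosening `tol` makes the adaptive solver take fewer steps (faster, less accurate), which is the time/accuracy trade-off the abstract describes.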
8. Evaluation of Deep Learning Architectures for Complex Immunofluorescence Nuclear Image Segmentation. IEEE Transactions on Medical Imaging 2021; 40:1934-1949. PMID: 33784615; DOI: 10.1109/tmi.2021.3069558.
Abstract
Separating and labeling each nuclear instance (instance-aware segmentation) is the key challenge in nuclear image segmentation. Deep convolutional neural networks have been demonstrated to solve nuclear image segmentation tasks across different imaging modalities, but a systematic comparison on complex immunofluorescence images has not been performed. Deep-learning-based segmentation requires annotated datasets for training, but annotated fluorescence nuclear image datasets are rare and of limited size and complexity. In this work, we evaluate and compare the segmentation effectiveness of multiple deep learning architectures (U-Net, U-Net ResNet, Cellpose, Mask R-CNN, KG instance segmentation) and two conventional algorithms (iterative h-minima-based watershed, attributed relational graphs) on complex fluorescence nuclear images of various types. We also propose and evaluate a novel strategy for creating artificial images to extend the training set. Results show that instance-aware segmentation architectures and Cellpose outperform the U-Net architectures and conventional methods on complex images in terms of F1 scores, while the U-Net architectures achieve higher mean Dice scores overall. Training with additional artificially generated images improves recall and F1 scores for complex images, leading to top F1 scores for three of five sample-preparation types. Mask R-CNN trained on artificial images achieves the highest overall F1 score on complex images whose conditions resemble the training set, while Cellpose achieves the highest overall F1 score on complex images from new imaging conditions. We provide quantitative results demonstrating that images annotated by undergraduates are sufficient for training instance-aware segmentation architectures to efficiently segment complex fluorescence nuclear images.
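Instance-level F1 scores of the kind compared here are typically computed by matching predicted instances to ground truth at an IoU threshold. The following sketch of that bookkeeping is ours, not the paper's evaluation code.

```python
# IoU-matched instance-level F1 (common in nuclear segmentation benchmarks;
# a sketch, not the paper's evaluation code). Quadratic in object count.
import numpy as np

def instance_f1(gt_labels, pred_labels, iou_thresh=0.5):
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    matched_pred = set()
    tp = 0
    for g in gt_ids:
        g_mask = gt_labels == g
        for p in pred_ids:
            if p in matched_pred:
                continue
            p_mask = pred_labels == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            if union and inter / union >= iou_thresh:
                tp += 1                 # one-to-one match found
                matched_pred.add(p)
                break
    fp = len(pred_ids) - len(matched_pred)
    fn = len(gt_ids) - tp
    return 2 * tp / max(2 * tp + fp + fn, 1)
```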
9. Automated mesenchymal stem cell segmentation and machine learning-based phenotype classification using morphometric and textural analysis. J Med Imaging (Bellingham) 2021; 8:014503. PMID: 33542945; PMCID: PMC7849042; DOI: 10.1117/1.jmi.8.1.014503.
Abstract
Purpose: Mesenchymal stem cells (MSCs) have demonstrated clinically relevant therapeutic effects for treatment of trauma and chronic diseases. The proliferative potential, immunomodulatory characteristics, and multipotentiality of MSCs in monolayer culture are reflected by their morphological phenotype. Standard techniques to evaluate culture viability are subjective, destructive, or time-consuming. We present an image analysis approach to objectively determine the morphological phenotype of MSCs for prediction of culture efficacy. Approach: The algorithm was trained using phase-contrast micrographs acquired during the early and mid-logarithmic stages of MSC expansion. Cell regions are localized using edge detection, thresholding, and morphological operations, followed by cell-marker identification using the H-minima transform within each region to differentiate individual cells from cell clusters. Clusters are segmented using marker-controlled watershed to obtain single cells. Morphometric and textural features are extracted to classify cells by phenotype using machine learning. Results: Algorithm performance was validated using an independent test dataset of 186 MSCs in 36 culture images. Results show 88% sensitivity and 86% precision for overall cell detection and a mean Sørensen-Dice coefficient of 0.849 ± 0.106 for segmentation per image. The algorithm exhibited an area under the curve of 0.816 (95% CI: 0.769 to 0.886) and 0.787 (95% CI: 0.716 to 0.851) for classifying MSCs according to their phenotype at early and mid-logarithmic expansion, respectively. Conclusions: The proposed method shows potential to segment and classify MSCs at low and moderate densities by phenotype with high accuracy and robustness. It enables quantifiable and consistent morphology-based quality assessment across culture protocols to facilitate cytotherapy development.
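The morphometric and textural feature extraction step maps naturally onto scikit-image. In this sketch, `labels` (the post-watershed single-cell label image), `gray` (the source micrograph), and the particular feature set are our assumptions, not the paper's exact choices.

```python
# Morphometric + GLCM texture features per segmented cell (illustrative
# sketch; inputs and feature selection are assumptions, not the paper's).
import numpy as np
from skimage.measure import regionprops
from skimage.feature import graycomatrix, graycoprops

def cell_features(labels, gray):
    feats = []
    for region in regionprops(labels, intensity_image=gray):
        patch = region.intensity_image          # cell-local intensity crop
        # Normalize the crop to 8-bit for the gray-level co-occurrence matrix.
        patch8 = (255 * (patch - patch.min()) /
                  (np.ptp(patch) + 1e-8)).astype(np.uint8)
        glcm = graycomatrix(patch8, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        feats.append({
            "area": region.area,
            "eccentricity": region.eccentricity,
            "solidity": region.solidity,
            "contrast": graycoprops(glcm, "contrast")[0, 0],
            "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],
        })
    return feats
```

A feature table like this can then be fed to any standard classifier for the phenotype prediction step.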
10.
11. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. PMID: 32438298; DOI: 10.1016/j.media.2020.101720.
Abstract
This paper presents a new deep regression model, DeepDistance, for cell detection in images acquired with inverted microscopy. The model treats cell detection as the task of finding the most probable locations that suggest cell centers in an image, and represents this main task as a regression task of learning an inner distance metric. Unlike previously reported regression-based methods, however, DeepDistance approaches this learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem, and defines its learning as complementary to the main cell detection task. To learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently. For further performance improvement on the main task, the paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task learned in parallel with the two regression tasks, again sharing feature representations. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also proposes a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, DeepDistance and its extended version, successfully identify the locations of cells and delineate their boundaries, even for a cell line not used in training, and improve on the results of their counterparts.
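Distance-map regression targets of this general kind can be generated from annotated instance masks. The sketch below is a generic construction; the paper's precise inner and outer distance definitions may differ in detail.

```python
# Generic per-cell distance-map regression targets from an instance mask
# (sketch; the paper's exact inner/outer distance definitions may differ).
import numpy as np
from scipy import ndimage as ndi

def distance_targets(instances):
    """instances: int array, 0 = background, k > 0 = cell k."""
    inner = np.zeros(instances.shape, dtype=float)
    outer = np.zeros(instances.shape, dtype=float)
    for k in np.unique(instances):
        if k == 0:
            continue
        cell = instances == k
        d = ndi.distance_transform_edt(cell)  # distance to the cell border
        m = d.max()
        if m == 0:
            continue
        inner[cell] = d[cell] / m             # peaks near the cell center
        outer[cell] = 1.0 - d[cell] / m       # complementary, high at borders
    return inner, outer
```

An FCN regressing these two maps from the raw image, with a shared encoder and two decoder heads, captures the multi-task structure the abstract describes.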
12.
Abstract
Morphometric analysis of nuclei is crucial in cytological examinations. Unfortunately, nuclei segmentation presents many challenges because nuclei usually form complex clusters in cytological samples. To deal with this problem, we propose an approach that combines a convolutional neural network with the watershed transform to segment nuclei in cytological images of breast cancer. The method first preprocesses images using color deconvolution to highlight hematoxylin-stained objects (nuclei). Next, the convolutional neural network performs semantic segmentation of the preprocessed image, identifying nuclei areas, cytoplasm areas, nuclei edges, and background. All connected components in the binary mask of nuclei are treated as potential nuclei; however, some objects are actually clusters of overlapping nuclei. These are detected by outlying values of their morphometric features, and an attempt is made to separate them using seeded watershed segmentation. If the attempt is successful, they are included in the nuclei set. The accuracy of this approach is evaluated against reference, manually segmented images, with the degree of matching between reference nuclei and discovered objects measured using the Jaccard distance and the Hausdorff distance. As part of the study, we verified how using a convolutional neural network, instead of intensity thresholding, to generate the topographic map for the watershed improves segmentation outcomes. Our results show that the convolutional neural network outperforms Otsu thresholding and adaptive thresholding in most cases, especially in scenarios with many overlapping nuclei.
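The hematoxylin-highlighting preprocessing corresponds to standard color deconvolution, which scikit-image exposes directly. This sketch covers only that first step; the CNN and watershed stages follow the same pattern as the watershed sketches above.

```python
# Color deconvolution to isolate the hematoxylin (nuclei) channel before
# semantic segmentation (sketch of the preprocessing step only).
from skimage.color import rgb2hed
from skimage.exposure import rescale_intensity

def hematoxylin_channel(rgb_image):
    hed = rgb2hed(rgb_image)   # separate hematoxylin/eosin/DAB contributions
    h = hed[..., 0]            # channel 0 = hematoxylin (stains nuclei)
    return rescale_intensity(h, out_range=(0.0, 1.0))
```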
13. Object-Oriented Segmentation of Cell Nuclei in Fluorescence Microscopy Images. Cytometry A 2018; 93:1019-1028. PMID: 30211975; DOI: 10.1002/cyto.a.23594.
Abstract
Cell nucleus segmentation remains an open and challenging problem, especially for segmenting nuclei in cell clumps. Splitting a cell clump would be straightforward if the gradients of boundary pixels between the nuclei were always higher than the others. However, imperfections may exist: intensity inhomogeneities within a nucleus may lead to spurious boundaries, whereas insufficient intensity differences at the border of overlapping nuclei may cause true boundary pixels to be missed. These imperfections are typically observed at the pixel level, causing local changes in pixel values without changing the semantics at a larger scale. In response to these issues, this article introduces a new nucleus segmentation method that relies on gradient information not at the pixel level but at the object level. To this end, it proposes to decompose an image into smaller homogeneous subregions, define edge-objects at four different orientations to encode gradient information at the object level, and devise a merging algorithm in which the edge-objects vote for subregion pairs along their orientations and the pairs are iteratively merged if they receive sufficient votes from multiple orientations. Our experiments on fluorescence microscopy images reveal that this high-level representation and the merging algorithm based on edge-objects (gradients at the object level) improve segmentation results.
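The raw material for such edge-objects is a per-pixel gradient orientation quantized into a small number of bins. The sketch below shows only that quantization; the object construction and the voting-based merge are not reproduced here.

```python
# Quantizing gradient directions into four orientation bins, the raw
# material for object-level "edge-objects" (sketch; the paper builds
# connected edge-objects and a voting merge on top of this).
import numpy as np
from scipy import ndimage as ndi

def orientation_bins(image, n_bins=4):
    gy = ndi.sobel(image, axis=0)
    gx = ndi.sobel(image, axis=1)
    angle = np.mod(np.arctan2(gy, gx), np.pi)   # undirected orientation, [0, pi)
    bins = np.floor(angle / (np.pi / n_bins)).astype(int) % n_bins
    magnitude = np.hypot(gx, gy)
    return bins, magnitude   # per-pixel orientation id + gradient strength
```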
14.
Abstract
CAS (Cell Annotation Software) is a novel tool for the analysis of microscopic images and the selection of the cell soma or nucleus, depending on the research objectives in medicine, biology, bioinformatics, and related fields. It replaces time-consuming and tiresome manual analysis of single images not only with automatic methods for object segmentation based on the Statistical Dominance Algorithm, but also with semi-automatic tools for object selection within a marked region of interest. For each image, a broad set of object parameters is computed, including shape features and optical and topographic characteristics, giving additional insight into the data. Our solution for cell detection and analysis has been verified on microscopy data, and its application to the annotation of the lateral geniculate nucleus has been examined in a case study.
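As we read the published description of the Statistical Dominance Algorithm, each output pixel counts the neighbours within a radius that dominate its intensity by at least a threshold. The naive sketch below rests on that reading and may differ from the implementation in CAS.

```python
# Naive Statistical Dominance Algorithm: each output pixel counts the
# neighbours within `radius` that dominate it by at least `t` in intensity
# (sketch based on our reading of the published description).
import numpy as np

def sda(image, radius=5, t=0):
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            window = image[y0:y1, x0:x1]
            out[y, x] = np.count_nonzero(window >= image[y, x] + t)
    return out
```

This direct form is O(pixels x window size); practical implementations accelerate it with sliding histograms, but the output is the same dominance map.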