1. Lapierre-Landry M, Liu Z, Ling S, Bayat M, Wilson DL, Jenkins MW. Nuclei Detection for 3D Microscopy With a Fully Convolutional Regression Network. IEEE Access 2021; 9:60396-60408. [PMID: 35024261] [PMCID: PMC8751907] [DOI: 10.1109/access.2021.3073894]
Abstract
Advances in three-dimensional microscopy and tissue clearing are enabling whole-organ imaging with single-cell resolution. Fast and reliable image processing tools are needed to analyze the resulting image volumes, including automated cell detection, cell counting and cell analytics. Deep learning approaches have shown promising results in two- and three-dimensional nuclei detection tasks; however, detecting overlapping or non-spherical nuclei of different sizes and shapes in the presence of a blurring point spread function remains challenging and often leads to incorrect nuclei merging and splitting. Here we present a new regression-based fully convolutional network that locates a thousand nuclei centroids with high accuracy in under a minute when combined with V-net, a popular three-dimensional semantic-segmentation architecture. High nuclei detection F1-scores of 95.3% and 92.5% were obtained in two different whole quail embryonic hearts, a tissue type that is difficult to segment because of its high cell density and its heterogeneous, elliptical nuclei. Similarly high scores were obtained in the mouse brain stem, demonstrating that this approach is highly transferable to nuclei of different shapes and intensities. Finally, spatial statistics were performed on the resulting centroids. The spatial distribution of nuclei obtained by our approach most resembles the spatial distribution of manually identified nuclei, indicating that this approach could serve in future spatial analyses of cell organization.
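For a concrete sense of the post-processing step implied above (turning a predicted 3D centroid-proximity map into centroid coordinates), a minimal sketch is given below. The function name, smoothing, neighborhood size, and threshold are illustrative assumptions and do not reproduce the authors' exact procedure.

```python
# Hedged sketch: extracting nucleus centroids from a 3D "centroid proximity"
# map predicted by a regression FCN. The smoothing, neighborhood radius, and
# threshold values are placeholders, not the paper's settings.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def centroids_from_proximity_map(prox, smooth_sigma=1.0, min_distance=3, threshold=0.5):
    """Return an (N, 3) array of (z, y, x) centroid coordinates."""
    smoothed = gaussian_filter(prox.astype(np.float32), smooth_sigma)
    # A voxel is a centroid candidate if it is the maximum in its neighborhood
    # and exceeds the detection threshold (3D non-maximum suppression).
    footprint = np.ones((2 * min_distance + 1,) * 3, dtype=bool)
    local_max = maximum_filter(smoothed, footprint=footprint) == smoothed
    return np.argwhere(local_max & (smoothed > threshold))

# Toy usage with a random volume standing in for the network output.
volume = np.random.rand(64, 128, 128).astype(np.float32)
print(centroids_from_proximity_map(volume).shape)
```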
Affiliation(s)
- Maryse Lapierre-Landry
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Zexuan Liu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Shan Ling
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Mahdi Bayat
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA
- David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Radiology, Case Western Reserve University, Cleveland, OH 44106, USA
- Michael W Jenkins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Pediatrics, Case Western Reserve University, Cleveland, OH 44106, USA
2. Cui Y, Zhang G, Liu Z, Xiong Z, Hu J. A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Med Biol Eng Comput 2019; 57:2027-2043. [PMID: 31346949] [DOI: 10.1007/s11517-019-02008-8]
Abstract
This paper addresses the task of nuclei segmentation in high-resolution histopathology images. We propose an automatic end-to-end deep neural network algorithm for segmentation of individual nuclei. A nucleus-boundary model is introduced to predict nuclei and their boundaries simultaneously using a fully convolutional neural network. Given a color-normalized image, the model directly outputs an estimated nuclei map and a boundary map. A simple, fast, and parameter-free post-processing procedure is performed on the estimated nuclei map to produce the final segmented nuclei. An overlapped patch extraction and assembling method is also designed for seamless prediction of nuclei in large whole-slide images. We also show the effectiveness of data augmentation methods for the nuclei segmentation task. Our experiments show that our method outperforms prior state-of-the-art methods. Moreover, the method is efficient: a 1000×1000 image can be segmented in less than 5 s, making it possible to precisely segment a whole-slide image in an acceptable time. The source code is available at https://github.com/easycui/nuclei_segmentation. Graphical Abstract: The neural network for nuclei segmentation.
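The overlapped patch extraction and assembling step described above can be sketched as follows, assuming a patch-based model that returns a per-pixel probability map. The patch size, stride, and averaging of overlaps are illustrative choices, not the authors' exact scheme.

```python
# Hedged sketch of overlapped tiling for running a patch-based segmentation
# model over a large image, then averaging predictions where tiles overlap.
# Border padding for images whose size is not a multiple of the stride is
# omitted for brevity.
import numpy as np

def predict_tiled(image, model, patch=256, stride=192):
    h, w = image.shape[:2]
    accum = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            pred = model(tile)                        # (patch, patch) probability map
            accum[y:y + patch, x:x + patch] += pred
            weight[y:y + patch, x:x + patch] += 1.0
    return accum / np.maximum(weight, 1e-6)           # average overlapping predictions

# Toy usage: a 1024x1024 image and a dummy "model" that thresholds intensity.
img = np.random.rand(1024, 1024).astype(np.float32)
dummy_model = lambda t: (t > 0.5).astype(np.float32)
print(predict_tiled(img, dummy_model).shape)
```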
Affiliation(s)
- Yuxin Cui
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Guiying Zhang
- Department of Medical Information Engineering, Zunyi Medical University, Zunyi, China
- Zhonghao Liu
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Zheng Xiong
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Jianjun Hu
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
3. Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411] [DOI: 10.1109/jbhi.2018.2852639]
Abstract
The visual attributes of cells, such as the nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representation, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new loss formulation to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.
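As a rough illustration of the downstream use of such learned cell-level representations (not the authors' architecture or loss), the sketch below clusters feature vectors from a toy encoder standing in for a trained GAN discriminator to obtain label-free cell classes.

```python
# Hedged sketch of the downstream idea only: per-cell feature vectors from a
# learned encoder (here an untrained toy CNN standing in for a trained GAN
# discriminator) are clustered for label-free, cell-level classification.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class ToyEncoder(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, feat_dim, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),              # global pooling -> one vector per cell
        )

    def forward(self, x):
        return self.net(x).flatten(1)

encoder = ToyEncoder().eval()
cell_patches = torch.rand(200, 3, 32, 32)          # placeholder cell-centered crops
with torch.no_grad():
    feats = encoder(cell_patches).numpy()
labels = KMeans(n_clusters=5, n_init=10).fit_predict(feats)  # unsupervised "cell types"
print(labels[:10])
```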
4. Su H, Xing F, Yang L. Robust Cell Detection of Histopathological Brain Tumor Images Using Sparse Reconstruction and Adaptive Dictionary Selection. IEEE Trans Med Imaging 2016; 35:1575-1586. [PMID: 26812706] [PMCID: PMC4922900] [DOI: 10.1109/tmi.2016.2520502]
Abstract
Successful diagnostic and prognostic stratification, treatment outcome prediction, and therapy planning depend on reproducible and accurate pathology analysis. Computer aided diagnosis (CAD) is a useful tool to help doctors make better decisions in cancer diagnosis and treatment. Accurate cell detection is often an essential prerequisite for subsequent cellular analysis. The major challenge of robust brain tumor nuclei/cell detection is to handle significant variations in cell appearance and to split touching cells. In this paper, we present an automatic cell detection framework using sparse reconstruction and adaptive dictionary learning. The main contributions of our method are: 1) a sparse reconstruction-based approach to split touching cells; 2) an adaptive dictionary learning method to handle cell appearance variations. The proposed method has been extensively tested on a data set with more than 2000 cells extracted from 32 whole slide scanned images. The automatic cell detection results are compared with the manually annotated ground truth and other state-of-the-art cell detection algorithms. The proposed method achieves the best cell detection accuracy with an F1 score of 0.96.
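The sparse-reconstruction idea can be sketched as follows: learn a dictionary from cell-centered patches and score candidate locations by their sparse reconstruction error. The sparsity level, patch size, and scikit-learn calls are illustrative; the paper's adaptive dictionary selection and touching-cell splitting are not reproduced.

```python
# Hedged sketch of sparse reconstruction for cell detection: a patch that is
# well reconstructed from a few atoms of a dictionary learned on cell patches
# is likely to contain a cell.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
train_patches = rng.random((500, 15 * 15))           # placeholder flattened cell patches

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dictionary = dico.fit(train_patches).components_      # (64, 225) learned atoms

def reconstruction_error(patch, dictionary, n_nonzero=5):
    """Lower error -> the patch looks more like the training (cell) patches."""
    code = sparse_encode(patch.reshape(1, -1), dictionary,
                         algorithm="omp", n_nonzero_coefs=n_nonzero)
    return float(np.linalg.norm(patch - code @ dictionary))

candidate = rng.random(15 * 15)
print(reconstruction_error(candidate, dictionary))
```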
Affiliation(s)
- Hai Su
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, FL 32611, USA
- Fuyong Xing
- Department of Electrical and Computer Engineering, University of Florida, FL 32611, USA
- Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, FL 32611, USA
5. Xing F, Yang L. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review. IEEE Rev Biomed Eng 2016; 9:234-263. [PMID: 26742143] [PMCID: PMC5233461] [DOI: 10.1109/rbme.2016.2515127]
Abstract
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology or tissue structure. Manual assessment is labor intensive and prone to interobserver variations. Computer-aided methods, which can significantly improve objectivity and reproducibility, have attracted a great deal of interest in recent literature. In the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing the molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of the recent state-of-the-art nucleus/cell segmentation approaches on different types of microscopy images, including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopy. In addition, we discuss the challenges for the current methods and the potential future work of nucleus/cell detection and segmentation.
6. Guo Y, Xu X, Wang Y, Yang Z, Wang Y, Xia S. A computational approach to detect and segment cytoplasm in muscle fiber images. Microsc Res Tech 2015; 78:508-518. [PMID: 25900156] [DOI: 10.1002/jemt.22502]
Abstract
We developed a computational approach to detect and segment cytoplasm in microscopic images of skeletal muscle fibers. The computational approach provides computer-aided analysis of cytoplasm objects in muscle fiber images to facilitate biomedical research. Cytoplasm in muscle fibers plays an important role in maintaining the functioning and health of muscular tissues. Therefore, cytoplasm is often used as a marker in broad applications of musculoskeletal research, including our research on the treatment of muscular disorders such as Duchenne muscular dystrophy, a disease that has no available treatment. However, it is often challenging to analyze and quantify cytoplasm given the large number of images typically generated in experiments and the large number of muscle fibers contained in each image. Manual analysis is not only time consuming but also prone to human errors. In this work, we developed a computational approach to detect and segment longitudinal sections of cytoplasm based on a modified graph cuts technique and an iterative splitting method to extract cytoplasm objects from the background. First, cytoplasm objects are extracted from the background using the modified graph cuts technique, which is designed to optimize an energy function. Second, an iterative splitting method is designed to separate touching or adjacent cytoplasm objects from the results of graph cuts. We tested the computational approach on real data from in vitro experiments and found that it can achieve satisfactory performance in terms of precision and recall rates.
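The graph-cut formulation alluded to above minimizes an energy that balances per-pixel data costs against a smoothness penalty on neighboring labels. A minimal sketch of evaluating such an energy for a binary labeling is shown below; the specific data and smoothness terms, and the min-cut optimization itself, are generic illustrations rather than the paper's modified energy.

```python
# Hedged sketch of a standard binary graph-cut energy:
#   E(L) = sum_p D_p(L_p) + lambda * sum_{(p,q) in N} [L_p != L_q]
# Only energy evaluation is shown; the min-cut solver is omitted.
import numpy as np

def graphcut_energy(labels, cost_bg, cost_fg, lam=1.0):
    """labels: HxW array of {0,1}; cost_bg/cost_fg: HxW per-pixel data costs."""
    data_term = np.where(labels == 1, cost_fg, cost_bg).sum()
    # 4-connected smoothness term: penalize label changes between neighbors.
    smooth = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data_term + lam * smooth

# Toy usage: data costs from intensity, evaluated for a thresholded labeling.
img = np.random.rand(50, 50)
cost_fg, cost_bg = 1.0 - img, img                   # bright pixels cheap to call foreground
labels = (img > 0.5).astype(int)
print(graphcut_energy(labels, cost_bg, cost_fg, lam=0.5))
```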
Affiliation(s)
- Yanen Guo
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Xiaoyin Xu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Yuanyuan Wang
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
- Zhong Yang
- Department of Clinical Hematology, Southwestern Hospital, Third Military Medical University, Chongqing, China
- Yaming Wang
- Department of Anesthesia, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Shunren Xia
- Key Laboratory of Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China
- Zhejiang Provincial Key Laboratory of Cardio-Cerebral Vascular Detection Technology and Medicinal Effectiveness Appraisal, Zhejiang University, Hangzhou, China
7. Li R, Zhang W, Ji S. Automated identification of cell-type-specific genes in the mouse brain by image computing of expression patterns. BMC Bioinformatics 2014; 15:209. [PMID: 24947138] [PMCID: PMC4078975] [DOI: 10.1186/1471-2105-15-209]
Abstract
Background: Differential gene expression patterns in cells of the mammalian brain result in the morphological, connectional, and functional diversity of cells. A wide variety of studies have shown that certain genes are expressed only in specific cell-types. Analysis of cell-type-specific gene expression patterns can provide insights into the relationship between genes, connectivity, brain regions, and cell-types. However, automated methods for identifying cell-type-specific genes are lacking to date.
Results: Here, we describe a set of computational methods for identifying cell-type-specific genes in the mouse brain by automated image computing of in situ hybridization (ISH) expression patterns. We applied invariant image feature descriptors to capture local gene expression information from cellular-resolution ISH images. We then built image-level representations by applying vector quantization on the image descriptors. We employed regularized learning methods for classifying genes specifically expressed in different brain cell-types. These methods can also rank image features based on their discriminative power. We used a data set of 2,872 genes from the Allen Brain Atlas in the experiments. Results showed that our methods are predictive of cell-type-specificity of genes. Our classifiers achieved AUC values of approximately 87% when the enrichment level is set to 20. In addition, we showed that the highly-ranked image features captured the relationship between cell-types.
Conclusions: Overall, our results showed that automated image computing methods could potentially be used to identify cell-type-specific genes in the mouse brain.
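The image-level pipeline summarized in the Results (local descriptors, vector quantization into a visual-word codebook, and a regularized classifier) can be sketched as below. Random vectors stand in for the invariant descriptors extracted from ISH images, and the codebook size and L1-regularized classifier are illustrative choices rather than the paper's exact settings.

```python
# Hedged sketch of a bag-of-visual-words pipeline: local descriptors are
# quantized against a k-means codebook, per-image histograms are built, and a
# sparse (L1-regularized) classifier predicts cell-type specificity.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, descs_per_image, desc_dim, n_words = 40, 200, 128, 32

all_descs = rng.random((n_images * descs_per_image, desc_dim))
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_descs)

def bow_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()                        # normalized visual-word histogram

X = np.stack([bow_histogram(rng.random((descs_per_image, desc_dim)))
              for _ in range(n_images)])
y = rng.integers(0, 2, size=n_images)                # placeholder specificity labels
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
print(clf.score(X, y))
```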
Affiliation(s)
- Shuiwang Ji
- Department of Computer Science, Old Dominion University, Norfolk, VA 23529, USA