1
Sreekumari AB, Yesudasan Paulsy AT. Hybrid deep learning based stroke detection using CT images with routing in an IoT environment. Network (Bristol, England) 2025:1-40. [PMID: 39893512 DOI: 10.1080/0954898x.2025.2452280] [Received: 03/21/2024] [Revised: 11/22/2024] [Accepted: 01/07/2025] [Indexed: 02/04/2025]
Abstract
Stroke remains a leading global health concern, and early diagnosis with accurate identification of stroke lesions is essential for improving treatment outcomes and reducing long-term disability. Computed Tomography (CT) imaging is widely used in clinical settings for diagnosing stroke, assessing lesion size, and determining severity. However, accurate segmentation and early detection of stroke lesions in CT images remain challenging. Thus, a Jaccard_Residual SqueezeNet is proposed for predicting stroke from CT images with the integration of the Internet of Things (IoT); it incorporates the Jaccard index into a Residual SqueezeNet. Firstly, the brain CT image is routed to the Base Station (BS) using the Fractional Jellyfish Search Pelican Optimization Algorithm (FJSPOA), and preprocessing is performed with a median filter. Then, skull segmentation is accomplished by ENet, followed by feature extraction. Lastly, stroke is detected using the Jaccard_Residual SqueezeNet. In terms of routing, the throughput, energy, distance, trust, and delay are 72.172 Mbps, 0.580 J, 22.243 m, 0.915, and 0.083 s, respectively. For stroke detection, the accuracy, sensitivity, precision, and F1-score are 0.902, 0.896, 0.916, and 0.906. These findings suggest that the Jaccard_Residual SqueezeNet offers a robust and efficient platform for stroke detection.
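The Jaccard index that this entry folds into Residual SqueezeNet is the intersection-over-union of two masks. A minimal numpy sketch (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def jaccard_index(pred, target, eps=1e-7):
    """Jaccard index (intersection over union) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(jaccard_index(a, b), 3))  # → 0.5 (2 shared pixels, 4 in the union)
```

As a loss term this is typically used as 1 minus the (soft) Jaccard index, which penalizes poor mask overlap directly.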
Affiliation(s)
- Arul Teen Yesudasan Paulsy
- Department of Electronics and Communication Engineering, University College of Engineering, Nagercoil, India
2
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608 PMCID: PMC10900832 DOI: 10.1016/j.jpi.2023.100357] [Received: 10/15/2023] [Revised: 12/21/2023] [Accepted: 12/23/2023] [Indexed: 03/02/2024]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model-card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3
Winkelmaier G, Koch B, Bogardus S, Borowsky AD, Parvin B. Biomarkers of Tumor Heterogeneity in Glioblastoma Multiforme Cohort of TCGA. Cancers (Basel) 2023; 15:2387. [PMID: 37190318 DOI: 10.3390/cancers15082387] [Received: 03/11/2023] [Revised: 04/06/2023] [Accepted: 04/14/2023] [Indexed: 05/17/2023]
Abstract
Tumor Whole Slide Images (WSIs) are often heterogeneous, which hinders the discovery of biomarkers in the presence of confounding clinical factors. In this study, we present a pipeline for identifying biomarkers from the Glioblastoma Multiforme (GBM) cohort of WSIs in the TCGA archive. The GBM cohort suffers from many technical artifacts, and the discovery of GBM biomarkers is further challenged because "age" is the single most confounding factor for predicting outcomes. The proposed approach relies on interpretable features (e.g., nuclear morphometric indices), effective similarity metrics for heterogeneity analysis, and robust statistics for identifying biomarkers. The pipeline first removes artifacts (e.g., pen marks) and partitions each WSI into patches for nuclear segmentation via an extended U-Net for subsequent quantitative representation. Given the variations in fixation and staining that can artificially modulate hematoxylin optical density (HOD), we extended Navab's Lab method to normalize images and reduce the impact of batch effects. The heterogeneity of each WSI is then represented either as probability density functions (PDFs) per patient or as the composition of a dictionary predicted from the entire cohort of WSIs. For the PDF- and dictionary-based methods, morphometric subtypes are constructed from distances computed via optimal transport and linkage analysis, or via consensus clustering with Euclidean distances, respectively. For each inferred subtype, Kaplan-Meier analysis and/or the Cox regression model is used to regress the survival time. Since age is the single most important confounder for predicting survival in GBM, and there is an observed violation of the proportionality assumption in the Cox model, we use both age and age-squared coupled with the likelihood ratio test and forest plots for evaluating competing statistics. Next, the PDF- and dictionary-based methods are combined to identify biomarkers that are predictive of survival. The combined model has the advantage of integrating global (e.g., cohort-scale) and local (e.g., patient-scale) attributes of morphometric heterogeneity, coupled with robust statistics, to reveal stable biomarkers. The results indicate that, after normalization of the GBM cohort, mean HOD, eccentricity, and cellularity are predictive of survival. Finally, we also stratified the GBM cohort as a function of EGFR expression and published genomic subtypes to reveal genomic-dependent morphometric biomarkers.
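The PDF-based heterogeneity comparison described here rests on optimal-transport distances between per-patient feature distributions; in one dimension this reduces to the Wasserstein distance. A sketch with SciPy on hypothetical morphometric samples (the distributions and parameter values are illustrative only):

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Hypothetical per-patient samples of one nuclear morphometric index
# (e.g., mean hematoxylin optical density per nucleus)
patient_a = rng.normal(10.0, 1.0, 500)
patient_b = rng.normal(12.0, 1.5, 500)

# 1-D optimal-transport (earth mover's) distance between the two PDFs
d = wasserstein_distance(patient_a, patient_b)
print(d)
```

Pairwise distances of this kind can then feed linkage analysis to group patients into morphometric subtypes, as the pipeline above does.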
Affiliation(s)
- Garrett Winkelmaier
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Brandon Koch
- Department of Biostatistics, College of Public Health, Ohio State University, 281 W. Lane Ave., Columbus, OH 43210, USA
- Skylar Bogardus
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Alexander D Borowsky
- Department of Pathology, UC Davis Comprehensive Cancer Center, University of California Davis, 1 Shields Ave, Davis, CA 95616, USA
- Bahram Parvin
- Department of Electrical and Biomedical Engineering, College of Engineering, University of Nevada Reno, 1664 N. Virginia St., Reno, NV 89509, USA
- Pennington Cancer Institute, Renown Health, Reno, NV 89502, USA
4
Saednia K, Tran WT, Sadeghi-Naini A. A Cascaded Deep Learning Framework for Segmentation of Nuclei in Digital Histology Images. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4764-4767. [PMID: 36086360 DOI: 10.1109/embc48229.2022.9871996] [Indexed: 06/15/2023]
Abstract
Accurate segmentation of nuclei is an essential step in the analysis of digital histology images for diagnostic and prognostic applications. Despite recent advances in automated frameworks for nuclei segmentation, this task is still challenging. Specifically, detecting small nuclei in large-scale histology images and accurately delineating the borders of touching nuclei is complicated even for advanced deep neural networks. In this study, a cascaded deep learning framework is proposed to segment nuclei accurately in digitized microscopy images of histology slides. A U-Net based model with a customized pixel-wise weighted loss function is adapted in the proposed framework, followed by a U-Net based model with a VGG16 backbone and a soft Dice loss function. The model was pretrained on the public Post-NAT-BRCA dataset before training and independent evaluation on the MoNuSeg dataset. The cascaded model outperformed other state-of-the-art models with an AJI of 0.72 and an F1-score of 0.83 on the MoNuSeg test set.
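The soft Dice loss used in the second stage of such a cascade can be sketched as follows. This is the generic formulation (1 minus the soft Dice coefficient over probability maps); the paper's exact weighting and smoothing constants are not reproduced here:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """1 - soft Dice coefficient between a probability map and a binary target."""
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice

p = np.array([0.9, 0.8, 0.1, 0.2])   # predicted foreground probabilities
t = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth mask
print(round(soft_dice_loss(p, t), 4))  # → 0.15
```

Unlike pixel-wise cross-entropy, this overlap-based loss is insensitive to the foreground/background class imbalance typical of nuclei masks.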
5
Götz T, Göb S, Sawant S, Erick X, Wittenberg T, Schmidkonz C, Tomé A, Lang E, Ramming A. Number of necessary training examples for Neural Networks with different number of trainable parameters. J Pathol Inform 2022; 13:100114. [PMID: 36268092 PMCID: PMC9577052 DOI: 10.1016/j.jpi.2022.100114] [Accepted: 03/12/2021] [Indexed: 11/03/2022]
Abstract
In this work, we reduce network complexity with a concomitant reduction in the number of necessary training examples. The focus is thus on how the chosen evaluation metrics depend on the number of adjustable parameters of the considered deep neural network. The data set used encompassed Hematoxylin and Eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to relate a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a given classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of the filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results up to 30% pruning. Strongly pruned networks are still viable, although classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance between pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN).
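Filter pruning of the kind studied here removes a fraction of a convolutional layer's filters. A common heuristic (not necessarily the paper's exact criterion, so treat this as an illustrative sketch) is to rank filters by L1 norm and keep the strongest:

```python
import numpy as np

def prune_filters(weights, ratio):
    """Rank conv filters by L1 norm and return sorted indices of those to keep.

    weights: array of shape (n_filters, in_channels, kh, kw)
    ratio:   fraction of filters to prune away
    """
    # L1 norm of each filter's weights
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = int(round(weights.shape[0] * (1.0 - ratio)))
    keep = np.argsort(norms)[::-1][:n_keep]  # largest-norm filters survive
    return np.sort(keep)

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 3, 3, 3))   # a toy conv layer with 8 filters
keep = prune_filters(w, ratio=0.25)
print(len(keep))  # → 6 filters survive 25% pruning
```

After pruning, the layer is rebuilt with only the kept filters (and the next layer's input channels are reduced to match), shrinking the trainable-parameter count the abstract varies.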
6
Chen Q, Zhao Y, Liu Y, Sun Y, Yang C, Li P, Zhang L, Gao C. MSLPNet: multi-scale location perception network for dental panoramic X-ray image segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05790-5] [Indexed: 11/28/2022]
7
Hu T, Xu X, Chen S, Liu Q. Accurate Neuronal Soma Segmentation Using 3D Multi-Task Learning U-Shaped Fully Convolutional Neural Networks. Front Neuroanat 2021; 14:592806. [PMID: 33551758 PMCID: PMC7860594 DOI: 10.3389/fnana.2020.592806] [Received: 08/08/2020] [Accepted: 12/02/2020] [Indexed: 12/12/2022]
Abstract
Neuronal soma segmentation is a crucial step in the quantitative analysis of neuronal morphology. Automated neuronal soma segmentation methods have opened up the opportunity to reduce the time-consuming manual labeling required during neuronal soma morphology reconstruction for large-scale images. However, the presence of touching neuronal somata and variable soma shapes in images poses challenges for automated algorithms. This study proposes a neuronal soma segmentation method combining 3D U-shaped fully convolutional neural networks with multi-task learning. Compared to existing methods, this technique applies multi-task learning to predict the soma boundary in order to split touching somata, and adopts a U-shaped convolutional neural network architecture, which is effective for limited datasets. A contour-aware multi-task learning framework is applied to predict the masks of neuronal somata and their boundaries simultaneously. In addition, a spatial attention module is embedded into the multi-task model to improve segmentation results. The Nissl-stained dataset captured by a micro-optical sectioning tomography system is used to validate the proposed method. In comparison to four existing segmentation models, the proposed method notably outperforms the others in both localization and segmentation. The method has potential for high-throughput neuronal soma segmentation in large-scale optical imaging data for quantitative analysis of neuron morphology.
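A contour-aware target of the sort used to split touching somata can be derived from an instance mask by subtracting its morphological erosion, yielding a one-pixel boundary map. A 2D sketch with SciPy (the paper works in 3D; this illustrates only the boundary-target idea, and the function name is hypothetical):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_mask(instance_mask):
    """Boundary target = mask minus its erosion (the outermost pixel ring)."""
    m = instance_mask.astype(bool)
    return m & ~binary_erosion(m)  # default 4-connected structuring element

m = np.zeros((5, 5), dtype=bool)
m[1:4, 1:4] = True          # a toy 3x3 "soma"
b = boundary_mask(m)
print(int(b.sum()))          # → 8: all but the single interior pixel
```

Training one head on the full mask and a second head on such boundary maps is what lets the network separate instances whose masks would otherwise merge.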
Affiliation(s)
- Tianyu Hu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Xiaofeng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shangbin Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Qian Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- School of Biomedical Engineering, Hainan University, Haikou, China
8
Mahmood F, Borders D, Chen RJ, Mckay GN, Salimian KJ, Baras A, Durr NJ. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE Trans Med Imaging 2020; 39:3257-3267. [PMID: 31283474 PMCID: PMC8588951 DOI: 10.1109/tmi.2019.2927182] [Indexed: 05/08/2023]
Abstract
Nuclei segmentation is a fundamental task for various computational pathology applications, including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and quality of labeled histopathology data available for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, is used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared to conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients, and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
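Spectral normalization, mentioned in the abstract, constrains each discriminator layer by its largest singular value, which is typically estimated with power iteration. A numpy sketch of the estimation step, simplified relative to an actual GAN training loop (names illustrative):

```python
import numpy as np

def spectral_norm(w, n_iter=50):
    """Estimate the largest singular value of a weight matrix by power iteration."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=w.shape[0])
    for _ in range(n_iter):
        v = w.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = w @ v
        u /= np.linalg.norm(u) + 1e-12
    return u @ w @ v  # Rayleigh-quotient estimate of sigma_max

w = np.diag([3.0, 1.0, 0.5])     # toy layer: singular values 3, 1, 0.5
sigma = spectral_norm(w)
print(round(sigma, 3))            # → 3.0
w_sn = w / sigma                  # normalized layer has spectral norm ≈ 1
```

Dividing the weights by this estimate bounds the layer's Lipschitz constant, which stabilizes adversarial training.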
9
Khoshdeli M, Winkelmaier G, Parvin B. Deep fusion of contextual and object-based representations for delineation of multiple nuclear phenotypes. Bioinformatics 2020; 35:4860-4861. [PMID: 31135022 DOI: 10.1093/bioinformatics/btz430] [Received: 02/14/2019] [Revised: 04/23/2019] [Accepted: 05/23/2019] [Indexed: 11/12/2022]
Abstract
MOTIVATION: Nuclear delineation and phenotypic profiling are important steps in the automated analysis of histology sections. However, these are challenging problems due to (i) technical variations (e.g. fixation, staining) that originate as a result of sample preparation; (ii) biological heterogeneity (e.g. vesicular versus high-chromatin phenotypes, nuclear atypia); and (iii) overlapping nuclei. This Application Note couples contextual information about the cellular organization with the individual signature of nuclei to improve performance. As a result, routine delineation of nuclei in H&E stained histology sections is enabled for either computer-aided pathology or integration with genome-wide molecular data.
RESULTS: The method has been evaluated on two independent datasets. One dataset originates from our lab and includes H&E stained sections of brain and breast samples. The second dataset is publicly available through IEEE with a focus on gland-based tissue architecture. We report an approximate AJI of 0.592 and an F1-score of 0.93 on both datasets.
AVAILABILITY AND IMPLEMENTATION: The code base, modified dataset, and results are publicly available.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
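The AJI (Aggregated Jaccard Index) reported above aggregates intersections and unions over matched instances, penalizing unmatched predictions. A simplified numpy sketch, assuming matching by maximum intersection rather than the exact published pairing rule:

```python
import numpy as np

def aji(gt, pred):
    """Simplified Aggregated Jaccard Index over instance label maps (0 = background)."""
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [j for j in np.unique(pred) if j != 0]
    used = set()
    inter_sum = union_sum = 0
    for i in gt_ids:
        g = gt == i
        best_j, best_inter, best_union = None, 0, int(g.sum())
        for j in pred_ids:
            p = pred == j
            inter = int(np.logical_and(g, p).sum())
            if inter > best_inter:
                best_j, best_inter = j, inter
                best_union = int(np.logical_or(g, p).sum())
        inter_sum += best_inter
        union_sum += best_union
        if best_j is not None:
            used.add(best_j)
    for j in pred_ids:                 # unmatched predictions inflate the union
        if j not in used:
            union_sum += int((pred == j).sum())
    return inter_sum / union_sum

gt = np.zeros((4, 4), dtype=int)
gt[:2, :2] = 1                         # instance 1
gt[2:, 2:] = 2                         # instance 2
print(aji(gt, gt))                     # → 1.0 for a perfect prediction
```

Unlike a plain pixel-level Jaccard index, missing or spurious instances still reduce the score, which is why AJI is favored for instance-level nuclei evaluation.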
Affiliation(s)
- Mina Khoshdeli
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
- Garrett Winkelmaier
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
- Bahram Parvin
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
10
Moen E, Bannon D, Kudo T, Graf W, Covert M, Van Valen D. Deep learning for cellular image analysis. Nat Methods 2019; 16:1233-1246. [PMID: 31133758 PMCID: PMC8759575 DOI: 10.1038/s41592-019-0403-1] [Received: 10/24/2018] [Accepted: 04/03/2019] [Indexed: 12/21/2022]
Abstract
Recent advances in computer vision and machine learning underpin a collection of algorithms with an impressive ability to decipher the content of images. These deep learning algorithms are being applied to biological images and are transforming the analysis and interpretation of imaging data. These advances are positioned to render difficult analyses routine and to enable researchers to carry out new, previously impossible experiments. Here we review the intersection between deep learning and cellular image analysis and provide an overview of both the mathematical mechanics and the programming frameworks of deep learning that are pertinent to life scientists. We survey the field's progress in four key applications: image classification, image segmentation, object tracking, and augmented microscopy. Last, we relay our labs' experience with three key aspects of implementing deep learning in the laboratory: annotating training data, selecting and training a range of neural network architectures, and deploying solutions. We also highlight existing datasets and implementations for each surveyed application.
Affiliation(s)
- Erick Moen
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
- Dylan Bannon
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
- Takamasa Kudo
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- William Graf
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
- Markus Covert
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- David Van Valen
- Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA
11
Xing F, Xie Y, Shi X, Chen P, Zhang Z, Yang L. Towards pixel-to-pixel deep nucleus detection in microscopy images. BMC Bioinformatics 2019; 20:472. [PMID: 31521104 PMCID: PMC6744696 DOI: 10.1186/s12859-019-3037-5] [Received: 10/05/2018] [Accepted: 08/21/2019] [Indexed: 12/21/2022]
Abstract
BACKGROUND: Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored to specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed.
RESULTS: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that, for a specific target dataset, training with images from the same type of organ is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned from images of different organs may not deliver desirable results and can require fine-tuning to be on a par with models trained on target data. We also observe that training with a mixture of target and other/non-target data does not always yield higher nucleus detection accuracy, and proper data manipulation during model training may be required to achieve good performance.
CONCLUSIONS: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which have not been reported in previous studies. The performance analysis and observations should be helpful for nucleus detection in microscopy images.
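FCN-based regression models for nucleus detection are commonly trained against proximity maps that peak at each annotated nucleus center. A hypothetical target-construction sketch (the paper's exact target design may differ; all names and parameters are illustrative):

```python
import numpy as np

def proximity_map(shape, centers, sigma=2.0):
    """Regression target: a Gaussian bump at each annotated nucleus center."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for cy, cx in centers:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        m = np.maximum(m, g)  # overlapping bumps take the maximum response
    return m

# Toy 32x32 image with two annotated nucleus centers
t = proximity_map((32, 32), [(8, 8), (20, 24)])
print(t.max())  # → 1.0 at each center
```

At inference, nucleus locations are recovered by finding local maxima of the predicted map above a threshold, which is what makes the pixel-to-pixel regression framing a detector.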
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, and the Data Science to Patient Value initiative, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, Colorado 80045, United States
- Yuanpu Xie
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Pingjun Chen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Zizhao Zhang
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States
- Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States