1. Erden MB, Cansiz S, Caki O, Khattak H, Etiz D, Yakar MC, Duruer K, Barut B, Gunduz-Demir C. FourierLoss: shape-aware loss function with Fourier descriptors. Neurocomputing 2025; 638:130155. DOI: 10.1016/j.neucom.2025.130155.
2. Alahmari SS, Goldgof D, Hall LO, Mouton PR. A review of nuclei detection and segmentation on microscopy images using deep learning with applications to unbiased stereology counting. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7458-7477. PMID: 36327184. DOI: 10.1109/tnnls.2022.3213407.
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research on many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease, with an emphasis on approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
3. Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. PMID: 38219643. DOI: 10.1016/j.compbiomed.2023.107912.
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari: Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh: Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
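Among the UDA families this survey covers, adversarial feature alignment is perhaps the most widely used. As a purely illustrative sketch (not code from the paper; all module and variable names are mine), the core gradient-reversal trick can be written in a few lines of PyTorch:

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DANN(nn.Module):
    """A feature extractor shared by a label head and a domain head; the reversed
    gradient pushes the shared features toward domain invariance."""
    def __init__(self, feat_dim=256, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.label_head = nn.Linear(feat_dim, n_classes)
        self.domain_head = nn.Linear(feat_dim, 2)  # source vs. target

    def forward(self, x, lam=1.0):
        f = self.features(x)
        return self.label_head(f), self.domain_head(GradReverse.apply(f, lam))
```

The label loss is computed on source samples only, while the domain loss is computed on both domains; the reversal makes the feature extractor maximize domain confusion.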
4. Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. PMID: 37802010. DOI: 10.1016/j.media.2023.102969.
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Xinyi Yang: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Toby C Cornish: Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Debashis Ghosh: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
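The abstract names two ingredients: task-augmented generators and a differentiable, stochastic augmentation module in front of the discriminator. The exact operators are not given in the abstract; the PyTorch sketch below (illustrative names, DiffAugment-style operators assumed) shows only the second idea:

```python
import torch
import torch.nn.functional as F

def diff_augment(x, brightness=0.2, shift_frac=0.125):
    """Differentiable stochastic augmentation for (B, C, H, W) images:
    random brightness jitter plus a random integer translation."""
    b = x.size(0)
    x = x + (torch.rand(b, 1, 1, 1, device=x.device) - 0.5) * 2 * brightness
    max_s = max(1, int(x.size(-1) * shift_frac))
    dy, dx = torch.randint(-max_s, max_s + 1, (2,)).tolist()
    return torch.roll(x, shifts=(dy, dx), dims=(2, 3))  # gradients flow through

def d_loss(D, real, fake):
    # The discriminator only ever sees augmented views of both real and
    # generated images, which curbs overfitting when target data is scarce.
    return (F.softplus(-D(diff_augment(real))).mean()
            + F.softplus(D(diff_augment(fake))).mean())
```

Because every operation is differentiable, the same augmentation can sit inside the generator loss as well, so generator gradients pass through it unchanged in spirit.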
5. Zhang Z, Hu Y, Yu G, Dai J. DeepTag: a general framework for fiducial marker design and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:2931-2944. PMID: 35552151. DOI: 10.1109/tpami.2022.3174603.
Abstract
A fiducial marker system usually consists of markers, a detection algorithm, and a coding system. The appearance of markers and the detection robustness are generally limited by the existing detection algorithms, which are hand-crafted with traditional low-level image processing techniques. Furthermore, a carefully designed coding system is required to overcome the shortcomings of both markers and detection algorithms. To improve the flexibility and robustness in various applications, we propose a general deep learning based framework, DeepTag, for fiducial marker design and detection. DeepTag not only supports detection of a wide variety of existing marker families, but also makes it possible to design new marker families with customized local patterns. Moreover, we propose an effective procedure to synthesize training data on the fly without manual annotations. Thus, DeepTag can easily adapt to existing and newly designed marker families. To validate DeepTag and existing methods, besides existing datasets, we collect a new, large, and challenging dataset where markers are placed at different view distances and angles. Experiments show that DeepTag well supports different marker families and greatly outperforms the existing methods in terms of both detection robustness and pose accuracy. Both code and dataset are available at https://herohuyongtao.github.io/research/publications/deep-tag/.
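DeepTag's on-the-fly training-data synthesis is only summarized in the abstract; one plausible minimal version (OpenCV, illustrative names and parameters, assuming single-channel images of matching size and dtype) renders a marker under a random homography and keeps the warped corners as free labels:

```python
import cv2
import numpy as np

def synthesize_view(marker, background, jitter=40):
    """Warp a square, single-channel marker into a background image under a
    random perspective transform; the warped corners serve as free labels."""
    s = marker.shape[0]
    h, w = background.shape[:2]
    src = np.float32([[0, 0], [s, 0], [s, s], [0, s]])
    base = np.float32([[w // 3, h // 3], [2 * w // 3, h // 3],
                       [2 * w // 3, 2 * h // 3], [w // 3, 2 * h // 3]])
    dst = base + np.random.uniform(-jitter, jitter, base.shape).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(marker, H, (w, h))
    mask = cv2.warpPerspective(np.full_like(marker, 255), H, (w, h))
    composite = np.where(mask > 0, warped, background)
    return composite, dst  # dst doubles as corner ground truth
```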
6. Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10372-5.
7. ELMGAN: a GAN-based efficient lightweight multi-scale-feature-fusion multi-task model. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109434.
8. Wang Z, Zhu X, Li A, Wang Y, Meng G, Wang M. Global and local attentional feature alignment for domain adaptive nuclei detection in histopathology images. Artif Intell Med 2022; 132:102341. DOI: 10.1016/j.artmed.2022.102341.
9. Graph-embedded online learning for cell detection and tumour proportion score estimation. Electronics 2022. DOI: 10.3390/electronics11101642.
Abstract
Cell detection in microscopy images can provide useful clinical information. Most deep learning methods for cell detection are fully supervised; without enough labelled samples, their accuracy drops rapidly. To handle limited annotations and massive unlabelled data, semi-supervised learning methods have been developed. However, many of these are trained off-line and are unable to process new incoming data to meet the needs of clinical diagnosis. Therefore, we propose a novel graph-embedded online learning network (GeoNet) for cell detection. It can locate and classify cells with dot annotations, saving considerable manpower. Trained on both historical data and reliable new samples, the online network can predict nuclear locations for upcoming new images while being optimized. To adapt more easily to open data, it employs dynamic graph regularization and learns the inherent nonlinear structure of cells. Moreover, GeoNet can be applied to downstream tasks such as quantitative estimation of the tumour proportion score (TPS), which is a useful indicator for lung squamous cell carcinoma treatment and prognostics. Experimental results for five large datasets with great variability in cell type and morphology validate the effectiveness and generalizability of the proposed method. For the lung squamous cell carcinoma (LUSC) dataset, the detection F1-scores of GeoNet for negative and positive tumour cells are 0.734 and 0.769, respectively, and the relative error of GeoNet for TPS estimation is 11.1%.
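For the downstream TPS estimate, the standard definition is the fraction of viable tumour cells that stain positive; given GeoNet's per-class detections, the computation is just (function name illustrative):

```python
def tumour_proportion_score(n_positive, n_negative):
    """TPS = positively stained tumour cells / all viable tumour cells x 100,
    computed directly from the detector's per-class counts."""
    total = n_positive + n_negative
    return 100.0 * n_positive / total if total else 0.0

# e.g. 540 positive and 1260 negative tumour-cell detections -> TPS = 30.0
print(tumour_proportion_score(540, 1260))
```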
10. Learning to count biological structures with raters’ uncertainty. Med Image Anal 2022; 80:102500. DOI: 10.1016/j.media.2022.102500.
11. Zhang L, Li J, Li P, Lu X, Gong M, Shen P, Zhu G, Shah SA, Bennamoun M, Qian K, Schuller BW. MEDAS: an open-source platform as a service to help break the walls between medicine and informatics. Neural Comput Appl 2022; 34:6547-6567. PMID: 35068703. PMCID: PMC8761112. DOI: 10.1007/s00521-021-06750-9.
Abstract
In the past decade, deep learning (DL) has achieved unprecedented success in numerous fields, such as computer vision and healthcare. In particular, DL is increasingly used in advanced medical image analysis for segmentation, classification, detection, and other tasks. On the one hand, researchers with medical, clinical, and informatics backgrounds have a strong need to leverage DL's power for medical image analysis and to share their knowledge, skills, and experience. On the other hand, barriers between disciplines stand in their way, often hampering full and efficient collaboration. To this end, we propose our novel open-source platform, MEDAS: the MEDical open-source platform As Service. To the best of our knowledge, MEDAS is the first open-source platform providing collaborative and interactive services that let researchers from a medical background use DL-related toolkits easily and let scientists and engineers from informatics build models faster. Based on tools and utilities from the idea of RINV (Rapid Implementation aNd Verification), our proposed platform implements tools for pre-processing, post-processing, augmentation, visualization, and other phases needed in medical image analysis. Five tasks, concerning the lung, liver, brain, chest, and pathology, are validated and shown to be efficiently realizable using MEDAS. MEDAS is available at http://medas.bnc.org.cn/.
Affiliation(s)
- Ping Li: Data and Virtual Research Room, Shanghai Broadband Network Center, Shanghai, China.
- Xiaoyuan Lu: Data and Virtual Research Room, Shanghai Broadband Network Center, Shanghai, China.
- Syed Afaq Shah: College of Science, Health, Engineering and Education, Murdoch University, Perth, Australia.
- Mohammed Bennamoun: School of Computer Science and Software Engineering, The University of Western Australia, Crawley, Australia.
- Kun Qian: School of Medical Technology, Beijing Institute of Technology, Beijing, China.
- Björn W. Schuller: GLAM - Group on Language, Audio & Music, Imperial College London, London, UK; Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany.
12. Song JE, Kim DH. Improved multi-echo gradient-echo-based myelin water fraction mapping using dimensionality reduction. IEEE Transactions on Medical Imaging 2022; 41:27-38. PMID: 34357864. DOI: 10.1109/tmi.2021.3102977.
Abstract
Multi-echo gradient-echo (mGRE)-based myelin water fraction (MWF) mapping is a promising myelin water imaging (MWI) modality but is vulnerable to noise and artifact corruption. The linear dimensionality reduction (LDR) method has recently shown improvements with regard to these challenges. However, magnitude-based low-rank operators have been shown to misestimate the MWF in regions with [Formula: see text] anisotropy. This paper presents a nonlinear dimensionality reduction (NLDR) method to estimate the MWF map better by encouraging nonlinear low dimensionality of mGRE signal sources. Specifically, we implemented a fully connected deep autoencoder to extract the low-dimensional features of complex-valued signals and incorporated a sparse regularization to separate the anomaly sources that do not reside on the low-dimensional manifold. Simulations and in vivo experiments were performed to evaluate the accuracy of the MWF map under various situations. The proposed NLDR-based MWF mapping improves the accuracy of the MWF map over the conventional nonlinear least-squares method and LDR-based MWF mapping, and it maintains robustness against noise and artifact corruption.
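The abstract pairs a fully connected autoencoder over complex-valued echo signals with a sparse term for off-manifold artifact sources. A schematic PyTorch version (dimensions and names are mine, not the paper's):

```python
import torch
from torch import nn

class SignalAE(nn.Module):
    """Fully connected autoencoder over multi-echo signals (real and imaginary
    parts concatenated), forcing them onto a low-dimensional nonlinear manifold."""
    def __init__(self, n_echoes=32, latent=8):
        super().__init__()
        d = 2 * n_echoes  # complex signal -> stacked real and imaginary parts
        self.enc = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, s):
        return self.dec(self.enc(s))

def nldr_loss(model, s, e, lam=0.1):
    # s: measured signals (B, 2*n_echoes); e: learnable sparse term absorbing
    # artifact sources that do not live on the low-dimensional manifold
    fit = ((s - e - model(s)) ** 2).sum(dim=1).mean()
    return fit + lam * e.abs().sum(dim=1).mean()
```

The L1 penalty keeps the anomaly term sparse, so ordinary signal variation is explained by the manifold and only outliers land in e.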
13. Seo H, Yu L, Ren H, Li X, Shen L, Xing L. Deep neural network with consistency regularization of multi-output channels for improved tumor detection and delineation. IEEE Transactions on Medical Imaging 2021; 40:3369-3378. PMID: 34048339. PMCID: PMC8692166. DOI: 10.1109/tmi.2021.3084748.
Abstract
Deep learning is becoming an indispensable tool for imaging applications, such as image segmentation, classification, and detection. In this work, we reformulate a standard deep learning problem as a new neural network architecture with multi-output channels, which reflect different facets of the objective, and apply the network to improve the performance of image segmentation. By adding one or more interrelated auxiliary-output channels, we impose an effective consistency regularization on the main task of pixel-wise classification (i.e., image segmentation). Specifically, multi-output-channel consistency regularization is realized by residual learning via additive paths that connect the main-output channel and auxiliary-output channels in the network. The method is evaluated on the detection and delineation of lung and liver tumors with public data. The results clearly show that multi-output-channel consistency implemented by residual learning improves the standard deep neural network. The proposed framework is quite broad and should find widespread applications in various deep learning problems.
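The key mechanism, auxiliary output channels tied to the main channel through additive residual paths, can be sketched as follows (hypothetical module and loss; the paper's exact architecture is not specified in the abstract):

```python
import torch
from torch import nn

class MultiOutputHead(nn.Module):
    """One main segmentation channel plus an auxiliary channel reached through
    an additive (residual) path, so the auxiliary task regularizes the main one."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.main = nn.Conv2d(in_ch, 1, 1)      # main mask logits
        self.residual = nn.Conv2d(in_ch, 1, 1)  # learns only the difference

    def forward(self, feats):
        y_main = self.main(feats)
        y_aux = y_main + self.residual(feats)   # auxiliary output via additive path
        return y_main, y_aux

def total_loss(y_main, y_aux, mask, aux_target, lam=0.5):
    # The auxiliary target (e.g. a boundary or dilated mask) is another facet
    # of the same objective; fitting both constrains the shared features.
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(y_main, mask) + lam * bce(y_aux, aux_target)
```

Because the auxiliary channel only adds a residual on top of the main logits, gradients from the auxiliary loss flow straight into the main prediction, which is what makes it act as a consistency regularizer.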
14. Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly supervised neuron reconstruction from optical microscopy images with morphological priors. IEEE Transactions on Medical Imaging 2021; 40:3205-3216. PMID: 33999814. DOI: 10.1109/tmi.2021.3080695.
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
15. Zhou X, Gu M, Cheng Z. Local integral regression network for cell nuclei detection. Entropy 2021; 23:1336. PMID: 34682060. PMCID: PMC8535160. DOI: 10.3390/e23101336.
Abstract
Nuclei detection is a fundamental task in the field of histopathology image analysis and remains challenging due to cellular heterogeneity. Recent studies explore convolutional neural networks either to delineate nuclei with precise boundaries (segmentation-based methods) or to locate their centroids (counting-based approaches). Although both families of methods have demonstrated considerable success, their fully supervised training demands laborious pixel-wise annotations manually produced by pathology experts. To alleviate this tedious effort and reduce the annotation cost, we propose a novel local integral regression network (LIRNet) that allows both fully and weakly supervised learning (FSL/WSL) frameworks for nuclei detection. Furthermore, LIRNet outputs a fine-grained density map of nuclei in which the localization of each nucleus is barely affected by post-processing. The quantitative experimental results demonstrate that the FSL version of LIRNet achieves state-of-the-art performance compared to other counterparts. In addition, the WSL version exhibits competitive detection performance while requiring only 17.5% of the full annotation effort.
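Although the paper's exact formulation isn't reproduced in the abstract, the name suggests supervision on local integrals of the predicted density map rather than on per-pixel values; a minimal sketch of that idea in PyTorch:

```python
import torch.nn.functional as F

def local_integral_loss(density, dots, cell=16):
    """Match predicted and annotated counts over non-overlapping local windows,
    so supervision acts on local integrals rather than exact pixel values."""
    # density, dots: (B, 1, H, W); dots holds a 1 at each annotated nucleus centre
    pred_counts = F.avg_pool2d(density, cell) * cell * cell  # window-wise sums
    true_counts = F.avg_pool2d(dots, cell) * cell * cell
    return F.mse_loss(pred_counts, true_counts)
```

Supervising window sums rather than pixel values is what makes dot annotations sufficient: the loss never asks where, inside a window, the density mass should sit.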
16. Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional mapping-based domain adaptation for nucleus detection in cross-modality microscopy images. IEEE Transactions on Medical Imaging 2021; 40:2880-2896. PMID: 33284750. PMCID: PMC8543886. DOI: 10.1109/tmi.2020.3042789.
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
17. Roszkowiak L, Korzynska A, Siemion K, Zak J, Pijanowska D, Bosch R, Lejeune M, Lopez C. System for quantitative evaluation of DAB&H-stained breast cancer biopsy digital images (CHISEL). Sci Rep 2021; 11:9291. PMID: 33927266. PMCID: PMC8085130. DOI: 10.1038/s41598-021-88611-y.
Abstract
This study presents CHISEL (Computer-assisted Histopathological Image Segmentation and EvaLuation), an end-to-end system capable of quantitative evaluation of benign and malignant (breast cancer) digitized tissue samples with immunohistochemical nuclear staining of varying intensity and compactness. Its distinguishing features are seamless segmentation based on region-of-interest cropping and an explicit nuclei-cluster-splitting step followed by boundary refinement. The system utilizes machine learning and recursive local processing to eliminate distorted (inaccurate) outlines. The method was validated on two labeled datasets, confirming the relevance of the results. The evaluation was based on the IISPV dataset of tissue from biopsies of breast cancer patients, with markers of T cells, along with the Warwick Beta Cell Dataset of DAB&H-stained tissue from postmortem diabetes patients. Comparing the detected and classified objects against the ground truth, we conclude that the proposed method achieves results better than or comparable to state-of-the-art methods. The system deals with the complex problem of nuclei quantification in digitized images of immunohistochemically stained tissue sections, achieving its best results for DAB&H-stained breast cancer tissue samples. It provides a user-friendly graphical interface and is optimized to fully utilize the available computing power while remaining accessible to users with fewer resources than deep learning techniques require.
Affiliation(s)
- Lukasz Roszkowiak: Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109 Warsaw, Poland.
- Anna Korzynska: Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109 Warsaw, Poland.
- Krzysztof Siemion: Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109 Warsaw, Poland; Medical Pathomorphology Department, Medical University of Bialystok, Białystok, Poland.
- Jakub Zak: Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109 Warsaw, Poland.
- Dorota Pijanowska: Nalecz Institute of Biocybernetics and Biomedical Engineering, Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109 Warsaw, Poland.
- Ramon Bosch: Pathology Department, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain.
- Marylene Lejeune: Molecular Biology and Research Section, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain.
- Carlos Lopez: Molecular Biology and Research Section, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain.
18. He S, Minn KT, Solnica-Krezel L, Anastasio MA, Li H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal 2021; 68:101892. PMID: 33285481. PMCID: PMC7856299. DOI: 10.1016/j.media.2020.101892.
Abstract
Accurately counting the number of cells in microscopy images is required in many medical diagnoses and biological studies. This task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex backgrounds, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we propose a new density regression-based method for automatically counting cells in microscopy images. The proposed method introduces two innovations over other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) that employs multi-scale image features for the estimation of cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) are employed to assist in the training of intermediate layers of the designed C-FCRN to improve the DRM performance on unseen datasets. Experimental studies evaluated on four datasets demonstrate the superior performance of the proposed method.
Affiliation(s)
- Shenghua He: Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA.
- Kyaw Thu Minn: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA; Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA.
- Lilianna Solnica-Krezel: Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Center of Regenerative Medicine, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA.
- Mark A Anastasio: Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA.
- Hua Li: Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Carle Cancer Center, Carle Foundation Hospital, Urbana, IL 61801, USA.
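In density regression-based counting generally (this is the standard recipe, not code from the paper), the training target is built by placing a Gaussian at each dot annotation, and the predicted count is the sum of the estimated map:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dots_to_density(dot_coords, shape, sigma=4.0):
    """Build the regression target: a Gaussian kernel at every annotated cell
    centre; the map sums to the true cell count."""
    d = np.zeros(shape, dtype=np.float32)
    for r, c in dot_coords:
        d[int(r), int(c)] += 1.0
    return gaussian_filter(d, sigma)  # mass-preserving smoothing

density = dots_to_density([(30, 40), (30, 44), (90, 10)], (128, 128))
print(round(float(density.sum())))  # -> 3: the count is the sum of the map
```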
19. Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. DOI: 10.1016/b978-0-12-821259-2.00011-9.
20. Javed S, Mahmood A, Werghi N, Benes K, Rajpoot N. Multiplex cellular communities in multi-gigapixel colorectal cancer histology images for tissue phenotyping. IEEE Transactions on Image Processing 2020; 29:9204-9219. PMID: 32966218. DOI: 10.1109/tip.2020.3023795.
Abstract
In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches which may not carry biological significance. In this work, we propose a novel multiplex cellular community-based algorithm for tissue phenotyping integrating cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods as well as to cellular community based methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity and multi-resolution deep features. Using these graphs, we compute cellular connectivity features which are then employed for the construction of a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function. The proposed objective function computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace using the Grassmann manifold. We evaluate our proposed algorithm on three publicly available datasets for tissue phenotyping, demonstrating a significant improvement over existing state-of-the-art methods.
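The subspace-averaging step can be written schematically. With U_i an orthonormal basis of the low-dimensional subspace extracted from the i-th cellular network, a common subspace U on the Grassmann manifold is the minimizer of (notation mine, not the paper's):

```latex
\min_{U \in \mathbb{R}^{n \times k},\; U^{\top} U = I_k}
\sum_{i=1}^{m} \bigl\lVert U U^{\top} - U_i U_i^{\top} \bigr\rVert_F^{2}
```

Expanding the Frobenius norm reduces this to maximizing tr(Uᵀ(Σᵢ UᵢUᵢᵀ)U), so under this schematic form the solution is the top-k eigenvectors of the summed projection matrices.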
21. Javed S, Mahmood A, Fraz MM, Koohbanani NA, Benes K, Tsang YW, Hewitt K, Epstein D, Snead D, Rajpoot N. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med Image Anal 2020; 63:101696. PMID: 32330851. DOI: 10.1016/j.media.2020.101696.
Abstract
Classification of the various types of tissue in cancer histology images based on their cellular composition is an important step towards the development of computational pathology tools for systematic digital profiling of the spatial tumor microenvironment. Most existing methods for tissue phenotyping are limited to the classification of tumor and stroma and require large amounts of annotated histology images, which are often not available. In the current work, we pose the problem of identifying distinct tissue phenotypes as finding communities in cellular graphs or networks. First, we train a deep neural network for cell detection and classification into five distinct cellular components. Taking the detected nuclei as nodes, we assign potential cell-cell connections using Delaunay triangulation, resulting in a cell-level graph. From this cell graph, a feature vector capturing the potential cell-cell connections of different cell types is computed. These feature vectors are used to construct a patch-level graph based on the chi-square distance. We map patch-level nodes to the geometric space by representing each node as a vector of geodesic distances from other nodes in the network and iteratively drifting the patch nodes in the direction of positive density gradients towards maximum density regions. The proposed algorithm is evaluated on a publicly available dataset and another new large-scale dataset consisting of 280K patches of seven tissue phenotypes. The estimated communities have significant biological meaning, as verified by expert pathologists. A comparison with current state-of-the-art methods reveals significant performance improvement in tissue phenotyping.
Affiliation(s)
- Sajid Javed: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Khalifa University Center for Autonomous Robotic Systems (KUCARS), Abu Dhabi, P.O. Box 127788, UAE.
- Arif Mahmood: Department of Computer Science, Information Technology University, Lahore, Pakistan.
- Muhammad Moazam Fraz: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; National University of Science and Technology (NUST), Islamabad, Pakistan.
- Ksenija Benes: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK.
- Yee-Wah Tsang: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK.
- Katherine Hewitt: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK.
- David Epstein: Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK.
- David Snead: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK.
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, UK.
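The cell-graph construction step the abstract describes (nuclei as nodes, Delaunay edges as candidate cell-cell connections) is easy to reproduce with SciPy; a minimal sketch:

```python
import numpy as np
from scipy.spatial import Delaunay

def cell_graph_edges(centroids):
    """Nuclei centroids become nodes; Delaunay triangulation edges become the
    candidate cell-cell connections of the cell-level graph."""
    tri = Delaunay(np.asarray(centroids, dtype=float))
    edges = set()
    for a, b, c in tri.simplices:  # each triangle contributes three edges
        edges |= {tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))}
    return sorted(edges)

print(cell_graph_edges([(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)]))
```

Per-node edge counts broken down by the classes of the connected nuclei then give exactly the kind of cell-cell connectivity feature vector the paper builds its patch-level graph from.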
22. Jiang H, Li S, Liu W, Zheng H, Liu J, Zhang Y. Geometry-aware cell detection with deep learning. mSystems 2020; 5:e00840-19. PMID: 32019836. PMCID: PMC7002118. DOI: 10.1128/msystems.00840-19.
Abstract
Analyzing cells and tissues under a microscope is a cornerstone of biological research and clinical practice. However, cell recognition through a microscope is still time-consuming and lacks both accuracy and consistency. Despite enormous progress in computer-aided microscopy cell detection, especially with recent deep-learning-based techniques, it is still difficult to translate an established method directly to a new cell target without extensive modification. The morphology of a cell is complex and highly varied, but it has long been known that cells show a nonrandom geometric order, with a given cell type forming a distinct and defined shape. Thus, we propose a geometry-aware deep-learning method, geometric-feature spectrum ExtremeNet (GFS-ExtremeNet), for cell detection. GFS-ExtremeNet is built on the framework of ExtremeNet with a collection of geometric features, resulting in the accurate detection of any given cell target. We obtained promising detection results with microscopic images of publicly available mammalian cell nuclei and newly collected protozoa, whose cell shapes and sizes varied. Even more strikingly, our method was able to detect unicellular parasites within red blood cells without misdiagnosis of one another.

Importance: Automated diagnostic microscopy powered by deep learning is useful, particularly in rural areas. However, there is no general method for object detection of different cells. In this study, we developed GFS-ExtremeNet, a geometry-aware deep-learning method based on the detection of four extreme keypoints for each object (topmost, bottommost, rightmost, and leftmost) and its center point. A postprocessing step, the adjacency spectrum, measures whether the distances between the keypoints are below a certain threshold for a particular cell candidate. Our newly proposed geometry-aware deep-learning method outperformed other conventional object detection methods and can be applied to any type of cell with a certain geometric order. Our GFS-ExtremeNet approach opens a new window for the development of automated cell detection systems.
Affiliation(s)
- Hao Jiang: College of Science, Harbin Institute of Technology, Shenzhen, China.
- Sen Li: College of Science, Harbin Institute of Technology, Shenzhen, China.
- Weihuang Liu: College of Science, Harbin Institute of Technology, Shenzhen, China.
- Hongjin Zheng: College of Science, Harbin Institute of Technology, Shenzhen, China.
- Jinghao Liu: College of Science, Harbin Institute of Technology, Shenzhen, China.
- Yang Zhang: College of Science, Harbin Institute of Technology, Shenzhen, China.
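The adjacency-spectrum postprocessing is described only at a high level; in spirit it checks that the four extreme keypoints and the centre prediction are mutually consistent before accepting a cell candidate. A toy version of such a check (thresholds and names are illustrative, not the paper's):

```python
import math

def consistent_candidate(top, bottom, left, right, center, tol=0.3):
    """Accept a cell candidate only if the centre implied by its four extreme
    keypoints lies close to the predicted centre, relative to its size."""
    cx = (left[0] + right[0]) / 2.0          # points are (x, y)
    cy = (top[1] + bottom[1]) / 2.0
    size = max(right[0] - left[0], bottom[1] - top[1])
    return size > 0 and math.hypot(center[0] - cx, center[1] - cy) <= tol * size

# Extreme points of a roughly round cell plus a nearby centre prediction -> True
print(consistent_candidate((10, 2), (10, 22), (0, 12), (20, 12), (10.5, 12.3)))
```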