1. Xu J, Shi L, Li S, Zhang Y, Zhao G, Shi Y, Li J, Gao Y. PointFormer: Keypoint-Guided Transformer for Simultaneous Nuclei Segmentation and Classification in Multi-Tissue Histology Images. IEEE Trans Image Process 2025; 34:2883-2895. [PMID: 40323744] [DOI: 10.1109/tip.2025.3565184]
Abstract
Automatic nuclei segmentation and classification (NSC) is a fundamental prerequisite in digital pathology analysis, as it enables the quantification of biomarkers and histopathological features for precision medicine. Although nuclei are small, their global spatial distribution, brightness contrast, and color correlation between nucleus and background have been recognized as key cues for accurate nuclei segmentation in actual clinical practice. Although recent breakthroughs in medical image segmentation have been achieved by Transformer-based methods, their adaptability to segmenting and classifying nuclei in histopathological images has rarely been investigated. Moreover, severe overlap of nuclei and large intra-class variability are common in clinical wild data. Prevailing methods based on polygonal representations or distance maps are limited by empirically designed post-processing strategies, resulting in ineffective segmentation of large irregular nuclei instances. To address these challenges, we propose a keypoint-guided tri-decoder Transformer (PointFormer) for simultaneous NSC. Specifically, the overall NSC task is decoupled into a multi-task learning problem, where a tri-decoder structure is employed to decode nuclei instances, edges, and types, respectively. The nuclei detection and classification (NDC) subtask is reformulated as a semantic keypoint estimation problem. Meanwhile, a novel attention-guiding strategy is introduced to capture strong inter-branch correlations and mitigate inconsistencies between multi-decoder predictions. Finally, a multi-local perception module is designed as the base building block of PointFormer to achieve a trade-off between local and global context and to reduce model complexity.
Comprehensive quantitative and qualitative experimental results on three datasets of different sizes demonstrate the superiority of the proposed method over prevalent methods, especially on the PanNuke dataset, where it achieves 70.6% bPQ.
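To make the semantic keypoint estimation idea concrete, here is a minimal sketch (an illustration of the general technique, not the authors' implementation): nuclei centers are decoded from a predicted heatmap by keeping local maxima above a confidence threshold.

```python
import numpy as np

def extract_keypoints(heatmap, thresh=0.5):
    """Return (row, col) coordinates of local maxima above `thresh`.

    A pixel is kept if it exceeds the threshold and is >= all of its
    8 neighbours -- a simple stand-in for keypoint decoding with NMS.
    """
    h, w = heatmap.shape
    padded = np.pad(heatmap, 1, mode="constant", constant_values=-np.inf)
    peaks = []
    for r in range(h):
        for c in range(w):
            v = heatmap[r, c]
            if v < thresh:
                continue
            window = padded[r:r + 3, c:c + 3]  # 3x3 neighbourhood of (r, c)
            if v >= window.max():
                peaks.append((r, c))
    return peaks

# toy heatmap with two isolated peaks
hm = np.zeros((8, 8))
hm[2, 2] = 0.9
hm[5, 6] = 0.8
print(extract_keypoints(hm))  # [(2, 2), (5, 6)]
```

In a real pipeline the heatmap would come from the detection decoder, and each keypoint would additionally carry a class score for the NDC subtask.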
2. Du Y, Chen X, Fu Y. Multiscale transformers and multi-attention mechanism networks for pathological nuclei segmentation. Sci Rep 2025; 15:12549. [PMID: 40221423] [PMCID: PMC11993704] [DOI: 10.1038/s41598-025-90397-2]
Abstract
Pathology nuclei segmentation is crucial for computer-aided diagnosis in pathology. However, the high density of nuclei, complex backgrounds, and blurred cell boundaries make pathology cell segmentation a challenging problem. In this paper, we propose a network model for pathology image segmentation based on a multi-scale Transformer and a multi-attention mechanism. To address the difficulty of feature extraction caused by the high density of cell nuclei and the complexity of the background, a dense attention module is embedded in the encoder, which improves the learning of target cell information and minimizes target information loss. Additionally, to address the poor segmentation accuracy caused by blurred cell boundaries, a Multi-scale Transformer Attention module is embedded between the encoder and decoder, improving the transfer of boundary feature information and making the segmented cell boundaries more accurate. Experimental results on the MoNuSeg, GlaS and CoNSeP datasets demonstrate the network's superior accuracy.
Affiliation(s)
- Yongzhao Du
- College of Engineering, Huaqiao University, Fujian, 362021, China
- College of Internet of Things Industry, Huaqiao University, Fujian, 362021, China
- Xin Chen
- College of Engineering, Huaqiao University, Fujian, 362021, China
- Yuqing Fu
- College of Engineering, Huaqiao University, Fujian, 362021, China
- College of Internet of Things Industry, Huaqiao University, Fujian, 362021, China
3. Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911] [DOI: 10.1109/rbme.2024.3357877]
Abstract
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
4. Krikid F, Rositi H, Vacavant A. State-of-the-Art Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues. J Imaging 2024; 10:311. [PMID: 39728208] [DOI: 10.3390/jimaging10120311]
Abstract
Microscopic image segmentation (MIS) is a fundamental task in medical imaging and biological research, essential for precise analysis of cellular structures and tissues. Despite its importance, the segmentation process encounters significant challenges, including variability in imaging conditions, complex biological structures, and artefacts (e.g., noise), which can compromise the accuracy of traditional methods. The emergence of deep learning (DL) has catalyzed substantial advancements in addressing these issues. This systematic literature review (SLR) provides a comprehensive overview of state-of-the-art DL methods developed over the past six years for the segmentation of microscopic images. We critically analyze key contributions, emphasizing how these methods specifically tackle challenges in cell, nucleus, and tissue segmentation. Additionally, we evaluate the datasets and performance metrics employed in these studies. By synthesizing current advancements and identifying gaps in existing approaches, this review not only highlights the transformative potential of DL in enhancing diagnostic accuracy and research efficiency but also suggests directions for future research. The findings of this study have significant implications for improving methodologies in medical and biological applications, ultimately fostering better patient outcomes and advancing scientific understanding.
Affiliation(s)
- Fatma Krikid
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
- Hugo Rositi
- LORIA, CNRS, Université de Lorraine, F-54000 Nancy, France
- Antoine Vacavant
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
5. Cao R, Meng Q, Tan D, Wei P, Ding Y, Zheng C. AER-Net: Attention-Enhanced Residual Refinement Network for Nuclei Segmentation and Classification in Histology Images. Sensors (Basel) 2024; 24:7208. [PMID: 39598984] [PMCID: PMC11598247] [DOI: 10.3390/s24227208]
Abstract
The accurate segmentation and classification of nuclei in histological images are crucial for the diagnosis and treatment of colorectal cancer. However, the aggregation of nuclei and intra-class variability in histology images present significant challenges for nuclei segmentation and classification. In addition, the imbalance among nuclei classes exacerbates the difficulty of classifying and segmenting nuclei with deep learning models. To address these challenges, we present a novel attention-enhanced residual refinement network (AER-Net), which consists of one encoder and three decoder branches that share the same network structure. In addition to the nuclei instance segmentation branch and the nuclei classification branch, one branch predicts the vertical and horizontal distances from each pixel to its nuclear center, which are combined with the output of the segmentation branch to improve the final segmentation results. AER-Net utilizes an attention-enhanced encoder module to focus on more valuable features. To further refine predictions and achieve more accurate results, an attention-enhancing residual refinement module is employed at the end of each decoder branch. Moreover, the coarse and refined predictions are combined using a loss function that employs cross-entropy loss and generalized Dice loss to efficiently tackle the challenge of class imbalance among nuclei in histology images. Compared with other state-of-the-art methods on two colorectal cancer datasets and a pan-cancer dataset, AER-Net demonstrates outstanding performance, validating its effectiveness in nuclei segmentation and classification.
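The class-imbalance recipe described here, cross-entropy plus generalized Dice, can be sketched as follows. This is a generic formulation in the spirit of Sudre et al. (2017), with inverse-square class weighting, not AER-Net's exact loss.

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss over (C, H, W) tensors.

    Classes are weighted by the inverse square of their pixel count,
    so rare nuclei classes contribute as much as abundant ones.
    """
    w = 1.0 / (onehot.sum(axis=(1, 2)) ** 2 + eps)          # (C,) class weights
    inter = (w * (probs * onehot).sum(axis=(1, 2))).sum()
    union = (w * (probs + onehot).sum(axis=(1, 2))).sum()
    return 1.0 - 2.0 * inter / (union + eps)

def combined_loss(probs, onehot, alpha=0.5, eps=1e-12):
    """Weighted sum of pixel-wise cross-entropy and generalized Dice."""
    ce = -(onehot * np.log(probs + eps)).sum(axis=0).mean()
    return alpha * ce + (1 - alpha) * generalized_dice_loss(probs, onehot)
```

A perfect prediction drives both terms toward zero; the `alpha` balance between the pixel-wise and region-wise terms is an assumed hyperparameter here.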
Affiliation(s)
- Ruifen Cao
- The Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Qingbin Meng
- The Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Dayu Tan
- Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China
- Pijing Wei
- Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China
- Yun Ding
- School of Artificial Intelligence, Anhui University, Hefei 230601, China
- Chunhou Zheng
- School of Artificial Intelligence, Anhui University, Hefei 230601, China
6. Lou W, Wan X, Li G, Lou X, Li C, Gao F, Li H. Structure Embedded Nucleus Classification for Histopathology Images. IEEE Trans Med Imaging 2024; 43:3149-3160. [PMID: 38607704] [DOI: 10.1109/tmi.2024.3388328]
Abstract
Nuclei classification provides valuable information for histopathology image analysis. However, the large variations in the appearance of different nuclei types cause difficulties in identifying nuclei. Most neural network based methods are affected by the local receptive field of convolutions, and pay less attention to the spatial distribution of nuclei or the irregular contour shape of a nucleus. In this paper, we first propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order, and employ a recurrent neural network that aggregates the sequential change in distance between key points to obtain learnable shape features. Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations. To capture the correlations between the categories of nuclei and their surrounding tissue patterns, we further introduce edge features that are defined as the background textures between adjacent nuclei. Lastly, we integrate both polygon and graph structure learning mechanisms into a whole framework that can extract intra and inter-nucleus structural characteristics for nuclei classification. Experimental results show that the proposed framework achieves significant improvements compared to the previous methods. Code and data are made available via https://github.com/lhaof/SENC.
7. Lin Y, Wang Z, Zhang D, Cheng KT, Chen H. BoNuS: Boundary Mining for Nuclei Segmentation With Partial Point Labels. IEEE Trans Med Imaging 2024; 43:2137-2147. [PMID: 38231818] [DOI: 10.1109/tmi.2024.3355068]
Abstract
Nuclei segmentation is a fundamental prerequisite in the digital pathology workflow. The development of automated methods for nuclei segmentation enables quantitative analysis of the wide existence and large variances in nuclei morphometry in histopathology images. However, manual annotation of tens of thousands of nuclei is tedious and time-consuming, requiring a significant amount of human effort and domain-specific expertise. To alleviate this problem, in this paper, we propose a weakly-supervised nuclei segmentation method that only requires partial point labels of nuclei. Specifically, we propose a novel boundary mining framework for nuclei segmentation, named BoNuS, which simultaneously learns nuclei interior and boundary information from the point labels. To achieve this goal, we propose a novel boundary mining loss, which guides the model to learn the boundary information by exploring the pairwise pixel affinity in a multiple-instance learning manner. Then, we consider a more challenging problem, i.e., partial point labels, where we propose a nuclei detection module with curriculum learning to detect the missing nuclei with prior morphological knowledge. The proposed method is validated on three public datasets: the MoNuSeg, CPM, and CoNIC datasets. Experimental results demonstrate the superior performance of our method over state-of-the-art weakly-supervised nuclei segmentation methods. Code: https://github.com/hust-linyi/bonus.
8. Wang C, Li S, Ke J, Zhang C, Shen Y. RandStainNA++: Enhance Random Stain Augmentation and Normalization Through Foreground and Background Differentiation. IEEE J Biomed Health Inform 2024; 28:3660-3671. [PMID: 38502612] [DOI: 10.1109/jbhi.2024.3379280]
Abstract
The wide prevalence of staining variations in digital pathology presents a significant obstacle, often undermining the effectiveness of diagnosis and analysis. The current strategies to counteract this issue primarily revolve around Stain Normalization (SN) and Stain Augmentation (SA). Nonetheless, these methodologies come with inherent limitations. They struggle to adapt to the vast array of staining styles, tend to presuppose linear associations between color spaces, and often lead to unrealistic color transformations. In response to these challenges, we introduce RandStainNA++, a novel method seamlessly integrating SN and SA. This method exploits the versatility of random SN and SA within randomly selected color spaces, effectively managing variations for the foreground and background independently. By refining the transformations of staining styles for the foreground and background within a realistic scope, this strategy promotes the generation of more practical staining transformations during the training phase. Further enhancing our approach, we propose a unique self-distillation method. This technique incorporates prior knowledge of stain variation, substantially augmenting the generalization capability of the network. Strikingly, compared to conventional classification models, our method boosts performance by a significant margin of 16-25%. Furthermore, when juxtaposed with baseline segmentation models, the Dice score registers an increase of 0.06.
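As a crude illustration of the random-jitter idea behind stain augmentation (a sketch only: RandStainNA-style methods operate in randomly selected color spaces such as LAB or HED and fit realistic style statistics, which is not done here), each channel's mean and spread can be perturbed at random:

```python
import numpy as np

def random_stain_jitter(img, rng, sigma_mean=5.0, sigma_std=0.05):
    """Randomly shift each channel's mean and rescale its spread.

    `img` is an (H, W, C) uint8 image; `rng` is a numpy Generator.
    The sigma values here are illustrative, not fitted to real slides.
    """
    out = img.astype(float)
    for ch in range(out.shape[2]):
        mu = out[..., ch].mean()
        scale = 1.0 + rng.normal(0.0, sigma_std)   # jitter the spread
        shift = rng.normal(0.0, sigma_mean)        # jitter the mean
        out[..., ch] = (out[..., ch] - mu) * scale + mu + shift
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
aug = random_stain_jitter(np.full((2, 2, 3), 128, np.uint8), rng)
```

Applying such a transform independently to foreground (nuclei) and background pixels, as the paper proposes, would require a foreground mask in addition.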
9. Qian Z, Wang Z, Zhang X, Wei B, Lai M, Shou J, Fan Y, Xu Y. MSNSegNet: attention-based multi-shape nuclei instance segmentation in histopathology images. Med Biol Eng Comput 2024; 62:1821-1836. [PMID: 38401007] [DOI: 10.1007/s11517-024-03050-x]
Abstract
In clinical research, the segmentation of irregularly shaped nuclei, particularly in mesenchymal areas like fibroblasts, is crucial yet often neglected. These irregular nuclei are significant for assessing tissue repair in immunotherapy, a process involving neovascularization and fibroblast proliferation. Proper segmentation of these nuclei is vital for evaluating immunotherapy's efficacy, as it provides insights into pathological features. However, the challenge lies in the pronounced curvature variations of these non-convex nuclei, making their segmentation more difficult than that of regular nuclei. In this work, we introduce a previously unaddressed task, segmenting nuclei with both regular and irregular morphology, namely multi-shape nuclei segmentation, and propose a proposal-based method to perform it. By leveraging the two-stage structure of the proposal-based method, a powerful refinement module with high computational costs can be selectively deployed only in local regions, improving segmentation accuracy without compromising computational efficiency. We introduce a novel self-attention module to refine features in proposals for the sake of effectiveness and efficiency in the second stage. The self-attention module improves segmentation performance by capturing long-range dependencies to assist in distinguishing the foreground from the background. In this process, similar features receive high attention weights while dissimilar ones receive low attention weights. In the first stage, we introduce a residual attention module and a semantic-aware module to accurately predict candidate proposals. The two modules capture more interpretable features and introduce additional supervision through a semantic-aware loss. In addition, we construct a dataset with a higher proportion of non-convex nuclei than existing nuclei datasets, namely the multi-shape nuclei (MsN) dataset.
Our MSNSegNet method demonstrates notable improvements across various metrics compared to the second-highest-scoring methods. For all nuclei, the Dice score improved by approximately 1.66%, AJI by about 2.15%, and Dice-obj by roughly 0.65%. For non-convex nuclei, which are crucial in clinical applications, our method's AJI improved significantly by approximately 3.86% and Dice-obj by around 2.54%. These enhancements underscore the effectiveness of our approach to multi-shape nuclei segmentation, particularly in challenging scenarios involving irregularly shaped nuclei.
Affiliation(s)
- Ziniu Qian
- School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing 100191, China
- Zihua Wang
- School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing 100191, China
- Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing 100191, China
- Bingzheng Wei
- Xiaomi Corporation, Haidian District, Beijing 100085, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang Provincial Key Laboratory of Disease Proteomics and Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Zhejiang University, Hangzhou 310027, Zhejiang, China
- Jianzhong Shou
- Department of Urology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Changyang District, Beijing 100021, China
- Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing 100191, China
- Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Haidian District, Beijing 100191, China
10. Luna M, Chikontwe P, Park SH. Enhanced Nuclei Segmentation and Classification via Category Descriptors in the SAM Model. Bioengineering (Basel) 2024; 11:294. [PMID: 38534568] [DOI: 10.3390/bioengineering11030294]
Abstract
Segmenting and classifying nuclei in H&E histopathology images is often limited by the long-tailed distribution of nuclei types. However, the strong generalization ability of image segmentation foundation models like the Segment Anything Model (SAM) can help improve the detection quality of rare types of nuclei. In this work, we introduce category descriptors to perform nuclei segmentation and classification by prompting the SAM model. We close the domain gap between histopathology and natural scene images by aligning features in low-level space while preserving the high-level representations of SAM. We performed extensive experiments on the Lizard dataset, validating the ability of our model to perform automatic nuclei segmentation and classification, especially for rare nuclei types, where we achieved a significant detection improvement of up to 12% in F1 score. Our model also maintains compatibility with manual point prompts for interactive refinement during inference without requiring any additional training.
Affiliation(s)
- Miguel Luna
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Philip Chikontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
11. Wu H, Wang Z, Zhao Z, Chen C, Qin J. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning. IEEE Trans Med Imaging 2023; 42:3794-3804. [PMID: 37610902] [DOI: 10.1109/tmi.2023.3307892]
Abstract
Deep learning models have achieved remarkable success in multi-type nuclei segmentation. These models are mostly trained at once with the full annotation of all types of nuclei available, while lacking the ability to continually learn new classes due to catastrophic forgetting. In this paper, we study the practical and important class-incremental continual learning problem, where the model is incrementally updated to new classes without access to previous data. We propose a novel continual nuclei segmentation method that avoids forgetting knowledge of old classes and facilitates the learning of new classes by achieving feature-level knowledge distillation with prototype-wise relation distillation and contrastive learning. Concretely, prototype-wise relation distillation imposes constraints on the inter-class relation similarity, encouraging the encoder to extract similar class distributions for old classes in the feature space. Prototype-wise contrastive learning with a hard sampling strategy enhances the intra-class compactness and inter-class separability of features, improving the performance on both old and new classes. Experiments on two multi-type nuclei segmentation benchmarks, i.e., MoNuSAC and CoNSeP, demonstrate the effectiveness of our method with superior performance over many competitive methods. Codes are available at https://github.com/zzw-szu/CoNuSeg.
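The prototype-wise machinery can be illustrated with a minimal sketch (an illustration of the general technique, not the paper's code): a class prototype is the mean feature vector of that class, and the matrix of cosine similarities between prototypes is the inter-class relation that distillation would keep stable between the old and updated models.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector (prototype) per class.

    features: (N, D) array of per-pixel or per-nucleus embeddings;
    labels: (N,) integer class labels.
    """
    protos = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return protos

def prototype_relation(protos):
    """Pairwise cosine-similarity matrix between prototypes.

    Penalizing the change of this matrix across incremental steps is
    one way to realize prototype-wise relation distillation.
    """
    unit = protos / (np.linalg.norm(protos, axis=1, keepdims=True) + 1e-12)
    return unit @ unit.T
```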
12. Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE Trans Med Imaging 2023; 42:947-958. [PMID: 36355729] [DOI: 10.1109/tmi.2022.3221666]
Abstract
Recently, deep neural networks, which require a large amount of annotated samples, have been widely applied in nuclei instance segmentation of H&E stained pathology images. However, it is inefficient and unnecessary to label all pixels for a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the workload of annotation. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to the training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. The experimental results show that our proposed method can obtain the same level of performance as a fully-supervised baseline by annotating less than 5% of pixels on some benchmarks.
13. Zhao T, Fu C, Tian Y, Song W, Sham CW. GSN-HVNET: A Lightweight, Multi-Task Deep Learning Framework for Nuclei Segmentation and Classification. Bioengineering (Basel) 2023; 10:393. [PMID: 36978784] [PMCID: PMC10045412] [DOI: 10.3390/bioengineering10030393]
Abstract
Nuclei segmentation and classification are two basic and essential tasks in computer-aided diagnosis of digital pathology images, and deep-learning-based methods have achieved significant success on them. Unfortunately, most existing studies accomplish the two tasks by splicing two related neural networks directly, resulting in repetitive computation and a redundant, large neural network. Thus, this paper proposes a lightweight deep learning framework (GSN-HVNET) with an encoder-decoder structure for simultaneous segmentation and classification of nuclei. The decoder consists of three branches outputting the semantic segmentation of nuclei, the horizontal and vertical (HV) distances of nuclei pixels to their mass centers, and the class of each nucleus, respectively. The instance segmentation results are obtained by combining the outputs of the first and second branches. To reduce the computational cost and improve network stability under small batch sizes, we propose two newly designed blocks, Residual-Ghost-SN (RGS) and Dense-Ghost-SN (DGS). Furthermore, considering practical usage in pathological diagnosis, we redefine the classification principle of the CoNSeP dataset. Experimental results demonstrate that the proposed model outperforms other state-of-the-art models in terms of segmentation and classification accuracy by a significant margin while maintaining high computational efficiency.
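The HV-distance target regressed by the second decoder branch can be sketched as follows (a minimal reconstruction of the standard HoVer-Net-style target maps, not the authors' code): every pixel of an instance stores its horizontal and vertical offset from the instance centroid, normalized to [-1, 1].

```python
import numpy as np

def hv_maps(inst_mask):
    """Per-pixel horizontal/vertical offsets to each nucleus centroid.

    inst_mask: (H, W) integer map, 0 = background, k > 0 = instance id.
    Returns two float maps normalized to [-1, 1] per instance; gradients
    of these maps are what post-processing uses to split touching nuclei.
    """
    h_map = np.zeros(inst_mask.shape, dtype=float)
    v_map = np.zeros(inst_mask.shape, dtype=float)
    for inst_id in np.unique(inst_mask):
        if inst_id == 0:                      # skip background
            continue
        rows, cols = np.nonzero(inst_mask == inst_id)
        dc = cols - cols.mean()               # horizontal offsets
        dr = rows - rows.mean()               # vertical offsets
        if np.abs(dc).max() > 0:
            dc = dc / np.abs(dc).max()
        if np.abs(dr).max() > 0:
            dr = dr / np.abs(dr).max()
        h_map[rows, cols] = dc
        v_map[rows, cols] = dr
    return h_map, v_map
```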
Affiliation(s)
- Tengfei Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110819, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Yunjia Tian
- State Grid Liaoning Information and Communication Company, Shenyang 110006, China
- Wei Song
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chiu-Wing Sham
- School of Computer Science, The University of Auckland, Auckland 1142, New Zealand
14. Krishnan AP, Song Z, Clayton D, Jia X, de Crespigny A, Carano RAD. Multi-arm U-Net with dense input and skip connectivity for T2 lesion segmentation in clinical trials of multiple sclerosis. Sci Rep 2023; 13:4102. [PMID: 36914715] [PMCID: PMC10011580] [DOI: 10.1038/s41598-023-31207-5]
Abstract
T2 lesion quantification plays a crucial role in monitoring disease progression and evaluating treatment response in multiple sclerosis (MS). We developed a 3D, multi-arm U-Net for T2 lesion segmentation, which was trained on a large, multicenter clinical trial dataset of relapsing MS. We investigated its generalization to other relapsing and primary progressive MS clinical trial datasets, and to an external dataset from the MICCAI 2016 MS lesion segmentation challenge. Additionally, we assessed the model's ability to reproduce the separation of T2 lesion volumes between treatment and control arms, and the association of baseline T2 lesion volumes with clinical disability scores compared with manual lesion annotations. The trained model achieved a mean Dice coefficient of ≥ 0.66 and a lesion detection sensitivity of ≥ 0.72 across the internal test datasets. On the external test dataset, the model achieved a mean Dice coefficient of 0.62, which is comparable to the 0.59 of the best model in the challenge, and a lesion detection sensitivity of 0.68. Lesion detection performance was reduced for smaller lesions (≤ 30 μL, 3-10 voxels). The model successfully maintained the separation of the longitudinal changes in T2 lesion volumes between the treatment and control arms. Such tools could facilitate semi-automated MS lesion quantification and reduce rater burden in clinical trials.
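For reference, the Dice coefficient reported throughout these segmentation papers is computed, for binary masks, as twice the overlap divided by the total mask size (a standard definition, not this paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    Returns 1.0 for identical non-empty masks and 0.0 for disjoint ones.
    `eps` guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```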
Affiliation(s)
- Anitha Priya Krishnan
- Data Analytics and Imaging, Pharma Personalized Healthcare, Genentech Inc., 600 E Grand Ave., South San Francisco, CA, 94080, USA
- Zhuang Song
- Data Analytics and Imaging, Pharma Personalized Healthcare, Genentech Inc., 600 E Grand Ave., South San Francisco, CA, 94080, USA
- David Clayton
- Clinical Imaging Group, gRED, Genentech Inc., South San Francisco, CA, USA
- Xiaoming Jia
- Translational Medicine OMNI - Biomarker Development, Genentech Inc., South San Francisco, CA, USA
- Alex de Crespigny
- Clinical Imaging Group, gRED, Genentech Inc., South San Francisco, CA, USA
- Richard A D Carano
- Data Analytics and Imaging, Pharma Personalized Healthcare, Genentech Inc., 600 E Grand Ave., South San Francisco, CA, 94080, USA
15

Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evolving Systems 2023; 15:1-46. [PMID: 38625364] [PMCID: PMC9987406] [DOI: 10.1007/s12530-023-09491-3]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it difficult to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning in the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
16

Chen S, Ding C, Liu M, Cheng J, Tao D. CPP-Net: Context-Aware Polygon Proposal Network for Nucleus Segmentation. IEEE Trans Image Process 2023; 32:980-994. [PMID: 37022023] [DOI: 10.1109/tip.2023.3237013]
Abstract
Nucleus segmentation is a challenging task due to the crowded distribution and blurry boundaries of nuclei. Recent approaches represent nuclei by means of polygons to differentiate between touching and overlapping nuclei and have accordingly achieved promising performance. Each polygon is represented by a set of centroid-to-boundary distances, which are in turn predicted by features of the centroid pixel for a single nucleus. However, using the centroid pixel alone does not provide sufficient contextual information for robust prediction and thus degrades the segmentation accuracy. To handle this problem, we propose a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation. First, we sample a point set rather than one single pixel within each cell for distance prediction. This strategy substantially enhances contextual information and thereby improves the robustness of the prediction. Second, we propose a Confidence-based Weighting Module, which adaptively fuses the predictions from the sampled point set. Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains the shape of the predicted polygons. Here, the SAP loss is based on an additional network that is pre-trained by means of mapping the centroid probability map and the pixel-to-boundary distance maps to a different nucleus representation. Extensive experiments justify the effectiveness of each component in the proposed CPP-Net. Finally, CPP-Net is found to achieve state-of-the-art performance on three publicly available databases, namely DSB2018, BBBC06, and PanNuke. Code of this paper is available at https://github.com/csccsccsccsc/cpp-net.
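The centroid-to-boundary distance encoding described above can be illustrated by reconstructing a polygon from distances predicted along evenly spaced rays (a simplified sketch; the ray parameterization is assumed, not taken from the paper's code):

```python
import math

def polygon_from_distances(cx, cy, dists):
    """Reconstruct polygon vertices from K centroid-to-boundary distances
    predicted along K evenly spaced ray angles (CPP-Net/StarDist-style encoding)."""
    k = len(dists)
    return [(cx + d * math.cos(2 * math.pi * i / k),
             cy + d * math.sin(2 * math.pi * i / k))
            for i, d in enumerate(dists)]

# A circle of radius 5 around (10, 10), encoded with 8 rays:
verts = polygon_from_distances(10.0, 10.0, [5.0] * 8)
print(verts[0])  # (15.0, 10.0)
```

CPP-Net's contribution is to predict these distances from a sampled point set inside each nucleus, rather than from the centroid pixel alone, and to fuse the predictions with confidence weights.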
17

Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. Proc IEEE Winter Conf Appl Comput Vis 2023; 2023:1918-1927. [PMID: 36865487] [PMCID: PMC9977454] [DOI: 10.1109/wacv56688.2023.00196]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stains. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington
- Microsoft Research
18

Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
19

Zhang W, Zhang J, Yang S, Wang X, Yang W, Huang J, Wang W, Han X. Knowledge-Based Representation Learning for Nucleus Instance Classification From Histopathological Images. IEEE Trans Med Imaging 2022; 41:3939-3951. [PMID: 36037453] [DOI: 10.1109/tmi.2022.3201981]
Abstract
The classification of nuclei in H&E-stained histopathological images is a fundamental step in the quantitative analysis of digital pathology. Most existing methods employ multi-class classification on the detected nucleus instances, while the limited scale of available annotations greatly constrains their performance. Moreover, they often downplay the contextual information surrounding nucleus instances that is critical for classification. To explicitly provide contextual information to the classification model, we design a new structured input consisting of a content-rich image patch and a target instance mask. The image patch provides rich contextual information, while the target instance mask indicates the location of the instance to be classified and emphasizes its shape. Benefiting from our structured input format, we propose Structured Triplet for representation learning, a triplet learning framework on unlabelled nucleus instances with customized positive and negative sampling strategies. We pre-train a feature extraction model based on this framework with a large-scale unlabeled dataset, making it possible to train an effective classification model with limited annotated data. We also add two auxiliary branches, namely the attribute learning branch and the conventional self-supervised learning branch, to further improve its performance. As part of this work, we will release a new dataset of H&E-stained pathology images with nucleus instance masks, containing 20,187 patches of size 1024×1024, where each patch comes from a different whole-slide image. The model pre-trained on this dataset with our framework significantly reduces the burden of extensive labeling. We show a substantial improvement in nucleus classification accuracy compared with the state-of-the-art methods.
20

Liang P, Zhang Y, Ding Y, Chen J, Madukoma CS, Weninger T, Shrout JD, Chen DZ. H-EMD: A Hierarchical Earth Mover's Distance Method for Instance Segmentation. IEEE Trans Med Imaging 2022; 41:2582-2597. [PMID: 35446762] [DOI: 10.1109/tmi.2022.3169449]
Abstract
Deep learning (DL) based semantic segmentation methods have achieved excellent performance in biomedical image segmentation, producing high-quality probability maps from which rich instance information can be extracted to facilitate instance segmentation. While numerous efforts have been put into developing new DL semantic segmentation models, less attention has been paid to a key issue: how to effectively explore their probability maps to attain the best possible instance segmentation. We observe that probability maps produced by DL semantic segmentation models can be used to generate many possible instance candidates, and accurate instance segmentation can be achieved by selecting from them a set of "optimized" candidates as output instances. Further, the generated instance candidates form a well-behaved hierarchical structure (a forest), which allows selecting instances in an optimized manner. Hence, we propose a novel framework, called hierarchical earth mover's distance (H-EMD), for instance segmentation in biomedical 2D+time videos and 3D images, which judiciously incorporates consistent instance selection with semantic-segmentation-generated probability maps. H-EMD contains two main stages: (1) instance candidate generation: capturing instance-structured information in probability maps by generating many instance candidates in a forest structure; (2) instance candidate selection: selecting instances from the candidate set for final instance segmentation. We formulate a key instance selection problem on the instance candidate forest as an optimization problem based on the earth mover's distance (EMD), and solve it by integer linear programming. Extensive experiments on eight biomedical video or 3D datasets demonstrate that H-EMD consistently boosts DL semantic segmentation models and is highly competitive with state-of-the-art methods.
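The candidate-generation stage can be sketched by thresholding a probability map at several levels and taking connected components; components at higher thresholds nest inside lower-threshold ones, forming the candidate forest (the EMD-based selection step is not shown; names and thresholds are illustrative):

```python
def connected_components(mask):
    """4-connected components of a binary 2D grid (list of lists);
    returns a list of pixel-coordinate sets, one per component."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and (sy, sx) not in seen:
                stack, comp = [(sy, sx)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < h and 0 <= x < w) or not mask[y][x]:
                        continue
                    seen.add((y, x)); comp.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                comps.append(comp)
    return comps

def instance_candidates(prob, thresholds=(0.3, 0.5, 0.7)):
    """Threshold the probability map at several levels; components at higher
    thresholds nest inside lower-threshold ones, forming a candidate forest."""
    return {t: connected_components([[p >= t for p in row] for row in prob])
            for t in thresholds}

prob = [[0.2, 0.6, 0.6, 0.2],
        [0.2, 0.8, 0.4, 0.8]]
cands = instance_candidates(prob)
print([len(cands[t]) for t in (0.3, 0.5, 0.7)])  # [1, 2, 2]
```

H-EMD then selects one consistent, non-overlapping subset of such candidates per image by solving an EMD-based integer linear program over this forest.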
21

Saednia K, Tran WT, Sadeghi-Naini A. A Cascaded Deep Learning Framework for Segmentation of Nuclei in Digital Histology Images. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4764-4767. [PMID: 36086360] [DOI: 10.1109/embc48229.2022.9871996]
Abstract
Accurate segmentation of nuclei is an essential step in the analysis of digital histology images for diagnostic and prognostic applications. Despite recent advances in automated frameworks for nuclei segmentation, this task is still challenging. Specifically, detecting small nuclei in large-scale histology images and accurately delineating the borders of touching nuclei is a complicated task even for advanced deep neural networks. In this study, a cascaded deep learning framework is proposed to segment nuclei accurately in digitized microscopy images of histology slides. A U-Net based model with a customized pixel-wise weighted loss function is adapted in the proposed framework, followed by a U-Net based model with a VGG16 backbone and a soft Dice loss function. The model was pretrained on the public Post-NAT-BRCA dataset before training and independent evaluation on the MoNuSeg dataset. The cascaded model outperformed other state-of-the-art models with an AJI of 0.72 and an F1-score of 0.83 on the MoNuSeg test set.
22

Qin J, He Y, Zhou Y, Zhao J, Ding B. REU-Net: Region-enhanced nuclei segmentation network. Comput Biol Med 2022; 146:105546. [DOI: 10.1016/j.compbiomed.2022.105546]
23

Wang D, Dai W, Tang D, Liang Y, Ouyang J, Wang H, Peng Y. Deep learning approach for bubble segmentation from hysteroscopic images. Med Biol Eng Comput 2022; 60:1613-1626. [DOI: 10.1007/s11517-022-02562-8]
24

Rastogi P, Khanna K, Singh V. Gland segmentation in colorectal cancer histopathological images using U-net inspired convolutional network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06687-z]
25

Butte S, Wang H, Xian M, Vakanski A. Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis. Proc IEEE Int Symp Biomed Imaging 2022; 2022. [PMID: 35530970] [DOI: 10.1109/isbi52829.2022.9761534]
Abstract
Existing deep learning-based approaches for histopathology image analysis require large annotated training sets to achieve good performance, but annotating histopathology images is slow and resource-intensive. Conditional generative adversarial networks have been applied to generate synthetic histopathology images to alleviate this issue, but current approaches fail to generate clear contours for overlapping and touching nuclei. In this study, we propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images. The proposed network uses a normalized nucleus distance map rather than a binary mask to encode nuclei contour information. The proposed sharpness loss enhances the contrast of nuclei contour pixels. The proposed method is evaluated using four image quality metrics and segmentation results on two public datasets. Both quantitative and qualitative results demonstrate that the proposed approach can generate realistic histopathology images with clear nuclei contours.
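The normalized nucleus distance map used in place of a binary mask can be computed as follows (a brute-force sketch for small arrays, assuming at least one background pixel; the authors' exact normalization may differ):

```python
def normalized_distance_map(mask):
    """For each foreground pixel, Euclidean distance to the nearest background
    pixel, normalized by the maximum distance in the mask (brute force; this
    demo assumes the mask contains at least one background pixel)."""
    h, w = len(mask), len(mask[0])
    bg = [(y, x) for y in range(h) for x in range(w) if not mask[y][x]]
    dist = [[min(((y - by) ** 2 + (x - bx) ** 2) ** 0.5 for by, bx in bg)
             if mask[y][x] else 0.0
             for x in range(w)] for y in range(h)]
    m = max(max(row) for row in dist) or 1.0
    return [[d / m for d in row] for row in dist]

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
dmap = normalized_distance_map(mask)
print(dmap[2][2])  # 1.0 — the nucleus centre is farthest from the background
```

Unlike a binary mask, this encoding rises smoothly toward nucleus centres, which gives the generator an explicit signal about where contours between touching nuclei should fall.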
Affiliation(s)
- Sujata Butte
- Department of Computer Science, University of Idaho, Idaho, USA
- Haotian Wang
- Department of Computer Science, University of Idaho, Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho, USA
26

Doan TNN, Song B, Vuong TTL, Kim K, Kwak JT. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images. IEEE J Biomed Health Inform 2022; 26:3218-3228. [PMID: 35139032] [DOI: 10.1109/jbhi.2022.3149936]
Abstract
Automated nuclei segmentation and classification are key to analyzing and understanding cellular characteristics and functionality, supporting computer-aided digital pathology in disease diagnosis. However, the task remains challenging due to the intrinsic variations in size, intensity, and morphology of different types of nuclei. Herein, we propose a self-guided ordinal regression neural network for simultaneous nuclear segmentation and classification that can exploit the intrinsic characteristics of nuclei and focus on highly uncertain areas during training. The proposed network formulates nuclei segmentation as ordinal regression learning by introducing a distance-decreasing discretization strategy, which stratifies nuclei such that inner regions, forming the regular shape of a nucleus, are separated from outer regions, forming an irregular shape. It also adopts a self-guided training strategy to adaptively adjust the weights associated with nuclear pixels, depending on the difficulty of the pixels as assessed by the network itself. To evaluate the performance of the proposed network, we employ large-scale multi-tissue datasets with 276,349 exhaustively annotated nuclei. We show that the proposed network achieves state-of-the-art performance in both nuclei segmentation and classification in comparison to several recently developed methods for segmentation and/or classification.
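A distance-decreasing discretization can be set up by thresholding a normalized distance map at nested cutoffs, yielding one binary ordinal target per cutoff (the cut values here are illustrative, not taken from the paper):

```python
def ordinal_encode(d, cuts=(0.25, 0.5, 0.75)):
    """Ordinal-regression target for a pixel at normalized distance d from the
    nucleus boundary: one binary label per cut, 1 if d exceeds that cut.
    Inner pixels satisfy more cuts than outer ones, so the targets are nested."""
    return [1 if d > c else 0 for c in cuts]

print(ordinal_encode(0.6))  # [1, 1, 0]
```

Because the targets are nested (a pixel that passes a higher cut necessarily passes all lower ones), the network learns an ordering of ring-shaped regions rather than independent classes.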
27

Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. [DOI: 10.1016/j.tcb.2021.12.004]
28

Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103276]
29

Zhao J, He YJ, Zhao SQ, Huang JJ, Zuo WM. AL-Net: Attention Learning Network based on Multi-Task Learning for Cervical Nucleus Segmentation. IEEE J Biomed Health Inform 2021; 26:2693-2702. [PMID: 34928808] [DOI: 10.1109/jbhi.2021.3136568]
Abstract
Cervical nucleus segmentation is a crucial and challenging issue in automatic pathological diagnosis due to uneven staining, blurry boundaries, and adherent or overlapping nuclei in nucleus images. To overcome the limitations of current methods, we propose a multi-task network based on U-Net for cervical nucleus segmentation. This network consists of a primary task and an auxiliary task. The primary task is employed to predict nuclei regions. The auxiliary task, which predicts the boundaries of nuclei, is designed to improve the feature extraction of the primary task. Furthermore, a context encoding layer is added behind each encoding layer of the U-Net. The output of each context encoding layer is processed by an attention learning module and then fused with the features of the decoding layer. In addition, a codec block is used in the attention learning module to obtain saliency-based attention and focused attention simultaneously. Experimental results show that the proposed network performs better than state-of-the-art methods on the 2014 ISBI dataset, BNS, MoNuSeg, and our nucluesSeg dataset.
30

Elameer AS, Jaber MM, Abd SK. Radiography image analysis using cat swarm optimized deep belief networks. J Intell Syst 2021; 31:40-54. [DOI: 10.1515/jisys-2021-0172]
Abstract
Radiography images are widely utilized in the health sector to recognize a patient's health condition. Noise and irrelevant region information reduce overall disease detection accuracy and increase computational complexity. Therefore, in this study, the statistical Kolmogorov–Smirnov test was integrated with the wavelet transform to overcome de-noising issues. Then the cat swarm-optimized deep belief network is applied to extract features from the affected region. The optimized deep learning model reduces feature training cost and time and improves overall disease detection accuracy. The network learning process is enhanced according to the AdaDelta learning process, which replaces the learning parameter with a delta value. This process minimizes the error rate while recognizing the disease. The efficiency of the system was evaluated using an image retrieval in medical applications dataset. This process helps to identify various diseases, such as those in breast, lung, and pediatric studies.
Affiliation(s)
- Amer S. Elameer
- Biomedical Informatics College, University of Information Technology and Communications (UOITC), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
- Department of Computer Science, Al-Turath University College, Baghdad, Iraq
- Sura Khalil Abd
- Department of Computer Science, Dijlah University College, Baghdad, 00964, Iraq
31

Mandal D, Vahadane A, Sharma S, Majumdar S. Blur-Robust Nuclei Segmentation for Immunofluorescence Images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3475-3478. [PMID: 34891988] [DOI: 10.1109/embc46164.2021.9629787]
Abstract
Automated nuclei segmentation from immunofluorescence (IF) microscopy images is a crucial first step in digital pathology. A great deal of research has been devoted to developing novel nuclei segmentation algorithms that achieve high performance on good-quality images. However, fewer methods have been developed for poor-quality images such as out-of-focus (blurry) data. In this work, we take a principled approach to studying the performance of nuclei segmentation algorithms on out-of-focus images at different levels of blur. A deep learning encoder-decoder framework with a novel Y-forked decoder is proposed here. The two fork ends are tied to segmentation and deblurring outputs. The addition of a separate deblurring task in the training paradigm helps to regularize the network on blurry images. Our proposed method accurately predicts instance nuclei segmentation on sharp as well as out-of-focus images. Additionally, the predicted deblurred image provides interpretable insights to experts. Experimental analysis on the Human U2OS cells (out-of-focus) dataset shows that our algorithm is robust and outperforms state-of-the-art methods.
32

Abousamra S, Belinsky D, Van Arnam J, Allard F, Yee E, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Multi-Class Cell Detection Using Spatial Context Representation. Proc IEEE Int Conf Comput Vis 2021; 2021:3985-3994. [PMID: 38783989] [PMCID: PMC11114143] [DOI: 10.1109/iccv48922.2021.00397]
Abstract
In digital pathology, both detection and classification of cells are important for automatic diagnostic and prognostic tasks. Classifying cells into subtypes, such as tumor cells, lymphocytes, or stromal cells, is particularly challenging. Existing methods focus on the morphological appearance of individual cells, whereas in practice pathologists often infer cell classes through their spatial context. In this paper, we propose a novel method for both detection and classification that explicitly incorporates spatial contextual information. We use a spatial statistical function to describe local density in both a multi-class and a multi-scale manner. Through representation learning and deep clustering techniques, we learn advanced cell representations with both appearance and spatial context. On various benchmarks, our method achieves better performance than state-of-the-art methods, especially on the classification task. We also create a new dataset for multi-class cell detection and classification in breast cancer, and we make both our code and data publicly available.
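A multi-class local-density descriptor of the kind described can be sketched as a per-class neighbor count within a radius (a simplified stand-in for the paper's spatial statistical function; names and coordinates are illustrative):

```python
import math

def local_density(points, center, radius):
    """Per-class count of cells within `radius` of `center` — a simple
    multi-class local-density descriptor; evaluating it at several radii
    gives the multi-scale variant."""
    counts = {}
    cx, cy = center
    for x, y, cls in points:
        if math.hypot(x - cx, y - cy) <= radius:
            counts[cls] = counts.get(cls, 0) + 1
    return counts

cells = [(1, 1, "tumor"), (2, 2, "tumor"), (8, 8, "lymphocyte"), (3, 1, "stromal")]
print(local_density(cells, center=(2, 2), radius=3))  # {'tumor': 2, 'stromal': 1}
```

Concatenating such counts across classes and radii yields a context feature vector for each detected cell, complementing its appearance features.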
Affiliation(s)
- Eric Yee
- Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Stony Brook University, Stony Brook, NY 11794, USA
- Joel Saltz
- Stony Brook University, Stony Brook, NY 11794, USA
- Chao Chen
- Stony Brook University, Stony Brook, NY 11794, USA
33

Zhao T, Yin Z. Weakly Supervised Cell Segmentation by Point Annotation. IEEE Trans Med Imaging 2021; 40:2736-2747. [PMID: 33347404] [DOI: 10.1109/tmi.2020.3046292]
Abstract
We propose weakly supervised training schemes to train end-to-end cell segmentation networks that require only a single point annotation per cell as the training label, yet generate high-quality segmentation masks close to those of fully supervised methods that use mask annotations on cells. Three training schemes are investigated to train cell segmentation networks using the point annotation. First, self-training is performed to learn additional information near the annotated points. Next, co-training is applied to learn more cell regions using multiple networks that supervise each other. Finally, a hybrid-training scheme is proposed to leverage the advantages of both self-training and co-training. During the training process, we propose a divergence loss to avoid overfitting and a consistency loss to enforce consensus among multiple co-trained networks. Furthermore, we propose weakly supervised learning with a human in the loop, aiming to achieve high segmentation accuracy and annotation efficiency simultaneously. Evaluated on two benchmark datasets, our proposed method achieves high-quality cell segmentation results comparable to fully supervised methods, but with far less human annotation effort.
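An illustrative consistency term between two co-trained networks is the mean squared disagreement of their per-pixel probability outputs (a stand-in sketch; the paper's exact loss formulation may differ):

```python
def consistency_loss(p1, p2):
    """Mean squared disagreement between two co-trained networks' flattened
    per-pixel probability maps; minimizing it pushes the networks to consensus."""
    n = len(p1)
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) / n

# Two networks disagreeing on the first and third pixel:
print(consistency_loss([0.9, 0.1, 0.5], [0.7, 0.1, 0.9]))  # (0.04 + 0.0 + 0.16) / 3 ≈ 0.0667
```

The divergence loss plays the opposite role during self-training, discouraging the networks from collapsing onto identical over-confident predictions.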
34

Baranwal M, Krishnan S, Oneka M, Frankel T, Rao A. CGAT: Cell Graph ATtention Network for Grading of Pancreatic Disease Histology Images. Front Immunol 2021; 12:727610. [PMID: 34671349] [PMCID: PMC8522581] [DOI: 10.3389/fimmu.2021.727610]
Abstract
Early detection of Pancreatic Ductal Adenocarcinoma (PDAC), one of the most aggressive malignancies of the pancreas, is crucial to avoid metastatic spread to other body regions. Detection of pancreatic cancer is typically carried out by assessing the distribution and arrangement of tumor and immune cells in histology images. This is further complicated by morphological similarities with chronic pancreatitis (CP) and the co-occurrence of precursor lesions in the same tissue. Most current automated methods for grading pancreatic cancers rely on extensive feature engineering involving accurate identification of cell features, or utilise single-number spatially informed indices for grading purposes. Moreover, sophisticated black-box approaches, such as neural networks, do not offer insights into the model's ability to accurately identify the correct disease grade. In this paper, we develop a novel cell-graph based Cell-Graph Attention (CGAT) network for the precise classification of pancreatic cancer and its precursors from multiplexed immunofluorescence histology images into six different types of pancreatic diseases. The issue of class imbalance is addressed by bootstrapping multiple CGAT-nets, while the self-attention mechanism facilitates visualization of the cell-cell features that are likely responsible for the predictive capabilities of the model. It is also shown that the model significantly outperforms decision tree classifiers built using spatially informed metrics, such as the Morisita-Horn (MH) index.
Affiliation(s)
- Mayank Baranwal
- Division of Data & Decision Sciences, Tata Consultancy Services Research, Mumbai, India
- Department of Systems and Control Engineering, Indian Institute of Technology, Bombay, India
- Santhoshi Krishnan
- Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, United States
- Morgan Oneka
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, United States
- Timothy Frankel
- Department of Surgery, University of Michigan, Ann Arbor, MI, United States
- Arvind Rao
- Department of Electrical & Computer Engineering, Rice University, Houston, TX, United States
- Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, United States
- Department of Biostatistics, University of Michigan, Ann Arbor, MI, United States
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, United States
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, United States
35

Yi J, Wu P, Tang H, Liu B, Huang Q, Qu H, Han L, Fan W, Hoeppner DJ, Metaxas DN. Object-Guided Instance Segmentation With Auxiliary Feature Refinement for Biological Images. IEEE Trans Med Imaging 2021; 40:2403-2414. [PMID: 33945472] [DOI: 10.1109/tmi.2021.3077285]
Abstract
Instance segmentation is of great importance for many biological applications, such as study of neural cell interactions, plant phenotyping, and quantitatively measuring how cells react to drug treatment. In this paper, we propose a novel box-based instance segmentation method. Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region. However, existing methods can hardly differentiate the target from its neighboring objects within the same bounding box region due to their similar textures and low-contrast boundaries. To deal with this problem, in this paper, we propose an object-guided instance segmentation method. Our method first detects the center points of the objects, from which the bounding box parameters are then predicted. To perform segmentation, an object-guided coarse-to-fine segmentation branch is built along with the detection branch. The segmentation branch reuses the object features as guidance to separate target object from the neighboring ones within the same bounding box region. To further improve the segmentation quality, we design an auxiliary feature refinement module that densely samples and refines point-wise features in the boundary regions. Experimental results on three biological image datasets demonstrate the advantages of our method. The code will be available at https://github.com/yijingru/ObjGuided-Instance-Segmentation.
36
Lee SMW, Shaw A, Simpson JL, Uminsky D, Garratt LW. Differential cell counts using center-point networks achieves human-level accuracy and efficiency over segmentation. Sci Rep 2021; 11:16917. [PMID: 34413367 PMCID: PMC8377024 DOI: 10.1038/s41598-021-96067-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Accepted: 08/03/2021] [Indexed: 11/08/2022] Open
Abstract
Differential cell counting is a challenging task when applying computer vision algorithms to pathology. Existing approaches to training cell recognition require high availability of multi-class segmentation and/or bounding box annotations and suffer in performance when objects are tightly clustered. We present the differential count network ("DCNet"), an annotation-efficient modality that utilises keypoint detection to locate the centre points of cells (not nuclei) in brightfield images, together with their cell class. The single centre-point annotation for DCNet lowered the burden on experts generating ground truth data by 77.1% compared to bounding box labeling. Yet centre-point annotation still enabled high accuracy when training DCNet as a multi-class algorithm on whole-cell features, matching human experts in average precision across all 5 object classes and outperforming humans in consistency. The efficacy and efficiency of the DCNet end-to-end system represent significant progress toward an open-source, fully computational approach to differential cell count-based diagnosis that can be adapted to any pathology need.
Affiliation(s)
- Sarada M W Lee
- Perth Machine Learning Group, Perth, WA, 6000, Australia
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
- Andrew Shaw
- Data Institute, University of San Francisco, San Francisco, CA, 94117, USA
- Jodie L Simpson
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
- Priority Research Centre for Healthy Lungs, University of Newcastle, Callaghan, NSW, 2308, Australia
- David Uminsky
- Department of Computer Science, University of Chicago, Chicago, IL, 60637, USA
- Luke W Garratt
- Wal-yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Nedlands, WA, 6009, Australia.

37
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186 DOI: 10.1088/1361-6560/abfbf4] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 04/27/2021] [Indexed: 01/17/2023]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches achieved a breakthrough by using anatomical information, which is the crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation that systematically summarizes anatomical information categories and their corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodology overview of using anatomical information with DL, drawn from over 70 papers. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands

38
He K, Lian C, Adeli E, Huo J, Gao Y, Zhang B, Zhang J, Shen D. MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling. Med Image Anal 2021; 71:102039. [PMID: 33831595 DOI: 10.1016/j.media.2021.102039] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 02/13/2021] [Accepted: 03/09/2021] [Indexed: 10/21/2022]
Abstract
Fully convolutional networks (FCNs), including UNet and VNet, are widely used network architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with the cross-entropy or Dice loss, which only calculates the error between predictions and ground-truth labels for pixels individually. This often results in non-smooth neighborhoods in the predicted segmentation. This problem becomes more serious in CT prostate segmentation as CT images are usually of low tissue contrast. To address this problem, we propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate by a multi-task UNet architecture. We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture that tackles two tasks: (1) a segmentation sub-network aiming to generate the prostate segmentation, and (2) a voxel-metric learning sub-network aiming to improve the quality of the learned feature space supervised by a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at voxel level through the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at image level before the training phase, our proposed voxel-wise tuples are sampled in an online manner and operated in an end-to-end fashion via multi-task learning. To evaluate the proposed method, we conduct extensive experiments on a real CT image dataset consisting of 339 patients. The ablation studies show that our method can effectively learn more representative voxel-level features compared with conventional learning methods with cross-entropy or Dice loss. The comparisons show that the proposed method outperforms the state-of-the-art methods by a reasonable margin.
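The online voxel-wise tuple sampling described above can be illustrated with a minimal NumPy sketch: anchors and positives are drawn from foreground voxels of a feature map, negatives from background voxels, and a margin triplet loss is averaged over the sampled tuples. The feature layout, label mask, and margin below are illustrative assumptions, not the paper's actual network features.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Margin-based triplet loss on single feature vectors: pull the
    same-class (anchor, positive) pair together and push the
    different-class negative at least `margin` further away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def sample_voxel_triplets(features, labels, rng, n=4):
    """Online voxel-level triplet sampling from an (H, W, C) feature map.

    `labels` is the (H, W) ground-truth mask: anchors and positives come
    from foreground voxels, negatives from background voxels."""
    fg = np.argwhere(labels == 1)
    bg = np.argwhere(labels == 0)
    losses = []
    for _ in range(n):
        a, p = fg[rng.choice(len(fg), 2, replace=False)]
        g = bg[rng.choice(len(bg))]
        losses.append(triplet_loss(features[tuple(a)],
                                   features[tuple(p)],
                                   features[tuple(g)]))
    return float(np.mean(losses))

# toy feature map: all foreground voxels share one embedding, background another
labels = np.zeros((4, 4), dtype=int)
labels[:2, :] = 1
features = np.zeros((4, 4, 2))
features[labels == 1] = [1.0, 0.0]
features[labels == 0] = [0.0, 1.0]
rng = np.random.default_rng(0)
print(sample_voxel_triplets(features, labels, rng))  # 0.0 — classes already margin-separated
```

In the real framework this loss would be computed on intermediate feature maps during training and backpropagated jointly with the segmentation loss.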
Affiliation(s)
- Kelei He
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Chunfeng Lian
- School of Mathematics and Statistics, Xi'an Jiaotong University, Shanxi, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences and the Department of Computer Science, Stanford University, CA, USA
- Jing Huo
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yang Gao
- National Institute of Healthcare Data Science at Nanjing University, Nanjing, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing University Medical School, Nanjing, China
- Junfeng Zhang
- Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.

39
Graham S, Epstein D, Rajpoot N. Dense Steerable Filter CNNs for Exploiting Rotational Symmetry in Histology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4124-4136. [PMID: 32746153 DOI: 10.1109/tmi.2020.3013246] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Histology images are inherently symmetric under rotation, where each orientation is equally likely to appear. However, this rotational symmetry is not widely utilised as prior knowledge in modern Convolutional Neural Networks (CNNs), resulting in data-hungry models that learn independent features at each orientation. Allowing CNNs to be rotation-equivariant removes the necessity to learn this set of transformations from the data and instead frees up model capacity, allowing more discriminative features to be learned. This reduction in the number of required parameters also reduces the risk of overfitting. In this paper, we propose Dense Steerable Filter CNNs (DSF-CNNs) that use group convolutions with multiple rotated copies of each filter in a densely connected framework. Each filter is defined as a linear combination of steerable basis filters, enabling exact rotation and decreasing the number of trainable parameters compared to standard filters. We also provide the first in-depth comparison of different rotation-equivariant CNNs for histology image analysis and demonstrate the advantage of encoding rotational symmetry into modern architectures. We show that DSF-CNNs achieve state-of-the-art performance, with significantly fewer parameters, when applied to three different tasks in the area of computational pathology: breast tumour classification, colon gland segmentation and multi-tissue nuclear segmentation.
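The steering property that DSF-CNNs build on — an exactly rotated filter expressed as a linear combination of fixed basis filters — can be demonstrated with the classic first-order Gaussian-derivative pair (Freeman-Adelson steering). The filter size and sigma below are arbitrary choices for illustration.

```python
import numpy as np

def gaussian_derivative_basis(size=7, sigma=1.5):
    """First-order derivative-of-Gaussian basis pair (Gx, Gy)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    gx = -x / sigma**2 * g   # d/dx of the Gaussian
    gy = -y / sigma**2 * g   # d/dy of the Gaussian
    return gx, gy

def steer(gx, gy, theta):
    """Synthesize the derivative filter oriented at angle `theta` as a
    linear combination of the two basis filters — the exact-rotation
    property that lets one set of weights serve all orientations."""
    return np.cos(theta) * gx + np.sin(theta) * gy

gx, gy = gaussian_derivative_basis()
g90 = steer(gx, gy, np.pi / 2)
print(np.allclose(g90, gy))  # steering by 90 degrees reproduces Gy: True
```

In a steerable-filter CNN, only the combination coefficients are learned, while the basis is fixed, which is where the parameter savings described in the abstract come from.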
40
Lal S, Das D, Alabhya K, Kanfade A, Kumar A, Kini J. NucleiSegNet: Robust deep learning architecture for the nuclei segmentation of liver cancer histopathology images. Comput Biol Med 2020; 128:104075. [PMID: 33190012 DOI: 10.1016/j.compbiomed.2020.104075] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 10/18/2020] [Accepted: 10/18/2020] [Indexed: 12/26/2022]
Abstract
The nuclei segmentation of hematoxylin and eosin (H&E) stained histopathology images is an important prerequisite in designing a computer-aided diagnostics (CAD) system for cancer diagnosis and prognosis. Automated nuclei segmentation methods enable the qualitative and quantitative analysis of tens of thousands of nuclei within H&E stained histopathology images. However, a major challenge during nuclei segmentation is the segmentation of variably sized, touching nuclei. To address this challenge, we present NucleiSegNet - a robust deep learning network architecture for the nuclei segmentation of H&E stained liver cancer histopathology images. Our proposed architecture includes three blocks: a robust residual block, a bottleneck block, and an attention decoder block. The robust residual block is a newly proposed block for the efficient extraction of high-level semantic maps. The attention decoder block uses a new attention mechanism for efficient object localization, and it improves the proposed architecture's performance by reducing false positives. We applied the proposed architecture to H&E stained histopathology images from two datasets, and our comprehensive results show that it outperforms state-of-the-art nuclei segmentation methods. As part of this work, we also introduce a new liver dataset (KMC liver dataset) of H&E stained liver cancer histopathology image tiles, containing 80 images with annotated nuclei procured from Kasturba Medical College (KMC), Mangalore, Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India. The proposed model's source code is available at https://github.com/shyamfec/NucleiSegNet.
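The abstract does not detail the attention decoder's mechanism, but the general idea of an attention-gated decoder that rescales skip features to suppress background activations can be sketched with a generic additive attention gate (Attention U-Net style). This is NOT the exact block proposed in NucleiSegNet; all weights and shapes below are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, w_s, w_g, w_a):
    """Generic additive attention gate, shown only to illustrate how an
    attention decoder can reduce false positives by down-weighting
    background regions.

    skip: (H, W, C) encoder skip features
    gate: (H, W, C) decoder gating features
    w_s, w_g: (C, C) 1x1-conv weights; w_a: (C, 1) scoring weights
    Returns the skip features rescaled by a per-pixel attention map."""
    mixed = np.maximum(skip @ w_s + gate @ w_g, 0.0)   # ReLU(Ws.s + Wg.g)
    alpha = sigmoid(mixed @ w_a)                       # (H, W, 1) attention map
    return skip * alpha, alpha

rng = np.random.default_rng(1)
skip = rng.normal(size=(4, 4, 2))
gate = rng.normal(size=(4, 4, 2))
out, alpha = attention_gate(skip, gate, np.eye(2), np.eye(2), np.ones((2, 1)))
print(alpha.min() >= 0.5 and alpha.max() < 1.0)  # True for this toy weight choice
```

A trained network would learn w_s, w_g, and w_a so that alpha approaches 0 over background and 1 over nuclei.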
Affiliation(s)
- Shyam Lal
- Department of E & C Engg., National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India.
- Devikalyan Das
- Department of E & C Engg., National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India
- Kumar Alabhya
- Department of E & C Engg., National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India
- Anirudh Kanfade
- Department of E & C Engg., National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India
- Aman Kumar
- Department of E & C Engg., National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India
- Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, India; Manipal Academy of Higher Education, Manipal, India.

41
Mahmood T, Owais M, Noh KJ, Yoon HS, Haider A, Sultan H, Park KR. Artificial Intelligence-based Segmentation of Nuclei in Multi-organ Histopathology Images: Model Development and Validation (Preprint). JMIR Med Inform 2020. [DOI: 10.2196/24394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
42
Yan Z, Yang X, Cheng KT. Enabling a Single Deep Learning Model for Accurate Gland Instance Segmentation: A Shape-Aware Adversarial Learning Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2176-2189. [PMID: 31944936 DOI: 10.1109/tmi.2020.2966594] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Segmenting gland instances in histology images is highly challenging as it requires not only detecting glands from a complex background but also separating each individual gland instance with accurate boundary detection. However, due to the boundary uncertainty problem in manual annotations, pixel-to-pixel matching based loss functions are too restrictive for simultaneous gland detection and boundary detection. State-of-the-art approaches adopted multi-model schemes, resulting in unnecessarily high model complexity and difficulties in the training process. In this paper, we propose to use one single deep learning model for accurate gland instance segmentation. To address the boundary uncertainty problem, instead of pixel-to-pixel matching, we propose a segment-level shape similarity measure to calculate the curve similarity between each annotated boundary segment and the corresponding detected boundary segment within a fixed searching range. As the segment-level measure allows location variations within a fixed range for shape similarity calculation, it has better tolerance to boundary uncertainty and is more effective for boundary detection. Furthermore, by adjusting the radius of the searching range, the segment-level shape similarity measure is able to deal with different levels of boundary uncertainty. Therefore, in our framework, images of different scales are down-sampled and integrated to provide both global and local contextual information for training, which is helpful in segmenting gland instances of different sizes. To reduce the variations of multi-scale training images, by referring to adversarial domain adaptation, we propose a pseudo domain adaptation framework for feature alignment. By constructing loss functions based on the segment-level shape similarity measure, combining with the adversarial loss function, the proposed shape-aware adversarial learning framework enables one single deep learning model for gland instance segmentation. 
Experimental results on the 2015 MICCAI Gland Challenge dataset demonstrate that the proposed framework achieves state-of-the-art performance with one single deep learning model. As the boundary uncertainty problem widely exists in medical image segmentation, it is broadly applicable to other applications.
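The fixed-searching-range idea above — crediting a boundary prediction when it falls within a radius of the annotated boundary rather than demanding pixel-to-pixel matches — can be illustrated with a simplified tolerant boundary score. This is a stand-in for the paper's segment-level shape similarity, not its actual measure; the point coordinates and radii are made up.

```python
import numpy as np

def boundary_match_score(gt_pts, pred_pts, radius=2.0):
    """Fraction of ground-truth boundary points that have a predicted
    boundary point within `radius` pixels. Enlarging `radius` makes the
    score more tolerant of boundary uncertainty, mirroring the role of
    the searching-range radius in the paper."""
    gt = np.asarray(gt_pts, float)
    pred = np.asarray(pred_pts, float)
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)
    return float((d.min(axis=1) <= radius).mean())

gt = [(0, 0), (0, 5), (0, 10)]
pred = [(1, 0), (1, 5), (4, 10)]                    # last point is 4 px off
print(boundary_match_score(gt, pred))               # 2/3 matched at radius 2
print(boundary_match_score(gt, pred, radius=5.0))   # all matched at radius 5
```

The paper's measure additionally compares curve shape within the searching range; this sketch only captures the location-tolerance aspect.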
43
Wang H, Xian M, Vakanski A. BENDING LOSS REGULARIZED NETWORK FOR NUCLEI SEGMENTATION IN HISTOPATHOLOGY IMAGES. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2020; 2020:258-262. [PMID: 33312394 PMCID: PMC7733529 DOI: 10.1109/isbi45749.2020.9098611] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Separating overlapped nuclei is a major challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on public datasets; however, their performance in segmenting overlapped nuclei is limited. To address the issue, we propose a bending loss regularized network for nuclei segmentation. The proposed bending loss assigns high penalties to contour points with large curvature and small penalties to contour points with small curvature. Minimizing the bending loss avoids generating contours that encompass multiple nuclei. The proposed approach is validated on the MoNuSeg dataset using five quantitative metrics. It outperforms six state-of-the-art approaches on the following metrics: Aggregate Jaccard Index, Dice, Recognition Quality, and Panoptic Quality.
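The curvature-penalizing idea can be sketched with a toy bending penalty: sum the squared turning angles along a closed contour, so sharp corners (typical where one contour wrongly wraps two touching nuclei) cost much more than smooth arcs. This discrete turning-angle form is an illustration, not the loss definition from the paper.

```python
import numpy as np

def bending_penalty(contour):
    """Sum of squared turning angles along a closed polygonal contour.
    High-curvature points contribute large penalties, smooth stretches
    contribute small ones."""
    pts = np.asarray(contour, float)
    n = len(pts)
    total = 0.0
    for i in range(n):
        v1 = pts[i] - pts[i - 1]            # incoming edge
        v2 = pts[(i + 1) % n] - pts[i]      # outgoing edge
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))  # turning angle at pts[i]
        total += angle**2
    return total

square = [(0, 0), (1, 0), (1, 1), (0, 1)]             # four 90-degree turns
octagon = [(2, 0), (4, 0), (6, 2), (6, 4), (4, 6),
           (2, 6), (0, 4), (0, 2)]                    # eight 45-degree turns
print(bending_penalty(square) > bending_penalty(octagon))  # True
```

Both shapes turn through 360 degrees in total, but the square concentrates its curvature into fewer, sharper corners and therefore pays more under the squared penalty — exactly the behaviour a bending regularizer uses to discourage pinched, multi-nucleus contours.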
Affiliation(s)
- Haotian Wang
- Department of Computer Science, University of Idaho, Idaho, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho, USA

44
Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, Rajpoot N. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal 2019; 58:101563. [PMID: 31561183 DOI: 10.1016/j.media.2019.101563] [Citation(s) in RCA: 466] [Impact Index Per Article: 77.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 09/04/2019] [Accepted: 09/16/2019] [Indexed: 12/21/2022]
Abstract
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability, such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance, the network predicts the type of nucleus via a dedicated up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
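The horizontal/vertical distance encoding described above can be sketched as follows: for each labelled instance, every pixel stores its signed offset from the instance's centre of mass, normalised per instance. This is a minimal sketch in the spirit of the paper's target maps; the per-instance max-abs normalisation is an assumption.

```python
import numpy as np

def hv_maps(inst_mask):
    """Horizontal and vertical distance maps for a labelled instance mask.

    For each instance, each of its pixels stores the signed x / y offset
    from the instance centre of mass, scaled to [-1, 1]. Steep gradients
    in these maps mark boundaries between touching instances, which is
    what allows clustered nuclei to be separated."""
    h_map = np.zeros(inst_mask.shape, float)
    v_map = np.zeros(inst_mask.shape, float)
    for inst_id in np.unique(inst_mask):
        if inst_id == 0:          # 0 = background
            continue
        rows, cols = np.nonzero(inst_mask == inst_id)
        cy, cx = rows.mean(), cols.mean()
        dx, dy = cols - cx, rows - cy
        h_map[rows, cols] = dx / max(np.abs(dx).max(), 1e-8)
        v_map[rows, cols] = dy / max(np.abs(dy).max(), 1e-8)
    return h_map, v_map

# two touching 3x3 "nuclei" side by side
mask = np.zeros((3, 6), int)
mask[:, :3] = 1
mask[:, 3:] = 2
h, v = hv_maps(mask)
print(h[0])  # the horizontal map jumps from +1 back to -1 at the contact column
```

That sharp +1 to -1 jump at the contact line is the cue a post-processing step (e.g. a gradient-based marker extraction) can use to split the two instances.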
Affiliation(s)
- Simon Graham
- Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, UK; Department of Computer Science, University of Warwick, UK.
- Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, South Korea
- Shan E Ahmed Raza
- Department of Computer Science, University of Warwick, UK; Centre for Evolution and Cancer & Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Ayesha Azam
- Department of Computer Science, University of Warwick, UK; University Hospitals Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- University Hospitals Coventry and Warwickshire, Coventry, UK
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, South Korea
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK