1. Coupeau P, Fasquel JB, Hertz-Pannier L, Dinomais M. GNN-based structural information to improve DNN-based basal ganglia segmentation in children following early brain lesion. Comput Med Imaging Graph 2024; 115:102396. [PMID: 38744197] [DOI: 10.1016/j.compmedimag.2024.102396]
Abstract
Analyzing the basal ganglia following an early brain lesion is crucial due to their noteworthy role in sensory-motor functions. However, the segmentation of these subcortical structures on MRI is challenging in children and is further complicated by the presence of a lesion. Although current deep neural networks (DNN) perform well in segmenting subcortical brain structures in healthy brains, they lack robustness when faced with lesion variability, leading to structural inconsistencies. Given the established spatial organization of the basal ganglia, we propose enhancing the DNN-based segmentation through post-processing with a graph neural network (GNN). The GNN conducts node classification on graphs encoding both class probabilities and spatial information regarding the regions segmented by the DNN. In this study, we focus on neonatal arterial ischemic stroke (NAIS) in children. The approach is evaluated on both healthy children and children after NAIS using three DNN backbones: U-Net, UNETr, and MSGSE-Net. The results show an improvement in segmentation performance, with an increase in the median Dice score by up to 4% and a reduction in the median Hausdorff distance (HD) by up to 93% for healthy children (from 36.45 to 2.57) and up to 91% for children suffering from NAIS (from 40.64 to 3.50). The performance of the method is compared with atlas-based methods. Severe cases of neonatal stroke result in a decline in performance in the injured hemisphere, without negatively affecting the segmentation of the contra-injured hemisphere. Furthermore, the approach demonstrates resilience to small training datasets, a widespread challenge in the medical field, particularly in pediatrics and for rare pathologies.
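The core post-processing idea — refining DNN class probabilities by classifying the nodes of a region graph — can be sketched with a single graph-convolution layer. Everything below (the toy graph, the feature layout, and the random weights) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def gcn_layer(features, adj, weights):
    """One graph-convolution step: mix each region's features with its
    spatial neighbours' (symmetric normalisation), then apply a linear map."""
    a_hat = adj + np.eye(adj.shape[0])                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU

# Toy graph: 4 segmented regions; node features = [p_class0, p_class1, x, y]
feats = np.array([[0.9, 0.1, 0.20, 0.30],
                  [0.4, 0.6, 0.25, 0.35],
                  [0.1, 0.9, 0.80, 0.80],
                  [0.2, 0.8, 0.85, 0.75]])
adj = np.array([[0, 1, 0, 0],                            # edges between adjacent regions
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
w = np.random.default_rng(0).normal(size=(4, 2))         # 4 features -> 2 classes
node_logits = gcn_layer(feats, adj, w)
refined_labels = node_logits.argmax(axis=1)              # node classification
```

A trained GNN would learn `w` (and usually stack several such layers); the point here is only the data flow: per-region DNN probabilities plus spatial adjacency in, refined per-region labels out.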
Affiliation(s)
- Patty Coupeau
- Universite d'Angers, LARIS, SFR MATHSTIC, F-49000 Angers, France
- Lucie Hertz-Pannier
- UNIACT/Neurospin/JOLIOT/DRF/CEA-Saclay, and U1141 NeuroDiderot/Inserm, CEA, Paris University, France
- Mickaël Dinomais
- Universite d'Angers, LARIS, SFR MATHSTIC, F-49000 Angers, France; Departement de medecine physique et de readaptation, Centre Hospitalier Universitaire d'Angers, France
2. Wang J, Zhang B, Wang Y, Zhou C, Vonsky MS, Mitrofanova LB, Zou D, Li Q. CrossU-Net: Dual-modality cross-attention U-Net for segmentation of precancerous lesions in gastric cancer. Comput Med Imaging Graph 2024; 112:102339. [PMID: 38262134] [DOI: 10.1016/j.compmedimag.2024.102339]
Abstract
Gastric precancerous lesions (GPL) significantly elevate the risk of gastric cancer, and precise diagnosis and timely intervention are critical for patient survival. Because the pathological features of precancerous lesions are elusive, the early detection rate is below 10%, which hinders lesion localization and diagnosis. In this paper, we provide a GPL pathological dataset and propose a novel method for improving segmentation accuracy on a limited-scale dataset, the RGB and hyperspectral dual-modal pathological image Cross-attention U-Net (CrossU-Net). First, we present a self-supervised pre-training model for hyperspectral images to serve downstream segmentation tasks. Second, we design a dual-stream U-Net-based network to extract features from the two modalities. To promote exchange between the spatial information in RGB images and the spectral information in hyperspectral images, we customize a cross-attention mechanism between the two networks, using an intermediate agent within it to improve computational efficiency. Finally, we add a distillation loss to align the predictions of both branches, improving network generalization. Experimental results show that CrossU-Net achieves accuracy and Dice of 96.53% and 91.62%, respectively, for GPL lesion segmentation, providing a promising spectral approach for the localization and subsequent quantitative analysis of pathological features in early diagnosis.
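The cross-attention between the RGB and hyperspectral streams follows the standard attention pattern: queries from one modality, keys and values from the other. A minimal NumPy sketch (the token counts, projection sizes, and random weights are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(x_rgb, x_hsi, d_k=8, seed=0):
    """Each RGB patch token attends over all hyperspectral tokens, so
    spatial features are enriched with spectral information."""
    rng = np.random.default_rng(seed)
    w_q = rng.normal(size=(x_rgb.shape[1], d_k))   # query projection (RGB side)
    w_k = rng.normal(size=(x_hsi.shape[1], d_k))   # key projection (HSI side)
    w_v = rng.normal(size=(x_hsi.shape[1], d_k))   # value projection (HSI side)
    q, k, v = x_rgb @ w_q, x_hsi @ w_k, x_hsi @ w_v
    attn = softmax(q @ k.T / np.sqrt(d_k))         # (n_rgb, n_hsi) weights
    return attn @ v, attn

rng = np.random.default_rng(1)
rgb_tokens = rng.normal(size=(16, 32))   # 16 patch tokens, 32-dim RGB features
hsi_tokens = rng.normal(size=(16, 60))   # same patches, 60 spectral bands
fused, attn = cross_attention(rgb_tokens, hsi_tokens)
```

The paper's intermediate-agent trick reduces the cost of the `q @ k.T` product by routing attention through a small set of agent tokens; the sketch above is the plain quadratic form.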
Affiliation(s)
- Jiansheng Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China; Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China
- Benyan Zhang
- Department of Gastroenterology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yan Wang
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China
- Chunhua Zhou
- Department of Gastroenterology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Maxim S Vonsky
- D.I. Mendeleev Institute for Metrology, Moskovsky Pr 19, St Petersburg, Russia; Almazov National Medical Research Centre, Saint-Petersburg, Russia
- Duowu Zou
- Department of Gastroenterology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qingli Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, China; Engineering Research Center of Nanophotonics & Advanced Instrument, Ministry of Education, East China Normal University, Shanghai, China; Engineering Center of SHMEC for Space Information and GNSS, Shanghai, China
3. Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. [PMID: 38308865] [DOI: 10.1016/j.compbiomed.2024.108026]
Abstract
Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which is expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have succeeded in WSI segmentation tasks, but they have mainly focused on technical advances in algorithmic performance rather than on practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER leverages self- and semi-supervised learning to let the user steer the segmentation, producing reliable results while reducing the workload. It consists of three steps: first, a pretrained model computes embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, label propagation is achieved using our novel seeded iterative clustering method or by directly interacting with the embedding space via feature space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer classification dataset benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
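A seeded iterative clustering step of this kind can be sketched as k-means in the embedding space whose centroids are initialised from, and pinned to, the user's seed patches. The toy data, dimensions, and two-class setup below are illustrative assumptions, not DEPICTER's actual algorithm:

```python
import numpy as np

def seeded_iterative_clustering(emb, seed_idx, seed_labels, n_iter=10):
    """k-means-style label propagation: centroids start from user-annotated
    seed patches, and seeds never change their label between iterations."""
    classes = np.unique(seed_labels)
    centroids = np.stack([emb[seed_idx[seed_labels == c]].mean(axis=0)
                          for c in classes])
    assign = np.zeros(len(emb), dtype=int)
    for _ in range(n_iter):
        dists = np.linalg.norm(emb[:, None, :] - centroids[None], axis=2)
        assign = dists.argmin(axis=1)
        assign[seed_idx] = seed_labels                 # pin the seeds
        for i in range(len(classes)):
            centroids[i] = emb[assign == i].mean(axis=0)
    return assign

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0.0, 1.0, size=(50, 16)),   # "benign" patch embeddings
                 rng.normal(3.0, 1.0, size=(50, 16))])  # "cancer" patch embeddings
seed_idx = np.array([0, 1, 50, 51])                     # user-clicked patches
seed_labels = np.array([0, 0, 1, 1])
patch_labels = seeded_iterative_clustering(emb, seed_idx, seed_labels)
```

Because the seeds are re-pinned every iteration, a handful of clicks is enough to propagate labels to all patches whose embeddings fall near the corresponding cluster.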
Affiliation(s)
- Eduard Chelebian
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Christophe Avenel
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Carolina Wählby
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
4. Wang B, Zou L, Chen J, Cao Y, Cai Z, Qiu Y, Mao L, Wang Z, Chen J, Gui L, Yang X. A Weakly Supervised Segmentation Network Embedding Cross-Scale Attention Guidance and Noise-Sensitive Constraint for Detecting Tertiary Lymphoid Structures of Pancreatic Tumors. IEEE J Biomed Health Inform 2024; 28:988-999. [PMID: 38064334] [DOI: 10.1109/jbhi.2023.3340686]
Abstract
The presence of tertiary lymphoid structures (TLSs) on pancreatic pathological images is an important prognostic indicator of pancreatic tumors. Therefore, TLSs detection on pancreatic pathological images plays a crucial role in diagnosis and treatment for patients with pancreatic tumors. However, fully supervised detection algorithms based on deep learning usually require a large number of manual annotations, which is time-consuming and labor-intensive. In this paper, we aim to detect the TLSs in a manner of few-shot learning by proposing a weakly supervised segmentation network. We firstly obtain the lymphocyte density maps by combining a pretrained model for nuclei segmentation and a domain adversarial network for lymphocyte nuclei recognition. Then, we establish a cross-scale attention guidance mechanism by jointly learning the coarse-scale features from the original histopathology images and fine-scale features from our designed lymphocyte density attention. A noise-sensitive constraint is introduced by an embedding signed distance function loss in the training procedure to reduce tiny prediction errors. Experimental results on two collected datasets demonstrate that our proposed method significantly outperforms the state-of-the-art segmentation-based algorithms in terms of TLSs detection accuracy. Additionally, we apply our method to study the congruent relationship between the density of TLSs and peripancreatic vascular invasion and obtain some clinically statistical results.
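A noise-sensitive constraint of this kind builds on the signed distance function (SDF) of the ground-truth mask: prediction errors far from the true boundary receive large penalties, tiny boundary wobbles small ones. The brute-force SDF and the SDF-weighted loss below are a common general form, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance map: negative inside the object,
    positive outside (pixel-level approximation of boundary distance)."""
    h, w = mask.shape
    coords = np.argwhere(np.ones_like(mask)).astype(float)  # all pixel coords, C order
    inside = coords[mask.ravel() == 1]
    outside = coords[mask.ravel() == 0]
    d_in = np.linalg.norm(coords[:, None] - inside[None], axis=2).min(axis=1)
    d_out = np.linalg.norm(coords[:, None] - outside[None], axis=2).min(axis=1)
    return np.where(mask.ravel() == 1, -d_out, d_in).reshape(h, w)

def sdf_loss(pred_probs, gt_mask):
    """Weight predicted foreground probability by the ground-truth SDF:
    confident predictions far outside the object are punished hardest."""
    return float((pred_probs * signed_distance(gt_mask)).mean())

gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0
clean = gt.copy()
noisy = gt.copy(); noisy[0, 0] = 1.0   # one false positive far from the structure
```

The distant false positive raises the loss, which is the behaviour that lets the training procedure suppress tiny prediction errors away from the true boundary.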
5. Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023; 89:102915. [PMID: 37633177] [DOI: 10.1016/j.media.2023.102915]
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty in obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSI). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention. The research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of the WSI, where the nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class predictions for an input WSI. We evaluate the performance of WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while resulting in better or comparable classification with respect to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
6. Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897] [PMCID: PMC10622844] [DOI: 10.1016/j.jpi.2023.100335]
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating the storing, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSI and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features support several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
7. Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325] [DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and with the advancement of digital imaging technologies, it has spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. Early work analyzed histopathology images by extracting hand-crafted mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI), traditional machine learning methods were applied in this field; although performance improved, issues remained, such as poor model generalization and tedious manual feature extraction. The subsequent introduction of deep learning effectively addressed these problems, yet models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Graphs, by virtue of their structure, are highly suitable for representing tissue histopathology images, and graph-based feature extraction has achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology, discuss the core concepts in this field, and highlight current challenges and future research directions.
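A graph for computational histopathology is typically built by treating image patches (or nuclei/tissue regions) as nodes, attaching embedding vectors as node features, and connecting spatial neighbours. A minimal k-nearest-neighbour construction (the patch coordinates, feature size, and `k` are illustrative assumptions, not any one paper's recipe):

```python
import numpy as np

def build_patch_graph(coords, k=4):
    """Symmetric k-NN adjacency over patch centres: each patch links to its
    k nearest neighbours in slide space."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None], axis=2)
    np.fill_diagonal(d, np.inf)                    # no self-edges
    adj = np.zeros((n, n))
    for i, nn in enumerate(np.argsort(d, axis=1)[:, :k]):
        adj[i, nn] = 1.0
    return np.maximum(adj, adj.T)                  # symmetrise

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10_000, size=(30, 2))      # patch centres (px) on the WSI
feats = rng.normal(size=(30, 128))                 # e.g. CNN patch embeddings
adj = build_patch_graph(coords)
# (adj, feats) is the pair a GNN would consume for the downstream clinical task.
```

Variants in the surveyed literature mostly differ in what a node is (patch, nucleus, tissue region) and how edges are drawn (k-NN, fixed radius, Delaunay), not in this overall pattern.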
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China
8. Zhao L, Hou R, Teng H, Fu X, Han Y, Zhao J. CoADS: Cross attention based dual-space graph network for survival prediction of lung cancer using whole slide images. Comput Methods Programs Biomed 2023; 236:107559. [PMID: 37119773] [DOI: 10.1016/j.cmpb.2023.107559]
Abstract
BACKGROUND AND OBJECTIVE: Accurate overall survival (OS) prediction for lung cancer patients is of great significance, which can help classify patients into different risk groups to benefit from personalized treatment. Histopathology slides are considered the gold standard for cancer diagnosis and prognosis, and many algorithms have been proposed to predict the OS risk. Most methods rely on selecting key patches or morphological phenotypes from whole slide images (WSIs). However, OS prediction using the existing methods exhibits limited accuracy and remains challenging. METHODS: In this paper, we propose a novel cross-attention-based dual-space graph convolutional neural network model (CoADS). To facilitate the improvement of survival prediction, we fully take into account the heterogeneity of tumor sections from different perspectives. CoADS utilizes the information from both physical and latent spaces. With the guidance of cross-attention, both the spatial proximity in physical space and the feature similarity in latent space between different patches from WSIs are integrated effectively. RESULTS: We evaluated our approach on two large lung cancer datasets of 1044 patients. The extensive experimental results demonstrated that the proposed model outperforms state-of-the-art methods with the highest concordance index. CONCLUSIONS: The qualitative and quantitative results show that the proposed method is more powerful for identifying the pathology features associated with prognosis. Furthermore, the proposed framework can be extended to other pathological images for predicting OS or other prognosis indicators, and thus delivering individualized treatment.
Affiliation(s)
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China; Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, China
- Haohua Teng
- Department of Pathology, Shanghai Chest Hospital, Shanghai, China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, China
- Yuchen Han
- Department of Pathology, Shanghai Chest Hospital, Shanghai, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
9. Zhang Z, Wei X. Artificial intelligence-assisted selection and efficacy prediction of antineoplastic strategies for precision cancer therapy. Semin Cancer Biol 2023; 90:57-72. [PMID: 36796530] [DOI: 10.1016/j.semcancer.2023.02.005]
Abstract
The rapid development of artificial intelligence (AI) technologies, in the context of the vast amount of data obtainable from high-throughput sequencing, has led to an unprecedented understanding of cancer and accelerated the advent of a new era of clinical oncology defined by precision treatment and personalized medicine. However, the gains achieved by a variety of AI models in clinical oncology practice are far from what one would expect, and in particular, many uncertainties in the selection of clinical treatment options still pose significant challenges to applying AI in clinical oncology. In this review, we summarize emerging AI approaches, relevant datasets, and open-source software, and show how to integrate them to address problems from clinical oncology and cancer research. We focus on the principles and procedures for identifying different antitumor strategies with the assistance of AI, including targeted cancer therapy, conventional cancer therapy, and cancer immunotherapy. In addition, we highlight the current challenges and directions of AI in clinical oncology translation. Overall, we hope this article will provide researchers and clinicians with a deeper understanding of the role and implications of AI in precision cancer therapy, and help AI move more quickly into accepted cancer guidelines.
Affiliation(s)
- Zhe Zhang
- Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China; State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, and Collaborative Innovation Center for Biotherapy, Chengdu 610041, PR China
- Xiawei Wei
- Laboratory of Aging Research and Cancer Drug Target, State Key Laboratory of Biotherapy and Cancer Center, National Clinical Research Center for Geriatrics, West China Hospital, Sichuan University, Chengdu 610041, PR China
10. Investigation of semi- and self-supervised learning methods in the histopathological domain. J Pathol Inform 2023; 14:100305. [PMID: 37025325] [PMCID: PMC10070179] [DOI: 10.1016/j.jpi.2023.100305]
Abstract
Training models with semi- or self-supervised learning methods is one way to reduce annotation effort, since these methods rely on unlabeled or sparsely labeled datasets. Such approaches are particularly promising for domains with a time-consuming annotation process requiring specialized expertise and where high-quality labeled machine learning datasets are scarce, as in computational pathology. Even though some of these methods have been used in the histopathological domain, there is, so far, no comprehensive study comparing different approaches. Therefore, this work compares feature extractor models trained with the state-of-the-art semi- or self-supervised learning methods PAWS, SimCLR, and SimSiam within a unified framework. We show that such models, across different architectures and network configurations, have a positive performance impact on histopathological classification tasks, even in low data regimes. Moreover, our observations suggest that features learned from a particular dataset, i.e., tissue type, are only in-domain transferable to a certain extent. Finally, we share our experience using each method in computational pathology and provide recommendations for its use.
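Of the compared methods, SimCLR is the easiest to summarise in code: two augmented views per image, and the NT-Xent loss that pulls each view toward its counterpart and away from every other sample in the batch. A NumPy sketch of the loss (the batch size, embedding dimension, and temperature are illustrative assumptions):

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """SimCLR's normalised-temperature cross-entropy: for each embedding,
    its positive is the other view of the same image; all remaining batch
    entries act as negatives."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # drop self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(2 * n), pos].mean())

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))                          # view-1 embeddings
z2_good = z1 + 0.01 * rng.normal(size=(8, 16))         # nearly aligned views
z2_bad = rng.normal(size=(8, 16))                      # unrelated "views"
```

Aligned view pairs score a lower loss than unrelated ones, which is exactly the signal the feature extractor is trained on before its features are transferred to histopathological classification.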
11. Han Y, Cheng L, Huang G, Zhong G, Li J, Yuan X, Liu H, Li J, Zhou J, Cai M. Weakly supervised semantic segmentation of histological tissue via attention accumulation and pixel-level contrast learning. Phys Med Biol 2023; 68. [PMID: 36577142] [DOI: 10.1088/1361-6560/acaeee]
Abstract
Objective. Histopathology image segmentation can assist medical professionals in identifying and diagnosing diseased tissue more efficiently. Although fully supervised segmentation models have excellent performance, the annotation cost is extremely expensive. Weakly supervised models are widely used in medical image segmentation due to their low annotation cost. Nevertheless, these weakly supervised models have difficulty in accurately locating the boundaries between different classes of regions in pathological images, resulting in a high rate of false alarms. Our objective is to design a weakly supervised segmentation model that resolves these problems. Approach. The segmentation model is divided into two main stages: the generation of pseudo labels based on a class residual attention accumulation network (CRAANet), and semantic segmentation based on a pixel feature space construction network (PFSCNet). CRAANet provides attention scores for each class through the class residual attention module, while the attention accumulation (AA) module overlays the attention feature maps generated in each training epoch. PFSCNet employs a network model containing an inflated convolutional residual neural network and a multi-scale feature-aware module as the segmentation backbone, and introduces a dense energy loss and a pixel clustering module based on contrast learning to address pseudo-label inaccuracy. Main results. We validate our method on the lung adenocarcinoma (LUAD-HistoSeg) dataset and the breast cancer (BCSS) dataset. The experiments show that our proposed method outperforms other state-of-the-art methods on both datasets across several metrics, suggesting that it can perform well in a wide variety of histopathological image segmentation tasks. Significance. We propose a weakly supervised semantic segmentation network that achieves approximately fully supervised segmentation performance even with incomplete labels. The proposed AA module and pixel-level contrast learning also make the edges more accurate and can assist pathologists in their research.
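The attention accumulation idea — overlaying the attention maps produced across training epochs so that consistently attended regions reinforce each other — can be sketched in a few lines (the map shapes and the min-max renormalisation are illustrative assumptions):

```python
import numpy as np

def accumulate_attention(per_epoch_maps):
    """Sum the class-attention maps collected at each epoch, then rescale
    to [0, 1]; regions attended consistently across epochs dominate."""
    acc = np.sum(per_epoch_maps, axis=0)
    span = acc.max() - acc.min()
    return (acc - acc.min()) / span if span > 0 else np.zeros_like(acc)

# One 8x8 attention map per epoch, e.g. from a class-attention head.
rng = np.random.default_rng(0)
epoch_maps = rng.random(size=(5, 8, 8))
cam = accumulate_attention(epoch_maps)
```

The accumulated map would then be thresholded into the pseudo labels that supervise the second-stage segmentation network.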
Affiliation(s)
- Yongqi Han
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Lianglun Cheng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Guo Zhong
- School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou 510420, People's Republic of China
- Jiahua Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Xiaochen Yuan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, People's Republic of China
- Hongrui Liu
- Department of Industrial and Systems Engineering, San Jose State University, CA 95192, United States of America
- Jiao Li
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
- Jian Zhou
- Department of Medical Imaging, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
- Muyan Cai
- Department of Pathology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
12
|
Ji J, Wan T, Chen D, Wang H, Zheng M, Qin Z. A deep learning method for automatic evaluation of diagnostic information from multi-stained histopathological images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
13
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675] [PMCID: PMC9677480] [DOI: 10.1093/bib/bbac367]
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao (corresponding author), Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324
- Chunlong Luo, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao (corresponding author), Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356
14
Yu Y, Tao Y, Guan H, Xiao S, Li F, Yu C, Liu Z, Li J. A multi-branch hierarchical attention network for medical target segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104021]
15
Li Y, Zhang Y, Cui W, Lei B, Kuang X, Zhang T. Dual Encoder-Based Dynamic-Channel Graph Convolutional Network With Edge Enhancement for Retinal Vessel Segmentation. IEEE Trans Med Imaging 2022; 41:1975-1989. [PMID: 35167444] [DOI: 10.1109/tmi.2022.3151666]
Abstract
Retinal vessel segmentation with deep learning is a crucial auxiliary method for clinicians diagnosing fundus diseases. However, deep learning approaches inevitably lose edge information, which carries the spatial features of vessels, during down-sampling, limiting segmentation performance on fine blood vessels. Furthermore, existing methods ignore the dynamic topological correlations among feature maps in the deep learning framework, resulting in inefficient capture of channel characteristics. To address these limitations, we propose a novel dual encoder-based dynamic-channel graph convolutional network with edge enhancement (DE-DCGCN-EE) for retinal vessel segmentation. Specifically, we first design an edge detection-based dual encoder to preserve vessel edges during down-sampling. Secondly, we introduce a dynamic-channel graph convolutional network that maps image channels to a topological space and synthesizes the features of each channel on the topological map, addressing the under-utilization of channel information. Finally, we devise an edge enhancement block that fuses the edge and spatial features from the dual encoder, improving the accuracy of fine blood vessel segmentation. Competitive experimental results on five retinal image datasets validate the efficacy of the proposed DE-DCGCN-EE, which achieves more remarkable segmentation results than other state-of-the-art methods, indicating its potential for clinical application.
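The channel-graph idea in this abstract can be illustrated with a minimal NumPy sketch (not the authors' implementation; the function name, shapes, and the softmax-based adjacency are assumptions): each channel of a feature map becomes a graph node, a dynamic adjacency is built on the fly from channel similarity, and one propagation step mixes features across channels.

```python
import numpy as np

def channel_graph_conv(x, weight):
    """One dynamic-channel graph-convolution step (illustrative sketch).

    x: feature maps of shape (B, C, H, W); weight: (C, C) node-wise linear map.
    Each channel is treated as a graph node whose descriptor is its
    flattened spatial response; adjacency is recomputed per input.
    """
    b, c, h, w = x.shape
    nodes = x.reshape(b, c, h * w)                   # one descriptor per channel node
    logits = nodes @ nodes.transpose(0, 2, 1)        # (B, C, C) channel affinities
    logits -= logits.max(axis=-1, keepdims=True)     # stabilise the softmax
    adj = np.exp(logits)
    adj /= adj.sum(axis=-1, keepdims=True)           # row-normalised dynamic adjacency
    mixed = adj @ nodes                              # propagate along the channel graph
    mixed = np.einsum('bcn,cd->bdn', mixed, weight)  # node-wise linear transform
    return x + mixed.reshape(b, c, h, w)             # residual fusion with the input

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 3, 3))
out = channel_graph_conv(x, np.eye(4))
print(out.shape)  # (2, 4, 3, 3)
```

The residual form means the block refines rather than replaces the encoder features, which is a common design choice when grafting a graph module onto a CNN.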
16
Wan J, Yue S, Ma J, Ma X. A coarse-to-fine full attention guided capsule network for medical image segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103682]
17
Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Med Image Anal 2022; 80:102487. [PMID: 35671591] [DOI: 10.1016/j.media.2022.102487]
Abstract
Tissue-level semantic segmentation is a vital step in computational pathology. Fully-supervised models have achieved outstanding performance given dense pixel-level annotations. However, drawing such labels on giga-pixel whole slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, a CAM-based model generates pseudo masks from patch-level labels. In the segmentation phase, we achieve tissue semantic segmentation with our proposed multi-layer pseudo-supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). Experiments on two datasets show that our model outperforms five state-of-the-art WSSS approaches and achieves quantitative and qualitative results comparable to the fully-supervised model, with only around a 2% gap in mIoU and FwIoU. Compared with manual labeling on a randomly sampled 100-patch dataset, patch-level labeling reduces annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
18
Hu W, Li C, Li X, Rahaman MM, Ma J, Zhang Y, Chen H, Liu W, Sun C, Yao Y, Sun H, Grzegorzek M. GasHisSDB: A new gastric histopathology image dataset for computer aided diagnosis of gastric cancer. Comput Biol Med 2022; 142:105207. [PMID: 35016101] [DOI: 10.1016/j.compbiomed.2021.105207]
Abstract
BACKGROUND AND OBJECTIVE Gastric cancer is the fifth most common cancer globally, and early detection is essential to save lives. Histopathological examination is the gold standard for diagnosing gastric cancer. However, computer-aided diagnostic techniques are difficult to evaluate owing to the scarcity of publicly available gastric histopathology image datasets. METHODS In this paper, a new publicly available Gastric Histopathology Sub-size Image Database (GasHisSDB) is released to benchmark classifier performance. It contains two types of data, normal and abnormal, with a total of 245,196 tissue case images. To show that image classification methods from different periods perform differently on GasHisSDB, we select a variety of classifiers for evaluation: seven classical machine learning classifiers, three convolutional neural network classifiers, and a novel transformer-based classifier. RESULTS Extensive experiments with traditional machine learning and deep learning methods confirm these discrepancies. Traditional machine learning achieved a best accuracy of 86.08% and a minimum of just 41.12%; deep learning reached a best accuracy of 96.47% and a lowest of 86.21%. Accuracy varies significantly across classifiers. CONCLUSIONS To the best of our knowledge, this is the first publicly available gastric cancer histopathology dataset containing a large number of images for weakly supervised learning. We believe GasHisSDB can attract researchers to explore new algorithms for the automated diagnosis of gastric cancer, helping physicians and patients in clinical settings.
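The kind of benchmark described here, several classifier families trained on one dataset and compared by test accuracy, can be sketched with scikit-learn stand-ins on synthetic data; this is not GasHisSDB and these are not the paper's actual models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for flattened patch features (normal vs abnormal).
X, y = make_classification(n_samples=600, n_features=64, n_informative=16,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
}
results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)                              # train on the same split
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {results[name]:.3f}")            # compare held-out accuracy
```

Holding the train/test split fixed across all classifiers, as above, is what makes the reported accuracy gaps attributable to the models rather than to sampling.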
Affiliation(s)
- Weiming Hu, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China
- Chen Li, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China
- Xiaoyan Li, Department of Pathology, Cancer Hospital, China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, 110042, China
- Md Mamunur Rahaman, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China; School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Jiquan Ma, Department of Computer Science and Technology, Heilongjiang University, Harbin, Heilongjiang, 150080, China
- Yong Zhang, Department of Pathology, Cancer Hospital, China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, 110042, China
- Haoyuan Chen, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China
- Wanli Liu, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China
- Changhao Sun, Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, 110169, Shenyang, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, 110169, China
- Yudong Yao, Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Hongzan Sun, Department of Radiology, Shengjing Hospital, China Medical University, Shenyang, 110122, China
- Marcin Grzegorzek, Institute of Medical Informatics, University of Luebeck, Luebeck, Germany