1. Mahbod A, Dorffner G, Ellinger I, Woitek R, Hatamikia S. Improving generalization capability of deep learning-based nuclei instance segmentation by non-deterministic train time and deterministic test time stain normalization. Comput Struct Biotechnol J 2024;23:669-678. PMID: 38292472. PMCID: PMC10825317. DOI: 10.1016/j.csbj.2023.12.042.
Abstract
With the advent of digital pathology and microscopic systems that can scan and save whole slide histological images automatically, there is a growing trend to use computerized methods to analyze acquired images. Among different histopathological image analysis tasks, nuclei instance segmentation plays a fundamental role in a wide range of clinical and research applications. While many semi- and fully-automatic computerized methods have been proposed for nuclei instance segmentation, deep learning (DL)-based approaches have been shown to deliver the best performance. However, the performance of such approaches usually degrades when tested on unseen datasets. In this work, we propose a novel method to improve the generalization capability of a DL-based automatic segmentation approach. Besides utilizing one of the state-of-the-art DL-based models as a baseline, our method incorporates non-deterministic train-time and deterministic test-time stain normalization, as well as ensembling, to boost the segmentation performance. We trained the model on a single training set and evaluated its segmentation performance on seven test datasets. Our results show that the proposed method provides up to 4.9%, 5.4%, and 5.9% better average performance in segmenting nuclei based on the Dice score, aggregated Jaccard index, and panoptic quality score, respectively, compared to the baseline segmentation model.
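The three evaluation metrics named in this abstract are all overlap-based. As a minimal illustration (a hedged pure-Python sketch, not the paper's code), the Dice score and the per-instance IoU term that underlies both the aggregated Jaccard index and the panoptic quality score can be computed over pixel-index sets:

```python
def dice(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) over pixel-index sets.

    pred, truth: iterables of (row, col) coordinates belonging to the
    segmented object. Two empty masks agree perfectly by convention.
    """
    a, b = set(pred), set(truth)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))


def iou(pred, truth):
    """Intersection over union, the per-instance term aggregated by
    both the aggregated Jaccard index and the panoptic quality score."""
    a, b = set(pred), set(truth)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)
```

For the instance-level metrics, per-nucleus IoU values are first matched between predicted and ground-truth instances before being aggregated, which is where the metrics differ from plain pixel-level Dice.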
Affiliation(s)
- Amirreza Mahbod
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Georg Dorffner
  - Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Isabella Ellinger
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Ramona Woitek
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Sepideh Hatamikia
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
  - Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
2. Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024;15:2710. PMID: 38548713. PMCID: PMC10978962. DOI: 10.1038/s41467-024-46986-2.
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based feature or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health, and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and the placental biology literature.
Affiliation(s)
- Claudia Vanea
  - Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
  - Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Omri Dodi
  - Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Siim Siigur
  - Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Liis Salumäe
  - Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Karen Meir
  - Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- W Tony Parks
  - Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
- Abigail Fraser
  - Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK
  - MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
- Hagit Hochner
  - Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
- Triin Laisk
  - Institute of Genomics, University of Tartu, Tartu, Estonia
- Linda M Ernst
  - Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA
  - Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
- Cecilia M Lindgren
  - Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
  - Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK
  - Broad Institute of Harvard and MIT, Cambridge, MA, USA
  - Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Christoffer Nellåker
  - Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
  - Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
3. van Eekelen L, Spronck J, Looijen-Salamon M, Vos S, Munari E, Girolami I, Eccher A, Acs B, Boyaci C, de Souza GS, Demirel-Andishmand M, Meesters LD, Zegers D, van der Woude L, Theelen W, van den Heuvel M, Grünberg K, van Ginneken B, van der Laak J, Ciompi F. Comparing deep learning and pathologist quantification of cell-level PD-L1 expression in non-small cell lung cancer whole-slide images. Sci Rep 2024;14:7136. PMID: 38531958. DOI: 10.1038/s41598-024-57067-1.
Abstract
Programmed death-ligand 1 (PD-L1) expression is currently used in the clinic to assess eligibility for immune-checkpoint inhibitors via the tumor proportion score (TPS), but its efficacy is limited by high interobserver variability. Multiple papers have presented systems for the automatic quantification of TPS, but none report on the task of determining cell-level PD-L1 expression, and they often restrict their evaluation to a single PD-L1 monoclonal antibody or clinical center. In this paper, we report on a deep learning algorithm for detecting PD-L1 negative and positive tumor cells at a cellular level and evaluate it on a cell-level reference standard established by six readers on a multi-centric, multi-assay PD-L1 dataset. This reference standard also provides, for the first time, a benchmark for computer vision algorithms. In addition, in line with other papers, we also evaluate our algorithm at slide level by measuring the agreement between the algorithm and six pathologists on TPS quantification. We find a moderately low interobserver agreement at the cell level (mean reader-reader F1 score = 0.68), which our algorithm falls slightly below (mean reader-AI F1 score = 0.55), especially for cases from the clinical center not included in the training set. Despite this, we find good AI-pathologist agreement on quantifying TPS relative to the interobserver agreement (mean reader-reader Cohen's kappa = 0.54, 95% CI 0.26-0.81; mean reader-AI kappa = 0.49, 95% CI 0.27-0.72). In conclusion, our deep learning algorithm demonstrates promise in detecting PD-L1 expression at a cellular level and exhibits favorable agreement with pathologists in quantifying the TPS. We publicly release our models for use via the Grand-Challenge platform.
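For context, the tumor proportion score aggregates the cell-level counts this abstract describes: it is the fraction of viable tumor cells that stain PD-L1 positive. A minimal sketch (the function name and interface are illustrative, not from the paper):

```python
def tumor_proportion_score(n_positive, n_negative):
    """TPS (%) = PD-L1-positive tumor cells / all viable tumor cells * 100.

    n_positive, n_negative: counts of PD-L1-positive and PD-L1-negative
    viable tumor cells, e.g. as produced by a cell-level detector.
    """
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no viable tumor cells counted")
    return 100.0 * n_positive / total
```

In clinical practice, the continuous TPS is typically bucketed at cut-offs such as 1% and 50% when deciding immune-checkpoint-inhibitor eligibility, which is why slide-level agreement statistics like Cohen's kappa are reported alongside cell-level F1.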
Affiliation(s)
- Leander van Eekelen
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Joey Spronck
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Monika Looijen-Salamon
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Shoko Vos
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Enrico Munari
  - Pathology Unit, Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Ilaria Girolami
  - Department of Pathology, Provincial Hospital of Bolzano (SABES-ASDAA), Bolzano-Bozen, Italy
- Albino Eccher
  - Department of Pathology and Diagnostics, University and Hospital Trust of Verona, Verona, Italy
- Balazs Acs
  - Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
- Ceren Boyaci
  - Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
- Gabriel Silva de Souza
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Muradije Demirel-Andishmand
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Luca Dulce Meesters
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Daan Zegers
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Lieke van der Woude
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Willemijn Theelen
  - Department of Thoracic Oncology, Netherlands Cancer Institute, Amsterdam, The Netherlands
- Michel van den Heuvel
  - Respiratory Diseases Department, Radboud University Medical Center, Nijmegen, The Netherlands
- Katrien Grünberg
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Bram van Ginneken
  - Department of Radiology, Radboud University Medical Center, Nijmegen, The Netherlands
- Jeroen van der Laak
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
- Francesco Ciompi
  - Department of Pathology, Radboud University Medical Center, P.O. Box 9101, 6500 HB, Nijmegen, The Netherlands
4. Chen Z, Wang X, Jin Z, Li B, Jiang D, Wang Y, Jiang M, Zhang D, Yuan P, Zhao Y, Feng F, Lin Y, Jiang L, Wang C, Meng W, Ye W, Wang J, Qiu W, Liu H, Huang D, Hou Y, Wang X, Jiao Y, Ying J, Liu Z, Liu Y. Deep learning on tertiary lymphoid structures in hematoxylin-eosin predicts cancer prognosis and immunotherapy response. NPJ Precis Oncol 2024;8:73. PMID: 38519580. PMCID: PMC10959936. DOI: 10.1038/s41698-024-00579-w.
Abstract
Tertiary lymphoid structures (TLSs) have been associated with favorable immunotherapy responses and prognosis in various cancers. Despite their significance, their quantification using multiplex immunohistochemistry (mIHC) staining of T and B lymphocytes remains labor-intensive, limiting its clinical utility. To address this challenge, we curated a dataset from matched mIHC and H&E whole-slide images (WSIs) and developed a deep learning model for automated segmentation of TLSs. The model achieved Dice coefficients of 0.91 on the internal test set and 0.866 on the external validation set, along with intersection over union (IoU) scores of 0.819 and 0.787, respectively. The TLS ratio, defined as the segmented TLS area over the total tissue area, correlated with B lymphocyte levels and the expression of CXCL13, a chemokine associated with TLS formation, in 6140 patients spanning 16 tumor types from The Cancer Genome Atlas (TCGA). The prognostic models for overall survival indicated that the inclusion of the TLS ratio with TNM staging significantly enhanced the models' discriminative ability, outperforming the traditional models that solely incorporated TNM staging, in 10 out of 15 TCGA tumor types. Furthermore, when applied to biopsied treatment-naïve tumor samples, higher TLS ratios predicted a positive immunotherapy response across multiple cohorts, including specific therapies for esophageal squamous cell carcinoma, non-small cell lung cancer, and stomach adenocarcinoma. In conclusion, our deep learning-based approach offers an automated and reproducible method for TLS segmentation and quantification, highlighting its potential in predicting immunotherapy response and informing cancer prognosis.
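The TLS ratio defined above (segmented TLS area over total tissue area) reduces to a pixel count on the segmentation mask. A minimal sketch under an assumed label convention (this is not the authors' code; the label values are hypothetical):

```python
def tls_ratio(label_mask):
    """TLS ratio = TLS area / total tissue area.

    label_mask: 2-D nested list with assumed labels
    0 = background, 1 = other tissue, 2 = TLS (TLS counts as tissue).
    Returns 0.0 when no tissue pixels are present.
    """
    flat = [v for row in label_mask for v in row]
    tls_area = sum(1 for v in flat if v == 2)
    tissue_area = sum(1 for v in flat if v in (1, 2))
    return tls_area / tissue_area if tissue_area else 0.0
```

In the study, this single scalar per slide is what gets correlated with B-lymphocyte levels and CXCL13 expression and added to TNM staging in the survival models.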
Affiliation(s)
- Ziqiang Chen
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Xiaobing Wang
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zelin Jin
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Bosen Li
  - Department of General Surgery/Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Dongxian Jiang
  - Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yanqiu Wang
  - Department of Pathology, International Peace Maternity and Child Health Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Mengping Jiang
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Dandan Zhang
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Pei Yuan
  - Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yahui Zhao
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Feiyue Feng
  - Thoracic Surgery Department, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yicheng Lin
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Liping Jiang
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chenxi Wang
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Weida Meng
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
- Wenjing Ye
  - Division of Rheumatology and Immunology, Huashan Hospital, Fudan University, Shanghai, China
- Jie Wang
  - Department of Thoracic Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Wenqing Qiu
  - Shanghai Xuhui Central Hospital, Shanghai, China
- Houbao Liu
  - Shanghai Xuhui Central Hospital, Shanghai, China
  - Department of General Surgery/Biliary Tract Disease Center, Zhongshan Hospital, Fudan University, Shanghai, China
- Dan Huang
  - Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China
  - Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
  - Institute of Pathology, Fudan University, Shanghai, China
- Yingyong Hou
  - Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, China
- Xuefei Wang
  - Department of General Surgery/Gastric Cancer Center, Zhongshan Hospital, Fudan University, Shanghai, China
  - Department of General Surgery, Zhongshan Hospital (Xiamen), Fudan University, Shanghai, China
- Yuchen Jiao
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianming Ying
  - Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhihua Liu
  - State Key Laboratory of Molecular Oncology, National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yun Liu
  - MOE Key Laboratory of Metabolism and Molecular Medicine, Department of Biochemistry and Molecular Biology, School of Basic Medical Sciences and Shanghai Xuhui Central Hospital, Fudan University, Shanghai, China
  - State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University, Shanghai, China
5. Mahbod A, Polak C, Feldmann K, Khan R, Gelles K, Dorffner G, Woitek R, Hatamikia S, Ellinger I. NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Sci Data 2024;11:295. PMID: 38486039. PMCID: PMC10940572. DOI: 10.1038/s41597-024-03117-2.
Abstract
In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models need fully annotated datasets for training, which are challenging to acquire, especially in the medical domain. In this work, we release one of the largest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotations are impossible, even for human experts. The dataset and detailed step-by-step instructions to generate related segmentation masks are publicly available on the respective repositories.
Affiliation(s)
- Amirreza Mahbod
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Christine Polak
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Feldmann
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Rumsha Khan
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Gelles
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Georg Dorffner
  - Institute of Artificial Intelligence, Medical University of Vienna, Vienna, 1090, Austria
- Ramona Woitek
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Sepideh Hatamikia
  - Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
  - Austrian Center for Medical Innovation and Technology, Wiener Neustadt, 2700, Austria
- Isabella Ellinger
  - Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
6. Sheikh TS, Cho M. Segmentation of Variants of Nuclei on Whole Slide Images by Using Radiomic Features. Bioengineering (Basel) 2024;11:252. PMID: 38534526. DOI: 10.3390/bioengineering11030252.
Abstract
The histopathological segmentation of nuclear types is a challenging task because nuclei exhibit distinct morphologies, textures, and staining characteristics. Accurate segmentation is critical because it affects the diagnostic workflow for patient assessment. In this study, a framework was proposed for segmenting various types of nuclei from different organs of the body. The proposed framework improved the segmentation performance for each nuclear type using radiomics. First, we used distinct radiomic features to extract and analyze quantitative information about each type of nucleus, and subsequently trained various classifiers on the best sub-features of each radiomic feature, selected with the LASSO. Second, we fed the outputs of the best classifier to various segmentation models to learn the variants of nuclei. Using the MoNuSAC2020 dataset, we achieved state-of-the-art segmentation performance for each category of nuclei despite complex, overlapping, and obscure regions. The generalized adaptability of the proposed framework was verified by its consistent performance across whole slide images of different organs and across radiomic features.
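To give a flavor of the quantitative information radiomic descriptors capture, here is a toy sketch computing a few shape features for a single nucleus. Real toolkits such as PyRadiomics compute far richer shape, intensity, and texture families; the function and feature names below are illustrative only:

```python
def shape_features(pixels):
    """Toy radiomic shape descriptors for one nucleus.

    pixels: list of (row, col) tuples covering the nucleus mask.
    Returns area, bounding-box extent (area / bbox area, a compactness
    proxy), and the centroid coordinates.
    """
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    area = len(pixels)
    bbox_area = (max(rows) - min(rows) + 1) * (max(cols) - min(cols) + 1)
    return {
        "area": area,
        "extent": area / bbox_area,
        "centroid": (sum(rows) / area, sum(cols) / area),
    }
```

Descriptors like these, computed per nucleus and per class, form the feature vectors from which a sparse selector such as the LASSO can pick the most discriminative subset before classifier training.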
Affiliation(s)
- Taimoor Shakeel Sheikh
  - AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
- Migyung Cho
  - AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
7. Xun D, Wang R, Zhang X, Wang Y. Microsnoop: A generalist tool for microscopy image representation. Innovation (N Y) 2024;5:100541. PMID: 38235187. PMCID: PMC10794109. DOI: 10.1016/j.xinn.2023.100541.
Abstract
Accurate profiling of microscopy images, from small-scale studies to high-throughput screens, is an essential procedure in basic and applied biological research. Here, we present Microsnoop, a novel deep learning-based representation tool trained on large-scale microscopy images using masked self-supervised learning. Microsnoop can process various complex and heterogeneous images, which we classified into three categories: single-cell, full-field, and batch-experiment images. Our benchmark study on 10 high-quality evaluation datasets, containing over 2,230,000 images, demonstrated Microsnoop's robust and state-of-the-art microscopy image representation ability, surpassing existing generalist and even several custom algorithms. Microsnoop can be integrated with other pipelines to perform tasks such as super-resolution histopathology image analysis and multimodal analysis. Furthermore, Microsnoop can be adapted to various hardware and can be easily deployed on local or cloud computing platforms. We will regularly retrain and reevaluate the model using community-contributed data to consistently improve Microsnoop.
Affiliation(s)
- Dejin Xun
  - Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
- Rui Wang
  - State Key Lab of Computer-Aided Design & Computer Graphics, Zhejiang University, Hangzhou 310058, China
- Xingcai Zhang
  - John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Yi Wang
  - Pharmaceutical Informatics Institute, College of Pharmaceutical Sciences, Zhejiang University, Hangzhou 310058, China
  - Innovation Institute for Artificial Intelligence in Medicine of Zhejiang University, Hangzhou 310018, China
  - National Key Laboratory of Chinese Medicine Modernization, Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing 314100, China
8. Wibawa MS, Zhou JY, Wang R, Huang YY, Zhan Z, Chen X, Lv X, Young LS, Rajpoot N. AI-Based Risk Score from Tumour-Infiltrating Lymphocyte Predicts Locoregional-Free Survival in Nasopharyngeal Carcinoma. Cancers (Basel) 2023;15:5789. PMID: 38136336. PMCID: PMC10742296. DOI: 10.3390/cancers15245789.
Abstract
BACKGROUND: Locoregional recurrence of nasopharyngeal carcinoma (NPC) occurs in 10% to 50% of cases following primary treatment. However, the current main prognostic markers for NPC, both stage and plasma Epstein-Barr virus DNA, are not sensitive to locoregional recurrence. METHODS: We gathered 385 whole-slide images (WSIs) from haematoxylin and eosin (H&E)-stained NPC sections (n = 367 cases), collected from Sun Yat-sen University Cancer Centre. We developed a deep learning algorithm to detect tumour nuclei and lymphocyte nuclei in WSIs, followed by density-based clustering to quantify the tumour-infiltrating lymphocytes (TILs) into 12 scores. A Random Survival Forest model was then trained on the TIL scores to generate a risk score. RESULTS: Based on Kaplan-Meier analysis, the proposed method stratified low- and high-risk NPC cases in a validation set with a statistically significant difference in locoregional recurrence (p < 0.001). The same stratification held for distant metastasis-free survival (p < 0.001), progression-free survival (p < 0.001), and regional recurrence-free survival (p < 0.05). Furthermore, in both univariate analysis (HR: 1.58, CI: 1.13-2.19, p < 0.05) and multivariate analysis (HR: 1.59, CI: 1.11-2.28, p < 0.05), our method demonstrated strong prognostic value for locoregional recurrence. CONCLUSION: The proposed novel digital markers could potentially be utilised to assist treatment decisions in cases of NPC.
Affiliation(s)
- Made Satria Wibawa
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; (M.S.W.); (R.W.)
| | - Jia-Yu Zhou
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; (J.-Y.Z.); (Y.-Y.H.); (Z.Z.); (X.C.); (X.L.)
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Ruoyu Wang
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; (M.S.W.); (R.W.)
| | - Ying-Ying Huang
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; (J.-Y.Z.); (Y.-Y.H.); (Z.Z.); (X.C.); (X.L.)
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Zejiang Zhan
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; (J.-Y.Z.); (Y.-Y.H.); (Z.Z.); (X.C.); (X.L.)
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Xi Chen
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; (J.-Y.Z.); (Y.-Y.H.); (Z.Z.); (X.C.); (X.L.)
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Xing Lv
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China; (J.-Y.Z.); (Y.-Y.H.); (Z.Z.); (X.C.); (X.L.)
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
| | - Lawrence S. Young
- Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK;
| | - Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; (M.S.W.); (R.W.)
- The Alan Turing Institute, London NW1 2DB, UK
9
Wu H, Wang Z, Zhao Z, Chen C, Qin J. Continual Nuclei Segmentation via Prototype-Wise Relation Distillation and Contrastive Learning. IEEE Trans Med Imaging 2023; 42:3794-3804. [PMID: 37610902] [DOI: 10.1109/tmi.2023.3307892] [Indexed: 08/25/2023]
Abstract
Deep learning models have achieved remarkable success in multi-type nuclei segmentation. These models are mostly trained at once with full annotations of all nucleus types available, but lack the ability to continually learn new classes due to catastrophic forgetting. In this paper, we study the practical and important class-incremental continual learning problem, where the model is incrementally updated to new classes without access to previous data. We propose a novel continual nuclei segmentation method that avoids forgetting knowledge of old classes and facilitates the learning of new classes by performing feature-level knowledge distillation through prototype-wise relation distillation and contrastive learning. Concretely, prototype-wise relation distillation imposes constraints on inter-class relation similarity, encouraging the encoder to extract similar class distributions for old classes in the feature space. Prototype-wise contrastive learning with a hard sampling strategy enhances the intra-class compactness and inter-class separability of features, improving performance on both old and new classes. Experiments on two multi-type nuclei segmentation benchmarks, i.e., MoNuSAC and CoNSeP, demonstrate the effectiveness of our method, with superior performance over many competitive methods. Code is available at https://github.com/zzw-szu/CoNuSeg.
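The prototype idea behind this abstract can be made concrete with a small sketch (an illustration of the general technique, not the authors' implementation): a class prototype is the mean of that class's feature vectors, and the inter-class relation is the pairwise cosine similarity between prototypes, which a distillation loss would keep consistent between the old and new models.

```python
import math

def prototypes(features, labels):
    """Per-class prototype = element-wise mean of that class's feature vectors."""
    by_class = {}
    for f, y in zip(features, labels):
        by_class.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in by_class.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def relation_matrix(protos):
    """Inter-class relation: pairwise cosine similarity between prototypes.
    A relation-distillation loss would penalize divergence between the
    old model's and new model's matrices."""
    keys = sorted(protos)
    return [[cosine(protos[i], protos[j]) for j in keys] for i in keys]
```

In practice the features would come from the segmentation encoder; here plain lists stand in for feature tensors.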
10
Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. [PMID: 37832751] [DOI: 10.1016/j.semcancer.2023.09.006] [Received: 10/24/2022] [Revised: 09/06/2023] [Accepted: 09/25/2023] [Indexed: 10/15/2023]
Abstract
Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide with no tissue-level annotations. We present a structured review of these methods organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) in which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability) for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, in addition to the plethora of available AI methods for outcome-driven analysis, will surmount these current limitations and unlock the innumerable opportunities associated with AI-driven histopathology for the benefit of oncology.
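One-label-per-slide methods of the kind this review surveys are typically framed as multiple-instance learning (MIL). The simplest pooling rule, shown here as a hypothetical sketch rather than any specific reviewed method, scores a slide by its most suspicious patch, so a single slide-level label implicitly supervises all patches:

```python
def slide_score(patch_scores):
    """Max-pooling MIL: a slide is as suspicious as its most suspicious patch."""
    return max(patch_scores)

def predict_slide(patch_scores, threshold=0.5):
    """Binary slide-level call from patch-level scores; the threshold is a
    free parameter chosen here for illustration."""
    return int(slide_score(patch_scores) >= threshold)
```

Attention-based pooling, which weights patches instead of taking a hard maximum, is what makes many of these methods interpretable: the attention weights highlight the ROIs discussed above.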
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
11
Wang S, Liu X, Li Y, Sun X, Li Q, She Y, Xu Y, Huang X, Lin R, Kang D, Wang X, Tu H, Liu W, Huang F, Chen J. A deep learning-based stripe self-correction method for stitched microscopic images. Nat Commun 2023; 14:5393. [PMID: 37669977] [PMCID: PMC10480181] [DOI: 10.1038/s41467-023-41165-1] [Received: 12/19/2022] [Accepted: 08/22/2023] [Indexed: 09/07/2023]
Abstract
Stitched fluorescence microscopy images inevitably contain various types of stripes or artifacts caused by uncertain factors such as optical devices or specimens, which severely affect image quality and downstream quantitative analysis. Here, we present a deep learning-based Stripe Self-Correction method, called SSCOR. Specifically, we propose a proximity sampling scheme and an adversarial reciprocal self-training paradigm that enable SSCOR to utilize stripe-free patches sampled from the stitched microscope image itself to correct their adjacent stripe patches. Compared to off-the-shelf approaches, SSCOR can not only adaptively correct non-uniform, oblique, and grid stripes, but also remove scanning, bubble, and out-of-focus artifacts, achieving state-of-the-art performance across different imaging conditions and modalities. Moreover, SSCOR does not require any physical parameter estimation, patch-wise manual annotation, or raw stitching information in the correction process. This provides an intelligent, prior-free image restoration solution for microscopists or even microscope companies, thus ensuring more precise biomedical applications for researchers.
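For contrast with a learned approach like SSCOR, the crudest classical destriping baseline simply flattens per-column intensity offsets. The sketch below is a hedged illustration of that baseline only (it is not part of the paper, and it handles only uniform vertical stripes, precisely the limitation SSCOR addresses):

```python
def remove_vertical_stripes(img):
    """Naive destriping: subtract each column's deviation from the global mean,
    removing constant column-wise offsets (uniform vertical stripes).
    Fails on oblique, grid, or non-uniform stripes."""
    h, w = len(img), len(img[0])
    global_mean = sum(sum(row) for row in img) / (h * w)
    col_means = [sum(img[r][c] for r in range(h)) / h for c in range(w)]
    return [[img[r][c] - (col_means[c] - global_mean) for c in range(w)]
            for r in range(h)]
```

Images are represented here as plain nested lists; a real pipeline would operate on arrays.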
Affiliation(s)
- Shu Wang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
- College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, 350007, China
- Xiaoxiang Liu
- College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Yueying Li
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
- Xinquan Sun
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
- Qi Li
- College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Yinhua She
- College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Yixuan Xu
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
- Xingxin Huang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, 350007, China
- Ruolan Lin
- Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, 350001, China
- Deyong Kang
- Department of Pathology, Fujian Medical University Union Hospital, Fuzhou, 350001, China
- Xingfu Wang
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, 350005, China
- Haohua Tu
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, 61801, USA
- Wenxi Liu
- College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China
- Feng Huang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China
- Jianxin Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, 350007, China
12
Li Z, Li C, Luo X, Zhou Y, Zhu J, Xu C, Yang M, Wu Y, Chen Y. Toward Source-Free Cross Tissues Histopathological Cell Segmentation via Target-Specific Finetuning. IEEE Trans Med Imaging 2023; 42:2666-2677. [PMID: 37030826] [DOI: 10.1109/tmi.2023.3263465] [Indexed: 06/19/2023]
Abstract
Recognition and quantitative analysis of histopathological cells are the gold standard for diagnosing multiple cancers. Despite recent advances in deep learning techniques that have been widely investigated for the automated segmentation of various types of histopathological cells, the heavy dependency on specific histopathological image types with sufficient supervised annotations, as well as limited access to clinical data in hospitals, still poses significant challenges to the application of computer-aided diagnosis in pathology. In this paper, we focus on the model generalization of cell segmentation towards cross-tissue histopathological images. Remarkably, a novel target-specific finetuning-based self-supervised domain adaptation framework is proposed to transfer the cell segmentation model to unlabeled target datasets, without access to source datasets and annotations. When applied to the target unlabeled histopathological image set, the proposed method only needs to tune very few parameters of the pre-trained model in a self-supervised manner. Considering the morphological properties of pathological cells, we introduce two constraint terms at both local and global levels into this framework to obtain more reliable predictions. The proposed cross-domain framework is validated on three different types of histopathological tissues, showing promising performance in self-supervised cell segmentation. Additionally, the whole framework can be further applied to clinical tools in pathology without accessing the original training image data. The code and dataset are released at: https://github.com/NeuronXJTU/SFDA-CellSeg.
13
Li Z, Tang Z, Hu J, Wang X, Jia D, Zhang Y. NST: A nuclei segmentation method based on transformer for gastrointestinal cancer pathological images. Biomed Signal Process Control 2023; 84:104785. [DOI: 10.1016/j.bspc.2023.104785] [Indexed: 03/09/2023]
14
Xie L, Ge T, Xiao B, Han X, Zhang Q, Xu Z, He D, Tian W. Identification of Adolescent Menarche Status Using Biplanar X-ray Images: A Deep Learning-Based Method. Bioengineering (Basel) 2023; 10:769. [PMID: 37508796] [PMCID: PMC10375958] [DOI: 10.3390/bioengineering10070769] [Received: 05/17/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 07/30/2023]
Abstract
The purpose of this study was to develop an automated method for identifying the menarche status of adolescents based on EOS radiographs. We designed a deep-learning-based algorithm that contains a region-of-interest detection network and a classification network. The algorithm was trained and tested on a retrospective dataset of 738 adolescent EOS cases using a five-fold cross-validation strategy and was subsequently tested on a clinical validation set of 259 adolescent EOS cases. On the clinical validation set, our algorithm achieved accuracy of 0.942, macro precision of 0.933, macro recall of 0.938, and a macro F1-score of 0.935. The algorithm showed almost perfect performance in distinguishing between males and females, with the main classification errors found in females aged 12 to 14 years. Specifically for females, the algorithm had accuracy of 0.910, sensitivity of 0.943, and specificity of 0.855 in estimating menarche status, with an area under the curve of 0.959. The kappa value between the algorithm's predictions and the actual menarche status was 0.806, indicating strong agreement. This method can efficiently analyze EOS radiographs and identify the menarche status of adolescents. It is expected to become a routine clinical tool and provide references for doctors' decisions under specific clinical conditions.
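The agreement statistic reported above is Cohen's kappa. For reference, it can be computed from a confusion matrix as follows (a generic sketch of the standard formula, not the study's evaluation code):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = truth, cols = prediction):
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is chance agreement from the row/column marginals."""
    total = sum(sum(row) for row in confusion)
    p_o = sum(confusion[i][i] for i in range(len(confusion))) / total
    p_e = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return (p_o - p_e) / (1 - p_e)
```

A kappa of 0.806, as reported, falls in the range conventionally read as "strong" or "almost perfect" agreement.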
Affiliation(s)
- Linzhen Xie
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Tenghui Ge
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Bin Xiao
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Xiaoguang Han
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Qi Zhang
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Zhongning Xu
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Da He
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Wei Tian
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
15
Foucart A, Debeir O, Decaestecker C. Panoptic quality should be avoided as a metric for assessing cell nuclei segmentation and classification in digital pathology. Sci Rep 2023; 13:8614. [PMID: 37244964] [DOI: 10.1038/s41598-023-35605-7] [Received: 11/14/2022] [Accepted: 05/20/2023] [Indexed: 05/29/2023]
Abstract
Panoptic Quality (PQ), designed for the task of "Panoptic Segmentation" (PS), has been used in several digital pathology challenges and publications on cell nucleus instance segmentation and classification (ISC) since its introduction in 2019. Its purpose is to encompass the detection and segmentation aspects of the task in a single measure, so that algorithms can be ranked according to their overall performance. A careful analysis of the properties of the metric, its application to ISC, and the characteristics of nucleus ISC datasets shows that it is not suitable for this purpose and should be avoided. Through a theoretical analysis, we demonstrate that PS and ISC, despite their similarities, have some fundamental differences that make PQ unsuitable. We also show that the use of Intersection over Union as a matching rule and as a segmentation quality measure within PQ is not adapted to objects as small as nuclei. We illustrate these findings with examples taken from the NuCLS and MoNuSAC datasets. The code for replicating our results is available on GitHub (https://github.com/adfoucart/panoptic-quality-suppl).
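For readers weighing this critique, PQ itself is straightforward to compute. A minimal sketch (instances represented as sets of pixel coordinates; not the authors' code) makes the two criticized ingredients explicit: IoU as the matching rule and IoU as the segmentation quality term in the numerator.

```python
def iou(a, b):
    """Intersection over Union between two instances given as pixel-coordinate sets."""
    return len(a & b) / len(a | b)

def panoptic_quality(preds, gts, thr=0.5):
    """PQ = (sum of matched IoUs) / (|TP| + 0.5*|FP| + 0.5*|FN|).
    Prediction/ground-truth pairs with IoU > thr are matched; for thr >= 0.5
    each instance can match at most one counterpart, so greedy matching suffices."""
    matched_gt, tp_ious = set(), []
    for p in preds:
        for j, g in enumerate(gts):
            if j not in matched_gt and iou(p, g) > thr:
                tp_ious.append(iou(p, g))
                matched_gt.add(j)
                break
    tp = len(tp_ious)
    fp = len(preds) - tp
    fn = len(gts) - tp
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(tp_ious) / denom if denom else 0.0
```

The paper's argument is that for objects as small as nuclei, the hard IoU > 0.5 match and the IoU-weighted numerator both behave poorly, even though the formula is well defined.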
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, École polytechnique de Bruxelles, Université Libre de Bruxelles (ULB), 1050, Brussels, Belgium
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, École polytechnique de Bruxelles, Université Libre de Bruxelles (ULB), 1050, Brussels, Belgium
- Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles (ULB), Gosselies, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, École polytechnique de Bruxelles, Université Libre de Bruxelles (ULB), 1050, Brussels, Belgium
- Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles (ULB), Gosselies, Belgium
16
Islam Sumon R, Bhattacharjee S, Hwang YB, Rahman H, Kim HC, Ryu WS, Kim DM, Cho NH, Choi HK. Densely Convolutional Spatial Attention Network for nuclei segmentation of histological images for computational pathology. Front Oncol 2023; 13:1009681. [PMID: 37305563] [PMCID: PMC10248729] [DOI: 10.3389/fonc.2023.1009681] [Received: 08/05/2022] [Accepted: 05/05/2023] [Indexed: 06/13/2023]
Abstract
Introduction Automatic nuclear segmentation in digital microscopic tissue images can aid pathologists in extracting high-quality features for nuclear morphometrics and other analyses. However, image segmentation is a challenging task in medical image processing and analysis. This study aimed to develop a deep learning-based method for nuclei segmentation of histological images for computational pathology. Methods The original U-Net model sometimes has limitations in capturing significant features. Herein, we present the Densely Convolutional Spatial Attention Network (DCSA-Net) model, based on U-Net, to perform the segmentation task. Furthermore, the developed model was tested on an external multi-tissue dataset, MoNuSeg. Developing deep learning algorithms that segment nuclei well requires a large quantity of annotated data, which is expensive and often infeasible to obtain. We collected hematoxylin and eosin-stained image datasets from two hospitals to train the model with a variety of nuclear appearances. Because of the limited number of annotated pathology images, we introduced a small publicly accessible dataset of prostate cancer (PCa) with more than 16,000 labeled nuclei. To construct our proposed model, we developed the DCSA module, an attention mechanism for capturing useful information from raw images. We also used several other artificial intelligence-based segmentation methods and tools to compare their results to our proposed technique. Results To assess the performance of nuclei segmentation, we evaluated the model's outputs based on Accuracy, Dice coefficient (DC), and Jaccard coefficient (JC) scores. The proposed technique outperformed the other methods and achieved superior nuclei segmentation with accuracy, DC, and JC of 96.4% (95% confidence interval [CI]: 96.2-96.6), 81.8 (95% CI: 80.8-83.0), and 69.3 (95% CI: 68.2-70.0), respectively, on the internal test dataset.
Conclusion Our proposed method demonstrates superior performance in segmenting cell nuclei of histological images from internal and external datasets, and outperforms many standard segmentation algorithms used for comparative analysis.
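The Dice and Jaccard coefficients reported above are simple set-overlap measures. A minimal sketch (not the study's evaluation code) for binary masks stored as pixel-coordinate sets:

```python
def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def jaccard(a, b):
    """Jaccard coefficient (IoU): |A∩B| / |A∪B|.
    Related to Dice by J = D / (2 - D), which is why JC is always
    the lower of the two on the same prediction."""
    return len(a & b) / len(a | b)
```

The identity J = D / (2 - D) is consistent with the reported pattern of DC exceeding JC (81.8 vs. 69.3).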
Affiliation(s)
- Rashadul Islam Sumon
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Subrata Bhattacharjee
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Yeong-Byn Hwang
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hafizur Rahman
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Hee-Cheol Kim
- Department of Digital Anti-Aging Healthcare, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Wi-Sun Ryu
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Dong Min Kim
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
- Nam-Hoon Cho
- Department of Pathology, Yonsei University Hospital, Seoul, Republic of Korea
- Heung-Kook Choi
- Department of Computer Engineering, Ubiquitous-Anti-aging-Healthcare Research Center (u-AHRC), Inje University, Gimhae, Republic of Korea
- Artificial Intelligence R&D Center, JLK Inc., Seoul, Republic of Korea
17
Ke J, Lu Y, Shen Y, Zhu J, Zhou Y, Huang J, Yao J, Liang X, Guo Y, Wei Z, Liu S, Huang Q, Jiang F, Shen D. ClusterSeg: A crowd cluster pinpointed nucleus segmentation framework with cross-modality datasets. Med Image Anal 2023; 85:102758. [PMID: 36731275] [DOI: 10.1016/j.media.2023.102758] [Received: 08/17/2022] [Revised: 11/27/2022] [Accepted: 01/18/2023] [Indexed: 01/26/2023]
Abstract
The detection and segmentation of individual cells or nuclei is an indispensable prerequisite in image analysis across a variety of biological and biomedical applications. However, the ubiquitous presence of crowded clusters with morphological variations often hinders successful instance segmentation. In this paper, nuclei-cluster-focused annotation strategies and frameworks are proposed to overcome this challenging practical problem. Specifically, we design a nucleus segmentation framework, namely ClusterSeg, to tackle nuclei clusters, which consists of a convolutional-transformer hybrid encoder and a 2.5-path decoder for precise predictions of nuclei instance masks, contours, and clustered edges. Additionally, an annotation-efficient clustered-edge pointed strategy pinpoints the salient and error-prone boundaries, where a partially-supervised PS-ClusterSeg is presented using ClusterSeg as the segmentation backbone. The framework is evaluated on four privately curated image sets and two public sets with characteristically severely clustered nuclei across a variety of image modalities, e.g., microscopy, cytopathology, and histopathology images. The proposed ClusterSeg and PS-ClusterSeg are modality-independent and generalizable, and empirically superior to current state-of-the-art approaches on multiple metrics. Our collected data, the elaborate annotations for both the public and private sets, as well as the source code, are released publicly at https://github.com/lu-yizhou/ClusterSeg.
Affiliation(s)
- Jing Ke
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Yizhou Lu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, MD, USA
- Junchao Zhu
- School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Yijin Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
- Jinghan Huang
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Jieteng Yao
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaoyao Liang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yi Guo
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Zhonghua Wei
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Sheng Liu
- Department of Thyroid Breast and Vascular Surgery, Shanghai Fourth People's Hospital, School of Medicine, Tongji University, Shanghai, China
- Qin Huang
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fusong Jiang
- Department of Endocrinology and Metabolism, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
18
Vu QD, Rajpoot K, Raza SEA, Rajpoot N. Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images. Med Image Anal 2023; 85:102743. [PMID: 36702037] [DOI: 10.1016/j.media.2023.102743] [Received: 02/14/2022] [Revised: 11/30/2022] [Accepted: 01/05/2023] [Indexed: 01/20/2023]
Abstract
Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation, which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNNs for constructing holistic WSI-level representations. Building on recent findings about the internal working of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term the Handcrafted Histological Transformer, or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T-based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
Affiliation(s)
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK
19
Kanadath A, Angel Arul Jothi J, Urolagin S. Multilevel Multiobjective Particle Swarm Optimization Guided Superpixel Algorithm for Histopathology Image Detection and Segmentation. J Imaging 2023; 9:jimaging9040078. [PMID: 37103229] [PMCID: PMC10145642] [DOI: 10.3390/jimaging9040078] [Received: 01/20/2023] [Revised: 03/22/2023] [Accepted: 03/23/2023] [Indexed: 03/31/2023]
Abstract
Histopathology image analysis is considered as a gold standard for the early diagnosis of serious diseases such as cancer. The advancements in the field of computer-aided diagnosis (CAD) have led to the development of several algorithms for accurately segmenting histopathology images. However, the application of swarm intelligence for segmenting histopathology images is less explored. In this study, we introduce a Multilevel Multiobjective Particle Swarm Optimization guided Superpixel algorithm (MMPSO-S) for the effective detection and segmentation of various regions of interest (ROIs) from Hematoxylin and Eosin (H&E)-stained histopathology images. Several experiments are conducted on four different datasets such as TNBC, MoNuSeg, MoNuSAC, and LD to ascertain the performance of the proposed algorithm. For the TNBC dataset, the algorithm achieves a Jaccard coefficient of 0.49, a Dice coefficient of 0.65, and an F-measure of 0.65. For the MoNuSeg dataset, the algorithm achieves a Jaccard coefficient of 0.56, a Dice coefficient of 0.72, and an F-measure of 0.72. Finally, for the LD dataset, the algorithm achieves a precision of 0.96, a recall of 0.99, and an F-measure of 0.98. The comparative results demonstrate the superiority of the proposed method over the simple Particle Swarm Optimization (PSO) algorithm, its variants (Darwinian particle swarm optimization (DPSO), fractional order Darwinian particle swarm optimization (FODPSO)), Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), non-dominated sorting genetic algorithm 2 (NSGA2), and other state-of-the-art traditional image processing methods.
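The particle swarm update rule underlying all the PSO variants compared above is compact. The following is a minimal single-objective, one-dimensional sketch for illustration only; the paper's MMPSO-S is multilevel and multiobjective, which this does not attempt to reproduce:

```python
import random

def pso(f, lo, hi, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal 1-D particle swarm optimization minimizing f on [lo, hi].
    Velocity update: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x),
    with inertia w and cognitive/social weights c1, c2."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                      # each particle's best-known position
    gbest = min(xs, key=f)             # swarm's best-known position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))  # clamp to the search bounds
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest
```

In multilevel thresholding for segmentation, the position would be a vector of threshold values and f a criterion such as between-class variance; the hyperparameter values shown are common defaults, not the paper's settings.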
20
Wang J, Qin L, Chen D, Wang J, Han BW, Zhu Z, Qiao G. An improved Hover-net for nuclear segmentation and classification in histopathology images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08394-3]
21
Komura D, Onoyama T, Shinbo K, Odaka H, Hayakawa M, Ochi M, Herdiantoputri RR, Endo H, Katoh H, Ikeda T, Ushiku T, Ishikawa S. Restaining-based annotation for cancer histology segmentation to overcome annotation-related limitations among pathologists. Patterns (N Y) 2023; 4:100688. [PMID: 36873900 DOI: 10.1016/j.patter.2023.100688]
Abstract
Numerous cancer histopathology specimens have been collected and digitized over the past few decades. A comprehensive evaluation of the distribution of various cells in tumor tissue sections can provide valuable information for understanding cancer. Deep learning is suitable for achieving these goals; however, the collection of extensive, unbiased training data is hindered, thus limiting the production of accurate segmentation models. This study presents SegPath-the largest annotation dataset (>10 times larger than publicly available annotations)-for the segmentation of hematoxylin and eosin (H&E)-stained sections for eight major cell types in cancer tissue. The SegPath generating pipeline used H&E-stained sections that were destained and subsequently immunofluorescence-stained with carefully selected antibodies. We found that SegPath is comparable with, or outperforms, pathologist annotations. Moreover, annotations by pathologists are biased toward typical morphologies. However, the model trained on SegPath can overcome this limitation. Our results provide foundational datasets for machine-learning research in histopathology.
22
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium.
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium.
23
Liang Y, Yin Z, Liu H, Zeng H, Wang J, Liu J, Che N. Weakly Supervised Deep Nuclei Segmentation With Sparsely Annotated Bounding Boxes for DNA Image Cytometry. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:785-795. [PMID: 34951851 DOI: 10.1109/tcbb.2021.3138189]
Abstract
Nuclei segmentation is an essential step in DNA ploidy analysis by image-based cytometry (DNA-ICM) which is widely used in cytopathology and allows an objective measurement of DNA content (ploidy). The routine fully supervised learning-based method requires often tedious and expensive pixel-wise labels. In this paper, we propose a novel weakly supervised nuclei segmentation framework which exploits only sparsely annotated bounding boxes, without any segmentation labels. The key is to integrate the traditional image segmentation and self-training into fully supervised instance segmentation. We first leverage the traditional segmentation to generate coarse masks for each box-annotated nucleus to supervise the training of a teacher model, which is then responsible for both the refinement of these coarse masks and pseudo labels generation of unlabeled nuclei. These pseudo labels and refined masks along with the original manually annotated bounding boxes jointly supervise the training of student model. Both teacher and student share the same architecture and especially the student is initialized by the teacher. We have extensively evaluated our method with both our DNA-ICM dataset and public cytopathological dataset. Without bells and whistles, our method outperforms all existing weakly supervised entries on both datasets. Code and our DNA-ICM dataset are publicly available at https://github.com/CVIU-CSU/Weakly-Supervised-Nuclei-Segmentation.
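The teacher–student recipe summarised above can be reduced to a tiny skeleton: the teacher refines the coarse box-derived masks and pseudo-labels the unlabeled images, and the combined set supervises the student. All names below are illustrative, not the authors' published API:

```python
def build_student_targets(teacher_predict, box_annotated, unlabeled):
    """One self-training round: the teacher model refines masks for
    box-annotated images and generates pseudo labels for unlabeled ones;
    the merged dict is the student's supervision signal."""
    refined = {name: teacher_predict(image) for name, image in box_annotated.items()}
    pseudo = {name: teacher_predict(image) for name, image in unlabeled.items()}
    return {**refined, **pseudo}
```

In the paper the teacher and student share one architecture and the student is initialized from the teacher; this sketch only captures the data-flow of a single round.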
24
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
25
Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:5778. [PMID: 36497258 PMCID: PMC9738801 DOI: 10.3390/cancers14235778]
Abstract
Recent methods in computational pathology have trended towards semi- and weakly-supervised methods requiring only slide-level labels. Yet, even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. The resulting set of intra-slide-level and inter-slide-level embeddings are attracted and repelled via contrastive loss, respectively. This resulted in slide-level representations with self-supervision. We applied our method to two tasks- (1) non-small cell lung cancer subtyping (NSCLC) as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype-and achieved an AUC of 0.8641 ± 0.0115 and correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner, where meaningful features can be learned from whole-slide images without the need for annotations of slide-level labels. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to benefit from completely unlabeled whole-slide images.
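The attention-based fusion of tile embeddings into a single slide-level representation can be sketched in a few lines. The weight shapes and names here are assumptions for illustration; the paper's actual model learns these weights end to end inside a neural network:

```python
import numpy as np

def attention_pool(tile_embeddings: np.ndarray, V: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Attention-based MIL pooling: score each tile embedding, softmax the
    scores into attention weights, and return the weighted average as the
    slide-level embedding. tile_embeddings: (n_tiles, dim); V: (dim, hidden);
    w: (hidden,)."""
    scores = np.tanh(tile_embeddings @ V) @ w      # one score per tile
    a = np.exp(scores - scores.max())              # numerically stable softmax
    a /= a.sum()
    return a @ tile_embeddings                     # convex combination, shape (dim,)
```

Because the output is a convex combination of the tile embeddings, it always lies inside their per-dimension range, which is what makes it a pooling rather than a projection.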
26
Mahbod A, Schaefer G, Dorffner G, Hatamikia S, Ecker R, Ellinger I. A dual decoder U-Net-based model for nuclei instance segmentation in hematoxylin and eosin-stained histological images. Front Med (Lausanne) 2022; 9:978146. [PMID: 36438040 PMCID: PMC9691672 DOI: 10.3389/fmed.2022.978146]
Abstract
Even in the era of precision medicine, with various molecular tests based on omics technologies available to improve the diagnosis process, microscopic analysis of images derived from stained tissue sections remains crucial for diagnostic and treatment decisions. Among other cellular features, both nuclei number and shape provide essential diagnostic information. With the advent of digital pathology and emerging computerized methods to analyze the digitized images, nuclei detection, their instance segmentation and classification can be performed automatically. These computerized methods support human experts and allow for faster and more objective image analysis. While methods ranging from conventional image processing techniques to machine learning-based algorithms have been proposed, supervised convolutional neural network (CNN)-based techniques have delivered the best results. In this paper, we propose a CNN-based dual decoder U-Net-based model to perform nuclei instance segmentation in hematoxylin and eosin (H&E)-stained histological images. While the encoder path of the model is developed to perform standard feature extraction, the two decoder heads are designed to predict the foreground and distance maps of all nuclei. The outputs of the two decoder branches are then merged through a watershed algorithm, followed by post-processing refinements to generate the final instance segmentation results. Moreover, to additionally perform nuclei classification, we develop an independent U-Net-based model to classify the nuclei predicted by the dual decoder model. When applied to three publicly available datasets, our method achieves excellent segmentation performance, leading to average panoptic quality values of 50.8%, 51.3%, and 62.1% for the CryoNuSeg, NuInsSeg, and MoNuSAC datasets, respectively. Moreover, our model is the top-ranked method in the MoNuSAC post-challenge leaderboard.
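The merging step described above (foreground mask plus predicted distance map, combined into instances) can be approximated with a minimal sketch: threshold the distance map to get marker cores, label them, then assign every remaining foreground pixel to its nearest marker. The nearest-marker assignment is a crude stand-in for the watershed the paper actually uses, and all names are mine:

```python
import numpy as np

def label_components(mask: np.ndarray):
    """4-connected component labelling (a tiny stand-in for scipy.ndimage.label)."""
    lab = np.zeros(mask.shape, dtype=int)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and lab[i, j] == 0:
                n += 1
                lab[i, j] = n
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        v, u = y + dy, x + dx
                        if 0 <= v < mask.shape[0] and 0 <= u < mask.shape[1] \
                                and mask[v, u] and lab[v, u] == 0:
                            lab[v, u] = n
                            stack.append((v, u))
    return lab, n

def split_instances(foreground: np.ndarray, distance: np.ndarray,
                    marker_thresh: float = 0.5) -> np.ndarray:
    """Seeds = high-distance nucleus cores; every other foreground pixel
    joins its nearest seed (approximating marker-based watershed)."""
    markers, _ = label_components(distance > marker_thresh)
    seeds = np.argwhere(markers > 0)
    out = np.zeros(foreground.shape, dtype=int)
    for y, x in np.argwhere(foreground > 0):
        d2 = ((seeds - (y, x)) ** 2).sum(axis=1)
        out[y, x] = markers[tuple(seeds[d2.argmin()])]
    return out
```

A real implementation would use `skimage.segmentation.watershed` on the negated distance map, which respects ridge lines between touching nuclei instead of straight-line proximity.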
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough, United Kingdom
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
- Rupert Ecker
- Department of Research and Development, TissueGnostics GmbH, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
27
Batista LG, Bugatti PH, Saito PTM. Classification of Skin Lesion through Active Learning Strategies. Comput Methods Programs Biomed 2022; 226:107122. [PMID: 36116397 DOI: 10.1016/j.cmpb.2022.107122]
Abstract
BACKGROUND AND OBJECTIVE According to the National Cancer Institute, among all malignant tumors, non-melanoma skin cancer and melanoma are the most frequent in Brazil. Despite having a lower incidence, the melanoma type has accelerated growth and greater lethality. Several studies have been performed in recent years in the computer vision area to assist in the early diagnosis of skin cancer. Despite being widely used and presenting good results, deep learning approaches require a large amount of annotated data and considerable computational cost for training the model. Therefore, the present work explores active learning approaches to select a small set of more informative data for training the classifier. For that, different selection criteria are considered to obtain more effective and efficient classifiers for skin lesions. METHODS We perform an extensive experimental evaluation considering three datasets and different learning strategies and scenarios for validation. In addition to data augmentation, we evaluated two segmentation strategies considering the U-net CNN model and the Fully Convolutional Networks (FCN) with a manual expert review. We also analyzed the best (handcrafted and deep) features that describe each skin lesion and the most suitable classifiers and combinations (extractor-classifier) for this context. The active learning approach evaluated different criteria based on uncertainty, diversity, and representativeness to select the most informative samples. The strategies used were Decreasing Boundary Edges, Entropy, Least Confidence, Margin Sampling, Minimum-Spanning Tree Boundary Edges, and Root-Distance based Sampling. RESULTS It can be observed that the segmentation with FCN and manual correction by the specialist, the Border-Interior Classification (BIC) extractor, and the Random Forest (RF) classifier showed a better performance. Regarding the active learning approach, the Margin Sampling strategy presented the best classification accuracies (about 93%) with only 35% of the training set compared to the traditional learning approach (which requires the entire set). CONCLUSIONS According to the results, it is possible to observe that the selection strategies allow for achieving high accuracies faster (fewer learning iterations) and with a smaller amount of labeled samples compared to the traditional learning approach. Hence, active learning can contribute significantly to the diagnosis of skin lesions, beneficially reducing specialists' annotation costs.
Affiliation(s)
- Lucas G Batista
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil.
- Pedro H Bugatti
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil.
- Priscila T M Saito
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil; Department of Computing, Federal University of Sao Carlos, km 235, Rodovia Washington Luis, Sao Carlos, SP 13565-905, Brazil; Institute of Computing, State University of Campinas, 1251, Albert Einstein Ave, Cidade Universitária, Campinas, SP 13083-852, Brazil.
28
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:bbac367. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367]
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao (corresponding author)
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences
29
Yang Y, Yan T, Jiang X, Xie R, Li C, Zhou T. MH-Net: Model-data-driven hybrid-fusion network for medical image segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108795]
30
Amgad M, Atteya LA, Hussein H, Mohammed KH, Hafiz E, Elsebaie MAT, Alhusseiny AM, AlMoslemany MA, Elmatboly AM, Pappalardo PA, Sakr RA, Mobadersany P, Rachid A, Saad AM, Alkashash AM, Ruhban IA, Alrefai A, Elgazar NM, Abdulkarim A, Farag AA, Etman A, Elsaeed AG, Alagha Y, Amer YA, Raslan AM, Nadim MK, Elsebaie MAT, Ayad A, Hanna LE, Gadallah A, Elkady M, Drumheller B, Jaye D, Manthey D, Gutman DA, Elfandy H, Cooper LAD. NuCLS: A scalable crowdsourcing approach and dataset for nucleus classification and segmentation in breast cancer. Gigascience 2022; 11:giac037. [PMID: 35579553 PMCID: PMC9112766 DOI: 10.1093/gigascience/giac037]
Abstract
Background Deep learning enables accurate high-resolution mapping of cells and tissue structures that can serve as the foundation of interpretable machine-learning models for computational pathology. However, generating adequate labels for these structures is a critical barrier, given the time and effort required from pathologists. Results This article describes a novel collaborative framework for engaging crowds of medical students and pathologists to produce quality labels for cell nuclei. We used this approach to produce the NuCLS dataset, containing >220,000 annotations of cell nuclei in breast cancers. This builds on prior work labeling tissue regions to produce an integrated tissue region- and cell-level annotation dataset for training that is the largest such resource for multi-scale analysis of breast cancer histology. This article presents data and analysis results for single and multi-rater annotations from both non-experts and pathologists. We present a novel workflow that uses algorithmic suggestions to collect accurate segmentation data without the need for laborious manual tracing of nuclei. Our results indicate that even noisy algorithmic suggestions do not adversely affect pathologist accuracy and can help non-experts improve annotation quality. We also present a new approach for inferring truth from multiple raters and show that non-experts can produce accurate annotations for visually distinctive classes. Conclusions This study is the most extensive systematic exploration of the large-scale use of wisdom-of-the-crowd approaches to generate data for computational pathology applications.
Affiliation(s)
- Mohamed Amgad
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Lamees A Atteya
- Cairo Health Care Administration, Egyptian Ministry of Health, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Hagar Hussein
- Department of Pathology, Nasser institute for research and treatment, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Kareem Hosny Mohammed
- Department of Pathology and Laboratory Medicine, University of Pennsylvania, 3620 Hamilton Walk M163, Philadelphia, PA 19104, USA
- Ehab Hafiz
- Department of Clinical Laboratory Research, Theodor Bilharz Research Institute, 1 El-Nile Street, Imbaba Warrak El-Hadar, Giza, Postal code 12411, Egypt
- Maha A T Elsebaie
- Department of Medicine, Cook County Hospital, 1969 W Ogden Ave, Chicago, IL 60612, USA
- Ahmed M Alhusseiny
- Department of Pathology, Baystate Medical Center, University of Massachusetts, 759 Chestnut St, Springfield, MA 01199, USA
- Mohamed Atef AlMoslemany
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Abdelmagid M Elmatboly
- Faculty of Medicine, Al-Azhar University, 15 Mohammed Abdou, El-Darb El-Ahmar, Cairo Governorate, Postal code 11651, Egypt
- Philip A Pappalardo
- Consultant for The Center for Applied Proteomics and Molecular Medicine (CAPMM), George Mason University, 10920 George Mason Circle, Institute for Advanced Biomedical Research Room 2008, MS1A9, Manassas, Virginia 20110, USA
- Rokia Adel Sakr
- Department of Pathology, National Liver Institute, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Pooya Mobadersany
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Ahmad Rachid
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Anas M Saad
- Cleveland Clinic Foundation, 9500 Euclid Ave., Cleveland, Ohio 44195, USA
- Ahmad M Alkashash
- Department of Pathology, Indiana University, 635 Barnhill Drive, Medical Science Building A-128, Indianapolis, IN 46202, USA
- Inas A Ruhban
- Faculty of Medicine, Damascus University, Damascus, PO Box 30621, Syria
- Anas Alrefai
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Nada M Elgazar
- Faculty of Medicine, Mansoura University, 1 El Gomhouria St, Dakahlia Governorate 35516, Egypt
- Ali Abdulkarim
- Faculty of Medicine, Cairo University, Kasr Al Ainy Hospitals, Kasr Al Ainy St., Cairo, Postal code: 11562, Egypt
- Abo-Alela Farag
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Amira Etman
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Ahmed G Elsaeed
- Faculty of Medicine, Mansoura University, 1 El Gomhouria St, Dakahlia Governorate 35516, Egypt
- Yahya Alagha
- Faculty of Medicine, Cairo University, Kasr Al Ainy Hospitals, Kasr Al Ainy St., Cairo, Postal code: 11562, Egypt
- Yomna A Amer
- Faculty of Medicine, Menoufia University, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Ahmed M Raslan
- Department of Anaesthesia and Critical Care, Menoufia University Hospital, Gamal Abd El-Nasir, Qism Shebeen El-Kom, Shibin el Kom, Menofia Governorate, Postal code: 32511, Egypt
- Menatalla K Nadim
- Department of Clinical Pathology, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Mai A T Elsebaie
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Ahmed Ayad
- Research Department, Oncology Consultants, 2130 W. Holcombe Blvd, 10th Floor, Houston, Texas 77030, USA
- Liza E Hanna
- Department of Pathology, Nasser institute for research and treatment, 3 Magles El Shaab Street, Cairo, Postal code 222, Egypt
- Ahmed Gadallah
- Faculty of Medicine, Ain Shams University, 38 Abbassia, Next to the Al-Nour Mosque, Cairo, Postal code: 1181, Egypt
- Mohamed Elkady
- Siparadigm Diagnostic Informatics, 25 Riverside Dr no. 2, Pine Brook, NJ 07058, USA
- Bradley Drumheller
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- David Jaye
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- David Manthey
- Kitware Inc., 1712 Route 9, Suite 300, Clifton Park, New York 12065, USA
- David A Gutman
- Department of Neurology, Emory University School of Medicine, 201 Dowman Dr, Atlanta, GA 30322, USA
- Habiba Elfandy
- Department of Pathology, National Cancer Institute, Kasr Al Eini Street, Fom El Khalig, Cairo, Postal code: 11562, Egypt
- Department of Pathology, Children's Cancer Hospital Egypt (CCHE 57357), 1 Seket Al-Emam Street, El-Madbah El-Kadeem Yard, El-Saida Zenab, Cairo, Postal code: 11562, Egypt
- Lee A D Cooper
- Department of Pathology, Northwestern University, 750 N Lake Shore Dr., Chicago, IL 60611, USA
- Lurie Cancer Center, Northwestern University, 675 N St Clair St Fl 21 Ste 100, Chicago, IL 60611, USA
- Center for Computational Imaging and Signal Analytics, Northwestern University Feinberg School of Medicine, 750 N Lake Shore Dr., Chicago, IL 60611, USA
31
Lee HM, Kim YJ, Kim KG. Segmentation Performance Comparison Considering Regional Characteristics in Chest X-ray Using Deep Learning. Sensors (Basel) 2022; 22:3143. [PMID: 35590833 DOI: 10.3390/s22093143]
Abstract
Chest radiography is one of the most widely used diagnostic methods in hospitals, but it is difficult to read clearly because several human organ tissues and bones overlap. Therefore, various image processing and rib segmentation methods have been proposed to focus on the desired target. However, it is challenging to segment ribs elaborately using deep learning because they cannot reflect the characteristics of each region. Identifying which region has specific characteristics vulnerable to deep learning is an essential indicator of developing segmentation methods in medical imaging. Therefore, it is necessary to compare the deep learning performance differences based on regional characteristics. This study compares the differences in deep learning performance based on the rib region to verify whether deep learning reflects the characteristics of each part and to demonstrate why this regional performance difference has occurred. We utilized 195 normal chest X-ray datasets with data augmentation for learning and 5-fold cross-validation. To compare segmentation performance, the rib image was divided vertically and horizontally based on the spine, clavicle, heart, and lower organs, which are characteristic indicators of the baseline chest X-ray. Resultingly, we found that the deep learning model showed a 6-7% difference in the segmentation performance depending on the regional characteristics of the rib. We verified that the performance differences in each region cannot be ignored. This study will enable a more precise segmentation of the ribs and the development of practical deep learning algorithms.
32
Guan S, Mehta B, Slater D, Thompson JR, DiCarlo E, Pannellini T, Pearce-Fisher D, Zhang F, Raychaudhuri S, Hale C, Jiang CS, Goodman S, Orange DE. Rheumatoid Arthritis Synovial Inflammation Quantification Using Computer Vision. ACR Open Rheumatol 2022; 4:322-331. [PMID: 35014221 PMCID: PMC8992472 DOI: 10.1002/acr2.11381]
Abstract
OBJECTIVE We quantified inflammatory burden in rheumatoid arthritis (RA) synovial tissue by using computer vision to automate the counting of individual nuclei in hematoxylin and eosin images. METHODS We adapted and applied computer vision algorithms to quantify nuclei density (count of nuclei per unit area of tissue) in synovial tissue from arthroplasty samples. A pathologist validated the algorithm's results by labeling nuclei in synovial images that the algorithm mislabeled or missed. Nuclei density was compared with other measures of RA inflammation, such as semiquantitative histology scores, gene-expression data, and clinical measures of disease activity. RESULTS The algorithm detected a median of 112,657 (range 8,160-821,717) nuclei per synovial sample. Based on the pathologist-validated results, the sensitivity and specificity of the algorithm were 97% and 100%, respectively. Mean nuclei density calculated by the algorithm was significantly higher (P < 0.05) in synovium with increased histology scores for lymphocytic inflammation, plasma cells, and lining hyperplasia. RNA-sequencing analysis identified 915 genes significantly differentially expressed in correlation with nuclei density (false discovery rate < 0.05). Mean nuclei density was also significantly higher (P < 0.05) in patients with elevated levels of C-reactive protein, erythrocyte sedimentation rate, rheumatoid factor, and cyclic citrullinated protein antibody. CONCLUSION Nuclei density is a robust measurement of inflammatory burden in RA and correlates with multiple orthogonal measurements of inflammation.
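The core quantities in this abstract, nuclei density and the validation metrics, reduce to simple ratios. A minimal sketch with hypothetical counts (the tissue area and confusion-matrix values below are illustrative, not taken from the paper):

```python
def nuclei_density(nucleus_count, tissue_area_mm2):
    """Count of detected nuclei per unit area of tissue (here per mm^2)."""
    return nucleus_count / tissue_area_mm2

def sensitivity(true_pos, false_neg):
    """Fraction of real nuclei the algorithm found."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of non-nucleus candidates correctly rejected."""
    return true_neg / (true_neg + false_pos)

# Median sample count from the abstract, with a hypothetical tissue area.
density = nuclei_density(112657, 25.0)          # nuclei per mm^2
sens = sensitivity(true_pos=97, false_neg=3)    # 0.97, matching the reported 97%
spec = specificity(true_neg=100, false_pos=0)   # 1.00, matching the reported 100%
```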
Affiliation(s)
- Bella Mehta: Hospital for Special Surgery, New York, New York; Weill Cornell Medicine, New York, New York.
- Fan Zhang: Center for Data Sciences, Brigham and Women's Hospital, Boston, Massachusetts; Division of Genetics, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts; Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts; Program in Medical and Population Genetics, Broad Institute of MIT and Harvard, Cambridge, Massachusetts; Division of Rheumatology, Inflammation and Immunity, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts.
- Soumya Raychaudhuri: Center for Data Sciences, Brigham and Women's Hospital, Boston, Massachusetts; Division of Genetics, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts; Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts; Program in Medical and Population Genetics, Broad Institute of MIT and Harvard, Cambridge, Massachusetts; Division of Rheumatology, Inflammation and Immunity, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts; Centre for Genetics and Genomics Versus Arthritis, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK.
- Susan Goodman: Hospital for Special Surgery, New York, New York; Weill Cornell Medicine, New York, New York.
- Dana E. Orange: Hospital for Special Surgery, New York, New York; Rockefeller University, New York, New York.
33. Verma R, Kumar N, Patil A, Kurian NC, Rane S, Sethi A. Author's Reply to "MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge". IEEE Trans Med Imaging 2022; 41:1000-1003. [PMID: 35363607 DOI: 10.1109/tmi.2022.3157048] [Indexed: 06/14/2023]
Abstract
We released MoNuSAC2020 as one of the largest publicly available, manually annotated, curated, multi-class, multi-instance medical image segmentation datasets. Based on this dataset, we organized a challenge at the International Symposium on Biomedical Imaging (ISBI) 2020 and, along with the challenge participants, published an article summarizing its results and findings (Verma et al., 2021). Foucart et al. (2022), in their "Analysis of the MoNuSAC 2020 challenge evaluation and results: metric implementation errors", pointed out ways in which the computation of the challenge's segmentation performance metric could be corrected or improved. After carefully examining their analysis, we found a small bug in our code and an erroneous column-header swap in one of our result tables. Here, we present our response to their analysis and issue an erratum. After fixing the bug, the challenge rankings remain largely unaffected. Two of Foucart et al.'s other suggestions are worth future consideration, but it is not clear that they should be implemented immediately. We thank Foucart et al. for their detailed analysis, which helped us fix the two errors.
34. Doan TNN, Song B, Vuong TTL, Kim K, Kwak JT. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images. IEEE J Biomed Health Inform 2022; 26:3218-3228. [PMID: 35139032 DOI: 10.1109/jbhi.2022.3149936] [Indexed: 11/05/2022]
Abstract
Automated nuclei segmentation and classification are key to analyzing and understanding cellular characteristics and function, supporting computer-aided digital pathology in disease diagnosis. However, the task remains challenging due to the intrinsic variation in size, intensity, and morphology across different types of nuclei. Herein, we propose a self-guided ordinal regression neural network for simultaneous nuclear segmentation and classification that exploits the intrinsic characteristics of nuclei and focuses on highly uncertain areas during training. The proposed network formulates nuclei segmentation as an ordinal regression problem by introducing a distance-decreasing discretization strategy, which stratifies nuclei so that the inner regions, forming the regular core of a nucleus, are separated from the outer regions, which form its irregular boundary. It also adopts a self-guided training strategy that adaptively adjusts the weights associated with nuclear pixels according to pixel difficulty, as assessed by the network itself. To evaluate the proposed network, we employ large-scale multi-tissue datasets with 276,349 exhaustively annotated nuclei. We show that the proposed network achieves state-of-the-art performance in both nuclei segmentation and classification compared with several recently developed segmentation and/or classification methods.
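The distance-decreasing discretization idea, stratifying each nucleus from its irregular outer boundary to its regular core, can be sketched by bucketing a distance-to-background transform into ordinal classes. This is an illustrative pure-Python approximation, not the paper's exact scheme:

```python
from collections import deque

def ordinal_labels(mask, n_classes):
    """Discretize a binary nucleus mask into ordinal classes by distance to
    background: outer boundary rings get low labels, the inner core gets the
    highest. A toy stand-in for a distance-based discretization strategy."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):                       # seed the BFS from background pixels
        for x in range(w):
            if mask[y][x] == 0:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:                             # 4-neighbour BFS distance transform
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    dmax = max(max(row) for row in dist) or 1
    # Bucket positive distances into ordinal levels 1..n_classes.
    return [[0 if d == 0 else min(n_classes, 1 + (d - 1) * n_classes // dmax)
             for d in row] for row in dist]

# A 3x3 nucleus: its boundary ring becomes class 1, its core class 2.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
labels = ordinal_labels(mask, n_classes=2)
```

The network then regresses these ordered labels rather than a flat foreground/background target, which is the essence of casting segmentation as ordinal regression.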
35. Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. [DOI: 10.1016/j.tcb.2021.12.004] [Received: 08/09/2021] [Revised: 11/30/2021] [Accepted: 12/14/2021] [Indexed: 11/25/2022]
36. Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103276] [Indexed: 11/30/2022]
37. Yu WH, Li CH, Wang RC, Yeh CY, Chuang SS. Machine Learning Based on Morphological Features Enables Classification of Primary Intestinal T-Cell Lymphomas. Cancers (Basel) 2021; 13:5463. [PMID: 34771625 DOI: 10.3390/cancers13215463] [Received: 09/22/2021] [Revised: 10/27/2021] [Accepted: 10/28/2021] [Indexed: 01/07/2023]
Abstract
The aim of this study was to investigate the feasibility of using machine learning techniques based on morphological features to classify two subtypes of primary intestinal T-cell lymphomas (PITLs) defined according to the WHO criteria: monomorphic epitheliotropic intestinal T-cell lymphoma (MEITL) versus intestinal T-cell lymphoma, not otherwise specified (ITCL-NOS), a distinction considered a major challenge for pathological diagnosis. A total of 40 histopathological whole-slide images (WSIs) from 40 surgically resected PITL cases served as the dataset for model training and testing. A deep neural network was trained to detect and segment the nuclei of lymphocytes, and quantitative nuclear morphometrics were computed from the predicted contours. A decision-tree-based machine learning algorithm, XGBoost, was then trained to classify PITL cases into the two disease subtypes using these nuclear morphometric features. The deep neural network achieved an average precision of 0.881 on the cell segmentation task. In classifying MEITL versus ITCL-NOS, the XGBoost model achieved an area under the receiver operating characteristic curve (AUC) of 0.966. Our research demonstrates an accurate, human-interpretable approach to using machine learning algorithms to reduce the high dimensionality of image features and classify T-cell lymphomas that present challenges in morphologic diagnosis. The quantitative nuclear morphometric features may lead to further discoveries concerning the relationship between cellular phenotype and disease status.
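Nuclear morphometrics computed from predicted contours can be illustrated with a couple of classic shape descriptors; the feature set below (area, perimeter, circularity) is a hypothetical example, not the paper's actual feature list:

```python
import math

def nuclear_morphometrics(contour):
    """Morphometric features from a polygonal nucleus contour: area via the
    shoelace formula, perimeter, and circularity (1.0 for a perfect circle)."""
    n = len(contour)
    area = 0.5 * abs(sum(contour[i][0] * contour[(i + 1) % n][1]
                         - contour[(i + 1) % n][0] * contour[i][1]
                         for i in range(n)))
    perimeter = sum(math.dist(contour[i], contour[(i + 1) % n]) for i in range(n))
    return {"area": area,
            "perimeter": perimeter,
            "circularity": 4 * math.pi * area / perimeter ** 2}

# A square "nucleus" contour: far from circular, so circularity is pi/4.
square = nuclear_morphometrics([(0, 0), (10, 0), (10, 10), (0, 10)])
```

Vectors of such per-nucleus descriptors, aggregated per slide, are the kind of low-dimensional, human-interpretable input a tree-based classifier like XGBoost consumes.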