1. Ahmad Z, Al-Thelaya K, Alzubaidi M, Joad F, Gilal NU, Mifsud W, Boughorbel S, Pintore G, Gobbetti E, Schneider J, Agus M. HistoMSC: Density and topology analysis for AI-based visual annotation of histopathology whole slide images. Comput Biol Med 2025; 190:109991. [PMID: 40120181] [DOI: 10.1016/j.compbiomed.2025.109991] [Received: 07/08/2024] [Revised: 12/20/2024] [Accepted: 03/04/2025] [Indexed: 03/25/2025]
Abstract
We introduce an end-to-end framework for the automated visual annotation of histopathology whole slide images. Our method integrates deep learning models to achieve precise localization and classification of cell nuclei with spatial data aggregation to extend classes of sparsely distributed nuclei across the entire slide. We introduce a novel and cost-effective approach to localization, leveraging a U-Net architecture and a ResNet-50 backbone. The performance is boosted through color normalization techniques, helping achieve robustness under color variations resulting from diverse scanners and staining reagents. The framework is complemented by a YOLO detection architecture, augmented with generative methods. For classification, we use context patches around each nucleus, fed to various deep architectures. Sparse nuclei-level annotations are then aggregated using kernel density estimation, followed by color-coding and isocontouring. This reduces visual clutter and provides per-pixel probabilities with respect to pathology taxonomies. Finally, we use Morse-Smale theory to generate abstract annotations, highlighting extrema in the density functions and potential spatial interactions in the form of abstract graphs. Thus, our visualization allows for exploration at scales ranging from individual nuclei to the macro-scale. We tested the effectiveness of our framework in an assessment by six pathologists using various neoplastic cases. Our results demonstrate the robustness and usefulness of the proposed framework in aiding histopathologists in their analysis and interpretation of whole slide images.
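The aggregation step this abstract describes, sparse nuclei annotations smoothed by kernel density estimation and then isocontoured, can be sketched as follows. This is a minimal illustration on synthetic coordinates, not the HistoMSC implementation; the grid size, bandwidth, and 0.5 isocontour level are arbitrary assumptions.

```python
import numpy as np

def density_map(points, grid_size=64, bandwidth=0.05):
    """Aggregate sparse nuclei positions (in the unit square) into a
    per-pixel density map via a Gaussian kernel density estimate."""
    ys, xs = np.mgrid[0:1:grid_size * 1j, 0:1:grid_size * 1j]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)            # (G, 2)
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (G, N) squared distances
    dens = np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1)        # kernel sums
    return (dens / dens.max()).reshape(grid_size, grid_size)     # normalised to [0, 1]

def isocontour_region(dens, level=0.5):
    """Binary region bounded by the `level` isocontour of the density."""
    return dens >= level

rng = np.random.default_rng(0)
nuclei = rng.uniform(0.3, 0.7, size=(200, 2))  # one synthetic nuclei cluster
dens = density_map(nuclei)
region = isocontour_region(dens, level=0.5)
```

The bandwidth controls the scale at which isolated nuclei merge into contiguous annotated regions; the Morse-Smale analysis in the paper then operates on extrema of density functions of this kind.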
Affiliation(s)
- Zahoor Ahmad
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Khaled Al-Thelaya
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Mahmood Alzubaidi
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Faaiz Joad
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Nauman Ullah Gilal
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Sabri Boughorbel
- Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Jens Schneider
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
- Marco Agus
- Division of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha, Qatar
2. Beltzung F, Le VL, Molnar I, Boutault E, Darcha C, Le Loarer F, Kossai M, Saut O, Biau J, Penault-Llorca F, Chautard E. Leveraging Deep Learning for Immune Cell Quantification and Prognostic Evaluation in Radiotherapy-Treated Oropharyngeal Squamous Cell Carcinomas. Lab Invest 2025; 105:104094. [PMID: 39826685] [DOI: 10.1016/j.labinv.2025.104094] [Received: 07/27/2024] [Revised: 12/24/2024] [Accepted: 01/09/2025] [Indexed: 01/22/2025]
Abstract
The tumor microenvironment plays a critical role in cancer progression and therapeutic responsiveness, with the tumor immune microenvironment (TIME) being a key modulator. In head and neck squamous cell carcinomas (HNSCCs), immune cell infiltration significantly influences the response to radiotherapy (RT). A better understanding of the TIME in HNSCCs could help identify patients most likely to benefit from combining RT with immunotherapy. Standardized, cost-effective methods for studying TIME in HNSCCs are currently lacking. This study aims to leverage deep learning (DL) to quantify immune cell densities using immunohistochemistry in untreated oropharyngeal squamous cell carcinoma (OPSCC) biopsies of patients scheduled for curative RT and assess their prognostic value. We analyzed 84 pretreatment formalin-fixed paraffin-embedded tumor biopsies from OPSCC patients. Immunohistochemistry was performed for CD3, CD8, CD20, CD163, and FOXP3, and whole slide images were digitized for analysis using a U-Net-based DL model. Two quantification approaches were applied: a cell-counting method and an area-based method. These methods were applied to stained regions. The DL model achieved high accuracy in detecting stained cells across all biomarkers. Strong correlations were found between our DL pipeline, the HALO Image Analysis Platform, and the open-source QuPath software for estimating immune cell densities. Our DL pipeline provided an accurate and reproducible approach for quantifying immune cells in OPSCC. The area-based method demonstrated superior prognostic value for recurrence-free survival, when compared with the cell-counting method. Elevated densities of CD3, CD8, CD20, and FOXP3 were associated with improved recurrence-free survival, whereas CD163 showed no significant prognostic association. These results highlight the potential of DL in digital pathology for assessing TIME and predicting patient outcomes.
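The two quantification approaches compared in this abstract can be sketched as follows; the patch, counts, and area values are synthetic, and the function names are illustrative rather than taken from the authors' pipeline.

```python
import numpy as np

def count_density(n_positive_cells, tissue_area_mm2):
    """Cell-counting method: detected positive cells per mm^2 of tissue."""
    return n_positive_cells / tissue_area_mm2

def area_density(stain_mask, tissue_mask):
    """Area-based method: fraction of the tissue area covered by stain."""
    return stain_mask[tissue_mask].mean()

# toy 100x100 patch in which 10% of the tissue is stained
tissue = np.ones((100, 100), dtype=bool)
stain = np.zeros_like(tissue)
stain[:10, :] = True

cells_per_mm2 = count_density(n_positive_cells=250, tissue_area_mm2=0.5)  # 500.0
stained_fraction = area_density(stain, tissue)                            # 0.1
```

The area-based figure is insensitive to how touching cells are split into individual detections, which may explain its more stable prognostic behaviour in the study.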
Affiliation(s)
- Fanny Beltzung
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Department of Pathology, Hôpital Haut-Lévêque, CHU de Bordeaux, Pessac, France.
- Van-Linh Le
- MONC team, Center INRIA at University of Bordeaux, Talence, France; Bordeaux Mathematics Institute (IMB), UMR CNRS 5251, University of Bordeaux, Talence, France; Department of Data and Digital Health, Bergonié Institute, Bordeaux, France
- Ioana Molnar
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Clinical Research Division, Clinical Research & Innovation Division, Centre Jean PERRIN, Clermont-Ferrand, France
- Erwan Boutault
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France
- Claude Darcha
- Department of Pathology, CHU Clermont-Ferrand, Clermont-Ferrand, France
- François Le Loarer
- Department of Pathology, Bergonié Institute, Bordeaux, France; Bordeaux Institute of Oncology (BRIC U1312), INSERM, Université de Bordeaux, Institut Bergonié, Bordeaux, France
- Myriam Kossai
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Department of Pathology, Centre Jean PERRIN, Clermont-Ferrand, France
- Olivier Saut
- MONC team, Center INRIA at University of Bordeaux, Talence, France; Bordeaux Mathematics Institute (IMB), UMR CNRS 5251, University of Bordeaux, Talence, France
- Julian Biau
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Department of Radiation Therapy, Centre Jean PERRIN, Clermont-Ferrand, France
- Frédérique Penault-Llorca
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Department of Pathology, Centre Jean PERRIN, Clermont-Ferrand, France
- Emmanuel Chautard
- Department of Molecular Imaging & Theragnostic Strategies (IMOST), University Clermont Auvergne, INSERM U1240, Clermont-Ferrand, France; Department of Pathology, Centre Jean PERRIN, Clermont-Ferrand, France
3. Griffith JL, Joseph J, Jensen A, Banks S, Allen KD. Using deep-learning based segmentation to enable spatial evaluation of knee osteoarthritis (SEKO) in rodent models. Osteoarthritis Cartilage 2025:S1063-4584(25)00867-2. [PMID: 40139644] [DOI: 10.1016/j.joca.2025.02.787] [Received: 07/25/2024] [Revised: 01/21/2025] [Accepted: 02/20/2025] [Indexed: 03/29/2025]
Abstract
OBJECTIVE In preclinical models of osteoarthritis (OA), histology is commonly used to evaluate joint remodeling. The current study introduces a deep-learning-driven histological analysis pipeline for the spatial evaluation of knee osteoarthritis (SEKO), focused on quantifying and visualizing joint remodeling in the medial compartment of rodent knees. METHODS The SEKO pipeline contains both segmentation and visualization tools. For segmentation, two separate convolutional neural network architectures, HRNet and U-Net, were considered for identifying multiple regions of interest. Following segmentation, SEKO calculates multiple morphometric and location-dependent measures to summarize joint-level changes. Additionally, SEKO generates probabilistic heat maps for visualization of the spatial aspects of joint remodeling. RESULTS SEKO incorporated the U-Net architecture, owing to its higher prediction accuracy, and identified cartilage loss similar to that reported using by-hand segmentation in prior work. Additionally, SEKO enabled the detection of changes in subchondral bone area and location-dependent bone remodeling. SEKO also enabled visualization of spatial changes in cartilage thinning and bone remodeling using probabilistic heat maps. CONCLUSION The SEKO pipeline offers the potential for objective comparison of OA progression and therapeutic interventions through visualization of spatial and morphometric changes. SEKO is provided as an open-source tool for the OA research community, facilitating collaborative research efforts and comprehensive analysis of knee joint histology.
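A probabilistic heat map of the kind SEKO uses for spatial visualization can be sketched as follows, assuming co-registered binary segmentation masks from several animals; the masks here are toy data, not the pipeline's output.

```python
import numpy as np

def probability_heatmap(masks):
    """Pixel-wise probability that a structure (e.g. cartilage) is present,
    averaged over co-registered binary segmentation masks of several animals."""
    stack = np.stack(masks).astype(float)   # (n_animals, H, W)
    return stack.mean(axis=0)               # values in [0, 1]

# three toy masks of a progressively thinning cartilage band
masks = [np.zeros((8, 8), dtype=bool) for _ in range(3)]
masks[0][2:6] = True
masks[1][3:5] = True
masks[2][3:4] = True
heat = probability_heatmap(masks)
```

Pixels where the heat map drops below 1.0 mark locations where the structure is lost in some animals, which is what makes the spatial pattern of remodeling visible at the group level.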
Affiliation(s)
- Jacob L Griffith
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA; Pain Research & Intervention Center of Excellence (PRICE), University of Florida, Gainesville, FL, USA
- Justin Joseph
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Andrew Jensen
- Department of Mechanical and Aerospace Engineering at the University of Florida, Gainesville, FL, USA
- Scott Banks
- Department of Mechanical and Aerospace Engineering at the University of Florida, Gainesville, FL, USA; Department of Orthopaedic Surgery and Sports Medicine, University of Florida, Gainesville, FL, USA
- Kyle D Allen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA; Pain Research & Intervention Center of Excellence (PRICE), University of Florida, Gainesville, FL, USA; Department of Mechanical and Aerospace Engineering at the University of Florida, Gainesville, FL, USA; Department of Orthopaedic Surgery and Sports Medicine, University of Florida, Gainesville, FL, USA
4. Ejiyi CJ, Qin Z, Agbesi VK, Yi D, Atwereboannah AA, Chikwendu IA, Bamisile OF, Bakanina Kissanga GM, Bamisile OO. Advancing cancer diagnosis and prognostication through deep learning mastery in breast, colon, and lung histopathology with ResoMergeNet. Comput Biol Med 2025; 185:109494. [PMID: 39637456] [DOI: 10.1016/j.compbiomed.2024.109494] [Received: 05/20/2024] [Revised: 10/11/2024] [Accepted: 11/26/2024] [Indexed: 12/07/2024]
Abstract
Cancer, a global health threat, demands effective diagnostic solutions to combat its impact on public health, particularly for breast, colon, and lung cancers. Early and accurate diagnosis is essential for successful treatment, prompting the rise of Computer-Aided Diagnosis Systems as reliable and cost-effective tools. Histopathology, renowned for its precision in cancer imaging, has become pivotal in the diagnostic landscape of breast, colon, and lung cancers. However, while deep learning models have been widely explored in this domain, they often face challenges in generalizing to diverse clinical settings and in efficiently capturing both local and global feature representations, particularly for multi-class tasks. This underscores the need for models that can reduce biases, improve diagnostic accuracy, and minimize error susceptibility in cancer classification tasks. To this end, we introduce ResoMergeNet (RMN), an advanced deep-learning model designed for both multi-class and binary cancer classification using histopathological images of breast, colon, and lung. ResoMergeNet integrates the Resboost mechanism which enhances feature representation, and the ConvmergeNet mechanism which optimizes feature extraction, leading to improved diagnostic accuracy. Comparative evaluations against state-of-the-art models show ResoMergeNet's superior performance. Validated on the LC-25000 and BreakHis (400× and 40× magnifications) datasets, ResoMergeNet demonstrates outstanding performance, achieving perfect scores of 100 % in accuracy, sensitivity, precision, and F1 score for binary classification. For multi-class classification with five classes from the LC25000 dataset, it maintains an impressive 99.96 % across all performance metrics. When applied to the BreakHis dataset, ResoMergeNet achieved 99.87 % accuracy, 99.75 % sensitivity, 99.78 % precision, and 99.77 % F1 score at 400× magnification. At 40× magnification, it still delivered robust results with 98.85 % accuracy, sensitivity, precision, and F1 score. These results emphasize the efficacy of ResoMergeNet, marking a substantial advancement in diagnostic and prognostic systems for breast, colon, and lung cancers. ResoMergeNet's superior diagnostic accuracy can significantly reduce diagnostic errors, minimize human biases, and expedite clinical workflows, making it a valuable tool for enhancing cancer diagnosis and treatment outcomes.
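The four figures of merit reported here derive from a binary confusion matrix in the standard way; a sketch with illustrative counts, not values from the paper:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall), precision and F1 score from the
    four cells of a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)          # how many positives were found
    precision = tp / (tp + fp)            # how many found were truly positive
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, precision, f1

# hypothetical counts: 95 true positives, 5 each of false positives/negatives
acc, sens, prec, f1 = classification_metrics(tp=95, fp=5, fn=5, tn=95)
```

When the class split is balanced and errors are symmetric, as in this toy case, all four metrics coincide, which is why near-identical values across the metrics (as in the 40× results) are plausible.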
Affiliation(s)
- Chukwuebuka Joseph Ejiyi
- College of Nuclear Technology and Automation Engineering, & Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Research Center, Chengdu University of Technology, Sichuan, Chengdu, China; Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Sichuan, Chengdu, China.
- Zhen Qin
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Sichuan, Chengdu, China
- Victor K Agbesi
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Ding Yi
- Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Sichuan, Chengdu, China
- Abena A Atwereboannah
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Ijeoma A Chikwendu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Oluwatoyosi F Bamisile
- College of Nuclear Technology and Automation Engineering, & Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Research Center, Chengdu University of Technology, Sichuan, Chengdu, China
- Olusola O Bamisile
- College of Nuclear Technology and Automation Engineering, & Sichuan Industrial Internet Intelligent Monitoring and Application Engineering Research Center, Chengdu University of Technology, Sichuan, Chengdu, China
5. Trahearn N, Sakr C, Banerjee A, Lee SH, Baker A, Kocher HM, Angerilli V, Morano F, Bergamo F, Maddalena G, Intini R, Cremolini C, Caravagna G, Graham T, Pietrantonio F, Lonardi S, Fassan M, Sottoriva A. Computational pathology applied to clinical colorectal cancer cohorts identifies immune and endothelial cell spatial patterns predictive of outcome. J Pathol 2025; 265:198-210. [PMID: 39788558] [PMCID: PMC11717494] [DOI: 10.1002/path.6378] [Received: 07/16/2024] [Revised: 10/04/2024] [Accepted: 11/06/2024] [Indexed: 01/12/2025]
Abstract
Colorectal cancer (CRC) is a histologically heterogeneous disease with variable clinical outcome. The role the tumour microenvironment (TME) plays in determining tumour progression is complex and not fully understood. To improve our understanding, it is critical that the TME is studied systematically within clinically annotated patient cohorts with long-term follow-up. Here we studied the TME in three clinical cohorts of metastatic CRC with diverse molecular subtype and treatment history. The MISSONI cohort included cases with microsatellite instability that received immunotherapy (n = 59, 24 months median follow-up). The BRAF cohort included BRAF V600E mutant microsatellite stable (MSS) cancers (n = 141, 24 months median follow-up). The VALENTINO cohort included RAS/RAF WT MSS cases who received chemotherapy and anti-EGFR therapy (n = 175, 32 months median follow-up). Using a deep learning cell classifier, trained on >38,000 pathologist annotations, to detect eight cell types within H&E-stained sections of CRC, we quantified the spatial tissue organisation and colocalisation of cell types across these cohorts. We found that the ratio of infiltrating endothelial cells to cancer cells, a possible marker of vascular invasion, was an independent predictor of progression-free survival (PFS) in the BRAF+MISSONI cohort (p = 0.033, HR = 1.44, CI = 1.029-2.01). In the VALENTINO cohort, this pattern was also an independent PFS predictor in TP53 mutant patients (p = 0.009, HR = 0.59, CI = 0.40-0.88). Tumour-infiltrating lymphocytes were an independent predictor of PFS in BRAF+MISSONI (p = 0.016, HR = 0.36, CI = 0.153-0.83). Elevated tumour-infiltrating macrophages were predictive of improved PFS in the MISSONI cohort (p = 0.031). We validated our cell classification using highly multiplexed immunofluorescence for 17 markers applied to the same sections that were analysed by the classifier (n = 26 cases). These findings uncovered important microenvironmental factors that underpin treatment response across and within CRC molecular subtypes, while providing an atlas of the distribution of 180 million cells in 375 clinically annotated CRC patients. © 2025 The Author(s). The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
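The endothelial-to-cancer-cell ratio is a slide-level feature whose association with progression-free survival can be checked, for example, with Harrell's concordance index. The sketch below uses hypothetical cell counts and survival times and a deliberately simplified C-index (observed-progression pairs only, ties ignored); it is not the Cox models fitted in the paper.

```python
def endothelial_cancer_ratio(cell_counts):
    """Slide-level feature: infiltrating endothelial cells per cancer cell.
    `cell_counts` is a hypothetical per-class count dictionary."""
    return cell_counts["endothelial"] / cell_counts["cancer"]

def concordance_index(risk, time, event):
    """Harrell's C for a single feature: among usable patient pairs (the
    earlier time has an observed progression), the fraction in which the
    patient who progressed first also carries the higher risk score."""
    num = den = 0
    for i in range(len(risk)):
        for j in range(len(risk)):
            if event[i] and time[i] < time[j]:   # pair is comparable
                den += 1
                num += risk[i] > risk[j]
    return num / den

risk = [endothelial_cancer_ratio(c) for c in (
    {"endothelial": 30, "cancer": 1000},   # progressed at 12 months
    {"endothelial": 90, "cancer": 1000},   # progressed at 6 months
    {"endothelial": 10, "cancer": 1000},   # censored at 24 months
)]
c_index = concordance_index(risk, time=[12, 6, 24], event=[1, 1, 0])
```

A C-index of 0.5 means the feature is uninformative; values towards 1.0 mean higher ratios consistently precede earlier progression, matching the direction of the reported HR = 1.44.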
Affiliation(s)
- Nicholas Trahearn
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- UCL Cancer Institute, UCL, London, UK
- Chirine Sakr
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Seung Hyun Lee
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Systems Oncology group, Cancer Research UK Manchester Institute, The University of Manchester, Manchester, UK
- Ann‐Marie Baker
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Hemant M Kocher
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Giulia Maddalena
- Veneto Institute of Oncology IOV‐IRCCS, Padua, Italy
- Department of Surgical, Oncological and Gastroenterological Sciences, University of Padua, Padua, Italy
- Trevor Graham
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Sara Lonardi
- Veneto Institute of Oncology IOV‐IRCCS, Padua, Italy
- Matteo Fassan
- Department of Medicine (DIMED), University of Padua, Padua, Italy
- Veneto Institute of Oncology IOV‐IRCCS, Padua, Italy
- Andrea Sottoriva
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Computational Biology Research Centre, Human Technopole, Milan, Italy
6. Wang X, Wang Z, Wang W, Liu Z, Ma Z, Guo Y, Su D, Sun Q, Pei D, Duan W, Qiu Y, Wang M, Yang Y, Li W, Liu H, Ma C, Yu M, Yu Y, Chen T, Fu J, Li S, Yu B, Ji Y, Li W, Yan D, Liu X, Li ZC, Zhang Z. IDH-mutant glioma risk stratification via whole slide images: Identifying pathological feature associations. iScience 2025; 28:111605. [PMID: 39845415] [PMCID: PMC11751506] [DOI: 10.1016/j.isci.2024.111605] [Received: 03/05/2024] [Revised: 08/12/2024] [Accepted: 12/11/2024] [Indexed: 01/24/2025]
Abstract
This article aims to develop and validate a pathological prognostic model for predicting prognosis in patients with isocitrate dehydrogenase (IDH)-mutant gliomas and to reveal the biological underpinnings of the prognostic pathological features. The pathomic model was constructed based on whole slide images (WSIs) from a training set (N = 486) and evaluated on an internal validation set (N = 209), an HPPH validation set (N = 54), and a TCGA validation set (N = 352). Biological implications of the PathScore and of individual pathomic features were identified using a pathogenomics set (N = 100). The WSI-based pathological signature was an independent prognostic factor. Incorporating the pathological features into a clinical model resulted in a pathological-clinical model that predicted survival better than either the pathological or the clinical model alone. Ten categories of pathways (metabolism, proliferation, immunity, DNA damage response, disease, migration, protein modification, synapse, transcription and translation, and complex cellular functions) were significantly correlated with the WSI-based pathological features.
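One common way to turn a continuous WSI-derived signature such as the PathScore into prognostic strata is a median split; this sketch assumes that rule purely for illustration, since the paper's actual cutpoint and fitted model are not given here.

```python
import numpy as np

def risk_groups(path_scores):
    """Dichotomise a continuous pathomic score at the cohort median into
    high/low risk strata (the median cutoff is an assumed convention,
    not the paper's fitted rule)."""
    cutoff = float(np.median(path_scores))
    groups = np.where(np.asarray(path_scores) > cutoff, "high", "low")
    return groups, cutoff

scores = [0.2, 0.8, 0.5, 0.9, 0.1]   # hypothetical PathScores
groups, cutoff = risk_groups(scores)
```

In practice the cutoff would be fixed on the training set and applied unchanged to the internal, HPPH, and TCGA validation sets, so that the stratification is not refit on held-out data.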
Affiliation(s)
- Xiaotao Wang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Zilong Wang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Weiwei Wang
- Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Zaoqu Liu
- Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Zeyu Ma
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yang Guo
- Department of Neurosurgery, Henan Provincial People’s Hospital, Zhengzhou, Henan, China
- Dingyuan Su
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Qiuchang Sun
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dongling Pei
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wenchao Duan
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yuning Qiu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Minkai Wang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yongqiang Yang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wenyuan Li
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Haoran Liu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Caoyuan Ma
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Miaomiao Yu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yinhui Yu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Te Chen
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Jing Fu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Sen Li
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Bin Yu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Yuchen Ji
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Wencai Li
- Department of Pathology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Dongming Yan
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Xianzhi Liu
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
- Zhi-Cheng Li
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- The Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan, China
7. Fourkioti O, De Vries M, Naidoo R, Bakal C. Not seeing the trees for the forest. The impact of neighbours on graph-based configurations in histopathology. BMC Bioinformatics 2025; 26:9. [PMID: 39794715] [PMCID: PMC11724494] [DOI: 10.1186/s12859-024-06007-x] [Received: 05/31/2024] [Accepted: 12/05/2024] [Indexed: 01/13/2025]
Abstract
BACKGROUND Deep learning (DL) has set new standards in cancer diagnosis, significantly enhancing the accuracy of automated classification of whole slide images (WSIs) derived from biopsied tissue samples. To enable DL models to process these large images, WSIs are typically divided into thousands of smaller tiles, each containing 10-50 cells. Multiple Instance Learning (MIL) is a commonly used approach, where WSIs are treated as bags comprising numerous tiles (instances) and only bag-level labels are provided during training. The model learns from these broad labels to extract more detailed, instance-level insights. However, biopsied sections often exhibit high intra- and inter-phenotypic heterogeneity, presenting a significant challenge for classification. To address this, many graph-based methods have been proposed, where each WSI is represented as a graph with tiles as nodes and edges defined by specific spatial relationships. RESULTS In this study, we investigate how different graph configurations, varying in connectivity and neighborhood structure, affect the performance of MIL models. We developed a novel pipeline, K-MIL, to evaluate the impact of contextual information on cell classification performance. By incorporating neighboring tiles into the analysis, we examined whether contextual information improves or impairs the network's ability to identify patterns and features critical for accurate classification. Our experiments were conducted on two datasets: COLON cancer and UCSB datasets. CONCLUSIONS Our results indicate that while incorporating more spatial context information generally improves model accuracy at both the bag and tile levels, the improvement at the tile level is not linear. In some instances, increasing spatial context leads to misclassification, suggesting that more context is not always beneficial. This finding highlights the need for careful consideration when incorporating spatial context information in digital pathology classification tasks.
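A typical graph configuration of the kind varied in this study connects each tile to its k nearest neighbours by tile-centre distance; a minimal sketch with synthetic coordinates, where k is the neighbourhood-size knob the study turns:

```python
import numpy as np

def knn_adjacency(coords, k):
    """Boolean adjacency matrix connecting each tile to its k nearest
    neighbours by Euclidean distance between tile centres."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # no self-edges
    nbrs = np.argsort(d, axis=1)[:, :k]    # k closest tiles per row
    adj = np.zeros(d.shape, dtype=bool)
    rows = np.repeat(np.arange(len(coords)), k)
    adj[rows, nbrs.ravel()] = True
    return adj | adj.T                     # symmetrise: undirected graph

# three clustered tiles plus one distant outlier
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 4.0]])
adj = knn_adjacency(coords, k=1)
```

Increasing k enlarges each tile's neighbourhood and therefore the amount of spatial context a graph-based MIL model can propagate, which is exactly the dial whose effect the paper finds to be non-monotonic at the tile level.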
Affiliation(s)
- Olga Fourkioti
- The Institute of Cancer Research, London, United Kingdom.
- Matt De Vries
- The Institute of Cancer Research, London, United Kingdom
- Imperial College, London, United Kingdom
- Reed Naidoo
- The Institute of Cancer Research, London, United Kingdom
- Chris Bakal
- The Institute of Cancer Research, London, United Kingdom
8. Gifford R, Reid A, Jhawar SR, VanKoevering K, Krening S. Convolutional Neural Network for Classification of Oropharynx Cancer with Video Nasopharyngolaryngoscopy. J Otolaryngol Head Neck Surg 2025; 54:19160216251326590. [PMID: 40099484] [PMCID: PMC11915290] [DOI: 10.1177/19160216251326590] [Indexed: 03/20/2025]
Affiliation(s)
- Ryan Gifford
- Department of Integrated Systems Engineering, Ohio State University, Columbus, OH, USA
- Abigail Reid
- School of Medicine, Creighton University, Omaha, NE, USA
- Sachin R Jhawar
- Department of Radiation Oncology, Ohio State University Wexner Medical Center, Columbus, OH, USA
- Kyle VanKoevering
- Department of Otolaryngology, Ohio State University Wexner Medical Center, Columbus, OH, USA
- Samantha Krening
- Department of Integrated Systems Engineering, Ohio State University, Columbus, OH, USA
9. Nunes JD, Montezuma D, Oliveira D, Pereira T, Cardoso JS. A survey on cell nuclei instance segmentation and classification: Leveraging context and attention. Med Image Anal 2025; 99:103360. [PMID: 39383642] [DOI: 10.1016/j.media.2024.103360] [Received: 08/15/2023] [Revised: 08/26/2024] [Accepted: 09/27/2024] [Indexed: 10/11/2024]
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights into the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei in gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, so automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while facilitating the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet owing to the high intra- and inter-class variability of nuclear morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot yet detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey of context and attention methods for cell nuclei instance segmentation and classification in H&E-stained microscopy imaging, with a comprehensive discussion of the challenges these mechanisms are used to tackle. We also illustrate limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms, and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting.
Although pathologists rely on context at multiple levels, while paying attention to specific Regions of Interest (RoIs), when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task; to fully exploit these mechanisms in ANNs, the scientific understanding of the methods themselves should first be addressed.
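Most of the attention mechanisms surveyed in this entry build on the same primitive. As a purely illustrative sketch (not the survey's or any cited model's implementation; the array sizes and embeddings are arbitrary assumptions), scaled dot-product attention over a set of patch embeddings can be written directly in NumPy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, e.g. nucleus-patch embeddings
K = rng.normal(size=(6, 8))   # 6 key tokens, e.g. context-patch embeddings
V = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)   # (4, 8): one context-weighted vector per query
```

Context-based variants mainly differ in where K and V come from (neighbouring patches, lower-resolution views, or the whole tile), not in this core computation.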
Affiliation(s)
- João D Nunes
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal.
- Diana Montezuma
- IMP Diagnostics, Praça do Bom Sucesso, 4150-146 Porto, Portugal; Cancer Biology and Epigenetics Group, Research Center of IPO Porto (CI-IPOP)/[RISE@CI-IPOP], Portuguese Oncology Institute of Porto (IPO Porto)/Porto Comprehensive Cancer Center (Porto.CCC), R. Dr. António Bernardino de Almeida, 4200-072, Porto, Portugal; Doctoral Programme in Medical Sciences, School of Medicine and Biomedical Sciences - University of Porto (ICBAS-UP), Porto, Portugal
- Tania Pereira
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; FCTUC - Faculty of Science and Technology, University of Coimbra, Coimbra, 3004-516, Portugal
- Jaime S Cardoso
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal
10
Oboma YI, Ekpenyong BO, Umar MS, Nja GME, Chelimo JJ, Igwe MC, Bunu UO. Histopathological, Cytological and Radiological Correlations in Allergy and Public Health Concerns: A Comprehensive Review. J Asthma Allergy 2024; 17:1333-1354. [PMID: 39749282] [PMCID: PMC11693939] [DOI: 10.2147/jaa.s498641]
Abstract
Allergies represent a significant and growing public health concern, affecting millions worldwide and substantially burdening healthcare systems. Accurate diagnosis and understanding of allergies are crucial for effective management and treatment. This review aims to explore the historical evolution, current advances, and prospects of histopathological and cytological techniques in allergy diagnosis, highlighting their crucial role in modern medicine. Major biomedical, public health, and imaging databases such as PubMed, Scopus, Web of Science, and EMBASE were used. The search strategy included specific keywords and Medical Subject Headings (MeSH) terms related to histopathology, cytology, radiology, allergic diseases, and public health. Histopathological and cytological studies play a pivotal role in elucidating the underlying mechanisms of allergies, offering insights into the cellular and tissue-level changes associated with allergic responses. Histopathology reveals characteristic features such as inflammation, tissue remodeling, and the presence of specific immune cells like eosinophils and mast cells. Cytological analysis can detect cellular changes and abnormalities at a finer scale, providing a complementary perspective to histopathological findings. The correlation between histopathological and cytological findings is critical for achieving accurate and reliable diagnoses. Combined histopathological and cytological studies can reveal the extent of airway inflammation, epithelial damage, and immune cell infiltration, providing a robust basis for clinical decision-making. Recent advancements in diagnostic techniques have further revolutionized the field of allergy diagnosis. These technologies offer increased accuracy, speed, and reproducibility, making them invaluable in both clinical and research settings. Despite these advancements, several challenges and limitations persist.
By integrating tissue-level and cellular-level analyses, clinicians can achieve more accurate diagnoses, tailor treatments to individual patients, and ultimately improve the quality of care for those suffering from allergies. In conclusion, histopathological and cytological correlation in allergy diagnosis provides a comprehensive framework for understanding and managing allergic conditions.
Affiliation(s)
- Yibala Ibor Oboma
- Department of Medical Laboratory Sciences, School of Allied Health Sciences, Kampala International University Western Campus, Ishaka, Bushenyi, Uganda
- Bassey Okon Ekpenyong
- Department of Histopathology, Faculty of Medical Laboratory Science, Rivers State University Nkpolo - Oroworukwo, Port Harcourt, River State, Nigeria
- Mohammed Sani Umar
- Department of Radiography, School of Allied Health Sciences, Kampala International University, Western Campus, Ishaka, Bushenyi, Uganda
- Glory Mbe Egom Nja
- Department of Public Health, School of Allied Medical Sciences, Kampala International University, Western Campus, Ishaka, Bushenyi, Uganda
- Judith Jepkosgei Chelimo
- Department of Public Health, School of Allied Medical Sciences, Kampala International University, Western Campus, Ishaka, Bushenyi, Uganda
- Matthew Chibunna Igwe
- Department of Public Health, School of Allied Medical Sciences, Kampala International University, Western Campus, Ishaka, Bushenyi, Uganda
- Umi Omar Bunu
- Department of Public Health, School of Allied Medical Sciences, Kampala International University, Western Campus, Ishaka, Bushenyi, Uganda
11
Sultan S, Gorris MAJ, Martynova E, van der Woude LL, Buytenhuijs F, van Wilpe S, Verrijp K, Figdor CG, de Vries IJM, Textor J. ImmuNet: a segmentation-free machine learning pipeline for immune landscape phenotyping in tumors by multiplex imaging. Biol Methods Protoc 2024; 10:bpae094. [PMID: 39866377] [PMCID: PMC11769680] [DOI: 10.1093/biomethods/bpae094]
Abstract
Tissue specimens taken from primary tumors or metastases contain important information for the diagnosis and treatment of cancer patients. Multiplex imaging allows in situ visualization of heterogeneous cell populations, such as immune cells, in tissue samples. Most image processing pipelines first segment cell boundaries and then measure marker expression to assign cell phenotypes. In dense tissue environments, this segmentation-first approach can be inaccurate due to segmentation errors or overlapping cells. Here, we introduce the machine-learning pipeline "ImmuNet", which identifies positions and phenotypes of cells without segmenting them. ImmuNet is easy to train: human annotators only need to click on an immune cell and score its expression of each marker; drawing a full cell outline is not required. We trained and evaluated ImmuNet on multiplex images from human tonsil, lung cancer, prostate cancer, melanoma, and bladder cancer tissue samples and found it to consistently achieve error rates below 5%-10% across tissue types, cell types, and tissue densities, outperforming a segmentation-based baseline method. Furthermore, we externally validated ImmuNet results by comparing them to flow cytometric cell count measurements from the same tissue. In summary, ImmuNet is an effective, simpler alternative to segmentation-based approaches when only cell positions and phenotypes, but not their shapes, are required for downstream analyses. Thus, ImmuNet helps researchers to analyze cell positions in multiplex tissue images more easily and accurately.
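The segmentation-free idea, predicting where a cell is rather than delineating its boundary, can be illustrated with a toy local-maximum detector applied to a network's per-pixel cell-presence map. This is a hedged sketch of the general approach only, not ImmuNet's actual architecture, and the threshold and neighbourhood radius are illustrative assumptions:

```python
import numpy as np

def detect_cells(prob_map, threshold=0.5, radius=2):
    """Report (row, col) positions whose predicted cell-presence probability
    is above `threshold` and maximal within a (2*radius+1)^2 neighbourhood."""
    H, W = prob_map.shape
    centers = []
    for y in range(H):
        for x in range(W):
            p = prob_map[y, x]
            if p < threshold:
                continue
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            if p >= prob_map[y0:y1, x0:x1].max():
                centers.append((y, x))
    return centers

# Toy probability map with two well-separated "cells".
pm = np.zeros((10, 10))
pm[2, 3] = 0.9
pm[7, 6] = 0.8
print(detect_cells(pm))   # [(2, 3), (7, 6)]
```

In a full pipeline, per-marker expression scores would then be read out at each detected position to assign a phenotype, which is exactly the information the click-based annotations supply during training.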
Affiliation(s)
- Shabaz Sultan
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Data Science Group, Institute for Computing and Information Sciences, Radboud University, Nijmegen 6525 EC, The Netherlands
- Mark A J Gorris
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Oncode Institute, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Evgenia Martynova
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Data Science Group, Institute for Computing and Information Sciences, Radboud University, Nijmegen 6525 EC, The Netherlands
- Lieke L van der Woude
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Oncode Institute, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Department of Pathology, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Franka Buytenhuijs
- Data Science Group, Institute for Computing and Information Sciences, Radboud University, Nijmegen 6525 EC, The Netherlands
- Sandra van Wilpe
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Department of Medical Oncology, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Kiek Verrijp
- Oncode Institute, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Department of Pathology, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Carl G Figdor
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Oncode Institute, Radboudumc, Nijmegen 6525 GA, The Netherlands
- Johannes Textor
- Medical BioSciences, Radboudumc, Nijmegen 6562 GA, The Netherlands
- Data Science Group, Institute for Computing and Information Sciences, Radboud University, Nijmegen 6525 EC, The Netherlands
12
Shahzadi M, Rafique H, Waheed A, Naz H, Waheed A, Zokirova FR, Khan H. Artificial intelligence for chimeric antigen receptor-based therapies: a comprehensive review of current applications and future perspectives. Ther Adv Vaccines Immunother 2024; 12:25151355241305856. [PMID: 39691280] [PMCID: PMC11650588] [DOI: 10.1177/25151355241305856]
Abstract
Using artificial intelligence (AI) to enhance chimeric antigen receptor (CAR)-based therapies' design, production, and delivery is a novel and promising approach. This review provides an overview of the current applications and challenges of AI for CAR-based therapies and suggests some directions for future research and development. This paper examines some of the recent advances of AI for CAR-based therapies, for example, using deep learning (DL) to design CARs that target multiple antigens and avoid antigen escape; using natural language processing to extract relevant information from clinical reports and literature; using computer vision to analyze the morphology and phenotype of CAR cells; using reinforcement learning to optimize the dose and schedule of CAR infusion; and using AI to predict the efficacy and toxicity of CAR-based therapies. These applications demonstrate the potential of AI to improve the quality and efficiency of CAR-based therapies and to provide personalized and precise treatments for cancer patients. However, there are also some challenges and limitations of using AI for CAR-based therapies, for example, the lack of high-quality and standardized data; the need for validation and verification of AI models; the risk of bias and error in AI outputs; the ethical, legal, and social issues of using AI for health care; and the possible impact of AI on the human role and responsibility in cancer immunotherapy. It is important to establish a multidisciplinary collaboration among researchers, clinicians, regulators, and patients to address these challenges and to ensure the safe and responsible use of AI for CAR-based therapies.
Affiliation(s)
- Muqadas Shahzadi
- Department of Zoology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Hamad Rafique
- College of Food Engineering and Nutritional Science, Shaanxi Normal University, Xi’an, Shaanxi, China
- Ahmad Waheed
- Department of Zoology, Faculty of Life Sciences, University of Okara, 2 KM Lahore Road, Renala Khurd, Okara 56130, Punjab, Pakistan
- Hina Naz
- Department of Zoology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Atifa Waheed
- Department of Biology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Humera Khan
- Department of Biochemistry, Sahiwal Medical College, Sahiwal, Pakistan
13
Krikid F, Rositi H, Vacavant A. State-of-the-Art Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues. J Imaging 2024; 10:311. [PMID: 39728208] [DOI: 10.3390/jimaging10120311]
Abstract
Microscopic image segmentation (MIS) is a fundamental task in medical imaging and biological research, essential for precise analysis of cellular structures and tissues. Despite its importance, the segmentation process encounters significant challenges, including variability in imaging conditions, complex biological structures, and artefacts (e.g., noise), which can compromise the accuracy of traditional methods. The emergence of deep learning (DL) has catalyzed substantial advancements in addressing these issues. This systematic literature review (SLR) provides a comprehensive overview of state-of-the-art DL methods developed over the past six years for the segmentation of microscopic images. We critically analyze key contributions, emphasizing how these methods specifically tackle challenges in cell, nucleus, and tissue segmentation. Additionally, we evaluate the datasets and performance metrics employed in these studies. By synthesizing current advancements and identifying gaps in existing approaches, this review not only highlights the transformative potential of DL in enhancing diagnostic accuracy and research efficiency but also suggests directions for future research. The findings of this study have significant implications for improving methodologies in medical and biological applications, ultimately fostering better patient outcomes and advancing scientific understanding.
Affiliation(s)
- Fatma Krikid
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
- Hugo Rositi
- LORIA, CNRS, Université de Lorraine, F-54000 Nancy, France
- Antoine Vacavant
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
14
Mei M, Wei Z, Hu B, Wang M, Mei L, Ye Z. DAT-Net: Deep Aggregation Transformer Network for automatic nuclear segmentation. Biomed Signal Process Control 2024; 98:106764. [DOI: 10.1016/j.bspc.2024.106764]
15
Zhang W, Yang S, Luo M, He C, Li Y, Zhang J, Wang X, Wang F. Keep it accurate and robust: An enhanced nuclei analysis framework. Comput Struct Biotechnol J 2024; 24:699-710. [PMID: 39650700] [PMCID: PMC11621583] [DOI: 10.1016/j.csbj.2024.10.046]
Abstract
Accurate segmentation and classification of nuclei in histology images is critical but challenging due to nuclei heterogeneity, staining variations, and tissue complexity. Existing methods often struggle with limited dataset variability, with patches extracted from similar whole slide images (WSIs), making models prone to falling into local optima. Here we propose a new framework to address this limitation and enable robust nuclear analysis. Our method leverages dual-level ensemble modeling to overcome issues stemming from limited dataset variation. Intra-ensembling applies diverse transformations to individual samples, while inter-ensembling combines networks of different scales. We also introduce enhancements to the HoVer-Net architecture, including updated encoders, nested dense decoding, and a model regularization strategy. We achieve state-of-the-art results on public benchmarks, including 1st place for nuclear composition prediction and 3rd place for segmentation/classification in the 2022 Colon Nuclei Identification and Counting (CoNIC) Challenge. This success validates our approach for accurate histological nuclei analysis. Extensive experiments and ablation studies provide insights into optimal network design choices and training techniques. In conclusion, this work proposes an improved framework advancing the state-of-the-art in nuclei analysis. We will release our code and models to serve as a toolkit for the community.
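The intra-ensembling step, averaging one network's predictions over diverse transformations of the same sample, is commonly realized as test-time augmentation. A minimal sketch over flips and rotations follows; a hypothetical identity function stands in for the trained segmentation network, and the exact transformation set used by the paper may differ:

```python
import numpy as np

def tta_predict(model, image):
    """Average per-pixel predictions over the 8 flip/rotation views,
    inverting each transform so all predictions align before averaging."""
    preds = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        for flip in (False, True):
            view = np.rot90(image, k)
            if flip:
                view = np.fliplr(view)
            p = model(view)                  # prediction in the view's frame
            if flip:
                p = np.fliplr(p)             # undo the flip
            preds.append(np.rot90(p, -k))    # undo the rotation
    return np.mean(preds, axis=0)

# A stand-in "model": the identity map mimics a perfectly equivariant net.
img = np.arange(16.0).reshape(4, 4)
avg = tta_predict(lambda x: x, img)
print(np.allclose(avg, img))   # True: inverse transforms realign all views
```

Inter-ensembling would then average `tta_predict` outputs from several networks trained at different scales.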
Affiliation(s)
- Wenhua Zhang
- Institute of Artificial Intelligence, Shanghai University, Shanghai 200444, China
- Sen Yang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
- Chuan He
- Shanghai Aitrox Technology Corporation Limited, Shanghai, 200444, China
- Yuchen Li
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
- Jun Zhang
- Tencent AI Lab, Shenzhen 518057, China
- Xiyue Wang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
- Fang Wang
- Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264000, China
16
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey review paper and access to the original model-cards repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
17
Fiorin A, López Pablo C, Lejeune M, Hamza Siraj A, Della Mea V. Enhancing AI Research for Breast Cancer: A Comprehensive Review of Tumor-Infiltrating Lymphocyte Datasets. J Imaging Inform Med 2024; 37:2996-3008. [PMID: 38806950] [PMCID: PMC11612116] [DOI: 10.1007/s10278-024-01043-8]
Abstract
The field of immunology is fundamental to our understanding of the intricate dynamics of the tumor microenvironment. In particular, tumor-infiltrating lymphocyte (TIL) assessment emerges as an essential aspect of breast cancer cases. To gain comprehensive insights, the quantification of TILs through computer-assisted pathology (CAP) tools has become a prominent approach, employing advanced artificial intelligence models based on deep learning techniques. Successful recognition of TILs requires the models to be trained, a process that demands access to annotated datasets. Unfortunately, this task is hampered not only by the scarcity of such datasets, but also by the time-consuming nature of the annotation phase required to create them. Our review endeavors to examine publicly accessible datasets pertaining to the TIL domain and thereby to become a valuable resource for the TIL community. The overall aim of the present review is thus to make it easier to train and validate current and upcoming CAP tools for TIL assessment by inspecting and evaluating existing publicly available online datasets.
Affiliation(s)
- Alessio Fiorin
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Carlos López Pablo
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Marylène Lejeune
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Ameer Hamza Siraj
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Vincenzo Della Mea
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
18
Cao R, Meng Q, Tan D, Wei P, Ding Y, Zheng C. AER-Net: Attention-Enhanced Residual Refinement Network for Nuclei Segmentation and Classification in Histology Images. Sensors (Basel) 2024; 24:7208. [PMID: 39598984] [PMCID: PMC11598247] [DOI: 10.3390/s24227208]
Abstract
The accurate segmentation and classification of nuclei in histological images are crucial for the diagnosis and treatment of colorectal cancer. However, the aggregation of nuclei and intra-class variability in histology images present significant challenges for nuclei segmentation and classification. In addition, the imbalance of the various nuclei classes exacerbates the difficulty of nuclei classification and segmentation using deep learning models. To address these challenges, we present a novel attention-enhanced residual refinement network (AER-Net), which consists of one encoder and three decoder branches that share the same network structure. In addition to the nuclei instance segmentation branch and the nuclei classification branch, one branch predicts the vertical and horizontal distance from each pixel to its nuclear center, which is combined with the output of the segmentation branch to improve the final segmentation results. AER-Net utilizes an attention-enhanced encoder module to focus on more valuable features. To further refine predictions and achieve more accurate results, an attention-enhancing residual refinement module is employed at the end of each decoder branch. Moreover, the coarse and refined predictions are combined using a loss function that employs cross-entropy loss and generalized Dice loss to efficiently tackle the challenge of class imbalance among nuclei in histology images. Compared with other state-of-the-art methods on two colorectal cancer datasets and a pan-cancer dataset, AER-Net demonstrates outstanding performance, validating its effectiveness in nuclei segmentation and classification.
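The class-imbalance handling via a cross-entropy plus generalized Dice objective can be sketched as follows. This is a generic illustration over flattened per-pixel class probabilities; the exact weighting and combination coefficients in AER-Net may differ:

```python
import numpy as np

def cross_entropy(probs, onehot, eps=1e-7):
    """Mean per-pixel cross-entropy; probs and onehot have shape (N, C)."""
    return -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))

def generalized_dice_loss(probs, onehot, eps=1e-7):
    """Generalized Dice loss: each class is weighted by 1/volume^2, so rare
    nucleus classes contribute as much as abundant ones."""
    w = 1.0 / (onehot.sum(axis=0) ** 2 + eps)
    intersect = np.sum(w * np.sum(probs * onehot, axis=0))
    union = np.sum(w * np.sum(probs + onehot, axis=0))
    return 1.0 - 2.0 * intersect / (union + eps)

def combined_loss(probs, onehot, alpha=1.0, beta=1.0):
    """Weighted sum of both terms (alpha, beta are illustrative weights)."""
    return alpha * cross_entropy(probs, onehot) \
        + beta * generalized_dice_loss(probs, onehot)

# Imbalanced toy labels: class 2 dominates, class 1 is rare.
labels = np.array([0, 0, 1, 2, 2, 2])
onehot = np.eye(3)[labels]
perfect = combined_loss(onehot.astype(float), onehot)
uniform = combined_loss(np.full((6, 3), 1.0 / 3.0), onehot)
print(perfect < uniform)   # True: confident correct predictions score lower
```

The cross-entropy term drives per-pixel correctness, while the volume-weighted Dice term keeps rare classes from being ignored.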
Affiliation(s)
- Ruifen Cao
- The Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Qingbin Meng
- The Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Computer Science and Technology, Anhui University, Hefei 230601, China
- Dayu Tan
- Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China
- Pijing Wei
- Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China
- Yun Ding
- School of Artificial Intelligence, Anhui University, Hefei 230601, China
- Chunhou Zheng
- School of Artificial Intelligence, Anhui University, Hefei 230601, China
19
Perumal A, Nithiyanantham J, Nagaraj J. An improved AlexNet deep learning method for limb tumor cancer prediction and detection. Biomed Phys Eng Express 2024; 11:015004. [PMID: 39437809] [DOI: 10.1088/2057-1976/ad89c7]
Abstract
Synovial sarcoma (SS) is a rare cancer that forms in the soft tissues around joints, and early detection is crucial for improving patient survival rates. This study introduces a convolutional neural network (CNN) using an improved AlexNet deep learning classifier to improve SS diagnosis from digital pathological images. Key preprocessing steps, including dataset augmentation and noise-reduction techniques such as adaptive median filtering (AMF) and histogram equalization, were employed to improve image quality. Feature extraction was conducted using the Gray-Level Co-occurrence Matrix (GLCM) and Improved Linear Discriminant Analysis (ILDA), while image segmentation targeted spindle-shaped cells using repetitive phase-level set segmentation (RPLSS). The improved AlexNet architecture features additional convolutional layers and resized input images, leading to superior performance. The model demonstrated significant improvements in accuracy, sensitivity, specificity, and AUC, outperforming existing methods by 3%, 1.70%, 6.08%, and 8.86%, respectively, in predicting SS.
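As an illustration of the GLCM feature-extraction step (a generic textbook version, not the paper's exact offsets or quantization), a co-occurrence matrix and a few Haralick-style statistics can be computed directly:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one offset: P[i, j] is the joint
    probability that value i at (y, x) co-occurs with value j at (y+dy, x+dx)."""
    P = np.zeros((levels, levels), dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < H and 0 <= x2 < W:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def glcm_features(P):
    """A few common texture statistics derived from the GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast": np.sum(P * (i - j) ** 2),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }

# Quantized 4-level toy image with four uniform blocks.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
f = glcm_features(P)
print(round(f["contrast"], 3))   # 0.333
```

In practice such statistics are computed over several offsets/angles and concatenated into the feature vector fed to the classifier.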
Affiliation(s)
- Arunachalam Perumal
- Department of Biomedical Engineering, Vel Tech Rangarajan Dr Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
- Janakiraman Nithiyanantham
- Department of Electronics and Communication Engineering, K.L.N. College of Engineering, Pottapalayam, 630612, India
- Jamuna Nagaraj
- Department of General Surgery, Velammal Medical College Hospital and Research Institute, Madurai, 625009, India
20
Weng W, Yoshida N, Morinaga Y, Sugino S, Tomita Y, Kobayashi R, Inoue K, Hirose R, Dohi O, Itoh Y, Zhu X. Development of high-quality artificial intelligence for computer-aided diagnosis in determining subtypes of colorectal cancer. J Gastroenterol Hepatol 2024; 39:2319-2326. [PMID: 38923607 DOI: 10.1111/jgh.16661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Revised: 03/14/2024] [Accepted: 06/09/2024] [Indexed: 06/28/2024]
Abstract
BACKGROUND AND AIM There are no previous studies in which computer-aided diagnosis (CAD) correctly diagnosed colorectal cancer (CRC) subtypes. In this study, we developed an original CAD for the diagnosis of CRC subtypes. METHODS Pretraining for the CAD based on ResNet was performed using ImageNet and five open histopathological pretraining image datasets (HiPreD) containing 3 million images. In addition, sparse attention was introduced to improve the CAD compared to other attention networks. One thousand and seventy-two histopathological images from 29 early CRC cases at Kyoto Prefectural University of Medicine from 2019 to 2022 were collected (857 images for training and validation, 215 images for test). All images were annotated by a qualified histopathologist for segmentation of normal mucosa, adenoma, pure well-differentiated adenocarcinoma (PWDA), and moderately/poorly differentiated adenocarcinoma (MPDA). Diagnostic ability, including the Dice similarity coefficient (DSC) and diagnostic accuracy, was evaluated. RESULTS Our original CAD, named Colon-seg, with the pretraining of both HiPreD and ImageNet showed a better DSC (88.4%) compared to the CAD without either pretraining (76.8%). Regarding the attention mechanism, Colon-seg with sparse attention showed a better DSC (88.4%) compared to other attention mechanisms (dual: 79.7%, ECA: 80.7%, shuffle: 84.7%, SK: 86.9%). In addition, the DSC of Colon-seg (88.4%) was better than that of other types of CADs (TransUNet: 84.7%, MultiResUnet: 86.1%, Unet++: 86.7%). The diagnostic accuracy of Colon-seg for each histopathological type was 94.3% for adenoma, 91.8% for PWDA, and 92.8% for MPDA. CONCLUSION A deep learning-based CAD for CRC subtype differentiation was developed with pretraining and fine-tuning on abundant histopathological images.
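The Dice similarity coefficient used as the main segmentation metric above is straightforward to compute for binary masks; a minimal sketch follows (the smoothing term `eps` is an assumed convention to avoid division by zero, not a detail from the paper).

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
score = dice(pred, target)  # 2*1 / (2+1) ≈ 0.667
```

For multi-class segmentation such as the four tissue types above, the DSC is typically computed per class on one-vs-rest masks and then averaged.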
Affiliation(s)
- Weihao Weng
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Japan
- Naohisa Yoshida
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yukiko Morinaga
- Department of Surgical Pathology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Satoshi Sugino
- Department of Gastroenterology, Asahi University Hospital, Gifu, Japan
- Yuri Tomita
- Department of Gastroenterology, Koseikai Takeda Hospital, Kyoto, Japan
- Reo Kobayashi
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ken Inoue
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ryohei Hirose
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Osamu Dohi
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Yoshito Itoh
- Department of Molecular Gastroenterology and Hepatology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Xin Zhu
- Graduate School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Japan
21
Sharma R, Sharma K, Bala M. Efficient feature selection for histopathological image classification with improved multi-objective WOA. Sci Rep 2024; 14:25163. [PMID: 39448704 PMCID: PMC11502702 DOI: 10.1038/s41598-024-75842-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Accepted: 10/08/2024] [Indexed: 10/26/2024] Open
Abstract
The difficulty of selecting features efficiently in histopathology image analysis remains unresolved. Furthermore, most current approaches treat feature selection as a single-objective problem. This research presents a feature selection technique based on an enhanced multi-objective whale optimisation algorithm as a solution. The proposed variant is used to mine optimal feature sets, and its optimisation capability was verified on 10 common multi-objective CEC2009 benchmark functions. Furthermore, the effectiveness of the proposed strategy was validated against three existing feature-selection techniques by comparing five classifiers in terms of accuracy, mean number of selected features, and computation time. The experimental findings show that the proposed method outperformed the other approaches under consideration on the assessed parameters.
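In multi-objective feature selection of this kind, candidate feature subsets are typically ranked by Pareto dominance over objectives such as (classification error, number of selected features). The non-dominated-sort sketch below is a generic illustration of that ranking step, not the paper's algorithm.

```python
def pareto_front(solutions):
    """Return indices of non-dominated solutions.

    Each solution is a tuple of objective values, all to be minimised
    (e.g. (classification_error, n_selected_features)).
    """
    def dominates(b, a):
        return (all(x <= y for x, y in zip(b, a))
                and any(x < y for x, y in zip(b, a)))

    return [i for i, a in enumerate(solutions)
            if not any(dominates(b, a) for j, b in enumerate(solutions) if j != i)]

# Hypothetical (error, feature-count) pairs for four candidate subsets
candidates = [(0.10, 5), (0.20, 3), (0.15, 8), (0.05, 10)]
front = pareto_front(candidates)  # (0.15, 8) is dominated by (0.10, 5)
```

An optimiser such as the enhanced whale algorithm maintains and refines exactly such a front across iterations.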
Affiliation(s)
- Ravi Sharma
- Delhi Technological University, Bawana, New Delhi, 110042, India.
- Kapil Sharma
- Delhi Technological University, Bawana, New Delhi, 110042, India
- Manju Bala
- Indraprastha College of Women, University of Delhi, Civil Lines, New Delhi, 110054, India
22
Park H, Kang WY, Woo OH, Lee J, Yang Z, Oh S. Automated deep learning-based bone mineral density assessment for opportunistic osteoporosis screening using various CT protocols with multi-vendor scanners. Sci Rep 2024; 14:25014. [PMID: 39443535 PMCID: PMC11499650 DOI: 10.1038/s41598-024-73709-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Accepted: 09/20/2024] [Indexed: 10/25/2024] Open
Abstract
This retrospective study examined the diagnostic efficacy of automated deep learning-based bone mineral density (DL-BMD) measurements for osteoporosis screening using 422 CT datasets from four vendors in two medical centers, encompassing 159 chest, 156 abdominal, and 107 lumbar spine datasets. DL-BMD values on L1 and L2 vertebral bodies were compared with manual BMD (m-BMD) measurements using Pearson's correlation and intraclass correlation coefficients. Strong agreement was found between m-BMD and DL-BMD in total CT scans (r = 0.953, p < 0.001). The diagnostic performance of DL-BMD was assessed using receiver operating characteristic analysis for osteoporosis and low BMD by dual-energy x-ray absorptiometry (DXA) and m-BMD. Compared to DXA, DL-BMD demonstrated an AUC of 0.790 (95% CI 0.733-0.839) for low BMD and 0.769 (95% CI 0.710-0.820) for osteoporosis, with sensitivity, specificity, and accuracy of 80.8% (95% CI 74.2-86.3%), 56.3% (95% CI 43.4-68.6%), and 74.3% (95% CI 68.3-79.7%) for low BMD and 65.4% (95% CI 50.9-78.0%), 70.9% (95% CI 63.8-77.3%), and 69.7% (95% CI 63.5-75.4%) for osteoporosis, respectively. Compared to m-BMD, DL-BMD showed an AUC of 0.983 (95% CI 0.973-0.993) for low BMD and 0.972 (95% CI 0.958-0.987) for osteoporosis, with sensitivity, specificity, and accuracy of 97.3% (95% CI 94.5-98.9%), 85.2% (95% CI 78.8-90.3%), and 92.7% (95% CI 89.7-95.0%) for low BMD and 94.4% (95% CI 88.3-97.9%), 89.5% (95% CI 85.6-92.7%), and 90.8% (95% CI 87.6-93.4%) for osteoporosis, respectively. The DL-based method can provide accurate and reliable BMD assessments across diverse CT protocols and scanners.
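The agreement statistic reported above, Pearson's r, can be computed directly from its definition; a minimal sketch with illustrative data (not values from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# A perfectly linear relationship between manual and automated readings gives r = 1.0
manual = [110.0, 95.0, 80.0, 120.0]
automated = [2 * v + 3 for v in manual]
r = pearson_r(manual, automated)
```

Note that Pearson's r measures linear association only; the study additionally reports intraclass correlation coefficients, which also penalise systematic offsets between the two methods.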
Affiliation(s)
- Heejun Park
- Department of Radiology, Guro Hospital, Korea University Medical Center, Seoul, Republic of Korea
- Woo Young Kang
- Department of Radiology, Guro Hospital, Korea University Medical Center, Seoul, Republic of Korea.
- Ok Hee Woo
- Department of Radiology, Guro Hospital, Korea University Medical Center, Seoul, Republic of Korea
- Jemyoung Lee
- ClariPi Inc, Seoul, Republic of Korea
- Department of Applied Bioengineering, Seoul National University, Seoul, Republic of Korea
- Zepa Yang
- Department of Radiology, Guro Hospital, Korea University Medical Center, Seoul, Republic of Korea
- Sangseok Oh
- Department of Radiology, Guro Hospital, Korea University Medical Center, Seoul, Republic of Korea
23
Sun L, Zhang R, Gu Y, Huang L, Jin C. Application of Artificial Intelligence in the diagnosis and treatment of colorectal cancer: a bibliometric analysis, 2004-2023. Front Oncol 2024; 14:1424044. [PMID: 39464716 PMCID: PMC11502294 DOI: 10.3389/fonc.2024.1424044] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2024] [Accepted: 09/23/2024] [Indexed: 10/29/2024] Open
Abstract
BACKGROUND An increasing number of studies have turned their lens to the application of Artificial Intelligence (AI) in the diagnosis and treatment of colorectal cancer (CRC). OBJECTIVE To clarify and visualize the basic situation, research hotspots, and development trends of AI in the diagnosis and treatment of CRC, and to provide clues for future research. METHODS On January 31, 2024, the Web of Science Core Collection (WoSCC) database was searched to screen and export the relevant research published during 2004-2023, and CiteSpace, VOSviewer, and Bibliometrix were used to visualize the number of publications, countries (regions), institutions, journals, authors, citations, keywords, etc. RESULTS A total of 2715 pieces of literature were included. The number of publications grew slowly until the end of 2016, but rapidly after 2017, reaching a peak of 798 in 2023. A total of 92 countries, 3997 organizations, and 15,667 authors were involved in this research. Chinese scholars released the highest number of publications, and the U.S. contributed the highest number of total citations. Among authors, Mori, Yuichi had the highest number of publications, and Wang, Pu had the highest number of total citations. According to the analysis of citations and keywords, the current research hotspots are mainly related to "Colonoscopy", "Polyp Segmentation", "Digital Pathology", "Radiomics", and "prognosis". CONCLUSION Research on the application of AI in the diagnosis and treatment of CRC has made significant progress and is flourishing across the world. Current research hotspots include AI-assisted early screening, diagnosis, pathology, staging, and prognosis assessment, and future research is expected to focus on multimodal data fusion, personalized treatment, and drug development.
Affiliation(s)
- Lamei Sun
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Department of Traditional Chinese Medicine, Jiangyin Nanzha Community Health Service Center, Wuxi, China
- Rong Zhang
- Department of General Surgery, Jiangyin Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Yidan Gu
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Lei Huang
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
- Chunhui Jin
- Department of Oncology, Wuxi Hospital Affiliated to Nanjing University of Chinese Medicine, Wuxi, China
24
Baharun NB, Adam A, Zailani MAH, Rajpoot NM, Xu Q, Zin RRM. Automated scoring methods for quantitative interpretation of Tumour infiltrating lymphocytes (TILs) in breast cancer: a systematic review. BMC Cancer 2024; 24:1202. [PMID: 39350098 PMCID: PMC11440723 DOI: 10.1186/s12885-024-12962-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Accepted: 09/18/2024] [Indexed: 10/04/2024] Open
Abstract
The tumour microenvironment (TME) of breast cancer mainly comprises malignant cells, stromal cells, immune cells, and tumour-infiltrating lymphocytes (TILs). Assessment of TILs is crucial for determining the disease's prognosis. Manual TIL assessment is hampered by multiple limitations, including low precision, poor inter-observer reproducibility, and time consumption. In response to these challenges, automated scoring emerges as a promising approach. The aim of this systematic review is to assess the evidence on the approaches and performance of automated scoring methods for TIL assessment in breast cancer. This review presents a comprehensive compilation of studies related to automated scoring of TILs, sourced from four databases (Web of Science, Scopus, Science Direct, and PubMed), employing three primary keywords (artificial intelligence, breast cancer, and tumor-infiltrating lymphocytes). The PICOS framework was employed for study eligibility, and reporting adhered to the PRISMA guidelines. The initial search yielded a total of 1910 articles. Following screening and examination, 27 studies met the inclusion criteria and data were extracted for the review. The findings indicate a concentration of studies on automated TIL assessment in developed countries, specifically the United States and the United Kingdom. From the analysis, a combination of semantic segmentation and object detection (n = 10, 37%) and convolutional neural networks (CNNs) (n = 11, 41%) were the most frequent automated task and ML approach applied for model development, respectively. All models developed their own ground truth datasets for training and validation, and 59% of the studies assessed the prognostic value of TILs. In conclusion, this analysis contends that automated scoring methods for TIL assessment in breast cancer show significant promise for commercialization and application within clinical settings.
Affiliation(s)
- Nurkhairul Bariyah Baharun
- Department of Pathology, Faculty of Medicine, The National University of Malaysia, Jalan Yaacob Latif, Bandar Tun Razak, 56000 Cheras, Kuala Lumpur, Wilayah Persekutuan, 56000, Malaysia.
- Department of Medical Diagnostic, Faculty of Health Sciences, Universiti Selangor, Jalan Zirkon A7/7, Seksyen 7, Shah Alam, Selangor, 40000, Malaysia.
- Afzan Adam
- Centre for Artificial Intelligence Technology (CAIT), Faculty of Information Science & Technology, The National University of Malaysia, Bangi, Selangor, 43600, Malaysia
- Mohamed Afiq Hidayat Zailani
- Department of Pathology, Faculty of Medicine, The National University of Malaysia, Jalan Yaacob Latif, Bandar Tun Razak, 56000 Cheras, Kuala Lumpur, Wilayah Persekutuan, 56000, Malaysia
- Department of Pathology and Forensic Pathology, Faculty of Medicine, MAHSA University, Bandar Saujana Putra, Malaysia
- Nasir M Rajpoot
- Department of Computer Science, University of Warwick, 6 Lord Bhattacharyya Way, Coventry, CV4 7EZ, UK
- Qiaoyi Xu
- Centre for Artificial Intelligence Technology (CAIT), Faculty of Information Science & Technology, The National University of Malaysia, Bangi, Selangor, 43600, Malaysia
- Reena Rahayu Md Zin
- Department of Pathology, Faculty of Medicine, The National University of Malaysia, Jalan Yaacob Latif, Bandar Tun Razak, 56000 Cheras, Kuala Lumpur, Wilayah Persekutuan, 56000, Malaysia
25
Huang T, Yin H, Huang X. Improved genetic algorithm for multi-threshold optimization in digital pathology image segmentation. Sci Rep 2024; 14:22454. [PMID: 39341998 PMCID: PMC11439074 DOI: 10.1038/s41598-024-73335-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2024] [Accepted: 09/16/2024] [Indexed: 10/01/2024] Open
Abstract
This paper presents an improved genetic algorithm focused on multi-threshold optimization for image segmentation in digital pathology. By enhancing the selection mechanism and crossover operation, the limitations of traditional genetic algorithms are effectively addressed, significantly improving both segmentation accuracy and computational efficiency. Experimental results demonstrate that the improved genetic algorithm achieves the best balance between precision and recall within the threshold range of 0.02 to 0.05, and it significantly outperforms traditional methods in segmentation performance. Segmentation quality is quantified using metrics such as precision, recall, and F1 score, and statistical tests confirm the algorithm's superior performance, especially its global search capability on complex optimization problems. Although the algorithm's computation time is relatively long, it offers notable advantages in segmentation quality, particularly in high-precision segmentation tasks for complex images. The experiments also show that the algorithm exhibits strong robustness and stability, maintaining reliable performance under different initial conditions. Compared to general segmentation models, this algorithm demonstrates significant advantages in specialized tasks such as pathology image segmentation, especially in resource-constrained environments. Therefore, this improved genetic algorithm offers an efficient and precise multi-threshold optimization solution for image segmentation, providing a valuable reference for practical applications.
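In multi-threshold segmentation, each candidate threshold vector is scored by a fitness function that the genetic algorithm then maximises; a common choice is Otsu's between-class variance. The sketch below shows that fitness and the thresholding step themselves (an illustrative assumption; the paper's exact fitness function is not specified here), leaving the GA's selection and crossover loop aside.

```python
import numpy as np

def apply_thresholds(img, thresholds):
    """Map each pixel to one of len(thresholds)+1 classes."""
    return np.digitize(img, sorted(thresholds))

def between_class_variance(img, thresholds):
    """Otsu-style fitness for a candidate threshold set (higher is better)."""
    flat = img.ravel().astype(float)
    labels = np.digitize(flat, sorted(thresholds))
    mu = flat.mean()
    score = 0.0
    for k in np.unique(labels):
        cls = flat[labels == k]
        score += cls.size / flat.size * (cls.mean() - mu) ** 2
    return score

img = np.array([[10, 10, 200, 200],
                [12, 11, 198, 205]])
good = between_class_variance(img, [100])   # splits the two intensity modes
bad = between_class_variance(img, [5])      # lumps everything into one class
```

A GA individual here is simply a threshold vector; mutation and crossover perturb the thresholds, and this score drives selection.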
Affiliation(s)
- Tangsen Huang
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China.
- School of Information Engineering, Hunan University of Science and Engineering, Yongzhou, 425199, China.
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323000, China.
- Haibing Yin
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
- Xingru Huang
- School of Communication Engineering, Hangzhou Dianzi University, Hangzhou, 310018, China
- Lishui Institute of Hangzhou Dianzi University, Lishui, 323000, China
26
Lou W, Wan X, Li G, Lou X, Li C, Gao F, Li H. Structure Embedded Nucleus Classification for Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:3149-3160. [PMID: 38607704 DOI: 10.1109/tmi.2024.3388328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/14/2024]
Abstract
Nuclei classification provides valuable information for histopathology image analysis. However, the large variations in the appearance of different nuclei types cause difficulties in identifying nuclei. Most neural-network-based methods are constrained by the local receptive field of convolutions and pay little attention to the spatial distribution of nuclei or the irregular contour shape of a nucleus. In this paper, we first propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order, and employ a recurrent neural network that aggregates the sequential change in distance between key points to obtain learnable shape features. Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations. To capture the correlations between the categories of nuclei and their surrounding tissue patterns, we further introduce edge features that are defined as the background textures between adjacent nuclei. Lastly, we integrate both polygon and graph structure learning mechanisms into a whole framework that can extract intra- and inter-nucleus structural characteristics for nuclei classification. Experimental results show that the proposed framework achieves significant improvements compared to previous methods. Code and data are made available via https://github.com/lhaof/SENC.
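The first step described above, turning a nucleus contour into an ordered point sequence with inter-point distances, can be sketched as follows. This is a simplified illustration of the general idea (uniform arc-length resampling), not the paper's exact sampling scheme.

```python
import numpy as np

def contour_to_distance_sequence(contour, n_points=16):
    """Resample a closed polygon to n_points by arc length and return the
    consecutive inter-point distances -- a simple ordered shape descriptor."""
    contour = np.asarray(contour, dtype=float)
    closed = np.vstack([contour, contour[:1]])          # close the polygon
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    t = np.linspace(0.0, arc[-1], n_points, endpoint=False)
    xs = np.interp(t, arc, closed[:, 0])
    ys = np.interp(t, arc, closed[:, 1])
    pts = np.stack([xs, ys], axis=1)
    d = np.linalg.norm(np.diff(np.vstack([pts, pts[:1]]), axis=0), axis=1)
    return d

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
dists = contour_to_distance_sequence(square, n_points=8)
```

A recurrent network can then consume such a sequence to learn shape features, as the abstract describes.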
27
Fernandez-Mateos J, Cresswell GD, Trahearn N, Webb K, Sakr C, Lampis A, Stuttle C, Corbishley CM, Stavrinides V, Zapata L, Spiteri I, Heide T, Gallagher L, James C, Ramazzotti D, Gao A, Kote-Jarai Z, Acar A, Truelove L, Proszek P, Murray J, Reid A, Wilkins A, Hubank M, Eeles R, Dearnaley D, Sottoriva A. Tumor evolution metrics predict recurrence beyond 10 years in locally advanced prostate cancer. NATURE CANCER 2024; 5:1334-1351. [PMID: 38997466 PMCID: PMC11424488 DOI: 10.1038/s43018-024-00787-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2023] [Accepted: 05/23/2024] [Indexed: 07/14/2024]
Abstract
Cancer evolution lays the groundwork for predictive oncology. Testing evolutionary metrics requires quantitative measurements in controlled clinical trials. We mapped genomic intratumor heterogeneity in locally advanced prostate cancer using 642 samples from 114 individuals enrolled in clinical trials with a 12-year median follow-up. We concomitantly assessed morphological heterogeneity using deep learning in 1,923 histological sections from 250 individuals. Genetic and morphological (Gleason) diversity were independent predictors of recurrence (hazard ratio (HR) = 3.12 and 95% confidence interval (95% CI) = 1.34-7.3; HR = 2.24 and 95% CI = 1.28-3.92). Combined, they identified a group with half the median time to recurrence. Spatial segregation of clones was also an independent marker of recurrence (HR = 2.3 and 95% CI = 1.11-4.8). We identified copy number changes associated with Gleason grade and found that chromosome 6p loss correlated with reduced immune infiltration. Matched profiling of relapse, decades after diagnosis, confirmed that genomic instability is a driving force in prostate cancer progression. This study shows that combining genomics with artificial intelligence-aided histopathology leads to the identification of clinical biomarkers of evolution.
Affiliation(s)
- Javier Fernandez-Mateos
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- George D Cresswell
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- St. Anna Children's Cancer Research Institute, Vienna, Austria
- Nicholas Trahearn
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Katharine Webb
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- The Royal Marsden NHS Foundation Trust, London, UK
- Chirine Sakr
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Andrea Lampis
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Christine Stuttle
- The Royal Marsden NHS Foundation Trust, London, UK
- Oncogenetics Team, The Institute of Cancer Research, London, UK
- Catherine M Corbishley
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK
- St. George's Hospital Healthcare NHS Trust, London, UK
- Luis Zapata
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Inmaculada Spiteri
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Timon Heide
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Computational Biology Research Centre, Human Technopole, Milan, Italy
- Lewis Gallagher
- Molecular Pathology Section, The Institute of Cancer Research, London, UK
- Clinical Genomics, The Royal Marsden NHS Foundation, London, UK
- Chela James
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Computational Biology Research Centre, Human Technopole, Milan, Italy
- Annie Gao
- Bob Champion Cancer Unit, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Ahmet Acar
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Department of Biological Sciences, Middle East Technical University, Ankara, Turkey
- Lesley Truelove
- Bob Champion Cancer Unit, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
- Paula Proszek
- Molecular Pathology Section, The Institute of Cancer Research, London, UK
- Clinical Genomics, The Royal Marsden NHS Foundation, London, UK
- Julia Murray
- The Royal Marsden NHS Foundation Trust, London, UK
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK
- Alison Reid
- The Royal Marsden NHS Foundation Trust, London, UK
- Anna Wilkins
- The Royal Marsden NHS Foundation Trust, London, UK
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK
- Michael Hubank
- Molecular Pathology Section, The Institute of Cancer Research, London, UK
- Clinical Genomics, The Royal Marsden NHS Foundation, London, UK
- Ros Eeles
- The Royal Marsden NHS Foundation Trust, London, UK
- Oncogenetics Team, The Institute of Cancer Research, London, UK
- David Dearnaley
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, UK.
- Academic Urology Unit, The Royal Marsden NHS Foundation Trust, London, UK.
- Andrea Sottoriva
- Evolutionary Genomics and Modelling Lab, Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
- Computational Biology Research Centre, Human Technopole, Milan, Italy.
28
Jin X, An H, Chi M. CellRegNet: Point Annotation-Based Cell Detection in Histopathological Images via Density Map Regression. Bioengineering (Basel) 2024; 11:814. [PMID: 39199772 PMCID: PMC11352042 DOI: 10.3390/bioengineering11080814] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 08/05/2024] [Accepted: 08/08/2024] [Indexed: 09/01/2024] Open
Abstract
Recent advances in deep learning have shown significant potential for accurate cell detection via density map regression using point annotations. However, existing deep learning models often struggle with multi-scale feature extraction and integration in complex histopathological images. Moreover, in multi-class cell detection scenarios, current density map regression methods typically predict each cell type independently, failing to consider the spatial distribution priors of different cell types. To address these challenges, we propose CellRegNet, a novel deep learning model for cell detection using point annotations. CellRegNet integrates a hybrid CNN/Transformer architecture with innovative feature refinement and selection mechanisms, addressing the need for effective multi-scale feature extraction and integration. Additionally, we introduce a contrastive regularization loss that models the mutual exclusiveness prior in multi-class cell detection cases. Extensive experiments on three histopathological image datasets demonstrate that CellRegNet outperforms existing state-of-the-art methods for cell detection using point annotations, with F1-scores of 86.38% on BCData (breast cancer), 85.56% on EndoNuke (endometrial tissue) and 93.90% on MBM (bone marrow cells), respectively. These results highlight CellRegNet's potential to enhance the accuracy and reliability of cell detection in digital pathology.
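The ground truth for density map regression from point annotations is commonly built by placing one normalised Gaussian kernel on each annotated point, so that the map integrates to the cell count. A minimal sketch of this standard construction (the kernel width `sigma` is an assumed illustrative value, and the paper's exact target-generation procedure may differ):

```python
import numpy as np

def density_map(shape, points, sigma=2.0):
    """Ground-truth density map: one Gaussian per annotated (row, col) point,
    each normalised to sum to 1, so the whole map sums to the number of cells."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dm = np.zeros(shape, dtype=float)
    for py, px in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2.0 * sigma ** 2))
        dm += g / g.sum()
    return dm

dm = density_map((32, 32), [(10, 10), (20, 25)])
count = dm.sum()  # 2.0, the number of annotated cells
```

A regression network is then trained to predict such a map, and cell counts or detections are recovered from the predicted density by integration or local-maximum search.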
Affiliation(s)
- Hong An
- School of Computer Science and Technology, University of Science and Technology of China, Hefei 230000, China; (X.J.); (M.C.)
29
Cai C, Zhou Y, Jiao Y, Li L, Xu J. Prognostic Analysis Combining Histopathological Features and Clinical Information to Predict Colorectal Cancer Survival from Whole-Slide Images. Dig Dis Sci 2024; 69:2985-2995. [PMID: 38837111 DOI: 10.1007/s10620-024-08501-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Accepted: 05/13/2024] [Indexed: 06/06/2024]
Abstract
BACKGROUND Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both a high incidence rate and high mortality. Early detection and intervention could improve patients' clinical outcomes and survival. METHODS This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, the pathological image features can predict the prognosis of CRC patients. Our CRC prognosis prediction pipeline consisted of three sequential modules: (1) a MultiTissue Net to delineate outlines of different tissue types within the WSI of CRC for further ROI selection by pathologists; (2) development of three-level quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; and (3) fusion of multi-level features to build a prognostic CRC model for predicting survival. RESULTS Experimental results suggest that each group of features has a particular relationship with the prognosis of patients in the independent test set. In the experiment combining the fused features, the accuracy of predicting patients' prognosis and survival status was 81.52%, and the AUC was 0.77. CONCLUSION This paper constructs a model that predicts postoperative survival using image features and clinical information. Several features were found to be associated with patients' prognosis and survival.
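The AUC reported above has a simple rank-based interpretation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch of that computation with illustrative data (not values from the study):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: fraction of positive/negative
    pairs ranked correctly, counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores: labels 1 = event (e.g. death), 0 = survival
auc = roc_auc([0.9, 0.3, 0.6, 0.7], [1, 1, 0, 0])  # 0.5
```

Unlike accuracy, this measure is threshold-free, which is why prognostic models typically report both.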
Affiliation(s)
- Chengfei Cai
- School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- College of Information Engineering, Taizhou University, Taizhou, 225300, China.
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- Yangshu Zhou
- Department of Pathology, Zhujiang Hospital of Southern Medical University, Guangzhou, 510280, China
- Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Liang Li
- Department of Pathology, Nanfang Hospital of Southern Medical University, Guangzhou, 510515, China
- Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
30
Yuan Y, Wang T, Sims J, Le K, Undey C, Oruklu E. Cytopathic Effect Detection and Clonal Selection using Deep Learning. Pharm Res 2024; 41:1659-1669. [PMID: 39048879 DOI: 10.1007/s11095-024-03749-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Accepted: 07/11/2024] [Indexed: 07/27/2024]
Abstract
PURPOSE In biotechnology, microscopic cell imaging is often used to identify and analyze cell morphology and cell state for a variety of applications. For example, microscopy can be used to detect the presence of cytopathic effects (CPE) in cell culture samples to determine virus contamination. Another application of microscopy is to verify clonality during cell line development. Conventionally, inspection of these microscopy images is performed manually by human analysts. This is both tedious and time consuming. In this paper, we propose using supervised deep learning algorithms to automate the cell detection processes mentioned above. METHODS The proposed algorithms utilize image processing techniques and convolutional neural networks (CNN) to detect the presence of CPE and to verify the clonality in cell line development. RESULTS We train and test the algorithms on image data which have been collected and labeled by domain experts. Our experiments have shown promising results in terms of both accuracy and speed. CONCLUSION Deep learning algorithms achieve high accuracy (more than 95%) on both CPE detection and clonal selection applications, resulting in a highly efficient and cost-effective automation process.
Affiliation(s)
- Yu Yuan
- Amgen, Inc., Thousand Oaks, 91320, CA, USA
- Illinois Institute of Technology, Chicago, 60616, IL, USA
- Tony Wang
- Amgen, Inc., Thousand Oaks, 91320, CA, USA
- Kim Le
- Amgen, Inc., Thousand Oaks, 91320, CA, USA
- Cenk Undey
- Amgen, Inc., Thousand Oaks, 91320, CA, USA
- Erdal Oruklu
- Illinois Institute of Technology, Chicago, 60616, IL, USA.
31
Kunhoth S, Al-Maadeed S. An Analytical Study on the Utility of RGB and Multispectral Imagery with Band Selection for Automated Tumor Grading. Diagnostics (Basel) 2024; 14:1625. [PMID: 39125501 PMCID: PMC11312293 DOI: 10.3390/diagnostics14151625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2024] [Revised: 07/20/2024] [Accepted: 07/25/2024] [Indexed: 08/12/2024] Open
Abstract
The implementation of tumor grading tasks with image processing and machine learning techniques has progressed immensely over the past several years. Multispectral imaging enabled us to capture the sample as a set of image bands corresponding to different wavelengths in the visible and infrared spectrums. The higher dimensional image data can be well exploited to deliver a range of discriminative features to support the tumor grading application. This paper compares the classification accuracy of RGB and multispectral images, using a case study on colorectal tumor grading with the QU-Al Ahli Dataset (dataset I). Rotation-invariant local phase quantization (LPQ) features with an SVM classifier resulted in 80% accuracy for the RGB images compared to 86% accuracy with the multispectral images in dataset I. However, the higher dimensionality elevates the processing time. We propose a band-selection strategy using mutual information between image bands. This process eliminates redundant bands and increases classification accuracy. The results show that our band-selection method provides better results than normal RGB and multispectral methods. The band-selection algorithm was also tested on another colorectal tumor dataset, the Texas University Dataset (dataset II), to further validate the results. The proposed method demonstrates an accuracy of more than 94% with 10 bands, compared to using the whole set of 16 multispectral bands. Our research emphasizes the advantages of multispectral imaging over the RGB imaging approach and proposes a band-selection method to address the higher computational demands of multispectral imaging.
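The band-selection idea in the abstract above can be illustrated with a simple mutual-information filter. The authors' exact selection criterion is not given here, so the sketch below, which repeatedly drops the band most redundant with the rest until a target count remains, is only an illustrative assumption in plain NumPy.

```python
import numpy as np

def mutual_info(a, b, bins=32):
    """Histogram estimate of mutual information between two image bands."""
    counts, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = counts / counts.sum()                      # joint pmf
    px = pxy.sum(axis=1, keepdims=True)              # marginal of band a
    py = pxy.sum(axis=0, keepdims=True)              # marginal of band b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_bands(cube, k):
    """Greedily drop the band with the highest mean MI to the others
    (the most redundant one) until k bands remain.

    cube: array of shape (n_bands, H, W).
    """
    keep = list(range(cube.shape[0]))
    while len(keep) > k:
        redundancy = [np.mean([mutual_info(cube[i], cube[j])
                               for j in keep if j != i]) for i in keep]
        keep.pop(int(np.argmax(redundancy)))
    return keep
```

A duplicated band has near-maximal MI with its twin, so the greedy rule removes one of the pair first while an independent band survives.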
32
Sun M, Zou W, Wang Z, Wang S, Sun Z. An Automated Framework for Histopathological Nucleus Segmentation With Deep Attention Integrated Networks. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:995-1006. [PMID: 37018302 DOI: 10.1109/tcbb.2022.3233400] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Clinical management and accurate disease diagnosis are evolving from the qualitative stage to the quantitative stage, particularly at the cellular level. However, the manual process of histopathological analysis is labor-intensive and time-consuming, and its accuracy is limited by the experience of the pathologist. Therefore, deep learning-empowered computer-aided diagnosis (CAD) is emerging as an important topic in digital pathology to streamline the standard process of automatic tissue analysis. Accurate automated nucleus segmentation not only helps pathologists make more accurate diagnoses while saving time and labor, but also yields consistent and efficient diagnostic results. However, nucleus segmentation is susceptible to staining variation, uneven nucleus intensity, background noise, and nucleus-tissue differences in biopsy specimens. To solve these problems, we propose Deep Attention Integrated Networks (DAINets), built mainly on a self-attention-based spatial attention module and a channel attention module. In addition, we introduce a feature fusion branch to fuse high-level representations with low-level features for multi-scale perception, and employ the marker-based watershed algorithm to refine the predicted segmentation maps. Furthermore, in the testing phase, we design Individual Color Normalization (ICN) to address the staining-variation problem in specimens. Quantitative evaluations on the multi-organ nucleus dataset demonstrate the superiority of our automated nucleus segmentation framework.
33
Blevins GM, Flanagan CL, Kallakuri SS, Meyer OM, Nimmagadda L, Hatch JD, Shea SA, Padmanabhan V, Shikanov A. Quantification of follicles in human ovarian tissue using image processing software and trained artificial intelligence†. Biol Reprod 2024; 110:1086-1099. [PMID: 38537569 PMCID: PMC11180617 DOI: 10.1093/biolre/ioae048] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 03/12/2024] [Accepted: 03/19/2024] [Indexed: 06/18/2024] Open
Abstract
Cancer survival rates in prepubertal girls and young women have risen in recent decades due to increasingly efficient treatments. However, many such treatments are gonadotoxic, causing premature ovarian insufficiency and loss of fertility and ovarian endocrine function. Implantation of donor ovarian tissue encapsulated in immune-isolating capsules is a promising method to restore physiological endocrine function without immunosuppression or the risk of reintroducing cancer cells harbored by the tissue. The success of this approach is largely determined by follicle density in the implanted ovarian tissue, which is analyzed manually from histologic sections and necessitates specialized, time-consuming labor. To address this limitation, we developed a fully automated method to quantify follicle density that does not require additional coding. We first analyzed ovarian tissue from 12 human donors between 16 and 37 years old using semi-automated image processing with manual follicle annotation and then trained an artificial intelligence program based on follicle identification and object classification. One operator manually analyzed 102 whole slide images from serial histologic sections. Of those, 77 images were assessed by a second manual operator, followed by an automated method utilizing artificial intelligence. Of the 1181 follicles the control operator counted, the comparison operator counted 1178, and the artificial intelligence counted 927 follicles, 80% of which were correctly identified as follicles. The three-stage artificial intelligence pipeline finished 33% faster than manual annotation. Collectively, this report supports the use of artificial intelligence and automation to select tissue donors and grafts with the greatest follicle density to ensure graft longevity for premature ovarian insufficiency treatment.
Affiliation(s)
- Gabrielle M Blevins
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Colleen L Flanagan
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Sridula S Kallakuri
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Owen M Meyer
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Likitha Nimmagadda
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- James D Hatch
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Sydney A Shea
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Vasantha Padmanabhan
- Department of Pediatrics, University of Michigan, Ann Arbor, MI, USA
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA
- Ariella Shikanov
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA
- Department of Macromolecular Science and Engineering, University of Michigan, Ann Arbor, MI, USA
34
Bougourzi F, Dornaika F, Distante C, Taleb-Ahmed A. D-TrAttUnet: Toward hybrid CNN-transformer architecture for generic and subtle segmentation in medical images. Comput Biol Med 2024; 176:108590. [PMID: 38763066 DOI: 10.1016/j.compbiomed.2024.108590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 04/16/2024] [Accepted: 05/09/2024] [Indexed: 05/21/2024]
Abstract
Over the past two decades, machine analysis of medical imaging has advanced rapidly, opening up significant potential for several important medical applications. As complicated diseases increase and the number of cases rises, the role of machine-based imaging analysis has become indispensable. It serves as both a tool and an assistant to medical experts, providing valuable insights and guidance. A particularly challenging task in this area is lesion segmentation, a task that is challenging even for experienced radiologists. The complexity of this task highlights the urgent need for robust machine learning approaches to support medical staff. In response, we present our novel solution: the D-TrAttUnet architecture. This framework is based on the observation that different diseases often target specific organs. Our architecture includes an encoder-decoder structure with a composite Transformer-CNN encoder and dual decoders. The encoder includes two paths: the Transformer path and the Encoders Fusion Module path. The Dual-Decoder configuration uses two identical decoders, each with attention gates. This allows the model to simultaneously segment lesions and organs and integrate their segmentation losses. To validate our approach, we performed evaluations on the Covid-19 and Bone Metastasis segmentation tasks. We also investigated the adaptability of the model by testing it without the second decoder in the segmentation of glands and nuclei. The results confirmed the superiority of our approach, especially in Covid-19 infections and the segmentation of bone metastases. In addition, the hybrid encoder showed exceptional performance in the segmentation of glands and nuclei, solidifying its role in modern medical image analysis.
Affiliation(s)
- Fares Bougourzi
- Junia, UMR 8520, CNRS, Centrale Lille, University of Polytechnique Hauts-de-France, 59000 Lille, France.
- Fadi Dornaika
- University of the Basque Country UPV/EHU, San Sebastian, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, Spain.
- Cosimo Distante
- Institute of Applied Sciences and Intelligent Systems, National Research Council of Italy, 73100 Lecce, Italy.
- Abdelmalik Taleb-Ahmed
- Université Polytechnique Hauts-de-France, Université de Lille, CNRS, Valenciennes, 59313, Hauts-de-France, France.
35
Manne SKR, Martin B, Roy T, Neilson R, Peters R, Chillara M, Lary CW, Motyl KJ, Wan M. NOISe: Nuclei-Aware Osteoclast Instance Segmentation for Mouse-to-Human Domain Transfer. Proc IEEE/CVF Conf Comput Vis Pattern Recognit Workshops (CVPRW) 2024; 2024:6926-6935. [PMID: 39659628 PMCID: PMC11629985 DOI: 10.1109/cvprw63382.2024.00686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2024]
Abstract
Osteoclast cell image analysis plays a key role in osteoporosis research, but it typically involves extensive manual image processing and hand annotations by a trained expert. In the last few years, a handful of machine learning approaches for osteoclast image analysis have been developed, but none have addressed the full instance segmentation task required to produce the same output as that of the human expert-led process. Furthermore, none of the prior, fully automated algorithms have publicly available code, pretrained models, or annotated datasets, inhibiting reproduction and extension of their work. We present a new dataset with ~2 × 10^5 expert-annotated mouse osteoclast masks, together with a deep learning instance segmentation method which works for both in vitro mouse osteoclast cells on plastic tissue culture plates and human osteoclast cells on bone chips. To our knowledge, this is the first work to automate the full osteoclast instance segmentation task. Our method achieves a performance of 0.82 mAP0.5 (mean average precision at intersection-over-union threshold of 0.5) in cross validation for mouse osteoclasts. We present a novel nuclei-aware osteoclast instance segmentation training strategy (NOISe) based on the unique biology of osteoclasts, to improve the model's generalizability and boost the mAP0.5 from 0.60 to 0.82 on human osteoclasts. We publish our annotated mouse osteoclast image dataset, instance segmentation models, and code at github.com/michaelwwan/noise to enable reproducibility and to provide a public tool to accelerate osteoporosis research.
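For reference, the mAP0.5 metric quoted above builds on matching predicted instance masks to ground-truth masks at an intersection-over-union threshold of 0.5. The following is a simplified illustration of that matching step (a greedy match over unsorted predictions, not the paper's evaluation code, which would also rank predictions by confidence):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def match_at_05(pred_masks, gt_masks):
    """Greedily match predictions to ground truth at IoU >= 0.5.

    Returns (true positives, false positives, false negatives),
    the counts from which precision/recall at this threshold follow.
    """
    unmatched_gt = list(range(len(gt_masks)))
    tp = 0
    for p in pred_masks:
        best, best_iou = None, 0.5          # threshold: IoU must reach 0.5
        for g in unmatched_gt:
            v = iou(p, gt_masks[g])
            if v >= best_iou:
                best, best_iou = g, v
        if best is not None:
            unmatched_gt.remove(best)       # each GT instance matched once
            tp += 1
    return tp, len(pred_masks) - tp, len(unmatched_gt)
```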
Affiliation(s)
- Katherine J. Motyl
- MaineHealth Institute for Research
- University of Maine
- Tufts University School of Medicine
36
Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Comput Methods Programs Biomed 2024; 250:108178. [PMID: 38652995 DOI: 10.1016/j.cmpb.2024.108178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Revised: 04/08/2024] [Accepted: 04/13/2024] [Indexed: 04/25/2024]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they have not given satisfactory boundary and region segmentation results for adjacent glands. Such glands often differ greatly in appearance, and the statistical distributions of the training and test sets are inconsistent. These problems keep networks from generalizing well to the test dataset, bringing difficulties to gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network named VENet, with a traditional variational energy loss L_v, for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates the variational mathematical model and the data-adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large-size WSIs with reliable nucleus width and nucleus-to-cytoplasm ratio features. RESULTS The VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and the self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance for GlaS Test A (object dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). For the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.9, a sensitivity of 0.98, and a specificity of 0.80 on the classification task over the 69 test WSIs.
CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSI, which can assist pathologists in analyzing large WSI and making accurate diagnostic decisions.
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
- Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China.
- Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
37
Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Trans Neural Netw Learn Syst 2024; 35:7458-7477. [PMID: 36327184 DOI: 10.1109/tnnls.2022.3213407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
38
Kiyuna T, Cosatto E, Hatanaka KC, Yokose T, Tsuta K, Motoi N, Makita K, Shimizu A, Shinohara T, Suzuki A, Takakuwa E, Takakuwa Y, Tsuji T, Tsujiwaki M, Yanai M, Yuzawa S, Ogura M, Hatanaka Y. Evaluating Cellularity Estimation Methods: Comparing AI Counting with Pathologists' Visual Estimates. Diagnostics (Basel) 2024; 14:1115. [PMID: 38893641 PMCID: PMC11171606 DOI: 10.3390/diagnostics14111115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Revised: 05/17/2024] [Accepted: 05/21/2024] [Indexed: 06/21/2024] Open
Abstract
The development of next-generation sequencing (NGS) has enabled the discovery of cancer-specific driver gene alterations, making precision medicine possible. However, accurate genetic testing requires a sufficient amount of tumor cells in the specimen. The evaluation of tumor content ratio (TCR) from hematoxylin and eosin (H&E)-stained images has been found to vary between pathologists, making it an important challenge to obtain an accurate TCR. In this study, three pathologists exhaustively labeled all cells in 41 regions from 41 lung cancer cases as either tumor, non-tumor, or indistinguishable, thus establishing a "gold standard" TCR. We then compared the accuracy of the TCR estimated by 13 pathologists based on visual assessment and the TCR calculated by an AI model that we have developed. It is a compact and fast model that follows a fully convolutional neural network architecture and produces cell detection maps which can be efficiently post-processed to obtain tumor and non-tumor cell counts from which TCR is calculated. Its raw cell detection accuracy is 92% while its classification accuracy is 84%. The results show that the error between the gold standard TCR and the AI calculation was significantly smaller than that between the gold standard TCR and the pathologist's visual assessment (p<0.05). Additionally, the robustness of AI models across institutions is a key issue, and we demonstrate that, when evaluated by institution, the variation in the AI's estimates was smaller than that in the pathologists' average. These findings suggest that the accuracy of tumor cellularity assessments in clinical workflows is significantly improved by the introduction of robust AI models, leading to more efficient genetic testing and ultimately to better patient outcomes.
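Once tumor and non-tumor cells have been counted from the detection maps, the TCR itself is a simple ratio. A trivial sketch (not the authors' implementation; it assumes cells labeled "indistinguishable" are excluded from the denominator, as in the gold-standard labeling above):

```python
def tumor_content_ratio(n_tumor: int, n_nontumor: int) -> float:
    """TCR = tumor cells / all evaluable cells.

    Indistinguishable cells are assumed excluded from both counts.
    """
    total = n_tumor + n_nontumor
    return n_tumor / total if total else 0.0
```

For example, a region with 350 tumor and 650 non-tumor cells gives a TCR of 0.35, i.e. 35% tumor content.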
Affiliation(s)
- Tomoharu Kiyuna
- Healthcare Life Science Division, NEC Corporation, Tokyo 108-8556, Japan;
- Eric Cosatto
- Department of Machine Learning, NEC Laboratories America, Princeton, NJ 08540, USA;
- Kanako C. Hatanaka
- Center for Development of Advanced Diagnostics (C-DAD), Hokkaido University Hospital, Sapporo 060-8648, Japan;
- Tomoyuki Yokose
- Department of Pathology, Kanagawa Cancer Center, Yokohama 241-8515, Japan;
- Koji Tsuta
- Department of Pathology, Kansai Medical University, Osaka 573-1010, Japan;
- Noriko Motoi
- Department of Pathology, Saitama Cancer Center, Saitama 362-0806, Japan;
- Keishi Makita
- Department of Pathology, Oji General Hospital, Tomakomai 053-8506, Japan
- Ai Shimizu
- Department of Surgical Pathology, Hokkaido University Hospital, Sapporo 060-8648, Japan; (A.S.); (E.T.)
- Toshiya Shinohara
- Department of Pathology, Teine Keijinkai Hospital, Sapporo 006-0811, Japan
- Akira Suzuki
- Department of Pathology, KKR Sapporo Medical Center, Sapporo 062-0931, Japan
- Emi Takakuwa
- Department of Surgical Pathology, Hokkaido University Hospital, Sapporo 060-8648, Japan; (A.S.); (E.T.)
- Yasunari Takakuwa
- Department of Pathology, NTT Medical Center Sapporo, Sapporo 060-0061, Japan;
- Takahiro Tsuji
- Department of Pathology, Sapporo City General Hospital, Sapporo 060-8604, Japan;
- Mitsuhiro Tsujiwaki
- Department of Surgical Pathology, Sapporo Medical University Hospital, Sapporo 060-8543, Japan;
- Mitsuru Yanai
- Department of Pathology, Sapporo Tokushukai Hospital, Sapporo 004-0041, Japan;
- Sayaka Yuzawa
- Department of Diagnostic Pathology, Asahikawa Medical University Hospital, Asahikawa 078-8510, Japan
- Maki Ogura
- Healthcare Life Science Division, NEC Corporation, Tokyo 108-8556, Japan;
- Yutaka Hatanaka
- Center for Development of Advanced Diagnostics (C-DAD), Hokkaido University Hospital, Sapporo 060-8648, Japan;
39
Xu J. Comparing multi-class classifier performance by multi-class ROC analysis: A nonparametric approach. Neurocomputing 2024; 583:127520. [PMID: 38645687 PMCID: PMC11031188 DOI: 10.1016/j.neucom.2024.127520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/23/2024]
Abstract
The area under the Receiver Operating Characteristic (ROC) curve (AUC) is a standard metric for quantifying and comparing binary classifiers. Real world applications often require classification into multiple (more than two) classes. For multi-class classifiers that produce class membership scores, a popular multi-class AUC (MAUC) variant is to average the pairwise AUC values [1]. Due to the complicated correlation patterns, the variance of MAUC is often estimated numerically using resampling techniques. This work is a generalization of DeLong's nonparametric approach for binary AUC analysis [2] to MAUC. We first derive the closed-form expression of the covariance matrix of the pairwise AUCs within a single MAUC. Then by dropping higher order terms, we obtain an approximate covariance matrix with a compact, matrix factorization form, which then serves as the basis for variance estimation of a single MAUC. We further extend this approach to estimate the covariance of correlated MAUCs that arise from multiple competing classifiers. For the special case of binary correlated AUCs, our results coincide with that of DeLong. Our numerical studies confirm the accuracy of the variance and covariance estimates. We provide the source code of the proposed covariance estimation of correlated MAUCs on GitHub (https://tinyurl.com/euj6wvsz) for its easy adoption by machine learning and statistical analysis packages to quantify and compare multi-class classifiers.
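The pairwise-averaging MAUC of [1] (Hand and Till) that the paper analyzes can be computed directly from membership scores. A plain-NumPy sketch of the point estimate follows (the paper's actual contribution, the closed-form covariance, is omitted; class labels are assumed to be 0..c-1 indexing the score columns):

```python
import numpy as np

def auc(pos, neg):
    """Mann-Whitney form of the binary AUC; ties count as 0.5."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def mauc(scores, y):
    """Average of A(i,j) = (A(i|j) + A(j|i)) / 2 over all class pairs,
    where A(i|j) compares class-i membership scores of classes i and j.

    scores: (n_samples, n_classes) membership scores; y: integer labels.
    """
    classes = np.unique(y)
    vals = []
    for idx, i in enumerate(classes):
        for j in classes[idx + 1:]:
            a_ij = auc(scores[y == i, i], scores[y == j, i])
            a_ji = auc(scores[y == j, j], scores[y == i, j])
            vals.append((a_ij + a_ji) / 2)
    return float(np.mean(vals))
```

A classifier that perfectly separates every class pair yields MAUC = 1; chance-level scores yield about 0.5.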
Affiliation(s)
- Jingyan Xu
- Department of Radiology, Johns Hopkins University, MD, USA
40
Wang X, Yang YQ, Cai S, Li JC, Wang HY. Deep-learning-based sampling position selection on color Doppler sonography images during renal artery ultrasound scanning. Sci Rep 2024; 14:11768. [PMID: 38782971 PMCID: PMC11116437 DOI: 10.1038/s41598-024-60355-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2024] [Accepted: 04/22/2024] [Indexed: 05/25/2024] Open
Abstract
Accurate selection of sampling positions is critical in renal artery ultrasound examinations, and the potential of utilizing deep learning (DL) for assisting in this selection has not been previously evaluated. This study aimed to evaluate the effectiveness of DL object detection technology applied to color Doppler sonography (CDS) images in assisting sampling position selection. A total of 2004 patients who underwent renal artery ultrasound examinations were included in the study. CDS images from these patients were categorized into four groups based on the scanning position: abdominal aorta (AO), normal renal artery (NRA), renal artery stenosis (RAS), and intrarenal interlobular artery (IRA). Seven object detection models, including three two-stage models (Faster R-CNN, Cascade R-CNN, and Double Head R-CNN) and four one-stage models (RetinaNet, YOLOv3, FoveaBox, and Deformable DETR), were trained to predict the sampling position, and their predictive accuracies were compared. The Double Head R-CNN model exhibited significantly higher average accuracies on both parameter optimization and validation datasets (89.3 ± 0.6% and 88.5 ± 0.3%, respectively) compared to other methods. On clinical validation data, the predictive accuracies of the Double Head R-CNN model for all four types of images were significantly higher than those of the other methods. The DL object detection model shows promise in assisting inexperienced physicians in improving the accuracy of sampling position selection during renal artery ultrasound examinations.
Affiliation(s)
- Xin Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Yu-Qing Yang
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China
- Sheng Cai
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China
- Jian-Chu Li
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China.
- Hong-Yan Wang
- Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, No. 1, Shuaifuyuan, Dongcheng District, Beijing, 100730, China.
41
Erdaş ÇB. Computer-aided colorectal cancer diagnosis: AI-driven image segmentation and classification. PeerJ Comput Sci 2024; 10:e2071. [PMID: 38855213 PMCID: PMC11157578 DOI: 10.7717/peerj-cs.2071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 04/29/2024] [Indexed: 06/11/2024]
Abstract
Colorectal cancer is an enormous health concern, since it is among the most lethal types of malignancy. Manual examination has its limitations, including subjectivity and data overload. To overcome these challenges, computer-aided diagnostic systems focusing on image segmentation and abnormality classification have been developed. This study presents a two-stage approach for the automatic detection of five types of colorectal abnormalities in addition to a control group: polyp, low-grade intraepithelial neoplasia, high-grade intraepithelial neoplasia, serrated adenoma, and adenocarcinoma. In the first stage, UNet3+ was used for image segmentation to locate the anomalies, while in the second stage, the Cross-Attention Multi-Scale Vision Transformer deep learning model was used to predict the type of anomaly after highlighting the anomaly on the raw images. In anomaly segmentation, UNet3+ achieved values of 0.9872, 0.9422, 0.9832, and 0.9560 for the Dice coefficient, Jaccard index, sensitivity, and specificity, respectively. In anomaly detection, the Cross-Attention Multi-Scale Vision Transformer model attained a classification performance of 0.9340, 0.9037, 0.9446, 0.8723, 0.9102, and 0.9849 for accuracy, F1 score, precision, recall, Matthews correlation coefficient, and specificity, respectively. By achieving high performance in both the identification of anomalies and the segmentation of regions, the proposed approach can ease pathologists' workload and enhance the accuracy of colorectal cancer diagnosis.
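The segmentation metrics quoted above are standard overlap ratios between a predicted mask and a ground-truth mask; for reference, a minimal NumPy sketch:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|). Empty-vs-empty counts as 1."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def jaccard(a, b):
    """Jaccard index (IoU): |A∩B| / |A∪B|. Empty-vs-empty counts as 1."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

The two are monotonically related by D = 2J / (1 + J), which is why papers often report both alongside pixel-wise sensitivity and specificity.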
42
Nakhli R, Rich K, Zhang A, Darbandsari A, Shenasa E, Hadjifaradji A, Thiessen S, Milne K, Jones SJM, McAlpine JN, Nelson BH, Gilks CB, Farahani H, Bashashati A. VOLTA: an enVironment-aware cOntrastive ceLl represenTation leArning for histopathology. Nat Commun 2024; 15:3942. [PMID: 38729933 PMCID: PMC11087497 DOI: 10.1038/s41467-024-48062-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 04/19/2024] [Indexed: 05/12/2024] Open
Abstract
In clinical oncology, many diagnostic tasks rely on the identification of cells in histopathology images. While supervised machine learning techniques require labels, providing manual cell annotations is time-consuming. In this paper, we propose a self-supervised framework (enVironment-aware cOntrastive cell represenTation learning: VOLTA) for cell representation learning in histopathology images using a technique that accounts for the cell's mutual relationship with its environment. We subject our model to extensive experiments on data collected from multiple institutions comprising over 800,000 cells and six cancer types. To showcase the potential of our proposed framework, we apply VOLTA to ovarian and endometrial cancers and demonstrate that our cell representations can be utilized to identify the known histotypes of ovarian cancer and provide insights that link histopathology and molecular subtypes of endometrial cancer. Unlike supervised models, our framework can empower discoveries without any annotation data, even in situations where sample sizes are limited.
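VOLTA's exact objective is specified in the paper; as a generic sketch of the contrastive principle such frameworks build on, an InfoNCE-style loss that pulls matched embedding pairs (e.g. a cell and an augmented or environment-aware view of it) together while pushing other samples apart might look like this (illustrative NumPy only, not the authors' implementation):

```python
import numpy as np

def info_nce(anchors: np.ndarray, positives: np.ndarray, temperature: float = 0.1) -> float:
    """Generic InfoNCE loss: row i of `positives` is the positive for row i of
    `anchors`; all other rows serve as negatives. Embeddings are L2-normalized."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                       # pairwise cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # cross-entropy, matched pairs as targets

rng = np.random.default_rng(0)
cells = rng.normal(size=(8, 16))
# matched pairs (slightly perturbed views) should yield a much lower loss
loss_matched = info_nce(cells, cells + 0.01 * rng.normal(size=(8, 16)))
loss_shuffled = info_nce(cells, rng.normal(size=(8, 16)))
```

Training a representation with such a loss requires no cell-type labels, which is what allows the self-supervised setting described in the abstract.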
Affiliation(s)
- Ramin Nakhli
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Katherine Rich
- Bioinformatics Graduate Program, University of British Columbia, Vancouver, Canada
- Allen Zhang
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Elahe Shenasa
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Amir Hadjifaradji
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Sidney Thiessen
- Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- Katy Milne
- Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- Steven J M Jones
- Canada's Michael Smith Genome Sciences Centre, BC Cancer Research Institute, Vancouver, Canada
- Department of Medical Genetics, University of British Columbia, Vancouver, Canada
- Jessica N McAlpine
- Department of Obstetrics and Gynecology, University of British Columbia, Vancouver, BC, Canada
- Brad H Nelson
- Deeley Research Centre, BC Cancer Agency, Victoria, BC, Canada
- C Blake Gilks
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada.
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada.
- Canada's Michael Smith Genome Sciences Centre, BC Cancer Research Institute, Vancouver, Canada.
43
Ali HR, West RB. Spatial Biology of Breast Cancer. Cold Spring Harb Perspect Med 2024; 14:a041335. [PMID: 38110242 PMCID: PMC11065165 DOI: 10.1101/cshperspect.a041335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Spatial findings have shaped our understanding of breast cancer. In this review, we discuss how spatial methods, including spatial transcriptomics and proteomics, and the resultant understanding of spatial relationships have contributed to concepts regarding cancer progression and treatment. In addition to discussing traditional approaches, we examine how emerging multiplex imaging technologies have contributed to the field and how they might influence future research.
Affiliation(s)
- H Raza Ali
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Cambridge CB2 0RE, United Kingdom
- Robert B West
- Department of Pathology, Stanford University Medical Center, Stanford, California 94305, USA
44
Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. [PMID: 38507894 DOI: 10.1016/j.media.2024.103143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 02/14/2024] [Accepted: 03/12/2024] [Indexed: 03/22/2024]
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, the task is challenging due to variability in nuclei staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been extensively used for this task, we explore the potential of Transformer-based networks in combination with large-scale pre-training in this domain. We therefore introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
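Panoptic quality, the headline metric in this abstract, jointly scores detection and segmentation: matched prediction/ground-truth pairs (IoU > 0.5) contribute their IoU, while unmatched predictions and ground truths are penalized. A minimal sketch of the standard definition, with hypothetical counts rather than the paper's results:

```python
def panoptic_quality(matched_ious, n_fp: int, n_fn: int) -> float:
    """PQ = (sum of IoUs over matched pairs) / (TP + FP/2 + FN/2).
    `matched_ious` holds the IoU of each matched (IoU > 0.5) pair."""
    tp = len(matched_ious)
    denom = tp + 0.5 * n_fp + 0.5 * n_fn
    if denom == 0:
        return 0.0
    return sum(matched_ious) / denom

# hypothetical example: 3 matched nuclei, 1 spurious prediction, 1 missed nucleus
pq = panoptic_quality([0.9, 0.8, 0.7], n_fp=1, n_fn=1)
# sum of IoUs = 2.4, denominator = 3 + 0.5 + 0.5 = 4 → PQ = 0.6
```

PQ factors into a segmentation-quality term (mean IoU of matches) times a detection-quality term (an F1-like ratio), which is why it is reported alongside the F1 detection score.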
Affiliation(s)
- Fabian Hörst
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany.
- Moritz Rempe
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Lukas Heine
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel
- Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke
- West German Cancer Center, partner site Essen, a partnership between German Cancer Research Center (DKFZ) and University Hospital Essen, University Hospital Essen (AöR), 45147 Essen, Germany; Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald
- Department of Urology, West German Cancer Center, 45147 University Hospital Essen (AöR), Germany; Princess Margaret Cancer Centre, M5G 2M9 Toronto, Ontario, Canada
- Jan Egger
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Jens Kleesiek
- Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany; Department of Physics, TU Dortmund University, 44227 Dortmund, Germany
45
Boukerma R, Boucheham B, Bougueroua S. Optimized Deep Features for Colon Histology Image Retrieval. 2024 6th International Conference on Pattern Analysis and Intelligent Systems (PAIS) 2024:1-8. [DOI: 10.1109/pais62114.2024.10541159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
Affiliation(s)
- Rahima Boukerma
- University of 20 Août 1955, Laboratoire de Recherche en Electronique de Skikda (LRES), Department of Computer Science, Skikda, Algeria
- Bachir Boucheham
- University of 20 Août 1955, Laboratoire de Recherche en Electronique de Skikda (LRES), Department of Computer Science, Skikda, Algeria
- Salah Bougueroua
- University of 20 Août 1955, Laboratoire de Recherche en Electronique de Skikda (LRES), Department of Computer Science, Skikda, Algeria
46
Alsubai S. Transfer learning based approach for lung and colon cancer detection using local binary pattern features and explainable artificial intelligence (AI) techniques. PeerJ Comput Sci 2024; 10:e1996. [PMID: 38660170 PMCID: PMC11042027 DOI: 10.7717/peerj-cs.1996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 03/27/2024] [Indexed: 04/26/2024]
Abstract
Cancer, a life-threatening disorder caused by genetic abnormalities and metabolic irregularities, is a substantial health danger, with lung and colon cancer being major contributors to mortality. Histopathological identification is critical in directing effective treatment regimens for these cancers; the earlier these disorders are identified, the lower the risk of death. The use of machine learning and deep learning approaches has the potential to speed up cancer diagnosis by allowing researchers to analyse large patient databases quickly and affordably. This study introduces the Inception-ResNetV2 model with strategically incorporated local binary pattern (LBP) features to improve diagnostic accuracy for lung and colon cancer identification. The model is trained on histopathological images, and the integration of deep learning and texture-based features has demonstrated exceptional performance with 99.98% accuracy. Importantly, the study employs explainable artificial intelligence (AI) through SHapley Additive exPlanations (SHAP) to unravel the complex inner workings of the deep learning model, providing transparency in decision-making processes. This study highlights the potential to revolutionize cancer diagnosis in an era of more accurate and reliable medical assessments.
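Local binary patterns, the texture features this study fuses with deep features, encode each pixel's neighborhood as a bit string. A minimal 3x3, 8-neighbor sketch (illustrative only, not the study's implementation, which may use a different neighbor ordering or radius):

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbor local binary pattern on a 2D grayscale image.
    Each interior pixel gets an 8-bit code: bit i is set when the i-th
    neighbor (clockwise from top-left) is >= the center pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbor offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= center).astype(np.uint8) << bit
    return out

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
code = lbp_3x3(img)[0, 0]
# center 5; neighbors >= 5 are 6, 9, 8, 7 → bits 3-6 set → 8+16+32+64 = 120
```

Histograms of such codes summarize local texture and are what typically gets concatenated with CNN features in hybrid pipelines of this kind.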
Affiliation(s)
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
47
Mao Y, Zhou X, Hu W, Yang W, Cheng Z. Dynamic video recognition for cell-encapsulating microfluidic droplets. Analyst 2024; 149:2147-2160. [PMID: 38441128 DOI: 10.1039/d4an00022f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/26/2024]
Abstract
Droplet microfluidics is a highly sensitive and high-throughput technology extensively utilized in biomedical applications, such as single-cell sequencing and cell screening. However, its performance is highly influenced by the droplet size and the single-cell encapsulation rate (which follows a random distribution), creating an urgent need for quality control. Machine learning has the potential to revolutionize droplet microfluidics, but it requires tedious pixel-level annotation for network training. This paper investigates application software based on a weakly supervised cell-counting network (WSCApp) for video recognition of microdroplets. We demonstrated its real-time performance in video processing of microfluidic droplets and further identified the locations of droplets and encapsulated cells. We verified our methods on droplets encapsulating six types of cells/beads, collected from various microfluidic structures. Quantitative experimental results showed that our approach can not only accurately distinguish droplet encapsulations (micro-F1 score > 0.94), but also locate each cell without any supervised location information. Furthermore, fine-tuning transfer learning on the pre-trained model also significantly reduced (>80%) the annotation effort. This software provides a user-friendly and assistive annotation platform for the quantitative assessment of cell-encapsulating microfluidic droplets.
Affiliation(s)
- Yuanhang Mao
- Department of Automation, Tsinghua University, Beijing, 100084, China.
- Xiao Zhou
- Department of Automation, Tsinghua University, Beijing, 100084, China.
- Weiguo Hu
- Department of Automation, Tsinghua University, Beijing, 100084, China.
- Weiyang Yang
- Department of Automation, Tsinghua University, Beijing, 100084, China.
- Zhen Cheng
- Department of Automation, Tsinghua University, Beijing, 100084, China.
48
Luna M, Chikontwe P, Park SH. Enhanced Nuclei Segmentation and Classification via Category Descriptors in the SAM Model. Bioengineering (Basel) 2024; 11:294. [PMID: 38534568 DOI: 10.3390/bioengineering11030294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2024] [Revised: 03/13/2024] [Accepted: 03/19/2024] [Indexed: 03/28/2024] Open
Abstract
Segmenting and classifying nuclei in H&E histopathology images is often limited by the long-tailed distribution of nuclei types. However, the strong generalization ability of image segmentation foundation models such as the Segment Anything Model (SAM) can help improve the detection quality of rare nuclei types. In this work, we introduce category descriptors to perform nuclei segmentation and classification by prompting the SAM model. We close the domain gap between histopathology and natural scene images by aligning features in low-level space while preserving the high-level representations of SAM. We performed extensive experiments on the Lizard dataset, validating the ability of our model to perform automatic nuclei segmentation and classification, especially for rare nuclei types, where it achieved a significant detection improvement of up to 12% in F1 score. Our model also maintains compatibility with manual point prompts for interactive refinement during inference without requiring any additional training.
Affiliation(s)
- Miguel Luna
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Philip Chikontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- AI Graduate School, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
49
Mahbod A, Polak C, Feldmann K, Khan R, Gelles K, Dorffner G, Woitek R, Hatamikia S, Ellinger I. NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Sci Data 2024; 11:295. [PMID: 38486039 PMCID: PMC10940572 DOI: 10.1038/s41597-024-03117-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Accepted: 03/04/2024] [Indexed: 03/18/2024] Open
Abstract
In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models need fully annotated datasets for training, which are challenging to acquire, especially in the medical domain. In this work, we release one of the biggest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotations are impossible, even for human experts. The dataset and detailed step-by-step instructions to generate related segmentation masks are publicly available on the respective repositories.
Affiliation(s)
- Amirreza Mahbod
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria.
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria.
- Christine Polak
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Feldmann
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Rumsha Khan
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Katharina Gelles
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, 1090, Austria
- Ramona Woitek
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Austrian Center for Medical Innovation and Technology, Wiener Neustadt, 2700, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
50
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643 DOI: 10.1016/j.compbiomed.2023.107912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Revised: 11/02/2023] [Accepted: 12/24/2023] [Indexed: 01/16/2024]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.