1
Debsarkar SS, Aronow B, Prasath VBS. Advancements in automated nuclei segmentation for histopathology using you only look once-driven approaches: A systematic review. Comput Biol Med 2025; 190:110072. [PMID: 40138968] [DOI: 10.1016/j.compbiomed.2025.110072] [Received: 10/25/2024] [Revised: 03/05/2025] [Accepted: 03/21/2025] [Indexed: 03/29/2025]
Abstract
Histopathology image analysis plays a pivotal role in disease diagnosis and treatment planning, relying heavily on accurate nuclei segmentation for extracting vital cellular information. In recent years, artificial intelligence (AI) and in particular deep learning models have been applied successfully in solving computational pathology image analysis tasks. The You Only Look Once (YOLO) object detection framework, which is based on a convolutional neural network (CNN) architecture, has gained traction across various domains for its real-time processing capabilities. This systematic review aims to comprehensively explore and evaluate the advancements, challenges, and applications of YOLO-based methodologies in nuclei segmentation within the domain of histopathological images. The review encompasses a structured analysis of recent literature, focusing on the utilization of YOLO variants for nuclei segmentation. Key methodologies, training strategies, dataset specifics, and performance metrics are evaluated to elucidate the strengths and limitations of YOLO in this context. Additionally, the review highlights the unique characteristics of YOLO that enable efficient object detection and delineation of nuclei structures, offering a comparative analysis against traditional segmentation approaches. This systematic review underscores the promising outcomes achieved through YOLO-based architectures, emphasizing their potential for accurate and rapid nuclei segmentation. Furthermore, it identifies persistent challenges such as handling variance in nuclei appearance, optimizing model architectures for histopathological images, and improving generalization across diverse datasets. Insights derived from this review can provide a foundation for future research directions and enhancements in nuclei segmentation methodologies using YOLO within histopathology, fostering advancements in disease diagnosis and biomedical research.
Affiliation(s)
- Shyam Sundar Debsarkar
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, OH, 45229, USA; Department of Computer Science, University of Cincinnati, OH, 45221, USA.
- Bruce Aronow
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, OH, 45229, USA; Department of Pediatrics, College of Medicine, University of Cincinnati, OH, 45257, USA; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, OH, 45267, USA.
- V B Surya Prasath
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, OH, 45229, USA; Department of Pediatrics, College of Medicine, University of Cincinnati, OH, 45257, USA; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, OH, 45267, USA; Department of Computer Science, University of Cincinnati, OH, 45221, USA.
2
Chauhan NK, Singh K, Kumar A, Mishra A, Gupta SK, Mahajan S, Kadry S, Kim J. A hybrid learning network with progressive resizing and PCA for diagnosis of cervical cancer on WSI slides. Sci Rep 2025; 15:12801. [PMID: 40229435] [PMCID: PMC11997219] [DOI: 10.1038/s41598-025-97719-4] [Received: 01/26/2024] [Accepted: 04/07/2025] [Indexed: 04/16/2025]
Abstract
Current artificial intelligence (AI) trends are revolutionizing medical image processing, greatly improving cervical cancer diagnosis. Machine learning (ML) algorithms can discover patterns and anomalies in medical images, whereas deep learning (DL) methods, specifically convolutional neural networks (CNNs), are extremely accurate at identifying malignant lesions. Deep models that have been pre-trained and tailored through transfer learning and fine-tuning become faster and more effective, even when data is scarce. This paper implements a state-of-the-art Hybrid Learning Network that combines the Progressive Resizing approach and Principal Component Analysis (PCA) for enhanced cervical cancer diagnostics on whole slide images (WSIs). ResNet-152 and VGG-16, two fine-tuned DL models, are employed together with transfer learning to train on augmented and progressively resized training data with dimensions of 224 × 224, 512 × 512, and 1024 × 1024 pixels for enhanced feature extraction. PCA is subsequently employed to process the combined features extracted from the two DL models and reduce the dimensionality of the feature set. Furthermore, two ML methods, Support Vector Machine (SVM) and Random Forest (RF) models, are trained on this reduced feature set, and their predictions are integrated using a majority voting approach to produce the final classification results, thereby enhancing overall accuracy and reliability. The accuracy of the suggested framework on the SIPaKMeD data is 99.29% for two-class classification and 98.47% for five-class classification. Furthermore, it achieves 100% accuracy for four-class categorization on the LBC dataset.
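The feature-fusion and voting stage described in this abstract can be sketched with scikit-learn; the feature arrays, their dimensions, and the classifier settings below are illustrative placeholders, not the paper's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for deep features extracted by the two fine-tuned CNNs
# (ResNet-152 and VGG-16); shapes are invented for illustration.
feats_resnet = rng.normal(size=(200, 512))
feats_vgg = rng.normal(size=(200, 256))
labels = rng.integers(0, 2, size=200)  # two-class case

# Concatenate the two feature sets, then reduce dimensionality with PCA.
fused = np.concatenate([feats_resnet, feats_vgg], axis=1)
reduced = PCA(n_components=50, random_state=0).fit_transform(fused)

X_tr, X_te, y_tr, y_te = train_test_split(reduced, labels, random_state=0)

# SVM and Random Forest predictions combined by majority (hard) voting.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
preds = ensemble.predict(X_te)
print(preds.shape)  # one vote-resolved label per test sample
```

Hard voting resolves each test sample by the majority of the SVM and RF votes, which is the integration step the abstract credits with improved accuracy and reliability.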
Affiliation(s)
- Nitin Kumar Chauhan
- Department of ECE, Indore Institute of Science & Technology, Indore, 453331, India
- Krishna Singh
- DSEU Okhla Campus-I, Formerly G. B. Pant Engineering College, New Delhi, 110020, India
- Amit Kumar
- Department of Electronics Engineering, Indian Institute of Technology (BHU), Varanasi, 221005, India
- Ashutosh Mishra
- Department of Electrical and Electronics Engineering, Birla Institute of Technology and Science (BITS), Pilani Dubai Campus, 345055, Dubai International Academic City, Dubai, United Arab Emirates
- Sachin Kumar Gupta
- Department of Electronics and Communication Engineering, Central University of Jammu, Samba, Jammu, 181143, India
- Shubham Mahajan
- Amity School of Engineering & Technology, Amity University, Haryana, India.
- Seifedine Kadry
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Noroff University College, Kristiansand, Norway
- Jungeun Kim
- Department of Computer Engineering, Inha University, Incheon, Republic of Korea.
3
Shirae S, Debsarkar SS, Kawanaka H, Aronow B, Prasath VBS. Multimodal Ensemble Fusion Deep Learning Using Histopathological Images and Clinical Data for Glioma Subtype Classification. IEEE Access 2025; 13:57780-57797. [PMID: 40260100] [PMCID: PMC12011355] [DOI: 10.1109/access.2025.3556713] [Indexed: 04/23/2025]
Abstract
Glioma is the most common malignant tumor of the central nervous system, and diffuse glioma is classified as grades II-IV by the World Health Organization (WHO). In The Cancer Genome Atlas (TCGA) glioma dataset, grade II and III gliomas are classified as low-grade glioma (LGG), and grade IV gliomas as glioblastoma multiforme (GBM). In clinical practice, the survival and treatment of glioma patients depend on properly diagnosing the subtype. Against this background, there has been much research on glioma over the years. Among these studies, the advent and evolution of whole slide images (WSIs) have led to many attempts to support diagnosis by image analysis. At the same time, due to the disease complexity of glioma, multimodal analysis using various types of data rather than a single data type has been attracting attention. In our proposed method, multiple deep learning models are used to extract features from histopathology images, and the extracted image features are concatenated with those of the clinical data in a fusion approach. We then perform patch-level classification by machine learning (ML) using the concatenated features. Based on the performances of the deep learning models, we ensemble feature sets from the top three models and perform further classifications. In experiments with our proposed ensemble fusion AI (EFAI) approach using WSIs and clinical data of diffuse glioma patients from the TCGA dataset, the proposed multimodal ensemble fusion method achieved a classification accuracy of 0.936 with an area under the curve (AUC) value of 0.967 when tested on a balanced dataset of 240 GBM and 240 LGG patients. On an imbalanced dataset of 141 GBM and 242 LGG patients, the proposed method obtained an accuracy of 0.936 and an AUC of 0.967. Our proposed ensemble fusion approach significantly outperforms classification using histopathology images alone with deep learning models. Therefore, our approach can be used to support the diagnosis of glioma patients and can lead to better diagnoses.
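The early-fusion step — concatenating deep image features with clinical variables before a conventional ML classifier — can be illustrated in a few lines; the arrays, feature dimensions, and the logistic-regression classifier are stand-in assumptions rather than the authors' exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_patches = 300
img_feats = rng.normal(size=(n_patches, 128))  # per-patch deep features (stand-in)
clin_feats = rng.normal(size=(n_patches, 8))   # patient clinical variables, repeated per patch
y = rng.integers(0, 2, size=n_patches)         # 0 = LGG, 1 = GBM (illustrative)

# Early fusion: concatenate the modalities into one feature vector per patch.
fused = np.concatenate([img_feats, clin_feats], axis=1)

# Patch-level classification on the fused features, evaluated by cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fused, y, cv=5)
print(fused.shape, scores.shape)
```

In the paper this fusion is further ensembled across the top three deep feature extractors; the sketch shows only a single fused feature set.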
Affiliation(s)
- Satoshi Shirae
- Graduate School of Engineering, Mie University, Tsu, Mie 514-8507, Japan
- Hiroharu Kawanaka
- Graduate School of Engineering, Mie University, Tsu, Mie 514-8507, Japan
- Bruce Aronow
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH 45257, USA
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA
- V B Surya Prasath
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH 45257, USA
- Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH 45267, USA
4
Zehnder P, Feng J, Nguyen T, Shen P, Sullivan R, Fuji RN, Hu F. Diagnostic classification in toxicologic pathology using attention-guided weak supervision and whole slide image features: a pilot study in rat livers. Sci Rep 2025; 15:4202. [PMID: 39905121] [PMCID: PMC11794696] [DOI: 10.1038/s41598-025-86615-6] [Received: 12/07/2023] [Accepted: 01/13/2025] [Indexed: 02/06/2025]
Abstract
The diagnostic classification of digitized tissue images based on histopathologic lesions present in whole slide images (WSI) is a significant task that eludes modern image classification techniques. Even with advanced methods designed for digital histopathology, the domain of toxicologic pathology presents challenges in that histopathologic features may be at times complex, subtle, and/or rare. We propose an innovative weakly supervised learning method that leverages minimal annotations, a state-of-the-art self-supervised vision transformer for embedding extraction, and a novel guided attention mechanism that is better suited for heavily imbalanced datasets typical in toxicologic pathology. Our model demonstrates improvements in diagnostic classification and attention heatmap quality over the previously described clustering-constrained-attention multiple-instance learning method on several lesion classes in rat livers (38% improvement in AUC). We also demonstrate how an ensemble of binary classifiers improves interpretability and allows for multiclass classification and the classification of diagnostic regions of interest in each slide. The improved classification performance and higher contrast heatmaps better support toxicologic pathologists' histopathology analysis and will enable more efficient workflows as they are further refined and integrated into routine use.
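The one-vs-rest pattern behind the ensemble of binary classifiers mentioned above can be sketched as follows; the embeddings, label set, and base classifier are invented for illustration and do not reflect the study's actual models or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

X = rng.normal(size=(240, 64))    # stand-in slide-level embeddings
y = rng.integers(0, 3, size=240)  # three hypothetical lesion classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One binary classifier per lesion class; the highest-scoring class wins,
# and the per-class scores remain available for interpretability.
ovr = OneVsRestClassifier(LogisticRegression(max_iter=1000))
ovr.fit(X_tr, y_tr)
pred = ovr.predict(X_te)
print(len(ovr.estimators_))  # one fitted binary classifier per class
```

Keeping the binary classifiers separate, as the abstract notes, makes each class's decision inspectable on its own while still supporting a multiclass verdict.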
Affiliation(s)
- Philip Zehnder
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Jeffrey Feng
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Trung Nguyen
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Philip Shen
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Ruth Sullivan
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Reina N Fuji
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Fangyao Hu
- Department of Safety Assessment, Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA.
5
Nunes JD, Montezuma D, Oliveira D, Pereira T, Cardoso JS. A survey on cell nuclei instance segmentation and classification: Leveraging context and attention. Med Image Anal 2025; 99:103360. [PMID: 39383642] [DOI: 10.1016/j.media.2024.103360] [Received: 08/15/2023] [Revised: 08/26/2024] [Accepted: 09/27/2024] [Indexed: 10/11/2024]
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while facilitating the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet, due to high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels and pay attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task; to fully exploit these mechanisms in ANNs, a deeper scientific understanding of these methods must first be developed.
Affiliation(s)
- João D Nunes
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal.
- Diana Montezuma
- IMP Diagnostics, Praça do Bom Sucesso, 4150-146 Porto, Portugal; Cancer Biology and Epigenetics Group, Research Center of IPO Porto (CI-IPOP)/[RISE@CI-IPOP], Portuguese Oncology Institute of Porto (IPO Porto)/Porto Comprehensive Cancer Center (Porto.CCC), R. Dr. António Bernardino de Almeida, 4200-072, Porto, Portugal; Doctoral Programme in Medical Sciences, School of Medicine and Biomedical Sciences - University of Porto (ICBAS-UP), Porto, Portugal
- Tania Pereira
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; FCTUC - Faculty of Science and Technology, University of Coimbra, Coimbra, 3004-516, Portugal
- Jaime S Cardoso
- INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, R. Dr. Roberto Frias, Porto, 4200-465, Portugal; University of Porto - Faculty of Engineering, R. Dr. Roberto Frias, Porto, 4200-465, Portugal
6
Gao W, Bai Y, Yang Y, Jia L, Mi Y, Cui W, Liu D, Shakoor A, Zhao L, Li J, Luo T, Sun D, Jiang Z. Intelligent sensing for the autonomous manipulation of microrobots toward minimally invasive cell surgery. Appl Phys Rev 2024; 11. [DOI: 10.1063/5.0211141] [Indexed: 01/02/2025]
Abstract
The physiology and pathogenesis of biological cells have drawn enormous research interest. Benefiting from the rapid development of microfabrication and microelectronics, miniaturized robots with tool sizes below micrometers have been widely studied for manipulating biological cells in vitro and in vivo. Traditionally, the complex physiological environment and biological fragility have required human intervention to fulfill these tasks, resulting in high risks of irreversible structural or functional damage and even clinical complications. Intelligent sensing devices and approaches have recently been integrated within robotic systems for environment visualization and interaction force control. As a consequence, microrobots can be autonomously manipulated with visual and interaction force feedback, greatly improving accuracy, efficiency, and damage regulation for minimally invasive cell surgery. This review first explores advanced tactile sensing in the aspects of sensing principles, design methodologies, and underlying physics. It also comprehensively discusses recent progress on visual sensing, where the imaging instruments and processing methods are summarized and analyzed. It then introduces autonomous micromanipulation practices utilizing visual and tactile sensing feedback and their corresponding applications in minimally invasive surgery. Finally, this work highlights the remaining challenges of current robotic micromanipulation and their future directions in clinical trials, providing valuable references for this field.
Affiliation(s)
- Wendi Gao
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Yunfei Bai
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Yujie Yang
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Lanlan Jia
- Department of Electronic Engineering, Ocean University of China, Qingdao 266400
- Yingbiao Mi
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Wenji Cui
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Dehua Liu
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Adnan Shakoor
- Department of Control and Instrumentation Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261
- Libo Zhao
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Junyang Li
- Department of Electronic Engineering, Ocean University of China, Qingdao 266400
- Tao Luo
- Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen 361102
- Dong Sun
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999099
- Zhuangde Jiang
- State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
7
Fiorin A, López Pablo C, Lejeune M, Hamza Siraj A, Della Mea V. Enhancing AI Research for Breast Cancer: A Comprehensive Review of Tumor-Infiltrating Lymphocyte Datasets. J Imaging Inform Med 2024; 37:2996-3008. [PMID: 38806950] [PMCID: PMC11612116] [DOI: 10.1007/s10278-024-01043-8] [Received: 11/16/2023] [Revised: 01/19/2024] [Accepted: 02/07/2024] [Indexed: 05/30/2024]
Abstract
The field of immunology is fundamental to our understanding of the intricate dynamics of the tumor microenvironment. In particular, tumor-infiltrating lymphocyte (TIL) assessment emerges as an essential aspect of breast cancer cases. To gain comprehensive insights, the quantification of TILs through computer-assisted pathology (CAP) tools has become a prominent approach, employing advanced artificial intelligence models based on deep learning techniques. The successful recognition of TILs requires the models to be trained, a process that demands access to annotated datasets. Unfortunately, this task is hampered not only by the scarcity of such datasets, but also by the time-consuming nature of the annotation phase required to create them. Our review endeavors to examine publicly accessible datasets pertaining to the TIL domain and thereby become a valuable resource for the TIL community. The overall aim of the present review is thus to make it easier to train and validate current and upcoming CAP tools for TIL assessment by inspecting and evaluating existing publicly available online datasets.
Affiliation(s)
- Alessio Fiorin
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain.
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain.
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain.
- Carlos López Pablo
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain.
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain.
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain.
- Marylène Lejeune
- Oncological Pathology and Bioinformatics Research Group, Institut d'Investigació Sanitària Pere Virgili (IISPV), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Pathology, Hospital de Tortosa Verge de la Cinta (HTVC), Institut Català de la Salut (ICS), C/Esplanetes no 14, 43500, Tortosa, Spain
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili (URV), Tarragona, Spain
- Ameer Hamza Siraj
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Vincenzo Della Mea
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
8
Nakagaki R, Debsarkar SS, Kawanaka H, Aronow BJ, Prasath VBS. Deep learning-based IDH1 gene mutation prediction using histopathological imaging and clinical data. Comput Biol Med 2024; 179:108902. [PMID: 39038392] [DOI: 10.1016/j.compbiomed.2024.108902] [Received: 02/03/2024] [Revised: 06/27/2024] [Accepted: 07/14/2024] [Indexed: 07/24/2024]
Abstract
In the field of histopathology, many studies on the classification of whole slide images (WSIs) using artificial intelligence (AI) technology have been reported. We have studied disease progression assessment in glioma. Adult-type diffuse gliomas, a type of brain tumor, are classified into astrocytoma, oligodendroglioma, and glioblastoma. Astrocytoma and oligodendroglioma are also called low-grade glioma (LGG), and glioblastoma is also called glioblastoma multiforme (GBM). LGG patients frequently have isocitrate dehydrogenase (IDH) mutations, and patients with IDH mutations have been reported to have a better prognosis than those without. IDH mutations are therefore an essential indicator for the classification of glioma, which is why we focused on the IDH1 mutation. In this paper, we aimed to classify the presence or absence of the IDH1 mutation using WSIs and clinical data of glioma patients. Ensemble learning between the WSI model and the clinical data model is used to classify the presence or absence of the IDH1 mutation. Using slide-level labels, we combined patch-based imaging information from hematoxylin and eosin (H&E)-stained WSIs with clinical data, applying deep image feature extraction and a machine learning classifier to predict IDH1 mutation versus wild-type across a cohort of 546 patients. We experimented with different deep learning (DL) models, including attention-based multiple instance learning (ABMIL) models, on the imaging data, along with a gradient boosting machine (LightGBM) for the clinical variables. Further, we used hyperparameter optimization to find the best overall model in terms of classification accuracy. We obtained the highest area under the curve (AUC) of 0.823 for WSIs, 0.782 for clinical data, and 0.852 for the ensemble, using a MaxViT and LightGBM combination. Our experimental results indicate that the overall accuracy of AI models can be improved by using both clinical data and images.
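A minimal sketch of ensembling an imaging model with a clinical-data model by averaging predicted class probabilities is shown below; scikit-learn's GradientBoostingClassifier stands in for LightGBM, and all data, dimensions, and base models are synthetic placeholders rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

n = 200
img_feats = rng.normal(size=(n, 64))   # stand-in for deep WSI features
clin_feats = rng.normal(size=(n, 10))  # stand-in clinical variables
y = rng.integers(0, 2, size=n)         # 1 = IDH1-mutant, 0 = wild-type (illustrative)

idx = np.arange(n)
tr, te = train_test_split(idx, random_state=0)

# One model per modality, trained independently.
img_model = LogisticRegression(max_iter=1000).fit(img_feats[tr], y[tr])
clin_model = GradientBoostingClassifier(random_state=0).fit(clin_feats[tr], y[tr])

# Soft ensemble: average the two models' class probabilities, then decide.
p = (img_model.predict_proba(img_feats[te]) +
     clin_model.predict_proba(clin_feats[te])) / 2
pred = p.argmax(axis=1)
print(p.shape, pred.shape)
```

Averaging probabilities is one common way to realize the "ensemble learning between the WSI model and the clinical data model" the abstract describes; the paper's exact combination rule may differ.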
Affiliation(s)
- Riku Nakagaki
- Graduate School of Engineering, Mie University, 1577 Kurima-machiya, Tsu, Mie 514-8507, Japan.
- Hiroharu Kawanaka
- Graduate School of Engineering, Mie University, 1577 Kurima-machiya, Tsu, Mie 514-8507, Japan.
- Bruce J Aronow
- Department of Computer Science, University of Cincinnati, OH 45221, USA; Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA; Department of Pediatrics, University of Cincinnati, OH 45267, USA; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, OH 45267, USA.
- V B Surya Prasath
- Department of Computer Science, University of Cincinnati, OH 45221, USA; Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA; Department of Pediatrics, University of Cincinnati, OH 45267, USA; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, OH 45267, USA.
9
Silva AB, Martins AS, Tosta TAA, Loyola AM, Cardoso SV, Neves LA, de Faria PR, do Nascimento MZ. OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification. J Imaging Inform Med 2024; 37:1691-1710. [PMID: 38409608] [PMCID: PMC11589032] [DOI: 10.1007/s10278-024-01041-w] [Received: 07/08/2023] [Revised: 02/03/2024] [Accepted: 02/06/2024] [Indexed: 02/28/2024]
Abstract
Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as an auxiliary tool to aid specialists in this process. Usually, experiments are performed on private data, making it difficult to reproduce the results. There are several public datasets of histological images, but studies focused on oral dysplasia images use inaccessible datasets, which prevents the improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized among the lesion grades, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Experiments were also carried out to illustrate the potential of the proposed dataset in the classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were employed on the images, which were pre-processed with stain normalization methods. Then, the segmented and non-segmented images were classified with CNN architectures and machine learning algorithms. The data obtained through these processes are available in the dataset. The segmentation stage yielded an F1-score of 0.83, obtained with the U-Net model using ResNet-50 as a backbone. At the classification stage, the best result was achieved with the Random Forest method, with an accuracy of 94.22%. The results show that segmentation contributed to the classification results, but further studies are needed to improve these stages of automated diagnosis. The original, gold standard, normalized, and segmented images are publicly available and may be used for the improvement of clinical applications of CAD methods on oral epithelial dysplasia tissue images.
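The segment-then-classify pipeline followed in these experiments can be outlined schematically; the hand-crafted statistics and random masks below are a crude stand-in for the CNN-based segmentation and features used in the study, with all data synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

def extract_features(image, mask):
    """Toy per-image features computed on the segmented (masked) region."""
    region = image[mask > 0]
    return np.array([region.mean(), region.std(), mask.mean()])

# Synthetic 32x32 grayscale patches with random binary "nuclei" masks.
images = rng.random(size=(60, 32, 32))
masks = (rng.random(size=(60, 32, 32)) > 0.7).astype(np.uint8)
labels = rng.integers(0, 3, size=60)  # e.g. three dysplasia grades (illustrative)

# Stage 1 output (masks) feeds stage 2: features from segmented regions.
X = np.stack([extract_features(im, mk) for im, mk in zip(images, masks)])

# Stage 2: classify on the segmentation-derived features, as with the
# Random Forest classifier reported in the abstract.
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print(X.shape)  # one small feature vector per segmented patch
```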
Affiliation(s)
- Adriano Barbosa Silva
- Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
- Alessandro Santana Martins
- Federal Institute of Triângulo Mineiro (IFTM), R. Belarmino Vilela Junqueira, S/N, 38305-200, Ituiutaba, MG, Brazil
- Thaína Aparecida Azevedo Tosta
- Science and Technology Institute, Federal University of São Paulo (UNIFESP), Av. Cesare Mansueto Giulio Lattes, 1201, 12247-014, São José dos Campos, SP, Brazil
- Adriano Mota Loyola
- School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará 1720, 38405-320, Uberlândia, MG, Brazil
- Sérgio Vitorino Cardoso
- School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará 1720, 38405-320, Uberlândia, MG, Brazil
- Leandro Alves Neves
- Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP), R. Cristóvão Colombo, 2265, 38305-200, São José do Rio Preto, SP, Brazil
- Paulo Rogério de Faria
- Department of Histology and Morphology, Institute of Biomedical Science, Federal University of Uberlândia (UFU), Av. Amazonas, S/N, 38405-320, Uberlândia, MG, Brazil
- Marcelo Zanchetta do Nascimento
- Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
10
Jiang X, Wang S, Guo L, Zhu B, Wen Z, Jia L, Xu L, Xiao G, Li Q. iIMPACT: integrating image and molecular profiles for spatial transcriptomics analysis. Genome Biol 2024; 25:147. [PMID: 38844966 PMCID: PMC11514947 DOI: 10.1186/s13059-024-03289-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Accepted: 05/23/2024] [Indexed: 07/04/2024] Open
Abstract
Current clustering analysis of spatial transcriptomics data primarily relies on molecular information and fails to fully exploit the morphological features present in histology images, leading to compromised accuracy and interpretability. To overcome these limitations, we have developed a multi-stage statistical method called iIMPACT. It identifies and defines histology-based spatial domains based on AI-reconstructed histology images and the spatial context of gene expression measurements, and detects domain-specific differentially expressed genes. Through multiple case studies, we demonstrate that iIMPACT outperforms existing methods in accuracy and interpretability and provides insights into the cellular spatial organization and landscape of functional genes within spatial transcriptomics data.
Affiliation(s)
- Xi Jiang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Statistics and Data Science, Southern Methodist University, Dallas, TX, USA
- Shidan Wang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Lei Guo
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bencong Zhu
- Department of Statistics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Liwei Jia
- Department of Pathology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Lin Xu
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Qiwei Li
- Department of Mathematical Sciences, The University of Texas at Dallas, Richardson, TX, USA
11
Yücel Z, Akal F, Oltulu P. Automated AI-based grading of neuroendocrine tumors using Ki-67 proliferation index: comparative evaluation and performance analysis. Med Biol Eng Comput 2024; 62:1899-1909. [PMID: 38409645 DOI: 10.1007/s11517-024-03045-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Accepted: 02/03/2024] [Indexed: 02/28/2024]
Abstract
Early detection is critical for successfully diagnosing cancer, and timely analysis of diagnostic tests is increasingly important. In the context of neuroendocrine tumors, the Ki-67 proliferation index serves as a fundamental biomarker, aiding pathologists in grading and diagnosing these tumors based on histopathological images. The appropriate treatment plan for the patient is determined based on the tumor grade. An artificial intelligence-based method is proposed to aid pathologists in the automated calculation and grading of the Ki-67 proliferation index. The proposed system first performs preprocessing to enhance image quality. Then, segmentation is performed using the U-Net architecture, a deep learning model, to separate the nuclei from the background. The identified nuclei are then evaluated as Ki-67 positive or negative based on basic color space information and other features. The Ki-67 proliferation index is then calculated, and the neuroendocrine tumor is graded accordingly. The proposed system's performance was evaluated on a dataset obtained from the Department of Pathology at Meram Faculty of Medicine Hospital, Necmettin Erbakan University. The results of the pathologist and the proposed system were compared, and the proposed system achieved an accuracy of 95% in tumor grading relative to the pathologist's report.
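The Ki-67 proliferation index itself is a simple ratio: Ki-67-positive tumor nuclei divided by the total tumor nuclei counted, with the grade then read off the WHO cut-offs for neuroendocrine tumors (<3% G1, 3-20% G2, >20% G3). A hedged sketch of that final step only (the counts are illustrative; the paper's segmentation and positivity classification are not reproduced here):

```python
def ki67_index(positive: int, total: int) -> float:
    """Ki-67 proliferation index as a percentage of counted tumor nuclei."""
    if total <= 0:
        raise ValueError("total nuclei count must be positive")
    return 100.0 * positive / total

def net_grade(index_pct: float) -> str:
    """WHO grading cut-offs for neuroendocrine tumors."""
    if index_pct < 3.0:
        return "G1"
    if index_pct <= 20.0:
        return "G2"
    return "G3"

idx = ki67_index(positive=120, total=1000)  # hypothetical counts
print(idx, net_grade(idx))  # 12.0 G2
```

In practice the counting region matters (hot spots of at least ~500 tumor cells are commonly recommended), which is exactly why automating the nucleus counting step is attractive.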
Affiliation(s)
- Zehra Yücel
- Necmettin Erbakan University, Department of Computer Technologies, Konya, Turkey
- Hacettepe University, Graduate School of Science and Engineering, Ankara, Turkey
- Fuat Akal
- Hacettepe University, Faculty of Engineering, Department of Computer Engineering, Ankara, Turkey
- Pembe Oltulu
- Necmettin Erbakan University, Faculty of Medicine, Department of Pathology, Konya, Turkey
12
Han X, Liu Y, Zhang S, Li L, Zheng L, Qiu L, Chen J, Zhan Z, Wang S, Ma J, Kang D, Chen J. Improving the diagnosis of ductal carcinoma in situ with microinvasion without immunohistochemistry: An innovative method with H&E-stained and multiphoton microscopy images. Int J Cancer 2024; 154:1802-1813. [PMID: 38268429 DOI: 10.1002/ijc.34855] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 12/12/2023] [Accepted: 12/21/2023] [Indexed: 01/26/2024]
Abstract
Ductal carcinoma in situ with microinvasion (DCISM) is a challenging subtype of breast cancer with controversial invasiveness and prognosis. Accurate diagnosis of DCISM from ductal carcinoma in situ (DCIS) is crucial for optimal treatment and improved clinical outcomes. However, there are often some suspicious small cancer nests in DCIS, and it is difficult to diagnose the presence of intact myoepithelium by conventional hematoxylin and eosin (H&E) stained images. Although a variety of biomarkers are available for immunohistochemical (IHC) staining of myoepithelial cells, no single biomarker is consistently sensitive to all tumor lesions. Here, we introduced a new diagnostic method that provides rapid and accurate diagnosis of DCISM using multiphoton microscopy (MPM). Suspicious foci in H&E-stained images were labeled as regions of interest (ROIs), and the nuclei within these ROIs were segmented using a deep learning model. MPM was used to capture images of the ROIs in H&E-stained sections. The intensity of two-photon excitation fluorescence (TPEF) in the myoepithelium was significantly different from that in tumor parenchyma and tumor stroma. Through the use of MPM, the myoepithelium and basement membrane can be easily observed via TPEF and second-harmonic generation (SHG), respectively. By fusing the nuclei in H&E-stained images with MPM images, DCISM can be differentiated from suspicious small cancer clusters in DCIS. The proposed method demonstrated good consistency with the cytokeratin 5/6 (CK5/6) myoepithelial staining method (kappa coefficient = 0.818).
Affiliation(s)
- Xiahui Han
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Yulan Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Shichao Zhang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Lianhuang Li
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Liqin Zheng
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Lida Qiu
- College of Physics and Electronic Information Engineering, Minjiang University, Fuzhou, China
- Jianhua Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- College of Life Science, Fujian Normal University, Fuzhou, China
- Zhenlin Zhan
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
- Shu Wang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
- Jianli Ma
- Department of Radiation Oncology, Harbin Medical University Cancer Hospital, Harbin, China
- Deyong Kang
- Department of Pathology, Fujian Medical University Union Hospital, Fuzhou, China
- Jianxin Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
13
Fernandez G, Zeineh J, Prastawa M, Scott R, Madduri AS, Shtabsky A, Jaffer S, Feliz A, Veremis B, Mejias JC, Charytonowicz E, Gladoun N, Koll G, Cruz K, Malinowski D, Donovan MJ. Analytical Validation of the PreciseDx Digital Prognostic Breast Cancer Test in Early-Stage Breast Cancer. Clin Breast Cancer 2024; 24:93-102.e6. [PMID: 38114366 DOI: 10.1016/j.clbc.2023.10.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 10/19/2023] [Accepted: 10/29/2023] [Indexed: 12/21/2023]
Abstract
BACKGROUND PreciseDx Breast (PDxBr) is a digital test that predicts early-stage breast cancer recurrence within 6 years of diagnosis. MATERIALS AND METHODS Using hematoxylin and eosin-stained whole slide images of invasive breast cancer (IBC) and an artificial intelligence-enabled morphology feature array, microanatomic features are generated. Morphometric attributes, in combination with the patient's age, tumor size, stage, and lymph node status, predict disease-free survival using a proprietary algorithm. Here, analytical validation of the automated annotation process and extracted histologic digital features of the PDxBr test, including the impact of methodologic variability on the composite risk score, is presented. Studies of precision, repeatability, reproducibility, and interference were performed on morphology feature array-derived features. The final risk score was assessed over 20 days with 2 operators, 2 runs/day, and 2 replicates across 8 patients, allowing for calculation of within-run repeatability, between-run reproducibility, and within-laboratory reproducibility. RESULTS Analytical validation of features derived from whole slide images demonstrated a high degree of precision for tumor segmentation (0.98, 0.98), lymphocyte detection (0.91, 0.93), and mitotic figures (0.85, 0.84). Coefficients of variation of the assay risk score for both reproducibility and repeatability were less than 2%, and no interference from variation in hematoxylin and eosin staining or tumor thickness was observed, demonstrating assay robustness across standard histopathology preparations. CONCLUSION In summary, the analytical validation of the digital IBC risk assessment test demonstrated strong performance across all features in the model and complemented the clinical validation of the assay, previously shown to accurately predict recurrence within 6 years in early-stage invasive breast cancer patients.
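The repeatability and reproducibility figures quoted in such analytical validations are coefficients of variation: the standard deviation of replicate risk scores expressed as a percentage of their mean. A minimal sketch with made-up replicate scores (the values are illustrative, not from the study):

```python
import statistics

def cv_percent(values) -> float:
    """Coefficient of variation: sample std dev as a percentage of the mean."""
    mean = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / mean

# Hypothetical replicate risk scores for one patient across runs/operators
scores = [52.1, 51.8, 52.4, 52.0]
print(round(cv_percent(scores), 2))  # 0.48, i.e. well under a 2% threshold
```

Within-run, between-run, and within-laboratory CVs differ only in which replicates are grouped together before this calculation.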
Affiliation(s)
- Gerardo Fernandez
- PreciseDx, New York, NY; Icahn School of Medicine at Mount Sinai, New York, NY
- Brandon Veremis
- PreciseDx, New York, NY; Icahn School of Medicine at Mount Sinai, New York, NY
- Nataliya Gladoun
- PreciseDx, New York, NY; Icahn School of Medicine at Mount Sinai, New York, NY
- Michael J Donovan
- PreciseDx, New York, NY; Icahn School of Medicine at Mount Sinai, New York, NY; University of Miami, Pathology, Miami, FL
14
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Comput Biol Med 2023; 163:107182. [PMID: 37379615 DOI: 10.1016/j.compbiomed.2023.107182] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 05/24/2023] [Accepted: 06/13/2023] [Indexed: 06/30/2023]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming, and it also suffers from intra- and inter-observer variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis tasks and providing accurate diagnoses. However, few algorithms have reached clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multiscale Supervised Attention-Guided (D2MSA) Network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. The performance of the model has been evaluated on the tasks of gland segmentation and nuclei instance segmentation, both of which are clinically relevant for assessing the state and progression of malignancy. Here, we have used histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India
15
Rosa A, Narotamo H, Silveira M. Self-Supervised Segmentation of 3D Fluorescence Microscopy Images Using CycleGAN. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083101 DOI: 10.1109/embc40787.2023.10340248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
In recent years, deep learning models have been extensively applied to the segmentation of microscopy images to efficiently and accurately quantify and characterize cells, nuclei, and other biological structures. However, these are typically supervised models that require large amounts of manually annotated training data to create the ground truth. Since manual annotation of these segmentation masks is difficult and time-consuming, especially in 3D, we sought to develop a self-supervised segmentation method. Our method is based on an image-to-image translation model, the CycleGAN, which we use to learn the mapping from the fluorescence microscopy image domain to the segmentation domain. We exploit the fact that CycleGAN does not require paired data and train the model using synthetic masks instead of manually labeled masks. These masks are created automatically based on the approximate shapes and sizes of the nuclei and Golgi, so manual image segmentation is not needed in our proposed approach. The experimental results obtained with the proposed CycleGAN model are compared with two well-known supervised segmentation models: 3D U-Net [1] and Vox2Vox [2]. The CycleGAN model led to the following results: a Dice coefficient of 78.07% for the nuclei class and 67.73% for the Golgi class, with differences of only 1.4% and 0.61% compared to the best results obtained with the supervised models Vox2Vox and 3D U-Net, respectively. Moreover, training and testing the CycleGAN model is about 5.78 times faster than the 3D U-Net model. Our results show that, without manual annotation effort, we can train a model that performs similarly to supervised models for the segmentation of organelles in 3D microscopy images. Clinical relevance: Segmentation of cell organelles in microscopy images is an important step in extracting features such as the morphology, density, size, shape, and texture of these organelles. These quantitative analyses provide valuable information to classify and diagnose diseases and to study biological processes.
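The synthetic masks described above can be generated with nothing more than randomly placed ellipses of roughly nuclear size. A simplified 2D numpy sketch of the idea (image size, nucleus count, and radius range are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_nuclei_mask(h=128, w=128, n_nuclei=8, r_range=(5, 12)):
    """Binary mask of random ellipses approximating nuclei shapes and sizes."""
    mask = np.zeros((h, w), dtype=np.uint8)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_nuclei):
        cy, cx = rng.integers(0, h), rng.integers(0, w)   # random center
        ry, rx = rng.integers(*r_range, size=2)           # random radii
        inside = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        mask[inside] = 1
    return mask

mask = synthetic_nuclei_mask()
print(mask.shape, mask.min(), mask.max())  # (128, 128) 0 1
```

Because CycleGAN only needs an unpaired sample from the "mask domain", crude masks like these can stand in for expert annotations during training.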
16
Kanadath A, Jothi JAA, Urolagin S. Multilevel Colonoscopy Histopathology Image Segmentation Using Particle Swarm Optimization Techniques. SN Comput Sci 2023; 4:427. [PMID: 37304839 PMCID: PMC10245360 DOI: 10.1007/s42979-023-01915-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 05/16/2023] [Indexed: 06/13/2023]
Abstract
Histopathology image segmentation is a challenging task in medical image processing. This work aims to segment lesion regions from colonoscopy histopathology images. Initially, the images are preprocessed and then segmented using a multilevel image thresholding technique. Multilevel thresholding is treated as an optimization problem. Particle swarm optimization (PSO) and its variants, Darwinian particle swarm optimization (DPSO) and fractional-order Darwinian particle swarm optimization (FODPSO), are used to solve the optimization problem and generate the threshold values. The threshold values obtained are used to segment the lesion regions from the images of the colonoscopy tissue dataset. Segmented images containing the lesion regions are then postprocessed to remove unnecessary regions. Experimental results reveal that the FODPSO algorithm with Otsu's discriminant criterion as the objective function achieves the best accuracy, Dice, and Jaccard values of 0.89, 0.68, and 0.52, respectively, on the colonoscopy dataset. The FODPSO algorithm also outperforms other optimization methods, such as the artificial bee colony and firefly algorithms, in terms of accuracy, Dice, and Jaccard values.
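Multilevel thresholding treats the threshold set as the variable of an optimization problem; with Otsu's criterion the objective is the between-class variance of the classes the thresholds induce. PSO and its variants search this objective, but for two thresholds on a small histogram it can be brute-forced, which makes the objective itself easy to see. A hedged sketch on a toy 16-level histogram (not the colonoscopy data):

```python
import numpy as np
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu objective: sum over classes of w_k * (mu_k - mu_total)^2."""
    p = hist / hist.sum()
    levels = np.arange(len(hist))
    mu_total = (p * levels).sum()
    edges = [0, *thresholds, len(hist)]
    var = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()              # class probability mass
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var += w * (mu - mu_total) ** 2
    return var

# Toy trimodal histogram over 16 gray levels
hist = np.array([9, 8, 7, 1, 0, 1, 8, 9, 8, 1, 0, 1, 7, 8, 9, 8], dtype=float)
best = max(combinations(range(1, 16), 2),
           key=lambda t: between_class_variance(hist, t))
print(best)
```

For 256 gray levels and more thresholds this exhaustive search becomes expensive, which is the gap PSO-family optimizers fill.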
Affiliation(s)
- Anusree Kanadath
- Department of Computer Science, Birla Institute of Science and Technology Pilani, Dubai Campus, Dubai International Academic City, Dubai, 345055, United Arab Emirates
- J Angel Arul Jothi
- Department of Computer Science, Birla Institute of Science and Technology Pilani, Dubai Campus, Dubai International Academic City, Dubai, 345055, United Arab Emirates
- Siddhaling Urolagin
- Department of Computer Science, Birla Institute of Science and Technology Pilani, Dubai Campus, Dubai International Academic City, Dubai, 345055, United Arab Emirates
17
An imbalance-aware nuclei segmentation methodology for H&E stained histopathology images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/22/2023]
18
Kanadath A, Angel Arul Jothi J, Urolagin S. Multilevel Multiobjective Particle Swarm Optimization Guided Superpixel Algorithm for Histopathology Image Detection and Segmentation. J Imaging 2023; 9:jimaging9040078. [PMID: 37103229 PMCID: PMC10145642 DOI: 10.3390/jimaging9040078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 03/22/2023] [Accepted: 03/23/2023] [Indexed: 03/31/2023] Open
Abstract
Histopathology image analysis is considered a gold standard for the early diagnosis of serious diseases such as cancer. Advancements in the field of computer-aided diagnosis (CAD) have led to the development of several algorithms for accurately segmenting histopathology images. However, the application of swarm intelligence to segmenting histopathology images is less explored. In this study, we introduce a Multilevel Multiobjective Particle Swarm Optimization guided Superpixel algorithm (MMPSO-S) for the effective detection and segmentation of various regions of interest (ROIs) from hematoxylin and eosin (H&E)-stained histopathology images. Several experiments are conducted on four datasets, TNBC, MoNuSeg, MoNuSAC, and LD, to ascertain the performance of the proposed algorithm. For the TNBC dataset, the algorithm achieves a Jaccard coefficient of 0.49, a Dice coefficient of 0.65, and an F-measure of 0.65. For the MoNuSeg dataset, the algorithm achieves a Jaccard coefficient of 0.56, a Dice coefficient of 0.72, and an F-measure of 0.72. Finally, for the LD dataset, the algorithm achieves a precision of 0.96, a recall of 0.99, and an F-measure of 0.98. The comparative results demonstrate the superiority of the proposed method over the simple particle swarm optimization (PSO) algorithm, its variants (Darwinian particle swarm optimization (DPSO) and fractional-order Darwinian particle swarm optimization (FODPSO)), the Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D), the non-dominated sorting genetic algorithm 2 (NSGA2), and other state-of-the-art traditional image processing methods.
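The Jaccard and Dice coefficients reported for each dataset are not independent: for the same pair of masks, Dice = 2J/(1 + J) (equivalently J = D/(2 - D)). A small check that the reported pairs agree with this identity to within rounding of the published two-decimal figures:

```python
def dice_from_jaccard(j: float) -> float:
    """Dice = 2J / (1 + J) for the same pair of binary masks."""
    return 2.0 * j / (1.0 + j)

for name, j, dice_reported in [("TNBC", 0.49, 0.65), ("MoNuSeg", 0.56, 0.72)]:
    d = dice_from_jaccard(j)
    print(f"{name}: Jaccard {j} -> Dice {d:.3f} (reported {dice_reported})")
```

The small residual for TNBC (0.658 vs. 0.65) is consistent with both figures having been rounded independently from the underlying per-image averages.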
19
Wang J, Qin L, Chen D, Wang J, Han BW, Zhu Z, Qiao G. An improved Hover-net for nuclear segmentation and classification in histopathology images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08394-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023]
20
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. Evol Syst 2023; 15:1-46. [PMID: 38625364 PMCID: PMC9987406 DOI: 10.1007/s12530-023-09491-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets and is considered an intricate task in histopathology image analysis. Segmenting nuclei is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it difficult to separate and distinguish independent nuclei. Deep learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers, with numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey of nucleus segmentation using deep learning over the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
21
Verdicchio M, Brancato V, Cavaliere C, Isgrò F, Salvatore M, Aiello M. A pathomic approach for tumor-infiltrating lymphocytes classification on breast cancer digital pathology images. Heliyon 2023; 9:e14371. [PMID: 36950640 PMCID: PMC10025040 DOI: 10.1016/j.heliyon.2023.e14371] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 03/03/2023] [Accepted: 03/03/2023] [Indexed: 03/11/2023] Open
Abstract
Background and objectives The detection of tumor-infiltrating lymphocytes (TILs) could aid in the development of objective measures of the infiltration grade and can support decision-making in breast cancer (BC). However, manual quantification of TILs in BC histopathological whole slide images (WSI) is currently based on visual assessment and is therefore not standardized, not reproducible, and time-consuming for pathologists. In this work, a novel pathomic approach, which applies high-throughput image feature extraction techniques to analyze the microscopic patterns in WSI, is proposed. Pathomic features provide additional information concerning the underlying biological processes compared to visual WSI interpretation, thus yielding more easily interpretable and explainable results than the deep learning-based methods most frequently investigated in the literature. Methods A dataset containing 1037 regions of interest with tissue compartments and TILs annotated on 195 TNBC and HER2+ BC hematoxylin and eosin (H&E)-stained WSI was used. After segmenting nuclei within tumor-associated stroma using a watershed-based approach, 71 pathomic features were extracted from each nucleus and reduced using a Spearman's correlation filter followed by a nonparametric Wilcoxon rank-sum test and the least absolute shrinkage and selection operator. The relevant features were used to classify each candidate nucleus as either TILs or non-TILs using 5 multivariable machine learning classification models trained using 5-fold cross-validation (1) without resampling, (2) with the synthetic minority over-sampling technique, and (3) with downsampling. The prediction performance of the models was assessed using ROC curves. Results 21 features were selected, most of them related to the well-known TILs properties of regular shape, clearer margins, high peak intensity, more homogeneous enhancement, and a textural pattern different from that of other cells. The best performance was obtained by Random Forest with a ROC AUC of 0.86, regardless of resampling technique. Conclusions The presented approach holds promise for the classification of TILs in BC H&E-stained WSI and could support pathologists in a reliable, rapid, and interpretable clinical assessment of TILs in BC.
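The first stage of the feature-reduction pipeline described above, a Spearman correlation filter, simply drops one feature of every highly correlated pair. A hedged sketch using `scipy.stats.spearmanr` (toy feature matrix; the 0.9 cut-off is an illustrative assumption, not the paper's value):

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_filter(X: np.ndarray, cutoff: float = 0.9) -> list:
    """Greedily keep feature columns; drop any column whose |Spearman rho|
    with an already-kept column exceeds the cutoff."""
    kept = []
    for j in range(X.shape[1]):
        if all(abs(spearmanr(X[:, j], X[:, k])[0]) <= cutoff for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(1)
a = rng.normal(size=100)
X = np.column_stack([a,
                     a + 0.01 * rng.normal(size=100),  # near-duplicate of a
                     rng.normal(size=100)])            # independent feature
print(spearman_filter(X))  # the near-duplicate column is dropped
```

The surviving columns would then be passed to the Wilcoxon rank-sum test and LASSO stages of the pipeline.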
Affiliation(s)
- Carlo Cavaliere
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
- Francesco Isgrò
- Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Claudio 21, Naples, 80125, Italy
- Marco Salvatore
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
- Marco Aiello
- IRCCS SYNLAB SDN, Via E. Gianturco 113, Naples, 80143, Italy
22
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770 DOI: 10.1016/j.compmedimag.2022.102155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 09/13/2022] [Accepted: 11/27/2022] [Indexed: 12/13/2022]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium.
- Olivier Debeir
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium.
23
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
24
Alabdaly AA, El-Sayed WG, Hassan YF. RAMRU-CAM: Residual-Atrous MultiResUnet with Channel Attention Mechanism for cell segmentation. J Intell Fuzzy Syst 2022. [DOI: 10.3233/jifs-222631]
Abstract
The task of cell segmentation in microscope images is difficult and popular. In recent years, deep learning-based techniques have made incredible progress in medical and microscopy image segmentation applications. In this paper, we propose a novel deep learning approach called Residual-Atrous MultiResUnet with Channel Attention Mechanism (RAMRU-CAM) for cell segmentation, which combines MultiResUnet architecture with Channel Attention Mechanism (CAM) and Residual-Atrous connections. The Residual-Atrous path mitigates the semantic gap between the encoder and decoder stages and manages the spatial dimension of feature maps. Furthermore, the Channel Attention Mechanism (CAM) blocks are used in the decoder stages to better maintain the spatial details before concatenating the feature maps from the encoder phases to the decoder phases. We evaluated our proposed model on the PhC-C2DH-U373 and Fluo-N2DH-GOWT1 datasets. The experimental results show that our proposed model outperforms recent variants of the U-Net model and the state-of-the-art approaches. We have demonstrated how our model can segment cells precisely while using fewer parameters and low computational complexity.
Affiliation(s)
- Ammar A. Alabdaly
- Department of Mathematics and Computer Science, Alexandria University, Alexandria, Egypt
- Wagdy G. El-Sayed
- Department of Mathematics and Computer Science, Alexandria University, Alexandria, Egypt
- Yasser F. Hassan
- Faculty of Computer and Data Science, Alexandria University, Alexandria, Egypt
25
Mahbod A, Schaefer G, Dorffner G, Hatamikia S, Ecker R, Ellinger I. A dual decoder U-Net-based model for nuclei instance segmentation in hematoxylin and eosin-stained histological images. Front Med (Lausanne) 2022; 9:978146. [PMID: 36438040 PMCID: PMC9691672 DOI: 10.3389/fmed.2022.978146]
Abstract
Even in the era of precision medicine, with various molecular tests based on omics technologies available to improve the diagnosis process, microscopic analysis of images derived from stained tissue sections remains crucial for diagnostic and treatment decisions. Among other cellular features, both nuclei number and shape provide essential diagnostic information. With the advent of digital pathology and emerging computerized methods to analyze the digitized images, nuclei detection, their instance segmentation and classification can be performed automatically. These computerized methods support human experts and allow for faster and more objective image analysis. While methods ranging from conventional image processing techniques to machine learning-based algorithms have been proposed, supervised convolutional neural network (CNN)-based techniques have delivered the best results. In this paper, we propose a CNN-based dual decoder U-Net-based model to perform nuclei instance segmentation in hematoxylin and eosin (H&E)-stained histological images. While the encoder path of the model is developed to perform standard feature extraction, the two decoder heads are designed to predict the foreground and distance maps of all nuclei. The outputs of the two decoder branches are then merged through a watershed algorithm, followed by post-processing refinements to generate the final instance segmentation results. Moreover, to additionally perform nuclei classification, we develop an independent U-Net-based model to classify the nuclei predicted by the dual decoder model. When applied to three publicly available datasets, our method achieves excellent segmentation performance, leading to average panoptic quality values of 50.8%, 51.3%, and 62.1% for the CryoNuSeg, NuInsSeg, and MoNuSAC datasets, respectively. Moreover, our model is the top-ranked method in the MoNuSAC post-challenge leaderboard.
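The decoder-merging step described in this abstract (a foreground map and a distance map combined through a watershed algorithm) can be illustrated with a minimal sketch. The function below is a simplified stand-in, not the authors' implementation: it thresholds the predicted distance map to obtain one marker per nucleus and assigns each foreground pixel to its nearest marker rather than running a full watershed.

```python
import numpy as np
from scipy import ndimage as ndi


def instance_segment(foreground, distance, marker_thresh=0.5):
    """Split a binary foreground mask into nucleus instances using a
    predicted distance map: peaks of the distance map become one marker
    per nucleus, and every foreground pixel is assigned to its nearest
    marker (a simple stand-in for the watershed step)."""
    markers, n = ndi.label(distance > marker_thresh)
    if n == 0:
        return np.zeros_like(markers)
    # For every pixel, the coordinates of the nearest marker pixel.
    idx = ndi.distance_transform_edt(
        markers == 0, return_distances=False, return_indices=True
    )
    labels = markers[tuple(idx)]
    labels[~foreground.astype(bool)] = 0  # keep background unlabeled
    return labels
```

With two well-separated distance-map peaks inside two foreground blobs, the function returns two distinct instance labels; touching nuclei are exactly the case where the predicted distance map, rather than the foreground mask alone, provides the separation.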
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough, United Kingdom
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
- Rupert Ecker
- Department of Research and Development, TissueGnostics GmbH, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
26
Yan R, Yang Z, Li J, Zheng C, Zhang F. Divide-and-Attention Network for HE-Stained Pathological Image Classification. Biology 2022; 11:982. [PMID: 36101363 PMCID: PMC9311575 DOI: 10.3390/biology11070982]
Abstract
Since pathological images have some distinct characteristics that are different from natural images, the direct application of a general convolutional neural network cannot achieve good classification performance, especially for fine-grained classification problems (such as pathological image grading). Inspired by the clinical experience that decomposing a pathological image into different components is beneficial for diagnosis, in this paper, we propose a Divide-and-Attention Network (DANet) for Hematoxylin-and-Eosin (HE)-stained pathological image classification. The DANet utilizes a deep-learning method to decompose a pathological image into nuclei and non-nuclei parts. With such decomposed pathological images, the DANet first performs feature learning independently in each branch, and then focuses on the most important feature representation through the branch selection attention module. In this way, the DANet can learn representative features with respect to different tissue structures and adaptively focus on the most important ones, thereby improving classification performance. In addition, we introduce deep canonical correlation analysis (DCCA) constraints in the feature fusion process of different branches. The DCCA constraints play the role of branch fusion attention, so as to maximize the correlation of different branches and ensure that the fused branches emphasize specific tissue structures. The experimental results of three datasets demonstrate the superiority of the DANet, with an average classification accuracy of 92.5% on breast cancer classification, 95.33% on colorectal cancer grading, and 91.6% on breast cancer grading tasks.
Affiliation(s)
- Rui Yan
- High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- University of Chinese Academy of Sciences, Beijing 101408, China
- Zhidong Yang
- High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Jintao Li
- High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
- Chunhou Zheng
- School of Artificial Intelligence, Anhui University, Hefei 230093, China
- Fa Zhang
- High Performance Computer Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100045, China
27
Computational knowledge vision: paradigmatic knowledge based prescriptive learning and reasoning for perception and vision. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10166-9]
28
Wahab N, Miligy IM, Dodd K, Sahota H, Toss M, Lu W, Jahanifar M, Bilal M, Graham S, Park Y, Hadjigeorghiou G, Bhalerao A, Lashen AG, Ibrahim AY, Katayama A, Ebili HO, Parkin M, Sorell T, Raza SEA, Hero E, Eldaly H, Tsang YW, Gopalakrishnan K, Snead D, Rakha E, Rajpoot N, Minhas F. Semantic annotation for computational pathology: multidisciplinary experience and best practice recommendations. J Pathol Clin Res 2022; 8:116-128. [PMID: 35014198 PMCID: PMC8822374 DOI: 10.1002/cjp2.256]
Abstract
Recent advances in whole-slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence-based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of important visual constructs in pathology images is an important component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well-defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large-scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real-world case study along with examples of different types of annotations, diagnostic algorithm, annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.
Affiliation(s)
- Noorul Wahab
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Islam M Miligy
- Pathology, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Kom, Egypt
- Katherine Dodd
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Harvir Sahota
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Wenqi Lu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Mohsin Bilal
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Young Park
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Abhir Bhalerao
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Ayaka Katayama
- Graduate School of Medicine, Gunma University, Maebashi, Japan
- Tom Sorell
- Department of Politics and International Studies, University of Warwick, Coventry, UK
- Emily Hero
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Leicester Royal Infirmary, Histopathology, University Hospitals Leicester, Leicester, UK
- Hesham Eldaly
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- David Snead
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Emad Rakha
- Pathology, University of Nottingham, Nottingham, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
29
Automated human cell classification in sparse datasets using few-shot learning. Sci Rep 2022; 12:2924. [PMID: 35190567 PMCID: PMC8861170 DOI: 10.1038/s41598-022-06718-2]
Abstract
Classifying and analyzing human cells is a lengthy procedure, often involving a trained professional. In an attempt to expedite this process, an active area of research involves automating cell classification through use of deep learning-based techniques. In practice, a large amount of data is required to accurately train these deep learning models. However, due to the sparse human cell datasets currently available, the performance of these models is typically low. This study investigates the feasibility of using few-shot learning-based techniques to mitigate the data requirements for accurate training. The study is comprised of three parts: First, current state-of-the-art few-shot learning techniques are evaluated on human cell classification. The selected techniques are trained on a non-medical dataset and then tested on two out-of-domain, human cell datasets. The results indicate that, overall, the test accuracy of state-of-the-art techniques decreased by at least 30% when transitioning from a non-medical dataset to a medical dataset. Reptile and EPNet were the top performing techniques tested on the BCCD dataset and HEp-2 dataset respectively. Second, this study evaluates the potential benefits, if any, to varying the backbone architecture and training schemes in current state-of-the-art few-shot learning techniques when used in human cell classification. To this end, the best technique identified in the first part of this study, EPNet, is used for experimentation. In particular, the study used 6 different network backbones, 5 data augmentation methodologies, and 2 model training schemes. Even with these additions, the overall test accuracy of EPNet decreased from 88.66% on non-medical datasets to 44.13% at best on the medical datasets. Third, this study presents future directions for using few-shot learning in human cell classification. In general, few-shot learning in its current state performs poorly on human cell classification. The study proves that attempts to modify existing network architectures are not effective and concludes that future research effort should be focused on improving robustness towards out-of-domain testing using optimization-based or self-supervised few-shot learning techniques.
30
Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. [DOI: 10.1016/j.tcb.2021.12.004]
31
Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097 PMCID: PMC8609289 DOI: 10.4103/jpi.jpi_36_21]
Abstract
Whole slide imaging enables the use of a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, the progress of machine learning - in particular, deep learning (DL) - has been rather slower in nonclinical toxicology studies. Histopathology data from toxicology studies are critical during the drug development process that is required by regulatory bodies to assess drug-related toxicity in laboratory animals and its impact on human safety in clinical trials. Due to the high volume of slides routinely evaluated, low-throughput, or narrowly performing DL methods that may work well in small-scale diagnostic studies or for the identification of a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Affiliation(s)
- Shima Mehrvar
- Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi
- Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
32
Duanmu H, Wang F, Teodoro G, Kong J. Foveal blur-boosted segmentation of nuclei in histopathology images with shape prior knowledge and probability map constraints. Bioinformatics 2021; 37:3905-3913. [PMID: 34081103 PMCID: PMC11025700 DOI: 10.1093/bioinformatics/btab418]
Abstract
MOTIVATION: In most tissue-based biomedical research, the lack of sufficient pathology training images with well-annotated ground truth inevitably limits the performance of deep learning systems. In this study, we propose a convolutional neural network with foveal blur enriching datasets with multiple local nuclei regions of interest derived from original pathology images. We further propose a human-knowledge boosted deep learning system by inclusion to the convolutional neural network new loss function terms capturing shape prior knowledge and imposing smoothness constraints on the predicted probability maps. RESULTS: Our proposed system outperforms all state-of-the-art deep learning and non-deep learning methods by Jaccard coefficient, Dice coefficient, Accuracy and Panoptic Quality in three independent datasets. The high segmentation accuracy and execution speed suggest its promising potential for automating histopathology nuclei segmentation in biomedical research and clinical settings. AVAILABILITY AND IMPLEMENTATION: The codes, the documentation and example data are available open source at: https://github.com/HongyiDuanmu26/FovealBoosted. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
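The smoothness constraint on predicted probability maps mentioned in this abstract can take several forms; a common choice is a total-variation penalty on neighbouring-pixel differences. The sketch below is an illustrative example of such a term, not the paper's actual loss function.

```python
import numpy as np


def smoothness_penalty(prob):
    """Total-variation-style smoothness term on a predicted probability
    map: the sum of absolute differences between vertically and
    horizontally adjacent pixels. It is zero for a constant map and
    grows with the amount of pixel-to-pixel variation."""
    return (np.abs(np.diff(prob, axis=0)).sum()
            + np.abs(np.diff(prob, axis=1)).sum())
```

Added to a segmentation loss with a small weight, a term like this discourages noisy, speckled probability maps while leaving genuine object boundaries (large but sparse differences) relatively cheap.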
Affiliation(s)
- Hongyi Duanmu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Fusheng Wang
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- George Teodoro
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte 31270-901, Brazil
- Jun Kong
- Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA 30303, USA
- Department of Computer Science and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
33
Applying Self-Supervised Learning to Medicine: Review of the State of the Art and Medical Implementations. Informatics 2021. [DOI: 10.3390/informatics8030059]
Abstract
Machine learning has become an increasingly ubiquitous technology, as big data continues to inform and influence everyday life and decision-making. Currently, in medicine and healthcare, as well as in most other industries, the two most prevalent machine learning paradigms are supervised learning and transfer learning. Both practices rely on large-scale, manually annotated datasets to train increasingly complex models. However, the requirement of data to be manually labeled leaves an excess of unused, unlabeled data available in both public and private data repositories. Self-supervised learning (SSL) is a growing area of machine learning that can take advantage of unlabeled data. Contrary to other machine learning paradigms, SSL algorithms create artificial supervisory signals from unlabeled data and pretrain algorithms on these signals. The aim of this review is two-fold: firstly, we provide a formal definition of SSL, divide SSL algorithms into their four unique subsets, and review the state of the art published in each of those subsets between the years of 2014 and 2020. Second, this work surveys recent SSL algorithms published in healthcare, in order to provide medical experts with a clearer picture of how they can integrate SSL into their research, with the objective of leveraging unlabeled data.