1. Marra A, Morganti S, Pareja F, Campanella G, Bibeau F, Fuchs T, Loda M, Parwani A, Scarpa A, Reis-Filho JS, Curigliano G, Marchiò C, Kather JN. Artificial intelligence entering the pathology arena in oncology: current applications and future perspectives. Ann Oncol 2025:S0923-7534(25)00112-7. PMID: 40307127. DOI: 10.1016/j.annonc.2025.03.006.
Abstract
BACKGROUND: Artificial intelligence (AI) is rapidly transforming the fields of pathology and oncology, offering novel opportunities for advancing the diagnosis, prognosis, and treatment of cancer.
METHODS: Through a systematic review-based approach, representatives of the European Society for Medical Oncology (ESMO) Precision Oncology Working Group (POWG) and international experts identified studies in pathology and oncology that applied AI-based algorithms for tumour diagnosis, molecular biomarker detection, and cancer prognosis assessment. These findings were synthesised to provide a comprehensive overview of current AI applications and future directions in cancer pathology.
RESULTS: The integration of AI tools into digital pathology is markedly improving the accuracy and efficiency of image analysis, allowing automated tumour detection and classification, identification of prognostic molecular biomarkers, and prediction of treatment response and patient outcomes. Several barriers to the adoption of AI in clinical workflows, such as data availability, explainability, and regulatory considerations, still persist. There are currently no prognostic or predictive AI-based biomarkers supported by level IA or IB evidence. Ongoing advancements in AI algorithms, particularly foundation models, generalist models, and transformer-based deep learning, hold immense promise for the future of cancer research and care. AI is also facilitating the integration of multi-omics data, leading to more precise patient stratification and personalised treatment strategies.
CONCLUSIONS: The application of AI in pathology is poised not only to enhance the accuracy and efficiency of cancer diagnosis and prognosis but also to facilitate the development of personalised treatment strategies. Although barriers to implementation remain, ongoing research and development in this field, coupled with attention to ethical and regulatory considerations, will likely lead to a future in which AI plays an integral role in cancer management and precision medicine. The continued evolution and adoption of AI in pathology and oncology are anticipated to reshape the landscape of cancer care, heralding a new era of precision medicine and improved patient outcomes.
Affiliation(s)
- A Marra
- Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
- S Morganti
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, USA; Department of Medicine, Harvard Medical School, Boston, USA; Gerstner Center for Cancer Diagnostics, Broad Institute of MIT and Harvard, Boston, USA
- F Pareja
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
- G Campanella
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
- F Bibeau
- Department of Pathology, University Hospital of Besançon, Besançon, France
- T Fuchs
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
- M Loda
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, USA; Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK; Department of Oncologic Pathology, Dana-Farber Cancer Institute and Harvard Medical School, Boston, USA
- A Parwani
- Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, USA
- A Scarpa
- Department of Diagnostics and Public Health, Section of Pathology, University and Hospital Trust of Verona, Verona, Italy; ARC-Net Research Center, University of Verona, Verona, Italy
- J S Reis-Filho
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
- G Curigliano
- Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
- C Marchiò
- Candiolo Cancer Institute, FPO IRCCS, Candiolo, Italy; Department of Medical Sciences, University of Turin, Turin, Italy
- J N Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany; Department of Medicine I, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
2. Kabir S, Sarmun R, Al Saady RM, Vranic S, Murugappan M, Chowdhury MEH. Automating Prostate Cancer Grading: A Novel Deep Learning Framework for Automatic Prostate Cancer Grade Assessment using Classification and Segmentation. J Imaging Inform Med 2025. PMID: 39913023. DOI: 10.1007/s10278-025-01429-2.
Abstract
Prostate cancer (PCa) is the second most common cancer in men, affecting more than a million people each year. Grading is based on the Gleason system, a subjective and labor-intensive method for evaluating prostate tissue samples. The variability in diagnostic approaches underscores the urgent need for more reliable methods; integrating deep learning technologies and developing automated systems can improve diagnostic precision and minimize human error. The present work introduces an innovative three-stage deep-learning framework for assessing PCa severity using the PANDA challenge dataset. After extensive data cleaning, 2699 usable cases were retained from the initial 5160. The framework comprises three stages: classification of PCa grades using deep neural networks (DNNs), segmentation of PCa grades, and computation of International Society for Urological Pathology (ISUP) grades using machine learning classifiers. Four classes of patches were classified and segmented (benign, Gleason 3, Gleason 4, and Gleason 5). Patch sampling at different sizes (500 × 500 and 1000 × 1000 pixels) was used to optimize the classification and segmentation processes. Segmentation performance is enhanced by a self-organized operational neural network (Self-ONN)-based DeepLabV3 architecture. Based on these predictions, the distribution percentages of each cancer grade within the whole slide images (WSIs) were calculated and fed into machine learning classifiers to predict the final ISUP PCa grade. EfficientNet_b0 achieved the highest classification F1-score of 83.83%, while a DeepLabV3+ architecture based on Self-ONN and an EfficientNet encoder achieved the highest Dice similarity coefficient (DSC) of 84.9% for segmentation. Using a random forest (RF) classifier, the proposed framework achieved a quadratic weighted kappa (QWK) score of 0.9215. The framework grades PCa automatically with promising results and offers a prospective prognostic tool that can produce clinically significant results efficiently and reliably. Further investigations are needed to evaluate its adaptability and effectiveness across various clinical scenarios.
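The quadratic weighted kappa reported above penalizes a disagreement between two ordinal grades by the squared distance between them. A minimal pure-Python sketch of the metric (the example grades are hypothetical, not data from the paper):

```python
def quadratic_weighted_kappa(rater_a, rater_b, n_classes):
    """QWK between two integer rating lists with values in [0, n_classes)."""
    n = len(rater_a)
    # Observed co-occurrence matrix of the two raters' grades.
    observed = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        observed[a][b] += 1
    # Expected matrix from the marginal grade histograms.
    hist_a = [rater_a.count(k) for k in range(n_classes)]
    hist_b = [rater_b.count(k) for k in range(n_classes)]
    expected = [[hist_a[i] * hist_b[j] / n for j in range(n_classes)]
                for i in range(n_classes)]
    # Quadratic disagreement weights: 0 on the diagonal, 1 at the corners.
    w = [[(i - j) ** 2 / (n_classes - 1) ** 2 for j in range(n_classes)]
         for i in range(n_classes)]
    num = sum(w[i][j] * observed[i][j]
              for i in range(n_classes) for j in range(n_classes))
    den = sum(w[i][j] * expected[i][j]
              for i in range(n_classes) for j in range(n_classes))
    return 1.0 - num / den

# Hypothetical pathologist vs. model ISUP grades (0-5), not from the paper.
print(quadratic_weighted_kappa([0, 1, 2, 3, 4, 5, 2, 3],
                               [0, 1, 2, 3, 4, 5, 3, 3], 6))
```

A single near-miss (grade 2 vs. 3) barely lowers the score, whereas confusing benign with Gleason 5 would be heavily penalized, which is why QWK is the standard metric for ordinal grading tasks such as PANDA.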
Affiliation(s)
- Saidul Kabir
- Department of Electrical and Electronic Engineering, University of Dhaka, Dhaka, 1000, Bangladesh
- Rusab Sarmun
- Department of Electrical and Electronic Engineering, University of Dhaka, Dhaka, 1000, Bangladesh
- Semir Vranic
- College of Medicine, QU Health, Qatar University, Doha, Qatar
- M Murugappan
- Intelligent Signal Processing (ISP) Research Lab, Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, Block 4, Doha, Kuwait
- Department of Electronics and Communication Engineering, Vels Institute of Sciences, Technology, and Advanced Studies, Chennai, Tamil Nadu, India
3. Martino F, Ilardi G, Varricchio S, Russo D, Di Crescenzo RM, Staibano S, Merolla F. A deep learning model to predict Ki-67 positivity in oral squamous cell carcinoma. J Pathol Inform 2024; 15:100354. PMID: 38148967. PMCID: PMC10750186. DOI: 10.1016/j.jpi.2023.100354.
Abstract
Anatomical pathology is undergoing its third revolution, transitioning from analog to digital pathology and incorporating new artificial intelligence technologies into clinical practice. Beyond classification, detection, and segmentation models, predictive models are gaining traction, since they can impact diagnostic processes and laboratory activity, lowering consumable usage and turnaround time. Our research aimed to create a deep-learning model to generate synthetic Ki-67 immunohistochemistry from haematoxylin and eosin (H&E)-stained images. We used 175 oral squamous cell carcinoma (OSCC) cases from the archives of the Pathology Unit of the University Federico II to construct 4 tissue microarrays (TMAs). We sectioned one slide from each TMA, first stained with H&E and then re-stained with anti-Ki-67 immunohistochemistry (IHC). In the digitised slides, the cores were disarrayed, and the matching cores of the 2 stains were aligned to construct a dataset for training a Pix2Pix algorithm to convert H&E images to IHC. In a specially designed likelihood test, pathologists could recognise the synthetic images in only half of the cases; hence, our model produced realistic synthetic images. We next used QuPath to quantify IHC positivity, achieving remarkable agreement between genuine and synthetic IHC. Furthermore, a categorical analysis employing 3 Ki-67 positivity cut-offs (5%, 10%, and 15%) revealed high positive-predictive values. Our model is a promising tool for collecting Ki-67 positivity information directly from H&E slides, reducing laboratory demand and improving patient management. It is also a valuable option for smaller laboratories to easily and quickly screen bioptic samples and prioritise them in a digital pathology workflow.
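The categorical analysis described above can be reproduced in miniature: dichotomize genuine and synthetic Ki-67 positivity at each cut-off, then compute the positive-predictive value of the synthetic calls. A sketch with invented per-core positivity fractions (not data from the study):

```python
def binarize(values, cutoff):
    """Dichotomize Ki-67 positivity fractions at a given cut-off."""
    return [v >= cutoff for v in values]

def positive_predictive_value(genuine, synthetic):
    """PPV of synthetic-IHC 'positive' calls against genuine IHC."""
    tp = sum(1 for g, s in zip(genuine, synthetic) if s and g)
    fp = sum(1 for g, s in zip(genuine, synthetic) if s and not g)
    return tp / (tp + fp)

# Hypothetical paired positivity fractions for six TMA cores.
genuine_pct = [0.02, 0.08, 0.12, 0.20, 0.04, 0.16]
synthetic_pct = [0.03, 0.07, 0.14, 0.18, 0.06, 0.15]

for cutoff in (0.05, 0.10, 0.15):  # the study's three cut-offs
    ppv = positive_predictive_value(binarize(genuine_pct, cutoff),
                                    binarize(synthetic_pct, cutoff))
    print(f"cut-off {cutoff:.0%}: PPV = {ppv:.2f}")
```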
Affiliation(s)
- Francesco Martino
- Dedalus HealthCare, Division of Diagnostic Imaging IT, Gertrude-Frohlich-Sandner-Straße 1, Wien 1100, Austria
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Gennaro Ilardi
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Silvia Varricchio
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Daniela Russo
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Rosa Maria Di Crescenzo
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Stefania Staibano
- Department of Advanced Biomedical Sciences, University of Naples, Via Pansini, 5, Naples 80131, Italy
- Francesco Merolla
- Department of Medicine and Health Sciences “V. Tiberio”, University of Molise, Via De Sanctis, Campobasso 86100, Italy
4. Elazab N, Gab Allah W, Elmogy M. Computer-aided diagnosis system for grading brain tumor using histopathology images based on color and texture features. BMC Med Imaging 2024; 24:177. PMID: 39030508. PMCID: PMC11264763. DOI: 10.1186/s12880-024-01355-9.
Abstract
BACKGROUND: Cancer pathology reflects disease development and associated molecular features, providing extensive phenotypic information that is cancer-predictive and has potential implications for treatment planning. Building on the exceptional performance of computational approaches in digital pathology, this rich phenotypic information can be used to distinguish low-grade gliomas (LGG) from high-grade gliomas (HGG). Because the textural differences are so slight, using a single feature or a small number of features yields poor classification results.
METHODS: In this work, multiple feature extraction methods that capture distinct textural characteristics of histopathology images are compared by their classification outcomes. The feature extraction algorithms GLCM, LBP, multi-LBGLCM, GLRLM, color moment features, and RSHD were chosen. The LBP and GLCM algorithms are combined to create LBGLCM, and the LBGLCM approach is extended to multiple scales using an image pyramid, defined by sampling the image in both space and scale. A preprocessing stage first enhances image contrast and removes noise and illumination effects. A feature extraction stage then extracts several important texture and color features from the histopathology images. Third, a feature fusion and reduction step decreases the number of features processed, reducing the computation time of the proposed system. Finally, a classification stage categorizes the brain cancer grades. We performed our analysis on 821 whole-slide pathology images from glioma patients in The Cancer Genome Atlas (TCGA) dataset. The dataset includes two types of brain cancer, GBM and LGG (grades II and III), with 506 GBM images and 315 LGG images, guaranteeing representation of various tumor grades and histopathological features.
RESULTS: The fusion of textural and color characteristics was validated in the glioma patients using 10-fold cross-validation, with an accuracy of 95.8%, sensitivity of 96.4%, DSC of 96.7%, and specificity of 97.1%. Combining color and texture characteristics produced significantly better accuracy, supporting their synergistic value in the predictive model. The results indicate that textural characteristics, paired with conventional imagery, can provide objective, accurate, and comprehensive glioma prediction.
CONCLUSION: The results outperform current approaches for distinguishing LGG from HGG and provide competitive performance in classifying four categories of glioma in the literature. The proposed model can help stratify patients in clinical studies, select patients for targeted therapy, and customize treatment schedules.
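Of the texture descriptors named above, the gray-level co-occurrence matrix (GLCM) is the easiest to sketch from scratch: count how often gray level i neighbors gray level j at a fixed pixel offset, then derive statistics such as contrast. A toy NumPy sketch on 2-level images (illustrative only, not the paper's pipeline):

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast statistic: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

flat = np.zeros((4, 4), dtype=int)            # uniform region
stripes = np.tile([0, 1], (4, 2))             # alternating columns
print(glcm_contrast(glcm(flat, 1, 0, 2)))     # no gray-level change
print(glcm_contrast(glcm(stripes, 1, 0, 2)))  # every horizontal pair differs
```

LBGLCM, as used in the paper, applies the same co-occurrence counting to a local-binary-pattern image rather than raw gray levels.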
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
- Wael Gab Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, 35516, Mansoura, Egypt
5. Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. PMID: 37594900. DOI: 10.5858/arpa.2022-0460-ra.
Abstract
CONTEXT: Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness for prostate cancer detection and Gleason grading.
OBJECTIVE: To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is used in clinical practice, and we discuss the challenges to applying machine learning algorithms in clinical practice.
DATA SOURCES: The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized by whether they implemented binary or multi-class classification methods. Data were extracted from papers that reported accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends across classification abilities.
CONCLUSIONS: High accuracy is harder to achieve for multi-class classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology cannot currently replace pathologists but can serve as an important safeguard against misdiagnosis.
Affiliation(s)
- Aaryn Frewing
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte
- From the Department of Physics and Astronomy, Brigham Young University, Provo, Utah
6. Sun K, Zheng Y, Yang X, Jia W. A novel transformer-based aggregation model for predicting gene mutations in lung adenocarcinoma. Med Biol Eng Comput 2024; 62:1427-1440. PMID: 38233683. DOI: 10.1007/s11517-023-03004-9.
Abstract
In recent years, predicting gene mutations from whole slide images (WSIs) has gained prominence. The primary challenge is extracting global information and achieving unbiased semantic aggregation. To address this challenge, we propose a novel Transformer-based aggregation model employing a self-learning weight aggregation mechanism to mitigate the semantic bias caused by the abundance of features in a WSI. Additionally, we adopt a random patch training method, which enriches model learning by randomly extracting feature vectors from WSIs, thus addressing the issue of limited data. To demonstrate the model's effectiveness in predicting gene mutations, we leverage the lung adenocarcinoma dataset from Shandong Provincial Hospital for prior knowledge learning. Subsequently, we assess TP53, CSMD3, LRP1B, and TTN gene mutations using lung adenocarcinoma tissue pathology images and clinical data from The Cancer Genome Atlas (TCGA). The results show a notable increase in AUC (area under the ROC curve), averaging 4%, attesting to the model's improved performance. Our research offers an efficient model for exploring the correlation between pathological image features and molecular characteristics in lung adenocarcinoma patients. It introduces a novel approach to clinical genetic testing that is expected to improve the efficiency of identifying molecular features in lung adenocarcinoma patients, ultimately providing more accurate and reliable results for related studies.
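The weighted aggregation idea above can be sketched as attention pooling over patch embeddings: each patch receives a learned score, the scores are softmax-normalized, and the slide-level vector is the weighted sum. A minimal NumPy sketch with random stand-in features and parameters (the paper's actual architecture is Transformer-based and far more elaborate):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(patch_features, W, v):
    """Aggregate patch embeddings f_i into one slide-level vector:
    score_i = v . tanh(W f_i); weights = softmax(scores);
    slide vector = sum_i weights_i * f_i.
    """
    scores = np.array([v @ np.tanh(W @ f) for f in patch_features])
    weights = softmax(scores)
    return weights, weights @ patch_features

# Stand-in random patch embeddings and parameters (hypothetical sizes).
rng = np.random.default_rng(0)
n_patches, d, h = 6, 8, 4
feats = rng.normal(size=(n_patches, d))
W = rng.normal(size=(h, d))
v = rng.normal(size=h)
weights, slide_vec = attention_pool(feats, W, v)
print(weights.round(3), slide_vec.shape)
```

Because the weights are learned rather than uniform, informative patches can dominate the slide representation, which is the mechanism the abstract credits with mitigating semantic bias from the abundance of WSI features.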
Affiliation(s)
- Kai Sun
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
- Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
- Xinbo Yang
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
- Weikuan Jia
- School of Information Science and Engineering, Shandong Normal University, Jinan, Shandong, 250014, China
7. Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. PMID: 37832751. DOI: 10.1016/j.semcancer.2023.09.006.
Abstract
Artificial intelligence (AI)-enhanced histopathology presents unprecedented opportunities for oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades), for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping), which are verified not on the H&E slide itself but by overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing clinician workload, personalizing medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and, finally, commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, should surmount these limitations and realize the many opportunities of AI-driven histopathology for the benefit of oncology.
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
8. Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. PMID: 37928897. PMCID: PMC10622844. DOI: 10.1016/j.jpi.2023.100335.
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practice by facilitating the storage, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSIs and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features serve diverse digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, both manual and deep learning-based, for the analysis of WSIs. We review the relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to supporting the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
9. Álvarez VE, Quiroga MP, Centrón D. Identification of a Specific Biomarker of Acinetobacter baumannii Global Clone 1 by Machine Learning and PCR Related to Metabolic Fitness of ESKAPE Pathogens. mSystems 2023:e0073422. PMID: 37184409. DOI: 10.1128/msystems.00734-22.
Abstract
Since the emergence of high-risk clones worldwide, constant investigations have been undertaken to understand the molecular basis of their prevalent dissemination in nosocomial settings over time. So far, the complex and multifactorial genetic traits of such epidemic clones have allowed only the identification of biomarkers with low specificity. A machine learning algorithm was able to recognize unequivocally a biomarker for the early and accurate detection of Acinetobacter baumannii global clone 1 (GC1), one of the most disseminated high-risk clones. A support vector machine model identified the U1 sequence, 367 nucleotides in length, matching a fragment of the moaCB gene, which encodes the molybdenum cofactor biosynthesis C and B proteins. U1 specifically differentiates A. baumannii GC1 from non-GC1 strains, making it a suitable biomarker that can be translated into clinical settings as a PCR-based molecular typing method for early diagnosis, as shown here. Since the metabolic pathways of Mo enzymes have been recognized as putative therapeutic targets for ESKAPE (Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species) pathogens, our findings highlight that machine learning can also help close knowledge gaps about high-risk clones, and they provide noteworthy support for identifying relevant nosocomial biomarkers of other multidrug-resistant high-risk clones.
IMPORTANCE: A. baumannii GC1 is an important high-risk clone that rapidly develops extreme drug resistance in the nosocomial niche. Furthermore, several strains have been identified worldwide in environmental samples, exacerbating the risk of human interactions. Early diagnosis is mandatory to limit its dissemination and to outline appropriate antibiotic stewardship schedules. A support vector machine model predicting A. baumannii GC1 strains successfully identified a 367-bp region (U1) within the moaCB gene that is not subject to lateral genetic transfer or antibiotic pressure. At the same time, research on Mo enzymes has proposed this metabolic pathway, central to bacterial fitness during infection, as a potential future drug target for ESKAPE pathogens. These findings confirm that machine learning used to identify biomarkers of high-risk lineages can also identify putative novel therapeutic target sites.
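Sequence classifiers of the kind used above are typically trained on fixed-length k-mer frequency vectors rather than raw sequences. A simplified sketch of the featurization plus a nearest-centroid decision rule, standing in for the paper's support vector machine; the sequences and labels are invented, not real A. baumannii data:

```python
from collections import Counter
from itertools import product

def kmer_vector(seq, k=3):
    """Normalized k-mer frequency vector over the DNA alphabet ACGT."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(sum(counts.values()), 1)
    return [counts[km] / total for km in kmers]

def nearest_centroid(x, centroids):
    """Assign x to the label of the closest class centroid (toy classifier)."""
    dist = {label: sum((a - b) ** 2 for a, b in zip(x, c))
            for label, c in centroids.items()}
    return min(dist, key=dist.get)

# Invented marker-like sequences standing in for GC1 / non-GC1 strains.
centroids = {
    "GC1": kmer_vector("ATGGCAATGGCAATGGCA"),
    "non-GC1": kmer_vector("TTACGGTTACGGTTACGG"),
}
print(nearest_centroid(kmer_vector("ATGGCAATGGCATTGGCA"), centroids))
```

An SVM trained on such vectors learns a separating hyperplane instead of class centroids, but the featurization step is the same.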
Affiliation(s)
- Verónica Elizabeth Álvarez
- Laboratorio de Investigaciones en Mecanismos de Resistencia a Antibióticos (LIMRA), Instituto de Investigaciones en Microbiología y Parasitología Médica, Facultad de Medicina, Universidad de Buenos Aires-Consejo Nacional de Investigaciones Científicas y Tecnológicas (IMPaM, UBA-CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- María Paula Quiroga
- Laboratorio de Investigaciones en Mecanismos de Resistencia a Antibióticos (LIMRA), Instituto de Investigaciones en Microbiología y Parasitología Médica, Facultad de Medicina, Universidad de Buenos Aires-Consejo Nacional de Investigaciones Científicas y Tecnológicas (IMPaM, UBA-CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Nodo de Bioinformática, Instituto de Investigaciones en Microbiología y Parasitología Médica, Facultad de Medicina, Universidad de Buenos Aires-Consejo Nacional de Investigaciones Científicas y Técnicas (IMPaM, UBA-CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Daniela Centrón
- Laboratorio de Investigaciones en Mecanismos de Resistencia a Antibióticos (LIMRA), Instituto de Investigaciones en Microbiología y Parasitología Médica, Facultad de Medicina, Universidad de Buenos Aires-Consejo Nacional de Investigaciones Científicas y Tecnológicas (IMPaM, UBA-CONICET), Ciudad Autónoma de Buenos Aires, Argentina
10
Wen Z, Wang S, Yang DM, Xie Y, Chen M, Bishop J, Xiao G. Deep learning in digital pathology for personalized treatment plans of cancer patients. Semin Diagn Pathol 2023; 40:109-119. [PMID: 36890029] [DOI: 10.1053/j.semdp.2023.02.003]
Abstract
Over the past decade, many new cancer treatments have been developed and made available to patients. However, in most cases, these treatments only benefit a specific subgroup of patients, making the selection of treatment for a specific patient an essential but challenging task for oncologists. Although some biomarkers have been found to be associated with treatment response, manual assessment is time-consuming and subjective. With the rapid development and expanding implementation of artificial intelligence (AI) in digital pathology, many biomarkers can be quantified automatically from histopathology images. This approach allows for a more efficient and objective assessment of biomarkers, aiding oncologists in formulating personalized treatment plans for cancer patients. This review presents an overview and summary of recent studies on biomarker quantification and treatment response prediction using hematoxylin-eosin (H&E) stained pathology images. These studies have shown that an AI-based digital pathology approach can be practical and will become increasingly important in improving the selection of cancer treatments for patients.
Affiliation(s)
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Mingyi Chen
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Justin Bishop
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
11
Wang X, Yu G, Yan Z, Wan L, Wang W, Cui L. Lung Cancer Subtype Diagnosis by Fusing Image-Genomics Data and Hybrid Deep Networks. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:512-523. [PMID: 34855599] [DOI: 10.1109/tcbb.2021.3132292]
Abstract
Accurate diagnosis of cancer subtypes is crucial for precise treatment, because different cancer subtypes involve different pathologies and require different therapies. Although deep learning techniques have achieved great success in computer vision and other fields, they do not work well for lung cancer subtype diagnosis, because the distinction between slide images of different cancer subtypes is ambiguous. Furthermore, they often overfit high-dimensional genomics data with limited samples, and do not fuse the image and genomics data in a sensible way. In this paper, we propose LungDIG, a hybrid deep network approach for Lung cancer subtype Diagnosis that fuses Image-Genomics data. LungDIG first tiles the tissue slide image into small patches and extracts patch-level features by fine-tuning an Inception-V3 model. Since the patches may contain false positives in non-diagnostic regions, it further designs a patch-level feature combination strategy to integrate the extracted patch features and maintain the diversity between different cancer subtypes. At the same time, it extracts genomics features from copy number variation data with an attention-based nonlinear extractor. Next, it fuses the image and genomics features with an attention-based multilayer perceptron (MLP) to diagnose the cancer subtype. Experiments on TCGA lung cancer data show that LungDIG not only achieves higher accuracy for cancer subtype diagnosis than state-of-the-art methods, but also offers high authenticity and good interpretability.
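The attention-based fusion step can be sketched as learned weights over the two modality feature vectors. The shapes and random weights below are illustrative stand-ins, not LungDIG's trained parameters:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(image_feat, genomic_feat, w_att):
    """Fuse two modality feature vectors via scalar attention weights
    (a toy stand-in for an attention-based fusion MLP)."""
    feats = np.stack([image_feat, genomic_feat])   # (2, d)
    alpha = softmax(feats @ w_att)                 # attention over modalities
    fused = (alpha[:, None] * feats).sum(axis=0)   # weighted sum -> (d,)
    return alpha, fused

rng = np.random.default_rng(0)
img_feat = rng.normal(size=8)   # placeholder patch-level image features
gen_feat = rng.normal(size=8)   # placeholder CNV-derived features
alpha, fused = attention_fuse(img_feat, gen_feat, rng.normal(size=8))
```

In the real model the fused vector would feed a classification head; here the point is only that the attention weights are normalised and modality-dependent.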
12
Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:5778. [PMID: 36497258] [PMCID: PMC9738801] [DOI: 10.3390/cancers14235778]
Abstract
Recent methods in computational pathology have trended towards semi- and weakly supervised methods requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of whole slide images (WSIs). Our method first trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple instance learning framework to yield slide-level representations. The resulting intra-slide and inter-slide embeddings are then attracted and repelled, respectively, via a contrastive loss, producing slide-level representations with self-supervision. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, achieving an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner in which meaningful features can be learned from whole-slide images without slide-level annotations. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to exploit completely unlabeled whole-slide images.
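The attention-based multiple-instance pooling that fuses tile embeddings into a single slide-level vector can be sketched as follows. This is a generic attention pool with random toy weights, not the authors' trained model:

```python
import numpy as np

def attention_mil_pool(tiles, V, w):
    """Attention-based MIL pooling: score each tile embedding, softmax the
    scores, and return the attention-weighted slide-level embedding."""
    scores = np.tanh(tiles @ V) @ w   # one scalar score per tile
    a = np.exp(scores - scores.max())
    a /= a.sum()                      # attention weights sum to 1
    return a, a @ tiles               # slide-level vector of dim d

rng = np.random.default_rng(1)
tiles = rng.normal(size=(50, 16))     # 50 toy tile embeddings (e.g. from SimCLR)
att, slide_vec = attention_mil_pool(tiles, rng.normal(size=(16, 8)),
                                    rng.normal(size=8))
```

The contrastive objective in the paper then attracts embeddings of tile subsets from the same slide and repels those from different slides; only the pooling step is shown here.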
Affiliation(s)
- Thomas E. Tavolara
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
13
Xu H, Clemenceau JR, Park S, Choi J, Lee SH, Hwang TH. Spatial heterogeneity and organization of tumor mutation burden with immune infiltrates within tumors based on whole slide images correlated with patient survival in bladder cancer. J Pathol Inform 2022; 13:100105. [PMID: 36268064] [PMCID: PMC9577053] [DOI: 10.1016/j.jpi.2022.100105]
Abstract
Background: High tumor mutation burden (TMB-H) can result in an increased number of neoepitopes from somatic mutations expressed by a patient's own tumor cells, which can be recognized and targeted by neighboring tumor-infiltrating lymphocytes (TILs). A deeper understanding of the spatial heterogeneity and organization of tumor cells and their neighboring immune infiltrates within tumors could provide new insights into tumor progression and treatment response. Methods: We first developed computational approaches using whole slide images (WSIs) to predict bladder cancer patients' TMB status and TILs across tumor regions, and then investigated the spatial heterogeneity and organization of regions harboring TMB-H tumor cells and TILs within tumors, as well as their prognostic utility. Results: In experiments using WSIs from The Cancer Genome Atlas (TCGA) bladder cancer (BLCA) cohort, our findings show that computational pathology can reliably predict patient-level TMB status and delineate spatial TMB heterogeneity and co-organization with TILs. TMB-H patients with low spatial heterogeneity enriched with high TILs show improved overall survival. Conclusions: Computational approaches using WSIs have the potential to provide rapid and cost-effective TMB testing and TILs detection. Survival analysis illuminates the potential clinical utility of the spatial heterogeneity and co-organization of TMB and TILs as a prognostic biomarker in BLCA, which warrants further validation in future studies.
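One simple way to quantify the spatial heterogeneity of region-level TMB calls, shown purely as an illustration and not necessarily the metric used in the study, is the Shannon entropy of the predicted labels:

```python
import math
from collections import Counter

def spatial_entropy(region_labels):
    """Shannon entropy (bits) of region-level predictions: 0 for a uniform
    slide, higher when TMB-high and TMB-low regions are mixed."""
    n = len(region_labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(region_labels).values())

homogeneous = ["TMB-H"] * 8               # low spatial heterogeneity
mixed = ["TMB-H"] * 4 + ["TMB-L"] * 4     # maximally mixed two-class slide
```

Under this toy measure, the homogeneous slide scores 0 bits and the evenly mixed slide scores 1 bit, matching the intuition that low heterogeneity corresponds to uniform regional predictions.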
Affiliation(s)
- Hongming Xu
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian University of Technology, Dalian 116024, China
- Jean René Clemenceau
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, USA
- Sunho Park
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, USA
- Jinhwan Choi
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, USA
- Sung Hak Lee
- Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Tae Hyun Hwang
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Jacksonville, FL 32224, USA
14
A Deep Learning Model for Prostate Adenocarcinoma Classification in Needle Biopsy Whole-Slide Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:768. [PMID: 35328321] [PMCID: PMC8947489] [DOI: 10.3390/diagnostics12030768]
Abstract
The histopathological diagnosis of prostate adenocarcinoma in needle biopsy specimens is of pivotal importance for determining optimum prostate cancer treatment. Because the manual diagnosis of large numbers of cases, each containing 12 core biopsy specimens, under a microscope is time-consuming and limited by available human resources, it is necessary to develop new techniques that can rapidly and accurately screen large numbers of histopathological prostate needle biopsy specimens. Computational pathology applications that can assist pathologists in detecting and classifying prostate adenocarcinoma from whole-slide images (WSIs) would be of great benefit for routine pathological practice. In this paper, we trained deep learning models capable of classifying needle biopsy WSIs into adenocarcinoma and benign (non-neoplastic) lesions. We evaluated the models on needle biopsy, transurethral resection of the prostate (TUR-P), and The Cancer Genome Atlas (TCGA) public dataset test sets, achieving an ROC-AUC of up to 0.978 on needle biopsy test sets and up to 0.9873 on TCGA test sets for adenocarcinoma.
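The ROC-AUC figures reported here have a simple rank-based interpretation: the probability that a randomly chosen positive slide is scored higher than a randomly chosen negative one. A self-contained sketch with toy labels and scores:

```python
def roc_auc(labels, scores):
    """Rank-based ROC-AUC: probability that a random positive outscores a
    random negative, with ties counted as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy slide-level labels (1 = adenocarcinoma) and model scores.
y_true = [1, 1, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.3, 0.4, 0.6, 0.6]
```

This pairwise form is equivalent to the area under the ROC curve and is how an AUC of 0.978 should be read: near-perfect ranking of cancerous over benign slides.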
15
Prabhu S, Prasad K, Robles-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022; 142:105209. [DOI: 10.1016/j.compbiomed.2022.105209]
16
Artificial intelligence as a tool for diagnosis in digital pathology whole slide images: A systematic review. J Pathol Inform 2022; 13:100138. [PMID: 36268059] [PMCID: PMC9577128] [DOI: 10.1016/j.jpi.2022.100138]
Abstract
Digital pathology has grown rapidly, stimulated by the implementation of digital whole slide images (WSIs) in clinical practice, while the field has faced a shortage of pathologists in recent years. This scenario has opened fronts of research applying artificial intelligence (AI) to help pathologists. One of them is automated diagnosis, which supports clinical decision-making and increases the efficiency and quality of diagnosis. However, the complex nature of WSIs requires special treatment to create a reliable AI model for diagnosis. We therefore systematically reviewed the literature to analyze and discuss the methods and results of AI in digital pathology applied to H&E-stained WSIs, investigating the capacity of AI as a diagnostic support tool for the pathologist in the routine real-world scenario. This review analyzes 26 studies, reporting in detail the best methods for applying AI as a diagnostic tool, as well as the main limitations, and suggests new ideas to improve the field of AI in digital pathology as a whole. We hope that this study will lead to a better use of AI as a diagnostic tool in pathology, helping future researchers in the development of new studies and projects.
17
Mi H, Bivalacqua TJ, Kates M, Seiler R, Black PC, Popel AS, Baras AS. Predictive models of response to neoadjuvant chemotherapy in muscle-invasive bladder cancer using nuclear morphology and tissue architecture. Cell Rep Med 2021; 2:100382. [PMID: 34622225] [PMCID: PMC8484511] [DOI: 10.1016/j.xcrm.2021.100382]
Abstract
Characterizing the likelihood of response to neoadjuvant chemotherapy (NAC) in muscle-invasive bladder cancer (MIBC) is an important yet unmet challenge. In this study, a machine-learning framework is developed using imaging of biopsy pathology specimens to generate models of the likelihood of NAC response. Developed using cross-validation (evaluable N = 66) and an independent validation cohort (evaluable N = 56), our models achieve promising results (65%-73% accuracy). Interestingly, one model, which uses features derived from hematoxylin and eosin (H&E)-stained tissues in conjunction with clinico-demographic features, is able to stratify the cohort into likely responders in both cross-validation and the validation cohort (response rate of 65% for predicted responders compared with the 41% baseline response rate in the validation cohort). The results suggest that computational approaches applied to routine pathology specimens of MIBC can capture differences between responders and non-responders to NAC and should therefore be considered in the future design of precision oncology for MIBC.
Affiliation(s)
- Haoyang Mi
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Trinity J. Bivalacqua
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD, USA
- James Buchanan Brady Urological Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Max Kates
- James Buchanan Brady Urological Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Roland Seiler
- Department of Urology, University Hospital Bern, Bern, Switzerland
- Peter C. Black
- Department of Urologic Sciences, University of British Columbia Faculty of Medicine, Vancouver, BC, Canada
- Aleksander S. Popel
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD, USA
- Alexander S. Baras
- Department of Oncology, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore, MD, USA
- James Buchanan Brady Urological Institute, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
18
Pham TD. Time-frequency time-space long short-term memory networks for image classification of histopathological tissue. Sci Rep 2021; 11:13703. [PMID: 34211077] [PMCID: PMC8249635] [DOI: 10.1038/s41598-021-93160-5]
Abstract
Image analysis in histopathology provides insights into the microscopic examination of tissue for disease diagnosis, prognosis, and biomarker discovery. For cancer research in particular, precise classification of histopathological images is the ultimate objective of image analysis. Here, the time-frequency time-space long short-term memory network (TF-TS LSTM), developed for the classification of time series, is applied to classifying histopathological images. The deep learning is empowered by the use of sequential time-frequency and time-space features extracted from the images. Furthermore, unlike conventional classification practice, a strategy for class modeling is designed to leverage the learning power of the TF-TS LSTM. Tests on several datasets of histopathological images of haematoxylin-and-eosin and immunohistochemistry stains demonstrate the strong capability of the artificial intelligence (AI)-based approach to produce highly accurate classification results. The proposed approach has the potential to be an AI tool for robust classification of histopathological images.
Affiliation(s)
- Tuan D Pham
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar, 31952, Saudi Arabia.
19
Brady L, Wang YN, Rombokas E, Ledoux WR. Comparison of texture-based classification and deep learning for plantar soft tissue histology segmentation. Comput Biol Med 2021; 134:104491. [PMID: 34090017] [PMCID: PMC8263502] [DOI: 10.1016/j.compbiomed.2021.104491]
Abstract
Histomorphological measurements can be used to identify microstructural changes related to disease pathomechanics, in particular, plantar soft tissue changes with diabetes. However, these measurements are time-consuming and susceptible to sampling and human measurement error. We investigated two approaches to automate segmentation of plantar soft tissue stained with modified Hart's stain for elastin with the eventual goal of subsequent morphological analysis. The first approach used multiple texture- and color-based features with tile-wise classification. The second approach used a convolutional neural network modified from the U-Net architecture with fewer channel dimensions and additional downsampling steps. A hybrid color and texture feature, Fourier reduced histogram of uniform improved opponent color local binary patterns (f-IOCLBP), yielded the best feature-based segmentation, but still performed 3.6% worse on average than the modified U-Net. The texture-based method was sensitive to changes in illumination and stain intensity, and segmentation errors were often in large regions of single tissues or at tissue boundaries. The U-Net was able to segment small, few-pixel tissue boundaries, and errors were often trivial to clean up with post-processing. A U-Net approach outperforms hand-crafted features for segmentation of plantar soft tissue stained with modified Hart's stain for elastin.
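The local binary pattern (LBP) descriptor underlying the texture-based branch assigns each pixel an 8-bit code from its neighbourhood. A minimal dense, single-radius variant, much simpler than the f-IOCLBP feature used in the study, can be sketched as:

```python
import numpy as np

def lbp_codes(img):
    """Dense 8-neighbour local binary pattern: each interior pixel gets an
    8-bit code, one bit per neighbour that is >= the centre value."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << bit
    return codes

flat = np.full((4, 4), 5)   # constant patch: every neighbour ties the centre
codes = lbp_codes(flat)
```

Because LBP codes depend only on local intensity orderings, histograms of such codes over tiles give illumination-tolerant texture features, though, as the study notes, they remain more sensitive to stain variation than a learned U-Net.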
Affiliation(s)
- Lynda Brady
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA
- Yak-Nam Wang
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, WA, 98195, USA
- Eric Rombokas
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA; Department of Electrical Engineering, University of Washington, Seattle, WA, 98195, USA
- William R Ledoux
- Center for Limb Loss and MoBility (CLiMB), VA Puget Sound, Seattle, WA, 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA, 98195, USA; Department of Orthopaedics and Sports Medicine, University of Washington, Seattle, WA, 98195, USA
20
A Multi-Channel and Multi-Spatial Attention Convolutional Neural Network for Prostate Cancer ISUP Grading. Appl Sci (Basel) 2021; 11:4321. [DOI: 10.3390/app11104321]
Abstract
Prostate cancer (PCa) is one of the most prevalent cancers worldwide. As the demand for prostate biopsies increases, a worldwide shortage and an uneven geographical distribution of proficient pathologists place a strain on the efficacy of pathological diagnosis. Deep learning (DL) is able to automatically extract features from whole-slide images of prostate biopsies annotated by skilled pathologists and to classify the severity of PCa. A whole-slide image of biopsies has many irrelevant features that weaken the performance of DL models. To enable DL models to focus more on cancerous tissues, we propose a Multi-Channel and Multi-Spatial (MCMS) Attention module that can be easily plugged into any backbone CNN to enhance feature extraction. Specifically, MCMS learns a channel attention vector to assign weights to channels in the feature map by pooling from multiple attention branches with different reduction ratios; similarly, it also learns a spatial attention matrix to focus on more relevant areas of the image, by pooling from multiple convolutional layers with different kernel sizes. The model is verified on the most extensive multi-center PCa dataset that consists of 11,000 H&E-stained histopathology whole-slide images. Experimental results demonstrate that an MCMS-assisted CNN can effectively boost prediction performance in accuracy (ACC) and quadratic weighted kappa (QWK), compared with prior studies. The proposed model and results can serve as a credible benchmark for future research in automated PCa grading.
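The channel-attention branch can be sketched as squeeze-and-excitation-style reweighting averaged over bottlenecks with different reduction ratios. The weights below are random placeholders, and the real MCMS module also includes the spatial-attention branch omitted here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, branches):
    """SE-style channel attention: global-average-pool to a channel vector,
    pass it through bottlenecks of different reduction ratios, and average
    the resulting gates before reweighting the feature map."""
    squeezed = feat.mean(axis=(1, 2))                  # (C,) channel descriptor
    gates = [sigmoid(squeezed @ w_down @ w_up)
             for w_down, w_up in branches]             # one gate per ratio
    att = np.mean(gates, axis=0)                       # fused gate in (0, 1)
    return att, feat * att[:, None, None]              # reweighted (C, H, W)

rng = np.random.default_rng(2)
C = 8
x = rng.normal(size=(C, 5, 5))                         # toy feature map
branches = [(rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C)))
            for r in (2, 4)]                           # two reduction ratios
att, out = channel_attention(x, branches)
```

Pooling over several reduction ratios is what distinguishes this multi-branch design from a single-ratio SE block: each bottleneck trades capacity against compression differently, and averaging their gates hedges that choice.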
21
Wessels F, Schmitt M, Krieghoff-Henning E, Jutzi T, Worst TS, Waldbillig F, Neuberger M, Maron RC, Steeg M, Gaiser T, Hekler A, Utikal JS, von Kalle C, Fröhling S, Michel MS, Nuhn P, Brinker TJ. Deep learning approach to predict lymph node metastasis directly from primary tumour histology in prostate cancer. BJU Int 2021; 128:352-360. [PMID: 33706408] [DOI: 10.1111/bju.15386]
Abstract
OBJECTIVE To develop a new digital biomarker based on the analysis of primary tumour tissue by a convolutional neural network (CNN) to predict lymph node metastasis (LNM) in a cohort matched for already established risk factors. PATIENTS AND METHODS Haematoxylin and eosin (H&E) stained primary tumour slides from 218 patients (102 N+; 116 N0), matched for Gleason score, tumour size, venous invasion, perineural invasion and age, who underwent radical prostatectomy were selected to train a CNN and evaluate its ability to predict LN status. RESULTS With 10 models trained with the same data, a mean area under the receiver operating characteristic curve (AUROC) of 0.68 (95% confidence interval [CI] 0.678-0.682) and a mean balanced accuracy of 61.37% (95% CI 60.05-62.69%) was achieved. The mean sensitivity and specificity was 53.09% (95% CI 49.77-56.41%) and 69.65% (95% CI 68.21-71.1%), respectively. These results were confirmed via cross-validation. The probability score for LNM prediction was significantly higher on image sections from N+ samples (mean [SD] N+ probability score 0.58 [0.17] vs 0.47 [0.15] N0 probability score, P = 0.002). In multivariable analysis, the probability score of the CNN (odds ratio [OR] 1.04 per percentage probability, 95% CI 1.02-1.08; P = 0.04) and lymphovascular invasion (OR 11.73, 95% CI 3.96-35.7; P < 0.001) proved to be independent predictors for LNM. CONCLUSION In our present study, CNN-based image analyses showed promising results as a potential novel low-cost method to extract relevant prognostic information directly from H&E histology to predict the LN status of patients with prostate cancer. Our ubiquitously available technique might contribute to an improved LN status prediction.
Affiliation(s)
- Frederik Wessels
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Max Schmitt
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tanja Jutzi
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Thomas S Worst
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Frank Waldbillig
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Manuel Neuberger
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Roman C Maron
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Matthias Steeg
- Institute of Pathology, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Timo Gaiser
- Institute of Pathology, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Achim Hekler
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jochen S Utikal
- Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, University of Heidelberg, Heidelberg, Germany
- Christof von Kalle
- Department of Clinical-Translational Sciences, Berlin Institute of Health (BIH), Charité University Medicine, Berlin, Germany
- Stefan Fröhling
- National Center for Tumor Diseases, German Cancer Research Center, Heidelberg, Germany
- Maurice S Michel
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Philipp Nuhn
- Department of Urology and Urological Surgery, Medical Faculty Mannheim of Heidelberg University, University Medical Center Mannheim, Mannheim, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
22
Echle A, Rindtorff NT, Brinker TJ, Luedde T, Pearson AT, Kather JN. Deep learning in cancer pathology: a new generation of clinical biomarkers. Br J Cancer 2021; 124:686-696. [PMID: 33204028] [PMCID: PMC7884739] [DOI: 10.1038/s41416-020-01122-x]
Abstract
Clinical workflows in oncology rely on predictive and prognostic molecular biomarkers. However, the growing number of these complex biomarkers tends to increase the cost and time for decision-making in routine daily oncology practice; furthermore, biomarkers often require tumour tissue on top of routine diagnostic material. Nevertheless, routinely available tumour tissue contains an abundance of clinically relevant information that is currently not fully exploited. Advances in deep learning (DL), an artificial intelligence (AI) technology, have enabled the extraction of previously hidden information directly from routine histology images of cancer, providing potentially clinically useful information. Here, we outline emerging concepts of how DL can extract biomarkers directly from histology images and summarise studies of basic and advanced image analysis for cancer histology. Basic image analysis tasks include detection, grading and subtyping of tumour tissue in histology images; they are aimed at automating pathology workflows and consequently do not immediately translate into clinical decisions. Exceeding such basic approaches, DL has also been used for advanced image analysis tasks, which have the potential of directly affecting clinical decision-making processes. These advanced approaches include inference of molecular features, prediction of survival and end-to-end prediction of therapy response. Predictions made by such DL systems could simplify and enrich clinical decision-making, but require rigorous external validation in clinical settings.
Affiliation(s)
- Amelie Echle
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- Titus Josef Brinker
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tom Luedde
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Duesseldorf, Düsseldorf, Germany
- Alexander Thomas Pearson
- Section of Hematology/Oncology, Department of Medicine, The University of Chicago, Chicago, IL, USA
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
23
Deep learning-based pixel-wise lesion segmentation on oral squamous cell carcinoma images. Appl Sci (Basel) 2020. [DOI: 10.3390/app10228285] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Indexed: 12/12/2022]
Abstract
Oral squamous cell carcinoma is the most common oral cancer. In this paper, we present a performance analysis of four different deep learning-based pixel-wise methods for lesion segmentation on oral carcinoma images. Two distinct image datasets, one for training and one for testing, are used to generate and evaluate the segmentation models, thus allowing assessment of the generalization capability of the considered deep network architectures. An important contribution of this work is the creation of the Oral Cancer Annotated (ORCA) dataset, containing ground-truth data derived from The Cancer Genome Atlas (TCGA).
24
Bándi P, Balkenhol M, van Ginneken B, van der Laak J, Litjens G. Resolution-agnostic tissue segmentation in whole-slide histopathology images with convolutional neural networks. PeerJ 2019; 7:e8242. [PMID: 31871843 PMCID: PMC6924324 DOI: 10.7717/peerj.8242] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Received: 08/23/2019] [Accepted: 11/19/2019] [Indexed: 12/20/2022] Open
Abstract
Modern pathology diagnostics is being driven toward large-scale digitization of microscopic tissue sections. A prerequisite for its safe implementation is the guarantee that all tissue present on a glass slide can also be recovered in the digital image. Whole-slide scanners perform tissue segmentation in a low-resolution overview image to prevent inefficient high-resolution scanning of empty background areas. However, currently applied algorithms can fail to detect all tissue regions. In this study, we developed convolutional neural networks to distinguish tissue from background. We collected 100 whole-slide images of 10 tissue-staining categories from five medical centers for development and testing. Additionally, eight more images of eight unfamiliar categories were collected for testing only. We compared our fully convolutional neural networks to three traditional methods across a range of resolution levels using the Dice score and sensitivity. We also tested whether a single neural network can perform equivalently to multiple networks, each specialized in a single resolution. Overall, our solutions outperformed the traditional methods at all tested resolutions. The resolution-agnostic network achieved average Dice scores between 0.97 and 0.98 across the tested resolution levels, only 0.0069 lower than the resolution-specific networks. Finally, its excellent generalization performance was demonstrated by an average Dice score of 0.98 and sensitivity of 0.97 on the eight unfamiliar images. A future study should test this network prospectively.
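The Dice score reported in the abstract above measures overlap between a predicted segmentation mask and the ground truth, defined as 2|A∩B| / (|A| + |B|). As a minimal illustrative sketch (not the study's actual evaluation pipeline), it can be computed on flattened binary masks as follows:

```python
def dice_score(pred, truth):
    """Dice coefficient of two equal-length binary mask sequences.

    Returns 2 * |intersection| / (|pred| + |truth|), or 1.0 when
    both masks are empty (conventional choice for that edge case).
    """
    intersection = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Perfect overlap yields 1.0; disjoint masks yield 0.0.
print(dice_score([1, 1, 1, 0], [1, 1, 0, 0]))  # 2*2 / (3+2) = 0.8
```

A score of 0.97-0.98, as reported for the resolution-agnostic network, therefore indicates near-complete agreement between predicted and annotated tissue regions.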
Affiliation(s)
- Péter Bándi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Maschenka Balkenhol
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands