1
Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024; 15:100363. [PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363]
Abstract
Advancements in digital pathology and computing resources have made a significant impact in the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer remains a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10,385 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting appropriate datasets for specific tasks in breast cancer computational pathology. In addition, we compiled 2 lists of breast H&E patches and private datasets as supplementary resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of the remaining models may be overestimated. The TCGA-BRCA dataset was used in 52% of the selected studies; it has a considerable selection bias that can impact the robustness and generalizability of the trained algorithms. Metadata reporting for breast WSI datasets is also inconsistent, which can hinder the development of accurate deep learning models and indicates the need for explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
2
DeVoe K, Takahashi G, Tarshizi E, Sacker A. Evaluation of the precision and accuracy in the classification of breast histopathology images using the MobileNetV3 model. J Pathol Inform 2024; 15:100377. [PMID: 38706514; PMCID: PMC11066512; DOI: 10.1016/j.jpi.2024.100377]
Abstract
Accurate surgical pathological assessment of breast biopsies is essential to the proper management of breast lesions. Identifying histological features, such as nuclear pleomorphism, increased mitotic activity, cellular atypia, and patterns of architectural disruption, as well as invasion through basement membranes into surrounding stroma and normal structures, including invasion of vascular and lymphatic spaces, helps to classify lesions as malignant. This visual assessment is repeated on numerous slides taken at various sections through the resected tumor, each at different magnifications. Computer vision models have been proposed to assist human pathologists in classification tasks such as these. Using MobileNetV3, a convolutional architecture designed to achieve high accuracy with a compact parameter footprint, we attempted to classify breast cancer images in the BreakHis_v1 breast pathology dataset to determine the performance of this model out-of-the-box. Using transfer learning to take advantage of ImageNet embeddings without special feature extraction, we were able to correctly classify histopathology images broadly as benign or malignant with 0.98 precision, 0.97 recall, and an F1 score of 0.98. The ability to classify into histological subcategories varied, with the greatest success in classifying ductal carcinoma (accuracy 0.95) and the lowest in lobular carcinoma (accuracy 0.59). ROC assessment of the model as a multiclass classifier yielded AUC values ≥0.97 in both benign and malignant subsets. In comparison with previous efforts, which used older and larger convolutional network architectures with feature-extraction pre-processing, our work highlights that modern, resource-efficient architectures can classify histopathological images with accuracy that at least matches that of previous efforts, without the need for labor-intensive feature extraction protocols. Suggestions to further refine the model are discussed.
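As a quick arithmetic check on the metrics quoted above, the F1 score is the harmonic mean of precision and recall. A minimal sketch using the abstract's (rounded) figures:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values reported in the abstract for benign-vs-malignant classification.
f1 = f1_score(0.98, 0.97)
print(round(f1, 3))  # ~0.975, consistent with the reported 0.98
                     # given that the inputs are themselves rounded.
```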
Affiliation(s)
- Kenneth DeVoe
- Shiley-Marcos School of Engineering, Applied Artificial Intelligence MS Program, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA
- Gary Takahashi
- Shiley-Marcos School of Engineering, Applied Artificial Intelligence MS Program, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA
- Ebrahim Tarshizi
- Shiley-Marcos School of Engineering, Applied Artificial Intelligence MS Program, University of San Diego, 5998 Alcalá Park, San Diego, CA 92110, USA
- Allan Sacker
- Department of Pathology, Providence St. Vincent Medical Center, 9205 SW Barnes Road, Portland, OR 97225, USA
3
Jiang S, Hondelink L, Suriawinata AA, Hassanpour S. Masked pre-training of transformers for histology image analysis. J Pathol Inform 2024; 15:100386. [PMID: 39006998; PMCID: PMC11246055; DOI: 10.1016/j.jpi.2024.100386]
Abstract
In digital pathology, whole-slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Vision transformer (ViT) models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and limited labeled data, applying transformer models to WSIs remains challenging. In this study, we propose a pretext task to train the transformer model in a self-supervised manner. Our model, MaskHIT, uses the transformer output to reconstruct masked patches, measured by contrastive loss. We pre-trained the MaskHIT model using over 7000 WSIs from TCGA and extensively evaluated its performance in multiple experiments, covering survival prediction, cancer subtype classification, and grade prediction tasks. Our experiments demonstrate that the pre-training procedure enables context-aware understanding of WSIs, facilitates the learning of representative histological features based on patch positions and visual patterns, and is essential for the ViT model to achieve optimal results on WSI-level tasks. The pre-trained MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on the survival prediction and cancer subtype classification tasks, respectively, and also outperforms recent state-of-the-art transformer-based methods. Finally, a comparison between the attention maps generated by the MaskHIT model and pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures on the whole slide for each task.
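The masking step of such a pretext task can be sketched in a few lines: a random subset of patch embeddings is replaced with a mask token, and the model is then asked to reconstruct the masked positions. This is only an illustrative sketch in the spirit of MaskHIT; the actual transformer and contrastive loss are not reproduced here, and `mask_token` is a placeholder value.

```python
import random

def mask_patches(patch_embeddings, mask_ratio=0.5, mask_token=0.0, rng=None):
    """Replace a random subset of patch embeddings with a mask token.

    Returns the corrupted sequence plus the masked indices, which a
    reconstruction objective would use as its targets.
    """
    rng = rng or random.Random(0)
    n = len(patch_embeddings)
    masked_idx = set(rng.sample(range(n), int(n * mask_ratio)))
    corrupted = [
        mask_token if i in masked_idx else emb
        for i, emb in enumerate(patch_embeddings)
    ]
    return corrupted, sorted(masked_idx)
```

In a real pipeline each element would be a patch embedding vector rather than a scalar, and the loss would compare the transformer's output at the masked positions against the original embeddings.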
Affiliation(s)
- Shuai Jiang
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Liesbeth Hondelink
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Arief A. Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Department of Epidemiology, Geisel School of Medicine at Dartmouth and the Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
4
Ren S, Li J, Dorado J, Sierra A, González-Díaz H, Duardo A, Shen B. From molecular mechanisms of prostate cancer to translational applications: based on multi-omics fusion analysis and intelligent medicine. Health Inf Sci Syst 2024; 12:6. [PMID: 38125666; PMCID: PMC10728428; DOI: 10.1007/s13755-023-00264-5]
Abstract
Prostate cancer is the most common cancer in men worldwide and has a high mortality rate. The complex and heterogeneous development of prostate cancer has become a core obstacle in its treatment. Simultaneously, the issues of overtreatment in early-stage diagnosis, oligometastasis and dormant tumor recognition, as well as personalized drug utilization, are specific concerns that require attention in the clinical management of prostate cancer. Some typical genetic mutations have been shown to be associated with the initiation and progression of prostate cancer. However, single-omics studies usually cannot explain the causal relationship between molecular alterations and clinical phenotypes. Exploration from a systems genetics perspective is also lacking in this field, that is, the impact of gene networks, environmental factors, and even lifestyle behaviors on disease progression. Meanwhile, the current trend emphasizes the utilization of artificial intelligence (AI) and machine learning techniques to process extensive multidimensional data, including multi-omics data. These technologies unveil potential patterns, correlations, and insights related to diseases, thereby aiding interpretable clinical decision making and applications, namely intelligent medicine. Therefore, there is a pressing need to integrate multidimensional data for the identification of molecular subtypes, prediction of cancer progression and aggressiveness, and personalized treatment. In this review, we systematically elaborate the landscape from molecular mechanism discovery in prostate cancer to clinical translational applications. We discuss the molecular profiles and clinical manifestations of prostate cancer heterogeneity, the identification of different states of prostate cancer, and the corresponding precision medicine practices. Taking multi-omics fusion, systems genetics, and intelligent medicine as the main perspectives, we summarize the current research results and the knowledge-driven research path for prostate cancer.
Affiliation(s)
- Shumin Ren
- Department of Urology and Institutes for Systems Genetics, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Computer Science and Information Technology, University of A Coruña, 15071 A Coruña, Spain
- Jiakun Li
- Department of Urology and Institutes for Systems Genetics, West China Hospital, Sichuan University, Chengdu, 610041, China
- Julián Dorado
- Department of Computer Science and Information Technology, University of A Coruña, 15071 A Coruña, Spain
- Alejandro Sierra
- Department of Computer Science and Information Technology, University of A Coruña, 15071 A Coruña, Spain
- IKERDATA S.L., ZITEK, University of Basque Country UPV/EHU, Rectorate Building, 48940 Leioa, Spain
- Humbert González-Díaz
- Department of Computer Science and Information Technology, University of A Coruña, 15071 A Coruña, Spain
- IKERDATA S.L., ZITEK, University of Basque Country UPV/EHU, Rectorate Building, 48940 Leioa, Spain
- Aliuska Duardo
- Department of Computer Science and Information Technology, University of A Coruña, 15071 A Coruña, Spain
- IKERDATA S.L., ZITEK, University of Basque Country UPV/EHU, Rectorate Building, 48940 Leioa, Spain
- Bairong Shen
- Department of Urology and Institutes for Systems Genetics, West China Hospital, Sichuan University, Chengdu, 610041, China
5
Patkar S, Harmon S, Sesterhenn I, Lis R, Merino M, Young D, Brown GT, Greenfield KM, McGeeney JD, Elsamanoudi S, Tan SH, Schafer C, Jiang J, Petrovics G, Dobi A, Rentas FJ, Pinto PA, Chesnut GT, Choyke P, Turkbey B, Moncur JT. A selective CutMix approach improves generalizability of deep learning-based grading and risk assessment of prostate cancer. J Pathol Inform 2024; 15:100381. [PMID: 38953042; PMCID: PMC11215954; DOI: 10.1016/j.jpi.2024.100381]
Abstract
The Gleason score is an important predictor of prognosis in prostate cancer. However, its subjective nature can result in over- or under-grading. Our objective was to train an artificial intelligence (AI)-based algorithm to grade prostate cancer in specimens from patients who underwent radical prostatectomy (RP) and to assess the correlation of AI-estimated proportions of different Gleason patterns with biochemical recurrence-free survival (RFS), metastasis-free survival (MFS), and overall survival (OS). Training and validation of algorithms for cancer detection and grading were completed with three large datasets containing a total of 580 whole-mount prostate slides from 191 RP patients at two centers and 6218 annotated needle biopsy slides from the publicly available Prostate Cancer Grading Assessment dataset. A cancer detection model was trained using MobileNetV3 on 0.5 mm × 0.5 mm cancer areas (tiles) captured at 10× magnification. For cancer grading, a Gleason pattern detector was trained on tiles using a ResNet50 convolutional neural network and a selective CutMix training strategy involving a mixture of real and artificial examples. This strategy resulted in improved model generalizability in the test set compared with three different control experiments when evaluated on both needle biopsy slides and whole-mount prostate slides from different centers. In an additional test cohort of RP patients who were clinically followed over 30 years, quantitative Gleason pattern AI estimates achieved concordance indexes of 0.69, 0.72, and 0.64 for predicting RFS, MFS, and OS times, outperforming the control experiments and International Society of Urological Pathology (ISUP) grading by pathologists. Finally, unsupervised clustering of test RP patient specimens into low-, medium-, and high-risk groups based on AI-estimated proportions of each Gleason pattern resulted in significantly improved RFS and MFS stratification compared with ISUP grading.
In summary, deep learning-based quantitative Gleason scoring using a selective CutMix training strategy may improve prognostication after prostate cancer surgery.
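The core CutMix operation referenced above can be sketched simply: a random rectangle from one training tile is pasted into another, and the labels are mixed in proportion to the pasted area. This is a generic CutMix sketch only; the paper's "selective" variant additionally chooses which examples to mix, and that selection policy is not reproduced here.

```python
import random

def cutmix(tile_a, label_a, tile_b, label_b, rng=None):
    """Paste a random rectangle of tile_b into tile_a (standard CutMix).

    Tiles are H x W nested lists; labels are floats (e.g., a Gleason
    pattern proportion). The mixed label weights each source label by
    its share of the output area.
    """
    rng = rng or random.Random(0)
    h, w = len(tile_a), len(tile_a[0])
    bh, bw = rng.randint(1, h), rng.randint(1, w)       # box size
    y0, x0 = rng.randint(0, h - bh), rng.randint(0, w - bw)
    mixed = [row[:] for row in tile_a]                  # copy tile_a
    for y in range(y0, y0 + bh):
        mixed[y][x0:x0 + bw] = tile_b[y][x0:x0 + bw]
    lam = 1 - (bh * bw) / (h * w)                       # fraction of A kept
    return mixed, lam * label_a + (1 - lam) * label_b
```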
Affiliation(s)
- Sushant Patkar
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie Harmon
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Rosina Lis
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Maria Merino
- Laboratory of Pathology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Denise Young
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- G. Thomas Brown
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sally Elsamanoudi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Shyh-Han Tan
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Cara Schafer
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Jiji Jiang
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Gyorgy Petrovics
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Albert Dobi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Peter A. Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Gregory T. Chesnut
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
- Urology Service, Walter Reed National Military Medical Center, Bethesda, MD 20814, USA
- Peter Choyke
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Baris Turkbey
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Joel T. Moncur
- The Joint Pathology Center, Silver Spring, MD 20910, USA
6
Santa-Rosario JC, Gustafson EA, Sanabria Bellassai DE, Gustafson PE, de Socarraz M. Validation and three years of clinical experience in using an artificial intelligence algorithm as a second read system for prostate cancer diagnosis-real-world experience. J Pathol Inform 2024; 15:100378. [PMID: 38868487; PMCID: PMC11166872; DOI: 10.1016/j.jpi.2024.100378]
Abstract
Background Prostate cancer ranks as the most frequently diagnosed cancer in men in the USA, with significant mortality rates. Early detection is pivotal for optimal patient outcomes, providing increased treatment options and potentially less invasive interventions. There remain significant challenges in prostate cancer histopathology, including the potential for missed diagnoses due to pathologist variability and subjective interpretations. Methods To address these challenges, this study investigates the ability of artificial intelligence (AI) to enhance diagnostic accuracy. The Galen™ Prostate AI algorithm was validated on a cohort of Puerto Rican men to demonstrate its efficacy in cancer detection and Gleason grading. Subsequently, the AI algorithm was integrated into routine clinical practice during a 3-year period at a CLIA-certified precision pathology laboratory. Results The Galen™ Prostate AI algorithm showed a 96.7% (95% CI 95.6-97.8) specificity and a 96.6% (95% CI 93.3-98.8) sensitivity for prostate cancer detection and 82.1% specificity (95% CI 73.9-88.5) and 81.1% sensitivity (95% CI 73.7-87.2) for distinction of Gleason Grade Group 1 from Grade Group 2+. The subsequent AI integration into routine clinical use examined prostate cancer diagnoses on >122,000 slides and 9200 cases over 3 years and had an overall AI Impact™ factor of 1.8%. Conclusions The potential of AI to be a powerful, reliable, and effective diagnostic tool for pathologists is highlighted, while the AI Impact™ in a real-world setting demonstrates the ability of AI to standardize prostate cancer diagnosis at a high level of performance across pathologists.
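Sensitivity and specificity figures like those above come straight from confusion-matrix counts. A minimal sketch with hypothetical counts (the study's raw counts are not given in the abstract, and the normal-approximation interval below is one common choice; the paper does not state which interval it used):

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

def approx_ci(p, n, z=1.96):
    """Normal-approximation 95% CI for a proportion, clipped to [0, 1]."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts for illustration only.
sens, spec = sens_spec(tp=290, fn=10, tn=870, fp=30)
print(round(sens, 3), round(spec, 3))
```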
Affiliation(s)
- Juan Carlos Santa-Rosario
- CorePlus Servicios Clínicos y Patológicos, Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq. PR-190, Carolina, PR 00983, USA
- Erik A. Gustafson
- CorePlus Servicios Clínicos y Patológicos, Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq. PR-190, Carolina, PR 00983, USA
- Dario E. Sanabria Bellassai
- CorePlus Servicios Clínicos y Patológicos, Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq. PR-190, Carolina, PR 00983, USA
- Phillip E. Gustafson
- CorePlus Servicios Clínicos y Patológicos, Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq. PR-190, Carolina, PR 00983, USA
- Mariano de Socarraz
- CorePlus Servicios Clínicos y Patológicos, Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq. PR-190, Carolina, PR 00983, USA
7
Eigbire-Molen OJ, Cassol CA, Kenan DJ, Napier JO, Burdine LJ, Coley SM, Sharma SG. Smartphone-based machine learning model for real-time assessment of medical kidney biopsy. J Pathol Inform 2024; 15:100385. [PMID: 39071542; PMCID: PMC11283020; DOI: 10.1016/j.jpi.2024.100385]
Abstract
Background Kidney biopsy is the gold standard for diagnosing medical renal diseases, but the accuracy of the diagnosis greatly depends on the quality of the biopsy specimen, particularly the amount of renal cortex obtained. Inadequate biopsies, characterized by insufficient cortex or predominant medulla, can lead to inconclusive or incorrect diagnoses and repeat biopsies. Unfortunately, there has been a concerning increase in the rate of inadequate kidney biopsies, and not all medical centers have access to trained professionals who can assess biopsy adequacy in real time. In response to this challenge, we aimed to develop a machine learning model capable of assessing the cortex percentage of each biopsy pass using smartphone images of the kidney biopsy tissue at the time of biopsy. Methods A total of 747 kidney biopsy cores and corresponding smartphone macro images were collected from five unused deceased donor kidneys. Each core was imaged, formalin-fixed, sectioned, and stained with Periodic acid-Schiff (PAS) to determine cortex percentage. The fresh unfixed core images were captured using the macro camera on an iPhone 13 Pro. Two experienced renal pathologists independently reviewed the PAS-stained sections to determine the cortex percentage. For the purpose of this study, biopsies with less than 30% cortex were labeled as inadequate, while those with 30% or more cortex were classified as adequate. The dataset was divided into training (n=643), validation (n=30), and test (n=74) sets. Preprocessing steps involved converting iPhone High-Efficiency Image Container (HEIC) images to JPEG, normalization, and renal tissue segmentation using a U-Net deep learning model. Subsequently, a classification deep learning model was trained on the renal tissue region of interest and the corresponding class label. Results The deep learning model achieved an accuracy of 85% on the training data. On the independent test dataset, the model exhibited an accuracy of 81%. For inadequate samples in the test dataset, the model showed a sensitivity of 71%, suggesting its capability to identify cases with inadequate cortical representation. The area under the receiver operating characteristic curve (AUC-ROC) on the test dataset was 0.80. Conclusion We successfully developed and tested a machine learning model for classifying smartphone images of kidney biopsies as either adequate or inadequate, based on the amount of cortex determined by expert renal pathologists. The model's promising results suggest its potential as a smartphone application to assist in real-time assessment of kidney biopsy tissue, particularly in settings with limited access to trained personnel. Further refinements and validations are warranted to optimize the model's performance.
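The labeling rule used for ground truth above (cortex ≥30% is adequate) is simple enough to state as code; a minimal sketch with hypothetical cortex percentages:

```python
def adequacy_label(cortex_percent: float, threshold: float = 30.0) -> str:
    """Label a biopsy core by cortex content, per the study's 30% rule."""
    return "adequate" if cortex_percent >= threshold else "inadequate"

# Hypothetical per-core cortex percentages for illustration.
cores = [12.5, 30.0, 45.0, 28.0, 80.0]
labels = [adequacy_label(c) for c in cores]
print(labels)
# ['inadequate', 'adequate', 'adequate', 'inadequate', 'adequate']
```

The model itself predicts these labels from the segmented tissue region of a smartphone image; this sketch covers only the thresholding of the pathologist-determined cortex percentage.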
Affiliation(s)
- Clarissa A. Cassol
- Arkana Laboratories, 10810 Executive Center Dr. Suite 100, Little Rock, AR 72211, USA
- Daniel J. Kenan
- Arkana Laboratories, 10810 Executive Center Dr. Suite 100, Little Rock, AR 72211, USA
- Johnathan O.H. Napier
- Arkana Laboratories, 10810 Executive Center Dr. Suite 100, Little Rock, AR 72211, USA
- Lyle J. Burdine
- Department of Surgery, University of Arkansas for Medical Sciences, Little Rock, AR 72205, USA
- Shana M. Coley
- Arkana Laboratories, 10810 Executive Center Dr. Suite 100, Little Rock, AR 72211, USA
- Shree G. Sharma
- Arkana Laboratories, 10810 Executive Center Dr. Suite 100, Little Rock, AR 72211, USA
8
Budginaite E, Magee DR, Kloft M, Woodruff HC, Grabsch HI. Computational methods for metastasis detection in lymph nodes and characterization of the metastasis-free lymph node microarchitecture: A systematic-narrative hybrid review. J Pathol Inform 2024; 15:100367. [PMID: 38455864; PMCID: PMC10918266; DOI: 10.1016/j.jpi.2024.100367]
Abstract
Background Histological examination of tumor-draining lymph nodes (LNs) plays a vital role in cancer staging and prognostication. However, once an LN is classified as metastasis-free, no further investigation is performed; thus, potentially clinically relevant information detectable in tumor-free LNs is currently not captured. Objective To systematically study and critically assess methods for the analysis of digitized histological LN images described in published research. Methods A systematic search was conducted in several public databases up to December 2023 using relevant search terms. Studies using brightfield light microscopy images of hematoxylin and eosin or immunohistochemically stained LN tissue sections aiming to detect and/or segment LNs, their compartments, or metastatic tumor using artificial intelligence (AI) were included. Dataset, AI methodology, cancer type, and study objective were compared between articles. Results A total of 7201 articles were collected, and 73 articles remained for detailed analyses after article screening. Of the remaining articles, 86% aimed at LN metastasis identification, 8% aimed at LN compartment segmentation, and the remainder focused on LN contouring. Furthermore, 78% of articles used patch classification and 22% used pixel segmentation models for analyses. Five of the six studies (83%) of metastasis-free LNs were performed on publicly unavailable datasets, making quantitative comparison between articles impossible. Conclusions Multi-scale models mimicking multiple microscopy zooms show promise for computational LN analysis. Large-scale datasets are needed to establish the clinical relevance of analyzing metastasis-free LNs in detail. Further research is needed to identify clinically interpretable metrics for LN compartment characterization.
Affiliation(s)
- Elzbieta Budginaite
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Maximilian Kloft
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Internal Medicine, Justus-Liebig-University, Giessen, Germany
- Henry C. Woodruff
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Heike I. Grabsch
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James’s, University of Leeds, Leeds, UK
9
Kong F, Wang X, Xiang J, Yang S, Wang X, Yue M, Zhang J, Zhao J, Han X, Dong Y, Zhu B, Wang F, Liu Y. Federated attention consistent learning models for prostate cancer diagnosis and Gleason grading. Comput Struct Biotechnol J 2024; 23:1439-1449. [PMID: 38623561; PMCID: PMC11016961; DOI: 10.1016/j.csbj.2024.03.028]
Abstract
Artificial intelligence (AI) holds significant promise in transforming medical imaging, enhancing diagnostics, and refining treatment strategies. However, the reliance on extensive multicenter datasets for training AI models poses challenges due to privacy concerns. Federated learning provides a solution by facilitating collaborative model training across multiple centers without sharing raw data. This study introduces a federated attention-consistent learning (FACL) framework to address challenges associated with large-scale pathological images and data heterogeneity. FACL enhances model generalization by maximizing attention consistency between local clients and the server model. To ensure privacy and validate robustness, we incorporated differential privacy by introducing noise during parameter transfer. We assessed the effectiveness of FACL in cancer diagnosis and Gleason grading tasks using 19,461 whole-slide images of prostate cancer from multiple centers. In the diagnosis task, FACL achieved an area under the curve (AUC) of 0.9718, outperforming seven centers with an average AUC of 0.9499 when categories are relatively balanced. For the Gleason grading task, FACL attained a Kappa score of 0.8463, surpassing the average Kappa score of 0.7379 from six centers. In conclusion, FACL offers a robust, accurate, and cost-effective AI training model for prostate cancer pathology while maintaining effective data safeguards.
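At the heart of such federated training is server-side aggregation of client model parameters without sharing raw data. A minimal FedAvg-style sketch (illustrative only: FACL's attention-consistency term and differential-privacy noise are not modeled here, and the flat-list parameters are a stand-in for real model weights):

```python
def fedavg(client_weights, client_sizes):
    """Weighted parameter averaging across clients (FedAvg-style).

    Each client's parameters are a flat list of floats; each client's
    contribution is weighted by its local dataset size.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical clients holding 100 and 300 local slides.
server = fedavg([[1.0, 0.0], [0.0, 1.0]], [100, 300])
print(server)  # [0.25, 0.75]
```

In a privacy-preserving variant like the one described above, clients would add calibrated noise to their parameters before transfer, and the server would additionally penalize divergence between client and server attention maps.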
Affiliation(s)
- Fei Kong
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiyue Wang
- College of Biomedical Engineering, Sichuan University, Chengdu, 610065, China
- Sen Yang
- AI Lab, Tencent, Shenzhen, 518057, China
- Xinran Wang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
- Meng Yue
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
- Jun Zhang
- AI Lab, Tencent, Shenzhen, 518057, China
- Junhan Zhao
- Massachusetts General Hospital, Boston, MA, 02114, United States
- Harvard T.H. Chan School of Public Health, Boston, MA, 02115, United States
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, 02115, United States
- Xiao Han
- AI Lab, Tencent, Shenzhen, 518057, China
- Yuhan Dong
- Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Biyue Zhu
- Department of Pharmacy, Children's Hospital of Chongqing Medical University, Chongqing, 400014, China
- Fang Wang
- Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264000, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, 050035, China
10
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608 PMCID: PMC10900832 DOI: 10.1016/j.jpi.2023.100357] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2023] [Revised: 12/21/2023] [Accepted: 12/23/2023] [Indexed: 03/02/2024] Open
Abstract
Computational Pathology (CPath) is an interdisciplinary science that develops computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
11
Hua S, Yan F, Shen T, Ma L, Zhang X. PathoDuet: Foundation models for pathological slide analysis of H&E and IHC stains. Med Image Anal 2024; 97:103289. [PMID: 39106763 DOI: 10.1016/j.media.2024.103289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Revised: 07/19/2024] [Accepted: 07/24/2024] [Indexed: 08/09/2024]
Abstract
Large amounts of digitized histopathological data suggest a promising future for developing pathological foundation models via self-supervised learning methods. Foundation models pretrained with these methods serve as a good basis for downstream tasks. However, the gap between natural and histopathological images hinders the direct application of existing methods. In this work, we present PathoDuet, a series of models pretrained on histopathological images, together with a new self-supervised learning framework for histopathology. The framework features a newly introduced pretext token and subsequent task raisers to explicitly exploit relations between images, such as multiple magnifications and multiple stains. On this basis, two pretext tasks, cross-scale positioning and cross-stain transferring, are designed to pretrain the model on Hematoxylin and Eosin (H&E) images and transfer it to immunohistochemistry (IHC) images, respectively. To validate the efficacy of our models, we evaluate their performance on a wide variety of downstream tasks, including patch-level colorectal cancer subtyping and whole-slide image (WSI)-level classification in the H&E field, together with expression-level prediction of IHC markers, tumor identification, and slide-level qualitative analysis in the IHC field. The experimental results show the superiority of our models on most tasks and the efficacy of the proposed pretext tasks. The code and models are available at https://github.com/openmedlab/PathoDuet.
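As a concrete illustration of what a cross-scale positioning pretext task consumes, the pair construction can be sketched as below. This is a hypothetical NumPy helper, not the PathoDuet code; the model itself would regress the normalized offset from the (global patch, local crop) pair.

```python
import numpy as np

def cross_scale_pair(wsi_region, crop=64, rng=None):
    """Cut a high-magnification sub-crop out of a larger patch and return
    the crop together with its normalized (row, col) offset, which serves
    as a self-supervised regression target — no manual labels needed."""
    rng = rng or np.random.default_rng()
    h, w = wsi_region.shape[:2]
    r = int(rng.integers(0, h - crop + 1))
    c = int(rng.integers(0, w - crop + 1))
    local = wsi_region[r:r + crop, c:c + crop]
    target = (r / max(h - crop, 1), c / max(w - crop, 1))  # in [0, 1]
    return local, target
```

Because the target comes from the sampling process itself, arbitrarily many training pairs can be generated from unlabeled slides.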
Affiliation(s)
- Shengyi Hua
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Fang Yan
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
- Tianle Shen
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Lei Ma
- National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing 100871, China
- Xiaofan Zhang
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
12
Marini N, Marchesin S, Wodzinski M, Caputo A, Podareanu D, Guevara BC, Boytcheva S, Vatrano S, Fraggetta F, Ciompi F, Silvello G, Müller H, Atzori M. Multimodal representations of biomedical knowledge from limited training whole slide images and reports using deep learning. Med Image Anal 2024; 97:103303. [PMID: 39154617 DOI: 10.1016/j.media.2024.103303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 08/08/2024] [Accepted: 08/09/2024] [Indexed: 08/20/2024]
Abstract
The increasing availability of biomedical data creates valuable resources for developing new deep learning algorithms to support experts, especially in domains where collecting large volumes of annotated data is not trivial. Biomedical data include several modalities containing complementary information, such as medical images and reports: images are often large and encode low-level information, while reports include a summarized, high-level description of the findings, often concerning only a small part of the image. However, only a few methods can effectively link the visual content of images with the textual content of reports, preventing medical specialists from properly benefiting from the recent opportunities offered by deep learning models. This paper introduces a multimodal architecture that creates a robust biomedical data representation by encoding fine-grained text representations within image embeddings. The architecture aims to tackle data scarcity (combining supervised and self-supervised learning) and to create multimodal biomedical ontologies. The architecture is trained on over 6,000 colon whole-slide images (WSIs), paired with the corresponding reports, collected from two digital pathology workflows. The evaluation of the multimodal architecture involves three tasks: WSI classification (on data from the pathology workflows and from public repositories), multimodal data retrieval, and linking between textual and visual concepts. Notably, the latter two tasks are available by architectural design without further training, showing that the multimodal architecture can be adopted as a backbone to solve particular tasks. The multimodal data representation outperforms the unimodal one on the classification of colon WSIs and halves the data needed to reach accurate performance, reducing the computational power required and thus the carbon footprint. The combination of images and reports through self-supervised algorithms makes it possible to mine databases without new expert annotations, extracting new information. In particular, the multimodal visual ontology, linking semantic concepts to images, may pave the way to advancements in medicine and biomedical analysis domains, not limited to histopathology.
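One standard way to realize the image-report linking described above is a symmetric contrastive objective over paired embeddings. The sketch below is a generic CLIP-style loss in NumPy, offered as an assumption about the family of objective, not this paper's exact formulation.

```python
import numpy as np

def paired_contrastive_loss(img_emb, txt_emb, temp=0.07):
    """Pull each WSI embedding toward its own report embedding and push it
    away from the other reports in the batch (and vice versa)."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temp            # pairwise cosine similarities
    idx = np.arange(len(img))

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()             # matched pairs = diagonal

    return (xent(logits) + xent(logits.T)) / 2    # image-to-text + text-to-image
```

Correctly matched image-report pairs yield a lower loss than shuffled pairings, which is exactly the signal that lets the model learn the linking without expert annotations.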
Affiliation(s)
- Niccolò Marini
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Stefano Marchesin
- Department of Information Engineering, University of Padua, Padua, Italy
- Marek Wodzinski
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Measurement and Electronics, AGH University of Kraków, Krakow, Poland
- Alessandro Caputo
- Department of Pathology, Ruggi University Hospital, Salerno, Italy; Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
- Svetla Boytcheva
- Ontotext, Sofia, Bulgaria; Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria
- Simona Vatrano
- Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy
- Filippo Fraggetta
- Pathology Unit, Gravina Hospital Caltagirone ASP, Catania, Italy; Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Gianmaria Silvello
- Department of Information Engineering, University of Padua, Padua, Italy
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Medical Faculty, University of Geneva, 1211 Geneva, Switzerland
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Neurosciences, University of Padua, Padua, Italy
13
Zhao R, Xi Z, Liu H, Jian X, Zhang J, Zhang Z, Li S. MIST: Multi-instance selective transformer for histopathological subtype prediction. Med Image Anal 2024; 97:103251. [PMID: 38954942 DOI: 10.1016/j.media.2024.103251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 01/24/2024] [Accepted: 06/21/2024] [Indexed: 07/04/2024]
Abstract
Accurate histopathological subtype prediction is clinically significant for cancer diagnosis and tumor microenvironment analysis. However, it is a challenging task due to (1) instance-level discrimination of histopathological images, (2) low inter-class and large intra-class variances among histopathological images in shape and chromatin texture, and (3) heterogeneous feature distributions across images. In this paper, we formulate subtype prediction as fine-grained representation learning and propose a novel multi-instance selective transformer (MIST) framework that effectively achieves accurate histopathological subtype prediction. MIST designs an effective selective self-attention mechanism with multiple instance learning (MIL) and a vision transformer (ViT) to adaptively identify informative instances for fine-grained representation. Innovatively, MIST entrusts each instance with a different contribution to the bag representation based on its interactions with other instances and with the bag. Specifically, a SiT module with selective multi-head self-attention (S-MSA) is designed to identify representative instances by modeling instance-to-instance interactions. In contrast, a MIFD module with an information bottleneck is proposed to learn a discriminative fine-grained representation for histopathological images by modeling instance-to-bag interactions with the selected instances. Substantial experiments on five clinical benchmarks demonstrate that MIST achieves accurate histopathological subtype prediction and obtains state-of-the-art performance with an accuracy of 0.936. MIST shows great potential for fine-grained medical image analysis, such as histopathological subtype prediction, in clinical applications.
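The selection idea — score every instance, keep the most representative ones, and pool them into a bag embedding — can be reduced to a few lines. This toy NumPy pooling is only a sketch of that mechanism under simplifying assumptions, not the S-MSA or MIFD modules themselves.

```python
import numpy as np

def selective_mil_pooling(instances, keep=4):
    """Score instances by affinity to the bag mean (a crude proxy for
    instance-to-bag interaction), keep the top-`keep`, and return their
    softmax-weighted average as the bag representation."""
    scores = instances @ instances.mean(axis=0)   # one score per instance
    top = np.argsort(scores)[-keep:]              # representative instances
    w = np.exp(scores[top] - scores[top].max())   # stable softmax weights
    w /= w.sum()
    return (w[:, None] * instances[top]).sum(axis=0)
```

The point of the selection step is that uninformative patches (the vast majority in a WSI bag) contribute nothing to the bag embedding, rather than diluting it as plain mean pooling would.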
Affiliation(s)
- Rongchang Zhao
- School of Computer Science and Engineering, Central South University, Changsha, China
- Zijun Xi
- School of Computer Science and Engineering, Central South University, Changsha, China
- Huanchi Liu
- School of Computer Science and Engineering, Central South University, Changsha, China
- Xiangkun Jian
- School of Computer Science and Engineering, Central South University, Changsha, China
- Jian Zhang
- School of Computer Science and Engineering, Central South University, Changsha, China
- Zijian Zhang
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shuo Li
- School of Computer Science and Engineering, Central South University, Changsha, China; Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA
14
Zhang Z, Yin W, Wang S, Zheng X, Dong S. MBFusion: Multi-modal balanced fusion and multi-task learning for cancer diagnosis and prognosis. Comput Biol Med 2024; 181:109042. [PMID: 39180856 DOI: 10.1016/j.compbiomed.2024.109042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2024] [Revised: 07/11/2024] [Accepted: 08/17/2024] [Indexed: 08/27/2024]
Abstract
Pathological images and molecular omics provide important information for predicting diagnosis and prognosis. These two kinds of heterogeneous modal data contain complementary information, and their effective fusion can better reveal the complex mechanisms of cancer. However, because of differing representation learning methods, the expressive strength of different modalities varies greatly across tasks, so many multimodal fusions do not achieve the best results. In this paper, MBFusion is proposed to achieve multiple tasks, such as prediction of diagnosis and prognosis, through multi-modal balanced fusion. The MBFusion framework uses two specially constructed graph convolutional networks to extract features from molecular omics data, and uses ResNet to extract features from pathological image data, retaining important deep features through attention and clustering. This effectively improves both feature representations, making their expressive ability balanced and comparable. The features of the two modalities are then fused through a cross-attention Transformer, and the fused features are used to learn cancer subtype classification and survival analysis jointly via multi-task learning. MBFusion is compared with other state-of-the-art methods on two public cancer datasets and shows an improvement of up to 10.1% across three evaluation metrics. In ablation experiments, MBFusion explores the contribution of each modality and each framework module to performance. Furthermore, the interpretability of MBFusion is explained in detail to show its application value.
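The fusion step — one modality querying the other through cross-attention — reduces to a small computation. The single-head NumPy sketch below omits the learned projections and multi-head logic of a real Transformer layer; it is an illustration of the operation, not MBFusion's code.

```python
import numpy as np

def cross_attention(query_feats, context_feats):
    """Each query token (e.g. an omics feature) attends over the context
    tokens (e.g. pathology-image features) and returns a fused
    representation of the same width as the context features."""
    d = query_feats.shape[-1]
    logits = query_feats @ context_feats.T / np.sqrt(d)  # scaled dot-product
    logits -= logits.max(axis=-1, keepdims=True)         # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)             # rows sum to 1
    return attn @ context_feats
```

Running the same operation in both directions (omics queries images, images query omics) is one common way to keep the two modalities' contributions balanced.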
Affiliation(s)
- Ziye Zhang
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
- Wendong Yin
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
- Shijin Wang
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
- Xiaorou Zheng
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
- Shoubin Dong
- Guangdong Provincial Key Laboratory of Multimodal Big Data Intelligent Analysis, School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510641, Guangdong, China
15
Kludt C, Wang Y, Ahmad W, Bychkov A, Fukuoka J, Gaisa N, Kühnel M, Jonigk D, Pryalukhin A, Mairinger F, Klein F, Schultheis AM, Seper A, Hulla W, Brägelmann J, Michels S, Klein S, Quaas A, Büttner R, Tolkach Y. Next-generation lung cancer pathology: Development and validation of diagnostic and prognostic algorithms. Cell Rep Med 2024; 5:101697. [PMID: 39178857 DOI: 10.1016/j.xcrm.2024.101697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2024] [Revised: 06/25/2024] [Accepted: 07/31/2024] [Indexed: 08/26/2024]
Abstract
Non-small cell lung cancer (NSCLC) is one of the most common malignant tumors. In this study, we develop a clinically useful computational pathology platform for NSCLC that can serve as a foundation for multiple downstream applications and provide immediate value for optimizing and individualizing patient care. We train the primary multi-class tissue segmentation algorithm on a substantial, high-quality, manually annotated dataset of whole-slide images of lung adenocarcinoma and squamous cell carcinoma. We investigate two downstream applications. An NSCLC subtyping algorithm is trained and validated using a large, multi-institutional (n = 6), multi-scanner (n = 5), international cohort of NSCLC cases (4,097 slides/1,527 patients). Moreover, we develop four AI-derived, fully explainable, quantitative prognostic parameters (based on tertiary lymphoid structure and necrosis assessment) and validate them for different clinical endpoints. The computational platform enables high-precision, quantitative analysis of H&E-stained slides. The developed prognostic parameters facilitate robust and independent risk stratification of patients with NSCLC.
Affiliation(s)
- Carina Kludt
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany
- Yuan Wang
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany
- Waleed Ahmad
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany
- Andrey Bychkov
- Department of Pathology, Kameda Medical Center, Kamogawa 296-0041, Japan; Department of Pathology Informatics, Nagasaki University, Nagasaki 852-8131, Japan
- Junya Fukuoka
- Department of Pathology, Kameda Medical Center, Kamogawa 296-0041, Japan; Department of Pathology Informatics, Nagasaki University, Nagasaki 852-8131, Japan
- Nadine Gaisa
- Institute of Pathology, University Hospital Aachen, 52074 Aachen, Germany; Institute of Pathology, University Hospital Ulm, 89081 Ulm, Germany
- Mark Kühnel
- Institute of Pathology, University Hospital Aachen, 52074 Aachen, Germany
- Danny Jonigk
- Institute of Pathology, University Hospital Aachen, 52074 Aachen, Germany; German Center for Lung Research, DZL, BREATH, 30625 Hanover, Germany
- Alexey Pryalukhin
- Institute of Clinical Pathology and Molecular Pathology, Wiener Neustadt State Hospital, 2700 Wiener Neustadt, Austria
- Fabian Mairinger
- Institute of Pathology, University Hospital Essen, 45147 Essen, Germany
- Franziska Klein
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany
- Anne Maria Schultheis
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany; Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Alexander Seper
- Institute of Clinical Pathology and Molecular Pathology, Wiener Neustadt State Hospital, 2700 Wiener Neustadt, Austria; Danube Private University, 3500 Krems an der Donau, Austria
- Wolfgang Hulla
- Institute of Clinical Pathology and Molecular Pathology, Wiener Neustadt State Hospital, 2700 Wiener Neustadt, Austria
- Johannes Brägelmann
- University of Cologne, Faculty of Medicine and University Hospital Cologne, Department of Translational Genomics, 50937 Cologne, Germany; Mildred Scheel School of Oncology, Faculty of Medicine and University Hospital Cologne, University of Cologne, 50937 Cologne, Germany; University of Cologne, Faculty of Medicine and University Hospital Cologne, Center for Molecular Medicine Cologne, 50937 Cologne, Germany
- Sebastian Michels
- University of Cologne, Faculty of Medicine and University Hospital of Cologne, Lung Cancer Group Cologne, Department I for Internal Medicine and Center for Integrated Oncology Aachen Bonn Cologne Dusseldorf, 50937 Cologne, Germany
- Sebastian Klein
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany; Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Alexander Quaas
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany; Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Reinhard Büttner
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany; Medical Faculty, University of Cologne, 50937 Cologne, Germany
- Yuri Tolkach
- Institute of Pathology, University Hospital Cologne, 50937 Cologne, Germany; Medical Faculty, University of Cologne, 50937 Cologne, Germany
16
Khalili N, Ciompi F. Scaling data toward pan-cancer foundation models. Trends Cancer 2024:S2405-8033(24)00189-4. [PMID: 39266446 DOI: 10.1016/j.trecan.2024.08.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2024] [Revised: 08/29/2024] [Accepted: 08/30/2024] [Indexed: 09/14/2024]
Abstract
Recent advances in artificial intelligence (AI) have revolutionized computational pathology (CPath), particularly through deep learning (DL) and neural networks (NNs). In a recent study, Vorontsov et al. introduced Virchow, a new foundation model (FM) for CPath, which has shown promising results in cancer detection and biomarker prediction.
Affiliation(s)
- Nadieh Khalili
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
17
Syrykh C, DI Proietto V, Brion E, Copie-Bergman C, Jardin F, Dartigues P, Gaulard P, Jo Molina T, Briere J, Oberic L, Haioun C, Tilly H, Maussion C, Morel M, Schiratti JB, Laurent C. MYC Rearrangement Prediction from LYSA Whole Slide Images in Large B-cell Lymphoma: A Multi-centric Validation of Self-supervised Deep Learning Models. Mod Pathol 2024:100610. [PMID: 39265953 DOI: 10.1016/j.modpat.2024.100610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Revised: 07/25/2024] [Accepted: 09/03/2024] [Indexed: 09/14/2024]
Abstract
Large B-cell lymphoma (LBCL) is a heterogeneous lymphoid malignancy in which MYC gene rearrangement (MYC-R) is associated with a poor prognosis, prompting the recommendation for more intensive treatment. MYC-R detection relies on the fluorescence in situ hybridization (FISH) method, which is time-consuming, expensive, and not available in all laboratories. Automating MYC-R detection on hematoxylin and eosin (HE)-stained whole-slide images (WSIs) of LBCL would decrease the need for costly molecular testing and improve pathologists' productivity. We developed an interpretable deep learning (DL) algorithm to detect MYC-R, building on recent advances in self-supervised learning and providing an extensive comparison of seven feature extractors and six multiple instance learning models. Four multicentric cohorts, including 1,247 LBCL patients, were used for training and validation. The best DL model reached an average ROC AUC score of 81.9% during cross-validation on the largest LBCL cohort, and ROC AUC scores ranging from 62.2% to 74.5% when evaluated on other unseen cohorts. In addition, we demonstrated that if this model were used as a pre-screening tool (with a false-negative rate of 0%), FISH testing could be avoided in 35% of cases. This work demonstrates the feasibility of developing a medical device to efficiently detect MYC gene rearrangement on HE WSIs in daily practice.
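The triage arithmetic behind pre-screening at a 0% false-negative rate is simple to make explicit: set the threshold at the lowest score among confirmed MYC-R-positive cases, so no positive is missed, and count how many cases fall below it. A minimal sketch with hypothetical score data, not the authors' evaluation code:

```python
def prescreen_threshold(scores, labels):
    """Highest threshold with zero false negatives: every positive case
    (label 1) keeps a score >= thr, and cases scoring below thr could
    skip confirmatory FISH testing."""
    thr = min(s for s, y in zip(scores, labels) if y == 1)
    avoided = sum(s < thr for s in scores) / len(scores)
    return thr, avoided
```

For example, `prescreen_threshold([0.9, 0.8, 0.3, 0.2, 0.1], [1, 1, 0, 0, 0])` returns a threshold of 0.8 with 60% of FISH tests avoided; on a held-out cohort the avoided fraction would be the figure the abstract reports as 35%.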
Affiliation(s)
- Christiane Copie-Bergman
- LYSA (The Lymphoma Study Association) and LYSARC (The Lymphoma Academic Research Organisation), Pierre-Bénite, France
- Fabrice Jardin
- Department of Hematology and U1245, Henri Becquerel Center, IRIB, Normandy University, Rouen, France
- Peggy Dartigues
- Department of Pathology, Gustave Roussy, Université Paris-Saclay, Villejuif, France
- Philippe Gaulard
- Department of Pathology, University Hospital Henri Mondor, Assistance Publique-Hôpitaux de Paris (AP-HP), Créteil, France; Mondor Institute for Biomedical Research, INSERM U955, Faculty of Medicine, University of Paris-Est Créteil, Créteil, France
- Thierry Jo Molina
- Department of Pathology, Necker Enfants Malades Hospital, Université Paris Descartes, Assistance Publique-Hôpitaux de Paris (AP-HP), Paris, France; Institut Imagine, Unité INSERM 1163, Paris, France
- Josette Briere
- Department of Hematology, Hôpital Saint-Louis, Assistance Publique-Hôpitaux de Paris (AP-HP), Université Paris Diderot, Paris, France
- Lucie Oberic
- Department of Hematology, IUCT Oncopole, Toulouse, France
- Corine Haioun
- Department of Hematology, University Hospital Henri Mondor, Assistance Publique-Hôpitaux de Paris (AP-HP), Créteil, France
- Hervé Tilly
- Department of Hematology and U1245, Henri Becquerel Center, IRIB, Normandy University, Rouen, France
- Camille Laurent
- Department of Pathology, IUCT Oncopole, Toulouse, France; INSERM, U1037, Research Center In Cancer of Toulouse, laboratoire d'excellence TOUCAN, Toulouse, France
18
Agosti V, Munari E. Histopathological evaluation and grading for prostate cancer: current issues and crucial aspects. Asian J Androl 2024:00129336-990000000-00244. [PMID: 39254403 DOI: 10.4103/aja202440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2023] [Accepted: 06/05/2024] [Indexed: 09/11/2024] Open
Abstract
A crucial aspect of prostate cancer grading, especially in low- and intermediate-risk cancer, is the accurate identification of Gleason pattern 4 glands, which includes ill-formed or fused glands. However, there is notable inconsistency among pathologists in recognizing these glands, especially when mixed with pattern 3 glands. This inconsistency has significant implications for patient management and treatment decisions. Conversely, the recognition of glomeruloid and cribriform architecture has shown higher reproducibility. Cribriform architecture, in particular, has been linked to the worst prognosis among pattern 4 subtypes. Intraductal carcinoma of the prostate (IDC-P) is also associated with high-grade cancer and poor prognosis. Accurate identification, classification, and tumor size evaluation by pathologists are vital for determining patient treatment. This review emphasizes the importance of prostate cancer grading, highlighting challenges like distinguishing between pattern 3 and pattern 4 and the prognostic implications of cribriform architecture and intraductal proliferations. It also addresses the inherent grading limitations due to interobserver variability and explores the potential of computational pathology to enhance pathologist accuracy and consistency.
Affiliation(s)
- Vittorio Agosti
- Section of Pathology, Department of Molecular and Translational Medicine, University of Brescia, Brescia 25121, Italy
- Enrico Munari
- Department of Pathology and Diagnostics, University and Hospital Trust of Verona, Verona 37126, Italy
19
Wang CW, Firdi NP, Chu TC, Faiz MFI, Iqbal MZ, Li Y, Yang B, Mallya M, Bashashati A, Li F, Wang H, Lu M, Xia Y, Chao TK. ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images. Med Image Anal 2024; 99:103342. [PMID: 39260034] [DOI: 10.1016/j.media.2024.103342]
Abstract
Ovarian cancer, predominantly epithelial ovarian cancer (EOC), is a global health concern due to its high mortality rate. Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of patients with advanced disease experience recurrence. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage; it has recently been approved by the FDA for advanced ovarian cancer in combination with chemotherapy. Unfortunately, bevacizumab may also induce harmful adverse effects, such as hypertension, bleeding, arterial thromboembolism, poor wound healing, and gastrointestinal perforation. Given its high cost and unwanted toxicities, there is an urgent need for predictive methods to identify who could benefit from bevacizumab. Of the 18 approved requests from 5 countries, 6 teams developed fully automated systems trained on 284 whole-section WSIs and submitted predictions on a test set of 180 tissue core images, with the corresponding ground-truth labels kept private. This paper summarizes the 5 qualified methods successfully submitted to the international challenge of automated prediction of treatment effectiveness in ovarian cancer using histopathologic images (ATEC23), held at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023, and evaluates them in comparison with 5 state-of-the-art deep learning approaches. This study further assesses the effectiveness of the presented prediction models as indicators for patient selection using both Cox proportional hazards analysis and Kaplan-Meier survival analysis. A robust and cost-effective deep learning pipeline for digital histopathology tasks has become a necessity within the medical community.
This challenge highlights the limitations of current multiple instance learning (MIL) methods, particularly for prognosis-based classification tasks, and the importance of deep convolutional neural networks (DCNNs) such as Inception, whose nonlinear convolutional modules at various resolutions facilitate processing data at multiple scales, a key feature for pathology-related prediction tasks. This further suggests feature reuse at various scales as a direction for improving future models. In addition, this paper releases the labels of the testing set and provides applications for future research in precision oncology to predict ovarian cancer treatment effectiveness and facilitate patient selection via histopathological images.
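The Kaplan-Meier survival analysis mentioned in this abstract is a standard product-limit estimate. As an illustrative sketch only (not the challenge's evaluation code, which is not reproduced here), a minimal pure-Python Kaplan-Meier estimator can be written as:

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.

    times  : time to event or censoring for each patient
    events : 1 if the event occurred, 0 if the patient was censored
    Returns a list of (time, survival_probability) steps at event times.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # pool all observations tied at time t
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
    return curve
```

Splitting patients into predicted responders and non-responders and comparing their two curves is the patient-selection use the paper describes.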
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan.
- Nabila Puspita Firdi
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Tzu-Chiao Chu
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Bo Yang
- AIFUTURE Lab, Beijing, China
- Mayur Mallya
- AIM Lab, Biomedical Research Center, University of British Columbia, Vancouver, Canada
- Ali Bashashati
- AIM Lab, Biomedical Research Center, University of British Columbia, Vancouver, Canada
- Fei Li
- Shenzhen University, Shenzhen, China
- Mengkang Lu
- Northwestern Polytechnical University, Shaanxi, China
- Yong Xia
- Northwestern Polytechnical University, Shaanxi, China
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan; Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
20
Wang X, Zhao J, Marostica E, Yuan W, Jin J, Zhang J, Li R, Tang H, Wang K, Li Y, Wang F, Peng Y, Zhu J, Zhang J, Jackson CR, Zhang J, Dillon D, Lin NU, Sholl L, Denize T, Meredith D, Ligon KL, Signoretti S, Ogino S, Golden JA, Nasrallah MP, Han X, Yang S, Yu KH. A pathology foundation model for cancer diagnosis and prognosis prediction. Nature 2024:10.1038/s41586-024-07894-z. [PMID: 39232164] [DOI: 10.1038/s41586-024-07894-z]
Abstract
Histopathology image evaluation is indispensable for cancer diagnoses and subtype classification. Standard artificial intelligence methods for histopathology image analyses have focused on optimizing specialized models for each diagnostic task [1,2]. Although such methods have achieved some success, they often have limited generalizability to images generated by different digitization protocols or samples collected from different populations [3]. Here, to address this challenge, we devised the Clinical Histopathology Imaging Evaluation Foundation (CHIEF) model, a general-purpose weakly supervised machine learning framework to extract pathology imaging features for systematic cancer evaluation. CHIEF leverages two complementary pretraining methods to extract diverse pathology representations: unsupervised pretraining for tile-level feature identification and weakly supervised pretraining for whole-slide pattern recognition. We developed CHIEF using 60,530 whole-slide images spanning 19 anatomical sites. Through pretraining on 44 terabytes of high-resolution pathology imaging datasets, CHIEF extracted microscopic representations useful for cancer cell detection, tumour origin identification, molecular profile characterization and prognostic prediction. We successfully validated CHIEF using 19,491 whole-slide images from 32 independent slide sets collected from 24 hospitals and cohorts internationally. Overall, CHIEF outperformed the state-of-the-art deep learning methods by up to 36.1%, showing its ability to address domain shifts observed in samples from diverse populations and processed by different slide preparation methods. CHIEF provides a generalizable foundation for efficient digital pathology evaluation for patients with cancer.
Affiliation(s)
- Xiyue Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Eliana Marostica
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Division of Health Sciences and Technology, Harvard-Massachusetts Institute of Technology, Boston, MA, USA
- Wei Yuan
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Jietian Jin
- Department of Pathology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jiayu Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Ruijiang Li
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Hongping Tang
- Department of Pathology, Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Kanran Wang
- Department of Radiation Oncology, Chongqing University Cancer Hospital, Chongqing, China
- Yu Li
- Department of Pathology, Chongqing University Cancer Hospital, Chongqing, China
- Fang Wang
- Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, China
- Yulong Peng
- Department of Pathology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Junyou Zhu
- Department of Burn, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jing Zhang
- College of Biomedical Engineering, Sichuan University, Chengdu, China
- Christopher R Jackson
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Pathology and Laboratory Medicine, Pennsylvania State University, Hummelstown, PA, USA
- Department of Pathology, Massachusetts General Hospital, Boston, MA, USA
- Deborah Dillon
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Nancy U Lin
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Lynette Sholl
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Pathology, Dana-Farber Cancer Institute, Boston, MA, USA
- Thomas Denize
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Pathology, Dana-Farber Cancer Institute, Boston, MA, USA
- David Meredith
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Keith L Ligon
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Pathology, Dana-Farber Cancer Institute, Boston, MA, USA
- Sabina Signoretti
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Pathology, Dana-Farber Cancer Institute, Boston, MA, USA
- Shuji Ogino
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Jeffrey A Golden
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Department of Pathology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- MacLean P Nasrallah
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Sen Yang
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Kun-Hsing Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
21
Cazzaniga G, Del Carro F, Eccher A, Becker JU, Gambaro G, Rossi M, Pieruzzi F, Fraggetta F, Pagni F, L'Imperio V. Improving the Annotation Process in Computational Pathology: A Pilot Study with Manual and Semi-automated Approaches on Consumer and Medical Grade Devices. J Imaging Inform Med 2024:10.1007/s10278-024-01248-x. [PMID: 39231887] [DOI: 10.1007/s10278-024-01248-x]
Abstract
The development of reliable artificial intelligence (AI) algorithms in pathology often depends on ground truth provided by annotation of whole slide images (WSI), a time-consuming and operator-dependent process. A comparative analysis of different annotation approaches was performed to streamline this process. Two pathologists annotated renal tissue using a semi-automated approach (Segment Anything Model, SAM) and manual devices (touchpad vs. mouse). A comparison was conducted in terms of working time, reproducibility (overlap fraction), and precision (accuracy rated 0 to 10 by two expert nephropathologists) among the different methods and operators. The impact of different displays on mouse performance was also evaluated. Annotations focused on three tissue compartments: tubules (57 annotations), glomeruli (53 annotations), and arteries (58 annotations). The semi-automated approach was the fastest and had the least inter-observer variability, averaging 13.6 ± 0.2 min with a difference (Δ) of 2%, followed by the mouse (29.9 ± 10.2 min, Δ = 24%) and the touchpad (47.5 ± 19.6 min, Δ = 45%). The highest reproducibility in tubules and glomeruli was achieved with SAM (overlap values of 1 and 0.99, compared to 0.97 for the mouse and 0.94 and 0.93 for the touchpad), though SAM had lower reproducibility in arteries (overlap value of 0.89, compared to 0.94 for both the mouse and touchpad). No precision differences were observed between operators (p = 0.59). Using non-medical monitors increased annotation times by 6.1%. The future employment of semi-automated and AI-assisted approaches can significantly speed up the annotation process, improving the ground truth for AI tool development.
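The reproducibility metric above is an overlap fraction between two annotators' masks. The abstract does not state the exact formula, so the sketch below assumes intersection-over-union on rasterized annotation masks, one common choice:

```python
import numpy as np

def overlap_fraction(mask_a, mask_b):
    """Agreement between two binary annotation masks.

    Computed here as intersection-over-union (IoU); the paper's exact
    overlap definition is not given, so this is an assumed formula.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)
```

Identical masks score 1.0, matching the perfect-overlap values reported for SAM on tubules.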
Affiliation(s)
- Giorgio Cazzaniga
- Department of Medicine and Surgery, Pathology, IRCCS Fondazione San Gerardo Dei Tintori, University of Milano-Bicocca, Via Pergolesi, 33, 20900, Monza, Italy
- Fabio Del Carro
- Department of Medicine and Surgery, Pathology, IRCCS Fondazione San Gerardo Dei Tintori, University of Milano-Bicocca, Via Pergolesi, 33, 20900, Monza, Italy
- Albino Eccher
- Department of Medical and Surgical Sciences for Children and Adults, University of Modena and Reggio Emilia, University Hospital of Modena, Modena, Italy
- Jan Ulrich Becker
- Institute of Pathology, University Hospital of Cologne, Cologne, Germany
- Giovanni Gambaro
- Division of Nephrology, Department of Medicine, University of Verona, Verona, Italy
- Mattia Rossi
- Division of Nephrology, Department of Medicine, University of Verona, Verona, Italy
- Federico Pieruzzi
- Clinical Nephrology, Fondazione IRCCS San Gerardo Dei Tintori, Monza, Italy
- School of Medicine and Surgery, University of Milano-Bicocca, Milan, Italy
- Filippo Fraggetta
- Pathology Unit, Azienda Sanitaria Provinciale (ASP) Catania, "Gravina" Hospital, Caltagirone, Italy
- Fabio Pagni
- Department of Medicine and Surgery, Pathology, IRCCS Fondazione San Gerardo Dei Tintori, University of Milano-Bicocca, Via Pergolesi, 33, 20900, Monza, Italy
- Vincenzo L'Imperio
- Department of Medicine and Surgery, Pathology, IRCCS Fondazione San Gerardo Dei Tintori, University of Milano-Bicocca, Via Pergolesi, 33, 20900, Monza, Italy
22
Topuz Y, Yıldız S, Varlı S. ConvNext Mitosis Identification-You Only Look Once (CNMI-YOLO): Domain Adaptive and Robust Mitosis Identification in Digital Pathology. J Transl Med 2024; 104:102130. [PMID: 39233013] [DOI: 10.1016/j.labinv.2024.102130]
Abstract
In digital pathology, accurate mitosis detection in histopathological images is critical for cancer diagnosis and prognosis. However, this remains challenging due to the inherent variability in cell morphology and the domain shift problem. This study introduces ConvNext Mitosis Identification-You Only Look Once (CNMI-YOLO), a new 2-stage deep learning method that uses the YOLOv7 architecture for cell detection and the ConvNeXt architecture for cell classification. The goal is to improve the identification of mitosis in different types of cancers. We utilized the Mitosis Domain Generalization Challenge 2022 dataset in the experiments to ensure the model's robustness and success across various scanners, species, and cancer types. The CNMI-YOLO model demonstrates superior performance in accurately detecting mitotic cells, significantly outperforming existing models in terms of precision, recall, and F1 score. The CNMI-YOLO model achieved an F1 score of 0.795 on the Mitosis Domain Generalization Challenge 2022 and demonstrated robust generalization with F1 scores of 0.783 and 0.759 on the external melanoma and sarcoma test sets, respectively. Additionally, the study included ablation studies to evaluate various object detection and classification models, such as Faster-RCNN and Swin Transformer. Furthermore, we assessed the model's robustness on unseen data, soft tissue sarcoma and melanoma samples not included in the training dataset, confirming its ability to generalize and its potential for real-world use in digital pathology.
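The F1 scores above are computed over detections matched to ground-truth annotations. As a hedged sketch (not the challenge's official evaluation code; the 0.5 IoU threshold and greedy matching are assumptions), detection F1 can be illustrated as:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_f1(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predicted to ground-truth boxes,
    then precision/recall/F1 over the matches."""
    matched_gt = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            if i in matched_gt:
                continue
            iou = box_iou(p, g)
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched_gt.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A missed mitosis lowers recall and a spurious detection lowers precision; F1 balances the two, which is why it is the headline metric for this task.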
Affiliation(s)
- Yasemin Topuz
- Department of Computer Engineering, Yıldız Technical University, Istanbul, Türkiye; Health Institutes of Türkiye, Istanbul, Türkiye.
- Serdar Yıldız
- Department of Computer Engineering, Yıldız Technical University, Istanbul, Türkiye; BILGEM TUBITAK, Kocaeli, Türkiye
- Songül Varlı
- Department of Computer Engineering, Yıldız Technical University, Istanbul, Türkiye; Health Institutes of Türkiye, Istanbul, Türkiye
23
Zhu R, He H, Chen Y, Yi M, Ran S, Wang C, Wang Y. Deep learning for rapid virtual H&E staining of label-free glioma tissue from hyperspectral images. Comput Biol Med 2024; 180:108958. [PMID: 39094325] [DOI: 10.1016/j.compbiomed.2024.108958]
Abstract
Hematoxylin and eosin (H&E) staining is a crucial technique for diagnosing glioma, allowing direct observation of tissue structures. However, the H&E staining workflow necessitates intricate processing, specialized laboratory infrastructure, and specialist pathologists, rendering it expensive, labor-intensive, and time-consuming. In view of these considerations, we combine deep learning with hyperspectral imaging, aiming to accurately and rapidly convert hyperspectral images into virtual H&E-stained images. The method overcomes the limitations of H&E staining by capturing tissue information at different wavelengths, providing tissue composition information as comprehensive and detailed as real H&E staining. In comparison with various generator structures, the U-Net exhibits substantial overall advantages, as evidenced by a mean structural similarity index measure (SSIM) of 0.7731 and a peak signal-to-noise ratio (PSNR) of 23.3120, as well as the shortest training and inference time. A comprehensive software system for virtual H&E staining, which integrates CCD control, microscope control, and virtual H&E staining technology, is developed to facilitate fast intraoperative imaging, promote disease diagnosis, and accelerate the development of medical automation. The platform reconstructs large-scale virtual H&E staining images of gliomas at a high speed of 3.81 mm2/s. This innovative approach will pave the way for a novel, expedited route in histological staining.
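The SSIM and PSNR metrics quoted above take only a few lines of NumPy. Note this is an illustrative sketch: the SSIM here is a simplified global-statistics variant, whereas the standard metric (and presumably the paper's) averages over local windows:

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak signal-to-noise ratio between a real H&E image and its
    virtual-staining counterpart, in dB (higher is better)."""
    ref = np.asarray(reference, dtype=np.float64)
    gen = np.asarray(generated, dtype=np.float64)
    mse = np.mean((ref - gen) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Simplified SSIM over whole-image statistics; standard SSIM
    instead averages the same formula over sliding local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A perfect reconstruction gives SSIM 1.0 and infinite PSNR; the reported 0.7731 / 23.31 dB sit in the typical range for image-to-image translation.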
Affiliation(s)
- Ruohua Zhu
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Haiyang He
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Yuzhe Chen
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Ming Yi
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Shengdong Ran
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China
- Chengde Wang
- Department of Neurosurgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou 325000, China
- Yi Wang
- National Engineering Research Center of Ophthalmology and Optometry, School of Biomedical Engineering, Eye Hospital, Wenzhou Medical University, Xueyuan Road 270, Wenzhou, 325027, China; Wenzhou Institute, University of Chinese Academy of Sciences, Jinlian Road 1, Wenzhou, 325001, China
24
Hsu YC, Lin KT, Lee MS, Shen LS, Yeh TH, Lin YT. Multiple instance learning for eosinophil quantification of sinonasal histopathology images: A hierarchical determination on whole slide images. Int Forum Allergy Rhinol 2024; 14:1513-1516. [PMID: 38767581] [DOI: 10.1002/alr.23365]
Abstract
KEY POINTS: We proposed a hierarchical framework, comprising unsupervised candidate image selection and weakly supervised patch image detection based on multiple instance learning (MIL), to effectively estimate eosinophil quantities in tissue samples from whole slide images. MIL is an innovative approach that can handle the variability in cell distribution and enable automated eosinophil quantification from sinonasal histopathological images with a high degree of accuracy. The study lays the foundation for further research and development in automated histopathological image analysis; validation on more extensive and diverse datasets will contribute to real-world application.
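The core MIL idea referenced in these key points is that only slide-level (bag) labels exist, so patch-level (instance) scores must be pooled into one prediction. A minimal sketch, with both the classic max-pooling rule and a simple attention pooling (the weight vector `w` stands in for a learned parameter, and neither function is the paper's actual model):

```python
import numpy as np

def mil_bag_score(instance_scores, mode="max"):
    """Aggregate patch-level (instance) scores into one slide-level
    (bag) prediction, the central pooling step of MIL.

    mode="max"  : bag is positive if any instance is (classic MIL).
    mode="mean" : average pooling over all instances.
    """
    s = np.asarray(instance_scores, dtype=np.float64)
    if mode == "max":
        return float(s.max())
    if mode == "mean":
        return float(s.mean())
    raise ValueError(f"unknown mode: {mode}")

def attention_pool(features, w):
    """Attention-weighted MIL pooling: each instance feature vector
    gets a softmax weight from a scoring vector w (learned in a real
    model, supplied here), yielding a single bag embedding."""
    f = np.asarray(features, dtype=np.float64)    # (n_instances, dim)
    logits = f @ np.asarray(w, dtype=np.float64)  # (n_instances,)
    a = np.exp(logits - logits.max())             # stable softmax
    a /= a.sum()
    return a @ f                                  # (dim,) bag embedding
```

Max pooling suits "any eosinophil-rich patch makes the slide positive"; attention pooling additionally learns which patches matter, which is why it dominates modern weakly supervised WSI pipelines.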
Affiliation(s)
- Yen-Chi Hsu
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Kao-Tsung Lin
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
- Ming-Sui Lee
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Li-Sung Shen
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
- Te-Huei Yeh
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
- Yi-Tsen Lin
- Department of Otolaryngology, National Taiwan University Hospital, Taipei, Taiwan
25
Lv H, Li W, Lu Z, Gao X, Zhang Q, Bao Y, Fu Y, Xiao J. SPMLD: A skin pathological image dataset for non-melanoma with detailed lesion area annotation. Comput Biol Med 2024; 179:108793. [PMID: 38955126] [DOI: 10.1016/j.compbiomed.2024.108793]
Abstract
Skin tumors are the most common tumors in humans, and the clinical characteristics of three common non-melanoma tumors (IDN, SK, BCC) are similar, resulting in a high misdiagnosis rate. Accurate differential diagnosis of these tumors must be based on pathological images. However, a shortage of experienced dermatological pathologists leads to bias in the diagnostic accuracy of these skin tumors in China. In this paper, we establish a skin pathological image dataset, SPMLD, for these three non-melanoma tumors to enable their automatic and accurate identification. We also propose a lesion-area-based enhanced classification network with a KLS module and an attention module. Specifically, we first collect thousands of H&E-stained tissue sections from patients with clinically and pathologically confirmed IDN, SK, and BCC from a single-center hospital. Then, we scan them to construct a pathological image dataset of these three skin tumors. Furthermore, we annotate the complete lesion area of each pathology image to better capture the pathologist's diagnostic process. In addition, we apply the proposed network for lesion classification prediction on the SPMLD dataset. Finally, we conduct a series of experiments to demonstrate that this annotation and our network can effectively improve the classification results of various networks. The source dataset and code are available at https://github.com/efss24/SPMLD.git.
Affiliation(s)
- Haozhen Lv
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Wentao Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Zhengda Lu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Xiaoman Gao
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Qiuli Zhang
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Yingqiu Bao
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Yu Fu
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Jun Xiao
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
26
Riaz IB, Khan MA, Haddad TC. Potential application of artificial intelligence in cancer therapy. Curr Opin Oncol 2024; 36:437-448. [PMID: 39007164] [DOI: 10.1097/cco.0000000000001068]
Abstract
PURPOSE OF REVIEW: This review underscores the critical role of artificial intelligence in cancer care, and the challenges associated with its widespread adoption, in enhancing disease management, streamlining clinical processes, optimizing retrieval of health information, and generating and synthesizing evidence. RECENT FINDINGS: Advancements in artificial intelligence models and the development of digital biomarkers and diagnostics are applicable across the cancer continuum, from early detection to survivorship care. Additionally, generative artificial intelligence promises to streamline clinical documentation and patient communications, generate structured data for clinical trial matching, automate cancer registries, and facilitate advanced clinical decision support. Widespread adoption of artificial intelligence has been slow because of concerns about data diversity and data shift, model reliability and algorithm bias, legal oversight, and high information technology and infrastructure costs. SUMMARY: Artificial intelligence models have significant potential to transform cancer care. Efforts are underway to deploy artificial intelligence models in cancer practice, evaluate their clinical impact, and enhance their fairness and explainability. Standardized guidelines for the ethical integration of artificial intelligence models in cancer care pathways and clinical operations are needed. Clear governance and oversight will be necessary to gain the trust of clinicians, scientists, and patients in artificial intelligence-assisted cancer care.
Affiliation(s)
- Irbaz Bin Riaz
- Department of AI and Informatics, Mayo Clinic, Minnesota
- Division of Hematology and Oncology, Mayo Clinic, Phoenix, Arizona
- Tufia C Haddad
- Department of Oncology, Mayo Clinic, Rochester, Minnesota, USA
27
Janssen BV, Oteman B, Ali M, Valkema PA, Adsay V, Basturk O, Chatterjee D, Chou A, Crobach S, Doukas M, Drillenburg P, Esposito I, Gill AJ, Hong SM, Jansen C, Kliffen M, Mittal A, Samra J, van Velthuysen MLF, Yavas A, Kazemier G, Verheij J, Steyerberg E, Besselink MG, Wang H, Verbeke C, Fariña A, de Boer OJ. Artificial Intelligence-based Segmentation of Residual Pancreatic Cancer in Resection Specimens Following Neoadjuvant Treatment (ISGPP-2): International Improvement and Validation Study. Am J Surg Pathol 2024; 48:1108-1116. [PMID: 38985503] [PMCID: PMC11321604] [DOI: 10.1097/pas.0000000000002270]
Abstract
Neoadjuvant therapy (NAT) has become routine in patients with borderline resectable pancreatic cancer. Pathologists examine pancreatic cancer resection specimens to evaluate the effect of NAT. However, an automated scoring system to objectively quantify residual pancreatic cancer (RPC) is currently lacking. Herein, we developed and validated the first automated segmentation model using artificial intelligence techniques to objectively quantify RPC. Digitized histopathological tissue slides were included from resected pancreatic cancer specimens from 14 centers in 7 countries in Europe, North America, Australia, and Asia. Four different scanner types were used: Philips (56%), Hamamatsu (27%), 3DHistech (10%), and Leica (7%). Regions of interest were annotated and classified as cancer, non-neoplastic pancreatic ducts, and others. A U-Net model was trained to detect RPC. Validation consisted of by-scanner internal-external cross-validation. Overall, 528 unique hematoxylin and eosin (H&E) slides from 528 patients were included. In the individual Philips, Hamamatsu, 3DHistech, and Leica scanner cross-validations, mean F1 scores of 0.81 (95% CI, 0.77-0.84), 0.80 (0.78-0.83), 0.76 (0.65-0.78), and 0.71 (0.65-0.78) were achieved, respectively. In the meta-analysis of the cross-validations, the mean F1 score was 0.78 (0.71-0.84). A final model was trained on the entire data set. This ISGPP model is the first segmentation model using artificial intelligence techniques to objectively quantify RPC following NAT. The internally-externally cross-validated model in this study demonstrated robust performance in detecting RPC in specimens. The ISGPP model, now made publicly available, enables automated RPC segmentation and forms the basis for objective NAT response evaluation in pancreatic cancer.
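The F1 scores reported above are segmentation overlap scores between predicted and annotated cancer regions. A minimal sketch of pixel-wise F1, which for binary masks is equivalent to the Dice coefficient (an illustration, not the ISGPP evaluation code):

```python
import numpy as np

def segmentation_f1(pred_mask, true_mask):
    """Pixel-wise F1 (equivalently the Dice coefficient) between a
    predicted residual-cancer mask and the annotated ground truth."""
    p = np.asarray(pred_mask, dtype=bool)
    t = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(p, t).sum()       # pixels both call cancer
    denom = p.sum() + t.sum()             # all positive pixels, both masks
    if denom == 0:
        return 1.0  # neither mask marks cancer: perfect (vacuous) agreement
    return float(2.0 * tp / denom)
```

Computing this per slide and averaging across the held-out scanner's slides mirrors the by-scanner internal-external cross-validation the study describes.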
Affiliation(s)
- Boris V. Janssen
- Departments of Surgery
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
- Bart Oteman
- Departments of Surgery
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
- Mahsoem Ali
- Cancer Center Amsterdam
- Department of Surgery, Amsterdam UMC, location Vrije Universiteit
- Pieter A. Valkema
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
- Volkan Adsay
- Department of Pathology, Koc University and KUTTAM Research Center, Istanbul, Turkey
- Olca Basturk
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY
- Deyali Chatterjee
- Department of Anatomical Pathology, University of Texas MD Anderson Cancer Center, Houston, TX
- Angela Chou
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, NSW, Australia
- University of Sydney, Sydney, NSW, Australia
- Irene Esposito
- Institute of Pathology, Heinrich-Heine-University and University Hospital of Duesseldorf, Duesseldorf, Germany
- Anthony J. Gill
- Cancer Diagnosis and Pathology Group, Kolling Institute of Medical Research, Royal North Shore Hospital, St Leonards, NSW, Australia
- University of Sydney, Sydney, NSW, Australia
- Seung-Mo Hong
- Department of Pathology, Asan Medical Center, Seoul, Republic of Korea
- Casper Jansen
- Laboratorium Pathologie Oost-Nederland, Hengelo
- Department of Pathology, Medisch Spectrum Twente, Enschede, The Netherlands
- Mike Kliffen
- Department of Pathology, Maasstad ziekenhuis, Rotterdam
- Anubhav Mittal
- Department of Surgery of Medical Research, Royal North Shore Hospital, St Leonards, NSW, Australia
- Jas Samra
- University of Sydney, Sydney, NSW, Australia
- Department of Surgery of Medical Research, Royal North Shore Hospital, St Leonards, NSW, Australia
- Aslihan Yavas
- Institute of Pathology, Heinrich-Heine-University and University Hospital of Duesseldorf, Duesseldorf, Germany
- Geert Kazemier
- Cancer Center Amsterdam
- Department of Surgery, Amsterdam UMC, location Vrije Universiteit
- Joanne Verheij
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
- Ewout Steyerberg
- Biomedical Data Sciences, Leiden University Medical Center, Leiden
- Huamin Wang
- Department of Anatomical Pathology, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Caroline Verbeke
- Department of Pathology, Institute of Clinical Medicine, University of Oslo
- Department of Pathology, Oslo University Hospital, Oslo, Norway
| | - Arantza Fariña
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
| | - Onno J. de Boer
- Pathology, Amsterdam UMC, location University of Amsterdam
- Cancer Center Amsterdam
| |
Collapse
|
28
|
Ahmadvand P, Farahani H, Farnell D, Darbandsari A, Topham J, Karasinska J, Nelson J, Naso J, Jones SJM, Renouf D, Schaeffer DF, Bashashati A. A Deep Learning Approach for the Identification of the Molecular Subtypes of Pancreatic Ductal Adenocarcinoma Based on Whole Slide Pathology Images. THE AMERICAN JOURNAL OF PATHOLOGY 2024:S0002-9440(24)00325-0. [PMID: 39222907 DOI: 10.1016/j.ajpath.2024.08.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2024] [Revised: 08/12/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024]
Abstract
Delayed diagnosis and treatment resistance make pancreatic ductal adenocarcinoma (PDAC) mortality rates high. Identifying molecular subtypes can improve treatment, but current methods are costly and time-consuming. In this study, deep learning models were used to identify histologic features that classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained histopathologic slides. A total of 97 histopathology slides associated with resectable PDAC from The Cancer Genome Atlas project were used to train a deep learning model, and its performance was tested on needle biopsy material from 44 patients (110 slides) in a locally annotated cohort. The model achieved balanced accuracies of 96.19% and 83.03% in identifying the classical and basal subtypes of PDAC in The Cancer Genome Atlas and the local cohort, respectively. This study provides a promising method to cost-effectively and rapidly classify PDAC molecular subtypes based on routine hematoxylin-eosin-stained slides, potentially leading to more effective clinical management of this disease.
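Balanced accuracy, the metric reported above, is the mean of per-class recalls, which keeps the majority subtype from dominating the score when classes are imbalanced. A minimal sketch with toy labels (not the study's data):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance, which
    matters when one molecular subtype is much rarer than the other."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy labels (0 = classical, 1 = basal); illustrative only.
y_true = np.array([0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 1, 1, 0])
print(balanced_accuracy(y_true, y_pred))  # 0.625
```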
Affiliation(s)
- Pouya Ahmadvand
  - School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Hossein Farahani
  - School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- David Farnell
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada
  - Vancouver General Hospital, Vancouver, British Columbia, Canada
- Amirali Darbandsari
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- James Topham
  - Pancreas Centre BC, Vancouver, British Columbia, Canada
- Jessica Nelson
  - British Columbia Cancer Research Center, Vancouver, British Columbia, Canada
- Julia Naso
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- Steven J M Jones
  - Michael Smith Genome Sciences Center, British Columbia Cancer Research Center, Vancouver, British Columbia, Canada
- Daniel Renouf
  - Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
- David F Schaeffer
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada
  - Vancouver General Hospital, Vancouver, British Columbia, Canada
  - Pancreas Centre BC, Vancouver, British Columbia, Canada
- Ali Bashashati
  - School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, British Columbia, Canada
|
29
|
Reis-Filho JS, Scaltriti M, Kapil A, Sade H, Galbraith S. Shifting the paradigm in personalized cancer care through next-generation therapeutics and computational pathology. Mol Oncol 2024. [PMID: 39214683 DOI: 10.1002/1878-0261.13724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024] Open
Abstract
The incorporation of novel therapeutic agents such as antibody-drug conjugates, radio-conjugates, T-cell engagers, and chimeric antigen receptor cell therapies represents a paradigm shift in oncology. Cell-surface target quantification, quantitative assessment of receptor internalization, and changes in the tumor microenvironment (TME) are essential variables in the development of biomarkers for patient selection and therapeutic response. Assessing these parameters requires capabilities that transcend those of traditional biomarker approaches based on immunohistochemistry, in situ hybridization and/or sequencing assays. Computational pathology is emerging as a transformative solution in this new therapeutic landscape, enabling detailed assessment of not only target presence, expression levels, and intra-tumor distribution but also of additional phenotypic features of tumor cells and their surrounding TME. Here, we delineate the pivotal role of computational pathology in enhancing the efficacy and specificity of these advanced therapeutics, underscoring the integration of novel artificial intelligence models that promise to revolutionize biomarker discovery and drug development.
Affiliation(s)
- Jorge S Reis-Filho
  - Cancer Biomarker Development, Oncology Research and Development, AstraZeneca, Gaithersburg, MD, USA
- Maurizio Scaltriti
  - Translational Medicine, Oncology Research and Development, AstraZeneca, Gaithersburg, MD, USA
- Ansh Kapil
  - Oncology Research and Development, AstraZeneca Computational Pathology GmbH, AstraZeneca, Munich, Germany
- Hadassah Sade
  - Oncology Research and Development, AstraZeneca Computational Pathology GmbH, AstraZeneca, Munich, Germany
- Susan Galbraith
  - Oncology Research and Development, AstraZeneca, Gaithersburg, MD, USA
|
30
|
Aftab R, Yan Q, Zhao J, Yong G, Huajie Y, Urrehman Z, Mohammad Khalid F. Neighborhood attention transformer multiple instance learning for whole slide image classification. Front Oncol 2024; 14:1389396. [PMID: 39267847 PMCID: PMC11390382 DOI: 10.3389/fonc.2024.1389396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 06/20/2024] [Indexed: 09/15/2024] Open
Abstract
Introduction: Pathologists rely on whole slide images (WSIs) to diagnose cancer by identifying tumor cells and subtypes. Deep learning models, particularly weakly supervised ones, classify WSIs using image tiles but may overlook false positives and negatives due to the heterogeneous nature of tumors. Both cancerous and healthy cells can proliferate in patterns that extend beyond individual tiles, leading to errors at the tile level that result in inaccurate tumor-level classifications. Methods: To address this limitation, we introduce NATMIL (Neighborhood Attention Transformer Multiple Instance Learning), which utilizes the Neighborhood Attention Transformer to incorporate contextual dependencies among WSI tiles. NATMIL enhances multiple instance learning by integrating a broader tissue context into the model, reducing the errors associated with isolated tile analysis. Results: We conducted a quantitative analysis to evaluate NATMIL's performance against other weakly supervised algorithms. When applied to subtyping non-small cell lung cancer (NSCLC) and lymph node (LN) tumors, NATMIL demonstrated superior accuracy, achieving 89.6% on the Camelyon dataset and 88.1% on the TCGA-LUSC dataset and outperforming existing methods. Discussion: Our findings demonstrate that NATMIL significantly improves tumor classification accuracy by reducing errors associated with isolated tile analysis. The integration of contextual dependencies enhances the precision of cancer diagnosis using WSIs, highlighting NATMIL's potential as a robust tool in pathology.
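The attention-based MIL pooling that approaches like NATMIL build on can be sketched as follows. The dimensions, projection matrices, and random tile features here are illustrative assumptions; NATMIL's distinctive neighborhood attention over spatially adjacent tiles happens upstream of a pooling step like this one.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling (Ilse et al.-style):
    a_i ∝ exp(w · tanh(V h_i)); the slide embedding is Σ a_i h_i."""
    scores = np.tanh(H @ V.T) @ w         # one attention logit per tile
    a = np.exp(scores - scores.max())     # softmax over tiles
    a /= a.sum()                          # weights sum to 1
    return a @ H                          # weighted sum of tile features

H = rng.normal(size=(50, 16))   # 50 tile embeddings, 16-dim (toy features)
V = rng.normal(size=(8, 16))    # attention projection (toy weights)
w = rng.normal(size=8)
z = attention_mil_pool(H, V, w)
print(z.shape)  # (16,)
```

The pooled vector `z` would then feed a slide-level classifier head.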
Affiliation(s)
- Rukhma Aftab
  - College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Qiang Yan
  - College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
  - School of Software, North University of China, Taiyuan, Shanxi, China
- Juanjuan Zhao
  - College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Gao Yong
  - Department of Respiratory and Critical Care Medicine, Sinopharm Tongmei General Hospital, Datong, Shanxi, China
- Yue Huajie
  - First Hospital of Shanxi Medical University, Shanxi Medical University, Taiyuan, Shanxi, China
- Zia Urrehman
  - College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
- Faizi Mohammad Khalid
  - College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Taiyuan, Shanxi, China
|
31
|
Bell RD, Brendel M, Konnaris MA, Xiang J, Otero M, Fontana MA, Bai Z, Krenitsky DM, Meednu N, Rangel-Moreno J, Scheel-Toellner D, Carr H, Nayar S, McMurray J, DiCarlo E, Anolik JH, Donlin LT, Orange DE, Kenney HM, Schwarz EM, Filer A, Ivashkiv LB, Wang F. Automated multi-scale computational pathotyping (AMSCP) of inflamed synovial tissue. Nat Commun 2024; 15:7503. [PMID: 39209814 PMCID: PMC11362542 DOI: 10.1038/s41467-024-51012-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2023] [Accepted: 07/26/2024] [Indexed: 09/04/2024] Open
Abstract
Rheumatoid arthritis (RA) is a complex immune-mediated inflammatory disorder in which patients suffer from inflammatory-erosive arthritis. Recent advances in characterizing the histopathological heterogeneity of RA synovial tissue have revealed three distinct phenotypes based on cellular composition (pauci-immune, diffuse, and lymphoid), suggesting that distinct etiologies warrant specific targeted therapies and motivating a need for cost-effective phenotyping tools in preclinical and clinical settings. To this end, we developed an automated multi-scale computational pathotyping (AMSCP) pipeline for both human and mouse synovial tissue with two distinct components that can be leveraged together or independently: (1) segmentation of different tissue types to characterize tissue-level changes, and (2) cell type classification within each tissue compartment to assess changes across disease states. Here, we demonstrate the efficacy, efficiency, and robustness of the AMSCP pipeline as well as its ability to discover novel phenotypes. Taken together, we find AMSCP to be a valuable, cost-effective method for both pre-clinical and clinical research.
Affiliation(s)
- Richard D Bell
  - Arthritis and Tissue Degeneration Program and Research Institute, Hospital for Special Surgery, New York, NY, USA
  - Weill Cornell Medical College, New York, NY, USA
- Matthew Brendel
  - Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
- Maxwell A Konnaris
  - Huck Institute of the Life Sciences, Pennsylvania State University, State College, University Park, PA, USA
  - Orthopedic Soft Tissue Research Program, Hospital for Special Surgery, New York, NY, USA
- Miguel Otero
  - Weill Cornell Medical College, New York, NY, USA
  - Orthopedic Soft Tissue Research Program, Hospital for Special Surgery, New York, NY, USA
- Mark A Fontana
  - Arthritis and Tissue Degeneration Program and Research Institute, Hospital for Special Surgery, New York, NY, USA
  - Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
- Zilong Bai
  - Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
- Daria M Krenitsky
  - Allergy, Immunology and Rheumatology Division, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA
- Nida Meednu
  - Allergy, Immunology and Rheumatology Division, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA
- Javier Rangel-Moreno
  - Allergy, Immunology and Rheumatology Division, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA
- Dagmar Scheel-Toellner
  - Rheumatology Research Group, Institute for Inflammation and Ageing, University of Birmingham, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK
- Hayley Carr
  - Rheumatology Research Group, Institute for Inflammation and Ageing, University of Birmingham, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK
- Saba Nayar
  - Rheumatology Research Group, Institute for Inflammation and Ageing, University of Birmingham, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK
- Jack McMurray
  - Rheumatology Research Group, Institute for Inflammation and Ageing, University of Birmingham, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK
- Edward DiCarlo
  - Department of Pathology and Laboratory Medicine, Hospital for Special Surgery, New York, NY, USA
- Jennifer H Anolik
  - Allergy, Immunology and Rheumatology Division, Department of Medicine, University of Rochester Medical Center, Rochester, NY, USA
  - Center for Musculoskeletal Research, University of Rochester Medical Center, Rochester, NY, USA
- Laura T Donlin
  - Arthritis and Tissue Degeneration Program and Research Institute, Hospital for Special Surgery, New York, NY, USA
- Dana E Orange
  - Arthritis and Tissue Degeneration Program and Research Institute, Hospital for Special Surgery, New York, NY, USA
  - The Rockefeller University, New York, NY, USA
- H Mark Kenney
  - Center for Musculoskeletal Research, University of Rochester Medical Center, Rochester, NY, USA
- Edward M Schwarz
  - Center for Musculoskeletal Research, University of Rochester Medical Center, Rochester, NY, USA
- Andrew Filer
  - Rheumatology Research Group, Institute for Inflammation and Ageing, University of Birmingham, NIHR Birmingham Biomedical Research Center and Clinical Research Facility, University of Birmingham, Queen Elizabeth Hospital, Birmingham, UK
- Lionel B Ivashkiv
  - Arthritis and Tissue Degeneration Program and Research Institute, Hospital for Special Surgery, New York, NY, USA
  - Weill Cornell Medical College, New York, NY, USA
- Fei Wang
  - Department of Population Health Sciences, Weill Cornell Medical College, New York, NY, USA
|
32
|
Mandal S, Balraj K, Kodamana H, Arora C, Clark JM, Kwon DS, Rathore AS. Weakly supervised large-scale pancreatic cancer detection using multi-instance learning. Front Oncol 2024; 14:1362850. [PMID: 39267824 PMCID: PMC11390448 DOI: 10.3389/fonc.2024.1362850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2023] [Accepted: 08/01/2024] [Indexed: 09/15/2024] Open
Abstract
Introduction: Early detection of pancreatic cancer continues to be a challenge due to the difficulty in accurately identifying specific signs or symptoms that might correlate with its onset. Unlike breast, colon, or prostate cancer, for which screening tests are often useful in identifying cancerous development, there are no screening tests to diagnose pancreatic cancer. As a result, most pancreatic cancers are diagnosed at an advanced stage, where treatment options, whether systemic therapy, radiation, or surgical interventions, offer limited efficacy. Methods: A two-stage weakly supervised deep learning-based model has been proposed to identify pancreatic tumors using computed tomography (CT) images from Henry Ford Health (HFH) and the publicly available Memorial Sloan Kettering Cancer Center (MSKCC) data sets. In the first stage, a supervised nnU-Net segmentation model, trained on the MSKCC repository of 281 patient image sets with established pancreatic tumors, was used to crop an area at the location of the pancreas. In the second stage, a multi-instance learning-based weakly supervised classification model was applied to the cropped pancreas region to segregate pancreatic tumors from normal-appearing pancreas. The model was trained, tested, and validated on images obtained from an HFH repository with 463 cases and 2,882 controls. Results: The proposed two-stage architecture offers an accuracy of 0.907 ± 0.01, sensitivity of 0.905 ± 0.01, specificity of 0.908 ± 0.02, and AUC (ROC) of 0.903 ± 0.01. The framework can automatically differentiate pancreatic tumor from non-tumor pancreas with improved accuracy on the HFH dataset. Discussion: The proposed two-stage deep learning architecture shows significantly enhanced performance for predicting the presence of a tumor in the pancreas using CT images compared with other studies reported in the literature.
Affiliation(s)
- Shyamapada Mandal
  - Department of Chemical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Keerthiveena Balraj
  - Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
- Hariprasad Kodamana
  - Department of Chemical Engineering, Indian Institute of Technology Delhi, New Delhi, India
  - Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
- Chetan Arora
  - Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
  - Department of Computer Science and Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Julie M Clark
  - Henry Ford Pancreatic Cancer Center, Henry Ford Health, Detroit, MI, United States
- David S Kwon
  - Henry Ford Pancreatic Cancer Center, Henry Ford Health, Detroit, MI, United States
  - Department of Surgery, Henry Ford Health, Detroit, MI, United States
- Anurag S Rathore
  - Department of Chemical Engineering, Indian Institute of Technology Delhi, New Delhi, India
  - Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
|
33
|
Qin J, He Y, Liang Y, Kang L, Zhao J, Ding B. Cell comparative learning: A cervical cytopathology whole slide image classification method using normal and abnormal cells. Comput Med Imaging Graph 2024; 117:102427. [PMID: 39216344 DOI: 10.1016/j.compmedimag.2024.102427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2023] [Revised: 06/14/2024] [Accepted: 08/25/2024] [Indexed: 09/04/2024]
Abstract
Automated cervical cancer screening through computer-assisted diagnosis has shown considerable potential to improve screening accessibility and reduce associated costs and errors. However, classification performance on whole slide images (WSIs) remains suboptimal due to patient-specific variations. To improve the precision of the screening, pathologists not only analyze the characteristics of suspected abnormal cells, but also compare them with normal cells. Motivated by this practice, we propose a novel cervical cell comparative learning method that leverages pathologist knowledge to learn the differences between normal and suspected abnormal cells within the same WSI. Our method employs two pre-trained YOLOX models to detect suspected abnormal and normal cells in a given WSI. A self-supervised model then extracts features for the detected cells. Subsequently, a tailored Transformer encoder fuses the cell features to obtain WSI instance embeddings. Finally, attention-based multi-instance learning is applied to achieve classification. The experimental results show an AUC of 0.9319 for our proposed method. Moreover, the method achieved professional pathologist-level performance, indicating its potential for clinical applications.
Affiliation(s)
- Jian Qin
  - School of Computer Science and Technology, Anhui University of Technology, Maanshan, China
- Yongjun He
  - School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
- Yiqin Liang
  - School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Lanlan Kang
  - School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Jing Zhao
  - College of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin, China
- Bo Ding
  - School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
|
34
|
Siarov J, Siarov A, Kumar D, Paoli J, Mölne J, Neittaanmäki N. Deep learning model shows pathologist-level detection of sentinel node metastasis of melanoma and intra-nodal nevi on whole slide images. Front Med (Lausanne) 2024; 11:1418013. [PMID: 39238597 PMCID: PMC11374739 DOI: 10.3389/fmed.2024.1418013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2024] [Accepted: 07/29/2024] [Indexed: 09/07/2024] Open
Abstract
Introduction: Nodal metastasis (NM) in sentinel node biopsies (SNB) is crucial for melanoma staging. However, an intra-nodal nevus (INN) may often be misclassified as NM, leading to potential misdiagnosis and incorrect staging. There is high discordance among pathologists in assessing SNB positivity, which may lead to false staging. Digital whole slide imaging offers the potential for implementing artificial intelligence (AI) in digital pathology. In this study, we assessed the capability of AI to detect NM and INN in SNBs. Methods: A total of 485 hematoxylin and eosin whole slide images (WSIs), including NM and INN from 196 SNBs, were collected and divided into training (279 WSIs), validation (89 WSIs), and test sets (117 WSIs). A deep learning model was trained with 5,956 manual pixel-wise annotations. The AI and three blinded dermatopathologists assessed the test set, with immunohistochemistry serving as the reference standard. Results: The AI model showed excellent performance, with an area under the receiver operating characteristic curve (AUC) of 0.965 for detecting NM. In comparison, the AUC for NM detection among dermatopathologists ranged between 0.94 and 0.98. For the detection of INN, the AUC was lower for both the AI (0.781) and the dermatopathologists (range of 0.63-0.79). Discussion: In conclusion, the deep learning AI model showed excellent accuracy in detecting NM, achieving dermatopathologist-level performance in detecting both NM and INN. Importantly, the AI model showed the potential to differentiate between these two entities. However, further validation is warranted.
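The AUC values above have a useful rank interpretation: the probability that a randomly chosen positive slide receives a higher model score than a randomly chosen negative one (the Mann-Whitney U formulation). A minimal sketch with made-up scores, not the study's predictions:

```python
import numpy as np

def roc_auc(y_true, scores):
    """AUC as P(score of random positive > score of random negative),
    counting ties as half a win (Mann-Whitney U / rank formulation)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy scores for 4 metastasis-positive (1) and 4 benign (0) nodes.
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.3, 0.6, 0.4, 0.2, 0.1])
print(roc_auc(y, s))  # 0.875
```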
Affiliation(s)
- Jan Siarov
  - Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Pathology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Angelica Siarov
  - Department of Clinical Pathology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- John Paoli
  - Department of Dermatology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Dermatology and Venereology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Johan Mölne
  - Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Pathology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
- Noora Neittaanmäki
  - Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
  - Department of Clinical Pathology, Sahlgrenska University Hospital, Region Västra Götaland, Gothenburg, Sweden
|
35
|
Carrillo-Perez F, Cramer EM, Pizurica M, Andor N, Gevaert O. Towards Digital Quantification of Ploidy from Pan-Cancer Digital Pathology Slides using Deep Learning. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.08.19.608555. [PMID: 39229200 PMCID: PMC11370345 DOI: 10.1101/2024.08.19.608555] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 09/05/2024]
Abstract
Abnormal DNA ploidy, found in numerous cancers, is increasingly being recognized as a contributor to chromosomal instability, genome evolution, and the heterogeneity that fuels cancer cell progression. Furthermore, it has been linked with poor prognosis in cancer patients. While next-generation sequencing can be used to approximate tumor ploidy, it has a high error rate for near-euploid states, a high cost, and is time-consuming, motivating alternative rapid quantification methods. We introduce PloiViT, a transformer-based model for tumor ploidy quantification that outperforms traditional machine learning models, enabling rapid and cost-effective quantification directly from pathology slides. We trained PloiViT on a dataset of fifteen cancer types from The Cancer Genome Atlas and validated its performance in multiple independent cohorts. Additionally, we explored the impact of self-supervised feature extraction on performance. PloiViT, using self-supervised features, achieved the lowest prediction error in multiple independent cohorts, exhibiting better generalization capabilities. Our findings demonstrate that PloiViT predicts higher ploidy values in aggressive cancer groups and in patients with specific mutations, validating PloiViT's potential as a complement to next-generation sequencing for ploidy assessment. To further promote its use, we release our models as a user-friendly inference application and a Python package for easy adoption.
Affiliation(s)
- Francisco Carrillo-Perez
  - Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, Stanford, 94304, CA, USA
- Eric M Cramer
  - Department of Biomedical Engineering, Oregon Health & Science University (OHSU), Portland, 97239, OR, USA
- Marija Pizurica
  - Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, Stanford, 94304, CA, USA
  - Internet technology and Data science Lab (IDLab), Ghent University, Ghent, 9052, Ghent, Belgium
- Noemi Andor
  - Department of Integrated Mathematical Oncology, Moffitt Cancer Center, Tampa, 33612, FL, USA
- Olivier Gevaert
  - Stanford Center for Biomedical Informatics Research (BMIR), Stanford University, Stanford, 94304, CA, USA
  - Department of Biomedical Data Science (DBDS), Stanford University, Palo Alto, 94305, CA, USA
|
36
|
Hussain I, Boza J, Lukande R, Ayanga R, Semeere A, Cesarman E, Martin J, Maurer T, Erickson D. Automated detection of Kaposi sarcoma-associated herpesvirus infected cells in immunohistochemical images of skin biopsies. RESEARCH SQUARE 2024:rs.3.rs-4736178. [PMID: 39184072 PMCID: PMC11343169 DOI: 10.21203/rs.3.rs-4736178/v1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2024]
Abstract
Immunohistochemical (IHC) staining for the antigen of Kaposi sarcoma-associated herpesvirus (KSHV), latency-associated nuclear antigen (LANA), is helpful in diagnosing Kaposi sarcoma (KS). A challenge, however, lies in distinguishing anti-LANA-positive cells from morphologically similar brown counterparts. In this work, we demonstrate a framework for automated localization and quantification of LANA positivity in whole slide images (WSI) of skin biopsies, leveraging weakly supervised multiple instance learning (MIL) while reducing false positive predictions by introducing a novel morphology-based slide aggregation method. Our framework generates interpretable heatmaps, offering insights into precise anti-LANA-positive cell localization within WSIs and a quantitative value for the percentage of positive tiles, which may assist with histological subtyping. We trained and tested our framework with an anti-LANA-stained KS pathology dataset prepared by pathologists in the United States from skin biopsies of KS-suspected patients investigated in Uganda. We achieved an area under the receiver operating characteristic curve (AUC) of 0.99 with a sensitivity and specificity of 98.15% and 96.00% in predicting anti-LANA-positive WSIs in a test dataset. We believe that the framework can provide promise for automated detection of LANA in skin biopsies, which may be especially impactful in resource-limited areas that lack trained pathologists.
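The reported sensitivity and specificity follow directly from the test-set confusion matrix. A minimal sketch; the counts below are hypothetical values chosen only because they reproduce the reported percentages (e.g. 53/54 positive and 48/50 negative WSIs correct), since the abstract does not give the actual test-set composition.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts consistent with the reported
# 98.15% sensitivity and 96.00% specificity; illustrative only.
sens, spec = sens_spec(tp=53, fn=1, tn=48, fp=2)
print(f"{sens:.2%} {spec:.2%}")  # 98.15% 96.00%
```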
37
Sharma A, Lövgren SK, Eriksson KL, Wang Y, Robertson S, Hartman J, Rantalainen M. Validation of an AI-based solution for breast cancer risk stratification using routine digital histopathology images. Breast Cancer Res 2024; 26:123. [PMID: 39143539] [PMCID: PMC11323658] [DOI: 10.1186/s13058-024-01879-6]
Abstract
BACKGROUND Stratipath Breast is a CE-IVD-marked, artificial intelligence-based solution for prognostic risk stratification of breast cancer patients into high- and low-risk groups, using haematoxylin and eosin (H&E)-stained histopathology whole slide images (WSIs). In this validation study, we assessed the prognostic performance of Stratipath Breast in two independent breast cancer cohorts. METHODS This retrospective multi-site validation study included 2719 patients with primary breast cancer from two Swedish hospitals. The Stratipath Breast tool was applied to stratify patients based on digitised WSIs of the diagnostic H&E-stained tissue sections from surgically resected tumours. The prognostic performance was evaluated by time-to-event analysis using multivariable Cox proportional hazards models, with progression-free survival (PFS) as the primary endpoint. RESULTS In the clinically relevant oestrogen receptor (ER)-positive/human epidermal growth factor receptor 2 (HER2)-negative patient subgroup, the estimated hazard ratio (HR) associated with PFS between low- and high-risk groups was 2.76 (95% CI: 1.63-4.66, p-value < 0.001) after adjusting for established risk factors. In the ER+/HER2- Nottingham histological grade (NHG) 2 subgroup, the HR was 2.20 (95% CI: 1.22-3.98, p-value = 0.009) between low- and high-risk groups. CONCLUSION The results indicate an independent prognostic value of Stratipath Breast among all breast cancer patients, as well as in the clinically relevant ER+/HER2- subgroup and the NHG2/ER+/HER2- subgroup. Improved risk stratification of intermediate-risk ER+/HER2- breast cancers provides information relevant for decisions on adjuvant chemotherapy and has the potential to reduce both under- and overtreatment. Image-based risk stratification provides the added benefit of short lead times and substantially lower cost compared to molecular diagnostics and therefore has the potential to reach broader patient groups.
Affiliation(s)
- Abhinav Sharma: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Sandy Kang Lövgren: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; Stratipath AB, Solna, Sweden
- Kajsa Ledesma Eriksson: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; Stratipath AB, Solna, Sweden
- Yinxi Wang: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; Stratipath AB, Solna, Sweden
- Stephanie Robertson: Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Stratipath AB, Solna, Sweden
- Johan Hartman: Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden
- Mattias Rantalainen: Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden
38
Hölscher DL, Bülow RD. Decoding pathology: the role of computational pathology in research and diagnostics. Pflugers Arch 2024. [PMID: 39095655] [DOI: 10.1007/s00424-024-03002-2]
Abstract
Traditional histopathology, characterized by manual quantifications and assessments, faces challenges such as low throughput and inter-observer variability that hinder the introduction of precision medicine in pathology diagnostics and research. The advent of digital pathology allowed the introduction of computational pathology, a discipline that leverages computational methods, especially deep learning (DL) techniques, to analyze histopathology specimens. A growing body of research shows impressive performance of DL-based models in pathology for a multitude of tasks, such as mutation prediction, large-scale pathomics analyses, or prognosis prediction. New approaches integrate multimodal data sources and increasingly rely on multi-purpose foundation models. This review provides an introductory overview of advancements in computational pathology and discusses their implications for the future of histopathology in research and diagnostics.
Affiliation(s)
- David L Hölscher: Department for Nephrology and Clinical Immunology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074 Aachen, Germany; Institute for Pathology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman D Bülow: Institute for Pathology, RWTH Aachen University Hospital, Pauwelsstraße 30, 52074 Aachen, Germany
39
Zhang X, Liu C, Zhu H, Wang T, Du Z, Ding W. A universal multiple instance learning framework for whole slide image analysis. Comput Biol Med 2024; 178:108714. [PMID: 38889627] [DOI: 10.1016/j.compbiomed.2024.108714]
Abstract
BACKGROUND The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module along multiple dimensions, considering feature distribution, instance correlation and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy. RESULTS Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared to recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION Our method improves the classification accuracy of whole slide images in a weakly supervised way and more accurately detects lesion areas.
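The aggregation pipeline this abstract outlines (instance-level standardization, attention over instances, weighted pooling, bag-level classifier) can be sketched in a few lines of numpy; the weights below are random stand-ins, and the deep projection unit, pseudo-label evaluation, and key instance selection modules are omitted, so this is an illustration of the general attention-MIL pattern rather than the authors' model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_bag_score(instances, w_attn, w_cls):
    """Aggregate instance (patch) features into one bag (slide) score.

    instances: (n, d) patch embeddings.
    w_attn, w_cls: (d,) attention and classifier weights (hypothetical,
    untrained; in practice both are learned end-to-end).
    """
    # Instance-level standardization: zero mean, unit variance per feature,
    # improving the separation of instances in the feature space.
    z = (instances - instances.mean(axis=0)) / (instances.std(axis=0) + 1e-8)
    # Attention scores express each instance's contribution to the bag.
    a = softmax(z @ w_attn)
    # Attention-weighted pooling yields a single bag-level representation.
    bag = a @ z
    # A linear bag-level classifier (sigmoid) gives the slide prediction.
    return 1.0 / (1.0 + np.exp(-(bag @ w_cls))), a

rng = np.random.default_rng(0)
score, attn = mil_bag_score(rng.normal(size=(16, 8)),
                            rng.normal(size=8), rng.normal(size=8))
```

The attention weights sum to one across instances, which is what makes them usable as a per-patch relevance map over the slide.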
Affiliation(s)
- Xueqin Zhang: College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China; Shanghai Key Laboratory of Computer Software Evaluating and Testing, Shanghai, 201112, China
- Chang Liu: College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Huitong Zhu: College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Tianqi Wang: College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Zunguo Du: Department of Pathology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Weihong Ding: Department of Urology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
40
Wen Z, Wu H, Ying S. Histopathology Image Classification With Noisy Labels via The Ranking Margins. IEEE Trans Med Imaging 2024; 43:2790-2802. [PMID: 38526889] [DOI: 10.1109/tmi.2024.3381775]
Abstract
Clinically, histopathology images offer a gold standard for disease diagnosis. With the development of artificial intelligence, digital histopathology has significantly improved the efficiency of diagnosis. Nevertheless, noisy labels are inevitable in histopathology images and lead to poor algorithm performance. Curriculum learning is one of the typical methods for solving such problems. However, existing curriculum learning methods either fail to measure the training priority between difficult samples and noisy ones or need an extra clean dataset to establish a valid curriculum scheme. Therefore, a new curriculum learning paradigm is designed based on a proposed ranking function, named The Ranking Margins (TRM). The ranking function measures the 'distances' between samples and decision boundaries, which helps distinguish difficult samples from noisy ones. The proposed method includes three stages: the warm-up stage, the main training stage and the fine-tuning stage. In the warm-up stage, the margin of each sample is obtained through the ranking function. In the main training stage, samples are progressively fed into the networks for training, starting from those with larger margins and moving to those with smaller ones. Label correction is also performed in this stage. In the fine-tuning stage, the networks are retrained on the samples with corrected labels. In addition, we provide theoretical analysis to guarantee the feasibility of TRM. Experiments on two representative histopathology image datasets show that the proposed method achieves substantial improvements over the latest Label Noise Learning (LNL) methods.
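The staged scheme in this abstract can be illustrated with a toy ordering rule; the signed margin used here (predicted probability of the given label minus 0.5) is a deliberately simplified stand-in for the paper's ranking function, not its actual definition:

```python
import numpy as np

def ranking_margins(probs):
    """Signed 'distance' from the decision boundary for each sample.

    probs: model probability assigned to each sample's given label.
    Large positive margins suggest easy, cleanly labelled samples;
    negative margins flag samples whose labels may be noisy.
    """
    return probs - 0.5

def curriculum_order(probs):
    """Main-training-stage schedule: feed largest-margin samples first."""
    return np.argsort(-ranking_margins(probs))

# Sample 1 (prob 0.30 for its label, margin -0.20) is scheduled last,
# making it a natural candidate for the label-correction step.
probs = np.array([0.95, 0.30, 0.70, 0.55])
order = curriculum_order(probs)  # array([0, 2, 3, 1])
```

Warm-up would estimate these margins, and fine-tuning would retrain on the corrected labels; both stages are outside this sketch.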
41
Javed S, Mahmood A, Qaiser T, Werghi N, Rajpoot N. Unsupervised mutual transformer learning for multi-gigapixel Whole Slide Image classification. Med Image Anal 2024; 96:103203. [PMID: 38810517] [DOI: 10.1016/j.media.2024.103203]
Abstract
The classification of gigapixel Whole Slide Images (WSIs) is an important task in the emerging area of computational pathology. There has been a surge of interest in deep learning models for WSI classification with clinical applications such as cancer detection or prediction of cellular mutations. Most supervised methods require expensive and labor-intensive manual annotations by expert pathologists. Weakly supervised Multiple Instance Learning (MIL) methods have recently demonstrated excellent performance; however, they still require large-scale slide-level labeled training datasets that require a careful inspection of each slide by an expert pathologist. In this work, we propose a fully unsupervised WSI classification algorithm based on mutual transformer learning. The instances (i.e., patches) from gigapixel WSIs are transformed into a latent space and then inverse-transformed to the original space. Using the transformation loss, pseudo labels are generated and cleaned using a transformer label cleaner. The proposed transformer-based pseudo-label generator and cleaner modules mutually train each other iteratively in an unsupervised manner. A discriminative learning mechanism is introduced to improve normal versus cancerous instance labeling. In addition to the unsupervised learning, we demonstrate the effectiveness of the proposed framework for weakly supervised learning and cancer subtype classification as downstream analysis. Extensive experiments on four publicly available datasets show better performance of the proposed algorithm compared to the existing state-of-the-art methods.
Affiliation(s)
- Sajid Javed: Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates
- Arif Mahmood: Department of Computer Science, Information Technology University, Lahore, Pakistan
- Talha Qaiser: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Naoufel Werghi: Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, NW1 2DB, UK
42
Zhang A, Chen Z, Mei S, Ji Y, Lin Y, Shi H. DLCNBC-SA: a model for assessing axillary lymph node metastasis status in early breast cancer patients. Quant Imaging Med Surg 2024; 14:5831-5844. [PMID: 39144041] [PMCID: PMC11320494] [DOI: 10.21037/qims-24-257]
Abstract
Background Axillary lymph node (ALN) status is a crucial prognostic indicator for breast cancer metastasis, with manual interpretation of whole slide images (WSIs) being the current standard practice. However, this method is subjective and time-consuming. Recent advancements in deep learning-based methods for medical image analysis have shown promise in improving clinical diagnosis. This study aims to leverage these technological advancements to develop a deep learning model, based on features extracted from primary tumor biopsies, for preoperatively identifying ALN metastasis in early-stage breast cancer patients with negative nodes. Methods We present DLCNBC-SA, a deep learning-based network specifically tailored for core needle biopsy and clinical data feature extraction, which integrates a self-attention mechanism (CNBC-SA). The proposed model consists of a feature extractor based on a convolutional neural network (CNN) and an improved self-attention module, which preserves the independence of features in WSIs and provides a rich feature representation. To validate the performance of the proposed model, we conducted comparative experiments and ablation studies using publicly available datasets, with verification performed through quantitative analysis. Results The comparative experiments illustrate the superior performance of the proposed model in the task of binary classification of ALNs compared to alternative methods. Our method achieved outstanding performance [area under the curve (AUC): 0.882] in this task, significantly surpassing the state-of-the-art (SOTA) method on the same dataset (AUC: 0.862). The ablation experiments reveal that incorporating RandomRotation data augmentation and using the Adadelta optimizer effectively enhance the performance of the proposed model. Conclusions The experimental results demonstrate that the proposed model outperforms the SOTA model on the same dataset, establishing its reliability as an assistant for pathologists in analyzing WSIs of breast cancer. Consequently, it can significantly enhance both the efficiency and accuracy of the diagnostic process.
Affiliation(s)
- Aiguo Zhang: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Zhen Chen: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China; Institute of Spatial Information Technology, Xiamen University of Technology, Xiamen, China
- Shengxiang Mei: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen, China
- Yunfan Ji: College of Computer and Information Engineering, Xiamen University of Technology, Xiamen, China
- Yiqi Lin: School of Mechanical and Automotive Engineering, Xiamen University of Technology, Xiamen, China
- Hua Shi: School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen, China
43
Ma M, Zeng X, Qu L, Sheng X, Ren H, Chen W, Li B, You Q, Xiao L, Wang Y, Dai M, Zhang B, Lu C, Sheng W, Huang D. Advancing Automatic Gastritis Diagnosis: An Interpretable Multilabel Deep Learning Framework for the Simultaneous Assessment of Multiple Indicators. Am J Pathol 2024; 194:1538-1549. [PMID: 38762117] [DOI: 10.1016/j.ajpath.2024.04.007]
Abstract
The evaluation of morphologic features, such as inflammation, gastric atrophy, and intestinal metaplasia, is crucial for diagnosing gastritis. However, artificial intelligence analysis for nontumor diseases like gastritis is limited. Previous deep learning models have omitted important morphologic indicators and cannot simultaneously diagnose gastritis indicators or provide interpretable labels. To address this, an attention-based multi-instance multilabel learning network (AMMNet) was developed to simultaneously achieve the multilabel diagnosis of activity, atrophy, and intestinal metaplasia with only slide-level weak labels. To evaluate AMMNet's real-world performance, a diagnostic test was designed to observe improvements in junior pathologists' diagnostic accuracy and efficiency with and without AMMNet assistance. In this study of 1096 patients from seven independent medical centers, AMMNet performed well in assessing activity [area under the curve (AUC), 0.93], atrophy (AUC, 0.97), and intestinal metaplasia (AUC, 0.93). The false-negative rates of these indicators were only 0.04, 0.08, and 0.18, respectively, and junior pathologists had lower false-negative rates with model assistance (0.15 versus 0.10). Furthermore, AMMNet reduced the time required per whole slide image from 5.46 to 2.85 minutes, enhancing diagnostic efficiency. In block-level clustering analysis, AMMNet effectively visualized task-related patches within whole slide images, improving interpretability. These findings highlight AMMNet's effectiveness in accurately evaluating gastritis morphologic indicators on multicenter data sets. Using multi-instance multilabel learning strategies to support routine diagnostic pathology deserves further evaluation.
Affiliation(s)
- Mengke Ma: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xixi Zeng: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Linhao Qu: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xia Sheng: Department of Pathology, Minhang Hospital, Fudan University, Shanghai, China
- Hongzheng Ren: Department of Pathology, Gongli Hospital, Naval Medical University, Shanghai, China
- Weixiang Chen: Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bin Li: Department of Pathology, Shanghai Xu-Hui Central Hospital, Shanghai, China
- Qinghua You: Department of Pathology, Shanghai Pudong Hospital, Fudan University Pudong Medical Center, Shanghai, China
- Li Xiao: Department of Pathology, Huadong Hospital, Shanghai, China
- Yi Wang: Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Mei Dai: Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Boqiang Zhang: Shanghai Foremost Medical Technology Co. Ltd., Shanghai, China
- Changqing Lu: Shanghai Foremost Medical Technology Co. Ltd., Shanghai, China
- Weiqi Sheng: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Dan Huang: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
44
Miyahira AK, Kamran SC, Jamaspishvili T, Marshall CH, Maxwell KN, Parolia A, Zorko NA, Pienta KJ, Soule HR. Disrupting prostate cancer research: Challenge accepted; report from the 2023 Coffey-Holden Prostate Cancer Academy Meeting. Prostate 2024; 84:993-1015. [PMID: 38682886] [DOI: 10.1002/pros.24721]
Abstract
INTRODUCTION The 2023 Coffey-Holden Prostate Cancer Academy (CHPCA) Meeting, themed "Disrupting Prostate Cancer Research: Challenge Accepted," was convened at the University of California, Los Angeles, Luskin Conference Center, in Los Angeles, CA, from June 22 to 25, 2023. METHODS 2023 marked the 10th Annual CHPCA Meeting, a discussion-oriented scientific think-tank conference convened annually by the Prostate Cancer Foundation, which centers on innovative and emerging research topics deemed pivotal for advancing critical unmet needs in prostate cancer research and clinical care. The 2023 CHPCA Meeting was attended by 81 academic investigators and included 40 talks across 8 sessions. RESULTS The central topic areas covered at the meeting included: targeting transcription factor neo-enhancesomes in cancer, AR as a pro-differentiation and oncogenic transcription factor, why few are cured with androgen deprivation therapy and how to change dogma to cure metastatic prostate cancer without castration, reducing prostate cancer morbidity and mortality with genetics, opportunities for radiation to enhance therapeutic benefit in oligometastatic prostate cancer, novel immunotherapeutic approaches, and the new era of artificial intelligence-driven precision medicine. DISCUSSION This article provides an overview of the scientific presentations delivered at the 2023 CHPCA Meeting, so that this knowledge can help facilitate the advancement of prostate cancer research worldwide.
Affiliation(s)
- Andrea K Miyahira: Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
- Sophia C Kamran: Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tamara Jamaspishvili: Department of Pathology and Laboratory Medicine, SUNY Upstate Medical University, Syracuse, New York, USA
- Catherine H Marshall: Department of Oncology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kara N Maxwell: Department of Medicine-Hematology/Oncology and Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Medicine Service, Corporal Michael J. Crescenz VA Medical Center, Philadelphia, Pennsylvania, USA
- Abhijit Parolia: Department of Pathology, Rogel Cancer Center, University of Michigan, Ann Arbor, Michigan, USA
- Nicholas A Zorko: Division of Hematology, Oncology and Transplantation, Department of Medicine, University of Minnesota, Minneapolis, Minnesota, USA; University of Minnesota Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota, USA
- Kenneth J Pienta: The James Buchanan Brady Urological Institute, The Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Howard R Soule: Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
45
Schmidt A, Morales-Alvarez P, Molina R. Probabilistic Attention Based on Gaussian Processes for Deep Multiple Instance Learning. IEEE Trans Neural Netw Learn Syst 2024; 35:10909-10922. [PMID: 37027623] [DOI: 10.1109/tnnls.2023.3245329]
Abstract
Multiple instance learning (MIL) is a weakly supervised learning paradigm that is becoming increasingly popular because it requires less labeling effort than fully supervised methods. This is especially interesting for areas where the creation of large annotated datasets remains challenging, as in medicine. Although recent deep learning MIL approaches have obtained state-of-the-art results, they are fully deterministic and do not provide uncertainty estimations for the predictions. In this work, we introduce the attention Gaussian process (AGP) model, a novel probabilistic attention mechanism based on Gaussian processes (GPs) for deep MIL. AGP provides accurate bag-level predictions as well as instance-level explainability and can be trained end-to-end. Moreover, its probabilistic nature guarantees robustness to overfitting on small datasets and provides uncertainty estimations for the predictions. The latter is especially important in medical applications, where decisions have a direct impact on the patient's health. The proposed model is validated experimentally as follows. First, its behavior is illustrated in two synthetic MIL experiments based on the well-known MNIST and CIFAR-10 datasets, respectively. Then, it is evaluated in three different real-world cancer detection experiments. AGP outperforms state-of-the-art MIL approaches, including deterministic deep learning ones. It shows a strong performance even on a small dataset with fewer than 100 labels and generalizes better than competing methods on an external test set. Moreover, we experimentally show that predictive uncertainty correlates with the risk of wrong predictions, and therefore it is a good indicator of reliability in practice. Our code is publicly available.
46
Zheng Q, Wang X, Yang R, Fan J, Yuan J, Liu X, Wang L, Xiao Z, Chen Z. Predicting tumor mutation burden and VHL mutation from renal cancer pathology slides with self-supervised deep learning. Cancer Med 2024; 13:e70112. [PMID: 39166457] [PMCID: PMC11336896] [DOI: 10.1002/cam4.70112]
Abstract
BACKGROUND Tumor mutation burden (TMB) and VHL mutation play a crucial role in the management of patients with clear cell renal cell carcinoma (ccRCC), such as guiding adjuvant chemotherapy and improving clinical outcomes. However, time-consuming and expensive high-throughput sequencing methods severely limit their clinical applicability. Predicting intratumoral heterogeneity poses significant challenges in biology and clinical settings. We aimed to develop a self-supervised attention-based multiple instance learning (SSL-ABMIL) model to predict TMB and VHL mutation status from hematoxylin and eosin-stained histopathological images. METHODS We obtained whole slide images (WSIs) and somatic mutation data of 350 ccRCC patients from The Cancer Genome Atlas for developing the SSL-ABMIL model. In parallel, 163 ccRCC patients from the Clinical Proteomic Tumor Analysis Consortium cohort were used as an independent external validation set. We systematically compared three different models (Wang-ABMIL, Ciga-ABMIL, and ImageNet-MIL) for their ability to predict TMB and VHL alterations. RESULTS We first identified two groups of populations with high- and low-TMB (cut-off point = 0.9). In two independent cohorts, the Wang-ABMIL model achieved the highest performance with good generalization (AUROC = 0.83 ± 0.02 and 0.8 ± 0.04 in predicting TMB and VHL, respectively). Attention heatmaps revealed that the Wang-ABMIL model paid the highest attention to tumor regions in high-TMB patients, while in VHL mutation prediction, non-tumor regions were also assigned high attention, particularly the stromal regions infiltrated by lymphocytes. CONCLUSIONS Our results indicate that SSL-ABMIL can effectively extract histological features for predicting TMB and VHL mutation, demonstrating promising results in linking tumor morphology and molecular biology.
Affiliation(s)
- Qingyuan Zheng: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Xinyu Wang: Centre for Reproductive Science, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Rui Yang: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Junjie Fan: University of Chinese Academy of Sciences, Beijing, China; Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China
- Jingping Yuan: Department of Pathology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Xiuheng Liu: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Lei Wang: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Zhuoni Xiao: Centre for Reproductive Science, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Zhiyuan Chen: Department of Urology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China; Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
47
Bergstrom EN, Abbasi A, Díaz-Gay M, Galland L, Ladoire S, Lippman SM, Alexandrov LB. Deep Learning Artificial Intelligence Predicts Homologous Recombination Deficiency and Platinum Response From Histologic Slides. J Clin Oncol 2024:JCO2302641. [PMID: 39083703] [DOI: 10.1200/jco.23.02641]
Abstract
PURPOSE Cancers with homologous recombination deficiency (HRD) can benefit from platinum salts and poly(ADP-ribose) polymerase inhibitors. Standard diagnostic tests for detecting HRD require molecular profiling, which is not universally available. METHODS We trained DeepHRD, a deep learning platform for predicting HRD from hematoxylin and eosin (H&E)-stained histopathological slides, using primary breast (n = 1,008) and ovarian (n = 459) cancers from The Cancer Genome Atlas (TCGA). DeepHRD was compared with four standard HRD molecular tests using breast (n = 349) and ovarian (n = 141) cancers from multiple independent data sets, including platinum-treated clinical cohorts with RECIST progression-free survival (PFS), complete response (CR), and overall survival (OS) endpoints. RESULTS DeepHRD predicted HRD from held-out H&E-stained breast cancer slides in TCGA with an AUC of 0.81 (95% CI, 0.77 to 0.85). This performance was confirmed in two independent primary breast cancer cohorts (AUC, 0.76 [95% CI, 0.71 to 0.82]). In an external platinum-treated metastatic breast cancer cohort, samples predicted as HRD had higher CR rates (AUC, 0.76 [95% CI, 0.54 to 0.93]) with a 3.7-fold increase in median PFS (14.4 v 3.9 months; P = .0019) and a hazard ratio (HR) of 0.45 (P = .0047). There were no significant differences in nonplatinum treatment outcome by predicted HRD status in three breast cancer cohorts, including CR (AUC, 0.39) and PFS (HR, 0.98; P = .95) in taxane-treated metastatic breast cancer. Through transfer learning to high-grade serous ovarian cancer, DeepHRD-predicted HRD samples had better OS after first-line (HR, 0.46; P = .030) and neoadjuvant (HR, 0.49; P = .015) platinum therapy in two cohorts. CONCLUSION DeepHRD can predict HRD in breast and ovarian cancers directly from routine H&E slides across multiple external cohorts, slide scanners, and tissue fixation variables. When compared with molecular testing, DeepHRD classified 1.8- to 3.1-fold more patients with HRD, who exhibited better OS in high-grade serous ovarian cancer and platinum-specific PFS in metastatic breast cancer.
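The AUC values reported throughout this abstract reduce to a simple rank statistic: the probability that a randomly chosen positive sample (e.g., truly HRD) is scored above a randomly chosen negative one, with ties counting half. A minimal dependency-free sketch of that statistic, not DeepHRD's evaluation code:

```python
def auroc(labels, scores):
    """Rank-based AUROC: fraction of (positive, negative) pairs in which
    the positive sample receives the higher score (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 of 4 positive/negative pairs are correctly ordered
auc = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

An AUC of 0.5 corresponds to random ranking, 1.0 to perfect separation; values like the abstract's 0.81 mean a positive slide outranks a negative one 81% of the time.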
Affiliation(s)
- Erik N Bergstrom
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Ammal Abbasi
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Marcos Díaz-Gay
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Loïck Galland
- Department of Medical Oncology, Centre Georges-François Leclerc, Dijon, France
- Platform of Transfer in Biological Oncology, Centre Georges-François Leclerc, Dijon, France
- University of Burgundy-Franche Comté, France
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France
- Sylvain Ladoire
- Department of Medical Oncology, Centre Georges-François Leclerc, Dijon, France
- Platform of Transfer in Biological Oncology, Centre Georges-François Leclerc, Dijon, France
- University of Burgundy-Franche Comté, France
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France
- Ludmil B Alexandrov
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Sanford Stem Cell Institute, University of California San Diego, La Jolla, CA
48
Border SP, Tomaszewski JE, Yoshida T, Kopp JB, Hodgin JB, Clapp WL, Rosenberg AZ, Buyon JP, Sarder P. Investigating quantitative histological characteristics in renal pathology using HistoLens. Sci Rep 2024; 14:17528. [PMID: 39080444 PMCID: PMC11289473 DOI: 10.1038/s41598-024-68406-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2024] [Accepted: 07/23/2024] [Indexed: 08/02/2024] Open
Abstract
HistoLens is an open-source graphical user interface developed using MATLAB AppDesigner for visual and quantitative analysis of histological datasets. HistoLens enables users to interrogate sets of digitally annotated whole slide images to efficiently characterize histological differences between disease and experimental groups. Users can dynamically visualize the distribution of 448 hand-engineered features quantifying color, texture, morphology, and distribution across microanatomic sub-compartments. Additionally, users can map differentially detected image features within the images by highlighting affected regions. We demonstrate the utility of HistoLens to identify hand-engineered features that correlate with pathognomonic renal glomerular characteristics distinguishing diabetic nephropathy and amyloid nephropathy from the histologically unremarkable glomeruli in minimal change disease. Additionally, we examine the use of HistoLens for glomerular feature discovery in the Tg26 mouse model of HIV-associated nephropathy. We identify numerous quantitative glomerular features distinguishing Tg26 transgenic mice from wild-type mice, corresponding to a progressive renal disease phenotype. Thus, we demonstrate an off-the-shelf and ready-to-use toolkit for quantitative renal pathology applications.
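The kind of hand-engineered color, texture, and morphology features HistoLens visualizes can be illustrated with a toy extractor. The feature names and choices below are hypothetical stand-ins for illustration, not HistoLens's actual 448-feature set or API:

```python
import numpy as np

def basic_region_features(rgb, mask):
    """Toy hand-engineered features for one microanatomic sub-compartment:
    per-channel color statistics, a texture proxy (gradient energy),
    and a morphology proxy (area fraction of the compartment)."""
    region = rgb[mask]                       # pixels inside the compartment
    feats = {}
    for i, ch in enumerate("RGB"):
        feats[f"mean_{ch}"] = float(region[:, i].mean())
        feats[f"std_{ch}"] = float(region[:, i].std())
    gray = rgb.mean(axis=2)                  # simple grayscale conversion
    gy, gx = np.gradient(gray)
    feats["gradient_energy"] = float(((gx**2 + gy**2)[mask]).mean())
    feats["area_fraction"] = float(mask.mean())
    return feats

# Toy patch: top half bright, bottom half dark; whole patch as the "compartment"
rgb = np.zeros((4, 4, 3))
rgb[:2] = 1.0
mask = np.ones((4, 4), dtype=bool)
feats = basic_region_features(rgb, mask)
```

Computing such scalar features per annotated compartment and comparing their distributions across disease groups is the general workflow the abstract describes.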
Affiliation(s)
- Samuel P Border
- Section of Quantitative Health, Division of Nephrology, Hypertension, and Renal Transplantation, Department of Medicine, University of Florida, 1600 SW Archer Rd., Gainesville, FL, 32608, USA
- John E Tomaszewski
- Department of Pathology & Anatomical Sciences, University at Buffalo, Buffalo, NY, USA
- Teruhiko Yoshida
- Kidney Disease Section, Kidney Diseases Branch, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Jeffrey B Kopp
- Kidney Disease Section, Kidney Diseases Branch, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, USA
- Jeffrey B Hodgin
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- William L Clapp
- Department of Pathology, Immunology and Laboratory Medicine, University of Florida, Gainesville, FL, USA
- Avi Z Rosenberg
- Department of Pathology, Johns Hopkins Medical Institutions, Baltimore, MD, USA
- Jill P Buyon
- New York University Grossman School of Medicine, New York, NY, USA
- Pinaki Sarder
- Section of Quantitative Health, Division of Nephrology, Hypertension, and Renal Transplantation, Department of Medicine, University of Florida, 1600 SW Archer Rd., Gainesville, FL, 32608, USA
49
Faa G, Coghe F, Pretta A, Castagnola M, Van Eyken P, Saba L, Scartozzi M, Fraschini M. Artificial Intelligence Models for the Detection of Microsatellite Instability from Whole-Slide Imaging of Colorectal Cancer. Diagnostics (Basel) 2024; 14:1605. [PMID: 39125481 PMCID: PMC11311951 DOI: 10.3390/diagnostics14151605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2024] [Revised: 07/19/2024] [Accepted: 07/24/2024] [Indexed: 08/12/2024] Open
Abstract
With the advent of whole-slide imaging (WSI), a technology that can digitally scan whole slides in high resolution, pathology is undergoing a digital revolution. Detecting microsatellite instability (MSI) in colorectal cancer is crucial for proper treatment, as it identifies patients eligible for immunotherapy. Even though universal testing for MSI is recommended, particularly in patients affected by colorectal cancer (CRC), many patients remain untested, especially in low-income countries. A critical need exists for accessible, low-cost tools to perform MSI pre-screening. Here, the potential of the most relevant artificial intelligence (AI)-driven models to predict microsatellite instability directly from histology alone is discussed, focusing on CRC. The role of deep learning (DL) models in identifying MSI status is analyzed across the most relevant studies reporting algorithms trained for this task. The key performance results and main limitations of each AI method are discussed. The models proposed for algorithm sharing among multiple research and clinical centers, including federated learning (FL) and swarm learning (SL), are reported. According to all the studies reported here, AI models are valuable tools for predicting MSI status from WSI alone in CRC. The use of digitized H&E-stained sections and a trained algorithm allows the extraction of relevant molecular information, such as MSI status, in a short time and at a low cost. The possible advantages of introducing DL methods into routine surgical pathology are highlighted, and the acceleration of the digital transformation of pathology departments and services is recommended.
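The federated learning (FL) approach mentioned in the abstract, where multiple centers improve a shared model without exchanging patient slides, reduces in its simplest form (federated averaging, FedAvg) to a size-weighted average of locally trained model weights. A minimal sketch of that aggregation step under the assumption of a flat parameter list, not any specific FL framework's API:

```python
def federated_average(site_weights, site_sizes):
    """One FedAvg aggregation step: each center trains locally on its own
    slides, and only the resulting weights are pooled, weighted by the
    number of local samples. Patient data never leaves the center."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Two hypothetical centers: the second has 3x the data, so its weights dominate
global_weights = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

Swarm learning follows the same data-stays-local principle but replaces the central aggregation server with peer-to-peer coordination.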
Affiliation(s)
- Gavino Faa
- Dipartimento di Scienze Mediche e Sanità Pubblica, University of Cagliari, 09123 Cagliari, Italy
- Ferdinando Coghe
- UOC Laboratorio Analisi, AOU of Cagliari, 09123 Cagliari, Italy
- Andrea Pretta
- Medical Oncology Unit, University Hospital and University of Cagliari, 09042 Cagliari, Italy
- Massimo Castagnola
- Laboratorio di Proteomica, Centro Europeo di Ricerca sul Cervello, IRCCS Fondazione Santa Lucia, 00179 Rome, Italy
- Peter Van Eyken
- Division of Pathology, Genk Regional Hospital, 3600 Genk, Belgium
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria, University of Cagliari, 40138 Cagliari, Italy
- Mario Scartozzi
- Medical Oncology Unit, University Hospital and University of Cagliari, 09042 Cagliari, Italy
- Matteo Fraschini
- Dipartimento di Ingegneria Elettrica ed Elettronica, University of Cagliari, 09123 Cagliari, Italy
50
Viet CT, Zhang M, Dharmaraj N, Li GY, Pearson AT, Manon VA, Grandhi A, Xu K, Aouizerat BE, Young S. Artificial Intelligence Applications in Oral Cancer and Oral Dysplasia. Tissue Eng Part A 2024. [PMID: 39041628 DOI: 10.1089/ten.tea.2024.0096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/24/2024] Open
Abstract
Oral squamous cell carcinoma (OSCC) is a highly unpredictable disease with devastating mortality rates that have not changed over the past decades, despite advancements in treatments and biomarkers that have improved survival for other cancers. Delays in diagnosis are frequent, leading to more disfiguring treatments and poor outcomes in patients. The clinical challenge lies in identifying those patients at highest risk for developing OSCC. Oral epithelial dysplasia (OED) is a precursor of OSCC with highly variable behavior across patients. There is no reliable clinical, pathologic, histologic, or molecular biomarker to determine individual risk in OED patients. Similarly, there are no robust biomarkers to predict treatment outcomes or mortality of OSCC patients. This review aims to highlight advancements in artificial intelligence (AI)-based methods to develop predictive biomarkers of OED transformation to OSCC and of OSCC mortality and treatment response. Machine learning-based biomarkers, such as S100A7, show promise for appraising the risk of malignant transformation of OED. Machine learning-enhanced multiplex immunohistochemistry (mIHC) workflows examine immune cell patterns and organization within the tumor immune microenvironment to generate outcome predictions in immunotherapy. Deep learning (DL) is an AI-based method using an extended neural network or related architecture with multiple "hidden" layers of simulated neurons to combine simple visual features into complex patterns. DL-based digital pathology is currently being developed to assess OED and OSCC outcomes. The integration of machine learning in epigenomics aims to examine the epigenetic modification of diseases and improve our ability to detect, classify, and predict outcomes associated with epigenetic marks. Collectively, these tools showcase promising advancements in discovery and technology, which may provide a potential solution to the current limitations in predicting OED transformation and OSCC behavior, both of which are clinical challenges that must be addressed in order to improve OSCC survival.
Affiliation(s)
- Chi Tonglien Viet
- Loma Linda University, Department of Oral and Maxillofacial Surgery, Loma Linda, California, United States
- Michael Zhang
- Loma Linda University, Department of Oral and Maxillofacial Surgery, Loma Linda, California, United States
- Neeraja Dharmaraj
- The University of Texas Health Science Center at Houston School of Dentistry, Bernard & Gloria Pepper Katz Department of Oral and Maxillofacial Surgery, Houston, Texas, United States
- Grace Y Li
- The University of Chicago Medical Center, Department of Medicine, Section of Hematology/Oncology, Chicago, Illinois, United States
- Alexander T Pearson
- The University of Chicago Medical Center, Department of Medicine, Section of Hematology/Oncology, Chicago, Illinois, United States
- Victoria A Manon
- The University of Texas Health Science Center at Houston School of Dentistry, Bernard & Gloria Pepper Katz Department of Oral and Maxillofacial Surgery, Houston, Texas, United States
- Anupama Grandhi
- Loma Linda University, Department of Oral and Maxillofacial Surgery, Loma Linda, California, United States
- Ke Xu
- Yale School of Medicine, Department of Psychiatry, New Haven, Connecticut, United States
- VA Connecticut Healthcare System - West Haven Campus, West Haven, Connecticut, United States
- Bradley E Aouizerat
- New York University College of Dentistry, Translational Research Center, New York, New York, United States
- Simon Young
- The University of Texas Health Science Center at Houston School of Dentistry, Bernard & Gloria Pepper Katz Department of Oral and Maxillofacial Surgery, Houston, Texas, United States