1
Kim JS, Lee JH, Yeon Y, An DY, Kim SJ, Noh MG, Lee S. Predicting Nottingham grade in breast cancer digital pathology using a foundation model. Breast Cancer Res 2025; 27:58. [PMID: 40253353 PMCID: PMC12008962 DOI: 10.1186/s13058-025-02019-4]
Abstract
BACKGROUND The Nottingham histologic grade is crucial for assessing severity and predicting prognosis in breast cancer, a prevalent cancer worldwide. Traditional grading relies on subjective expert judgment, requires extensive pathological expertise, is time-consuming, and often leads to inter-observer variability. METHODS To address these limitations, we developed an AI-based model that predicts Nottingham grade from whole-slide images of hematoxylin and eosin (H&E)-stained breast cancer tissue using a pathology foundation model. Using the TCGA database, we trained and evaluated the model on 521 H&E breast cancer slide images with available Nottingham scores through internal split validation, and further validated its clinical utility on an additional 597 cases without Nottingham scores. The model leveraged deep features extracted from a pathology foundation model (UNI) and incorporated 14 distinct multiple instance learning (MIL) algorithms. RESULTS The best-performing model achieved an F1 score of 0.731 and a multiclass average AUC of 0.835. The top 300 genes correlated with model predictions were significantly enriched in pathways related to cell division and chromosome segregation, supporting the model's biological relevance. The predicted grades showed a statistically significant association with 5-year overall survival (p < 0.05). CONCLUSION Our AI-based automated Nottingham grading system provides an efficient and reproducible tool for breast cancer assessment, offering potential for standardization of histologic grade in clinical practice.
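The pipeline this abstract describes — a frozen foundation model producing per-patch embeddings that an MIL head pools into one slide-level prediction — can be sketched in a few lines. The sketch below is a generic attention-based MIL pooling, not the paper's exact algorithm (the authors compare 14 MIL variants, and UNI embeddings are far higher-dimensional than the toy vectors here); the weights `V` and `w` stand in for learned parameters.

```python
import math

def attention_mil_pool(patch_feats, V, w):
    """Pool patch embeddings into one slide embedding.
    Each patch gets a score w . tanh(V h_i); a softmax over the
    scores yields attention weights used for a weighted sum."""
    scores = []
    for h in patch_feats:
        hidden = [math.tanh(sum(V[j][k] * h[k] for k in range(len(h))))
                  for j in range(len(V))]
        scores.append(sum(wj * hj for wj, hj in zip(w, hidden)))
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(patch_feats[0])
    slide = [sum(a * h[k] for a, h in zip(attn, patch_feats))
             for k in range(dim)]
    return slide, attn
```

The attention weights double as a patch-level heatmap, which is how MIL models are typically inspected for biological plausibility.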
Affiliation(s)
- Jun Seo Kim
- Department of Computer Engineering, Gachon University, Seongnam, 13120, South Korea
- Jeong Hoon Lee
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Yousung Yeon
- Department of Computer Engineering, Gachon University, Seongnam, 13120, South Korea
- Do-Yeon An
- Department of Computer Engineering, Gachon University, Seongnam, 13120, South Korea
- Seok Jun Kim
- Department of Computer Engineering, Gachon University, Seongnam, 13120, South Korea
- Myung-Giun Noh
- Department of Pathology, School of Medicine, Ajou University, Suwon, 16499, South Korea
- Suehyun Lee
- Department of Computer Engineering, Gachon University, Seongnam, 13120, South Korea
2
Saeed A, Ismail MA, Ghanem NM. Colorectal cancer classification using weakly annotated whole slide images: Multiple instance learning optimization study. Comput Biol Med 2025; 186:109649. [PMID: 39798507 DOI: 10.1016/j.compbiomed.2024.109649]
Abstract
Colorectal cancer (CRC) is one of the deadliest cancer types today. Its incidence is rising rapidly due to factors such as unhealthy lifestyles, water and food pollution, aging populations, and improved diagnostic capabilities. Detecting CRC in its early stages can help stop its growth by enabling timely treatment, thereby saving many lives. Doctors can perform various tests to diagnose CRC; however, biopsy with histopathological imaging is considered the "gold standard" for CRC diagnosis. Deep learning techniques can now be leveraged to build computer-aided diagnosis (CAD) systems that assess whether an input sample shows any signs of cancer and determine its stage and location with an acceptable degree of confidence. In this research, we utilize deep learning to study the CRC classification problem using weakly annotated histopathological whole slide images (WSIs). We relax the constraints of the multiple instance learning (MIL) algorithm and propose WSI-label prediction functions to be integrated with MIL, which significantly enhances the performance of WSI-level classification. We also applied efficient preprocessing techniques that produce a computationally efficient dataset representation, and performed multiple experiments to compose the most efficient CAD system. Our study introduces a notable improvement over the baseline results, achieving an accuracy of 93.05% compared to 84.17%. Furthermore, our results using only weakly annotated WSIs outperformed the baseline results, which relied on initial pre-training with a strongly annotated part of the dataset.
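The "WSI-label prediction functions" the abstract refers to are aggregation rules that map patch-level predictions to a slide-level label under relaxed MIL constraints. As an illustrative sketch only (the rule, `k`, and `threshold` here are hypothetical choices, not the authors' functions), one such rule averages the top-k patch probabilities instead of trusting a single maximal instance:

```python
def slide_label_from_patches(patch_probs, k=3, threshold=0.5):
    """Aggregate patch-level tumor probabilities into a WSI label:
    average the k most confident patches and threshold the result,
    which is more noise-robust than classic max-instance MIL."""
    top_k = sorted(patch_probs, reverse=True)[:k]
    score = sum(top_k) / len(top_k)
    return int(score >= threshold), score
```

Relaxing the classic MIL assumption (one positive instance suffices) to a top-k rule like this is a common way to suppress false-positive patches in weakly supervised WSI classification.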
Affiliation(s)
- Ahmed Saeed
- Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University, Alexandria, Egypt
- Mohamed A Ismail
- Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University, Alexandria, Egypt
- Nagia M Ghanem
- Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University, Alexandria, Egypt
3
Silveira JA, da Silva AR, de Lima MZT. Harnessing artificial intelligence for predicting breast cancer recurrence: a systematic review of clinical and imaging data. Discov Oncol 2025; 16:135. [PMID: 39921795 PMCID: PMC11807043 DOI: 10.1007/s12672-025-01908-6]
Abstract
Breast cancer is a leading cause of mortality among women, and predicting its recurrence remains a significant challenge. In this context, artificial intelligence (AI) can serve as a powerful tool for analyzing large amounts of data and predicting cancer recurrence, potentially enabling personalized medical treatment and improving patients' quality of life. This systematic review examines the role of AI in predicting breast cancer recurrence using clinical data, imaging data, and combined datasets. Support Vector Machines (SVMs) and neural networks, especially when applied to combined data, demonstrate strong potential for improving prediction accuracy. SVMs are effective with high-dimensional clinical data, while neural networks excel in genetic and molecular analysis. Despite these advancements, limitations such as dataset diversity, sample size, and evaluation standardization persist, emphasizing the need for further research. AI integration in recurrence prediction offers promising prospects for personalized care but requires rigorous validation for safe clinical application.
Affiliation(s)
- Alexandre Ray da Silva
- OncoAI, Oncologia Inteligência Artificial, Cel Jose Eusebio, 95, Sao Paulo, Sao Paulo, 01239-030, Brazil
- Mariana Zuliani Theodoro de Lima
- OncoAI, Oncologia Inteligência Artificial, Cel Jose Eusebio, 95, Sao Paulo, Sao Paulo, 01239-030, Brazil
- Engineering School, Mackenzie Presbyterian University, Consolacao street, 930, Sao Paulo, Sao Paulo, 01302-907, Brazil
4
Kamal SA, Du Y, Khalid M, Farrash M, Dhelim S. DRSegNet: A cutting-edge approach to Diabetic Retinopathy segmentation and classification using parameter-aware Nature-Inspired optimization. PLoS One 2024; 19:e0312016. [PMID: 39637079 PMCID: PMC11620556 DOI: 10.1371/journal.pone.0312016]
Abstract
Diabetic retinopathy (DR) is a leading cause of blindness globally and a diagnostically challenging disease, owing to its intricate development process and the complexity of the human eye, which consists of nearly forty connected components such as the retina, iris, and optic nerve. This study proposes a novel approach to DR identification that employs synthetic data generation, a K-Means Clustering-Based Binary Grey Wolf Optimizer (KCBGWO), and Fully Convolutional Encoder-Decoder Networks (FCEDN). Generative Adversarial Networks (GANs) generate high-quality synthetic data, and transfer learning provides accurate feature extraction and classification, integrated with Extreme Learning Machines (ELM). Our extensive evaluation on the IDRiD dataset yields exceptional outcomes: the proposed model achieves 99.87% accuracy, 99.33% sensitivity, and 99.78% specificity. These results are promising both for the further development of the proposed approach to DR diagnosis and for establishing a new reference point in medical image analysis, enabling more effective and timely treatment.
Affiliation(s)
- Sundreen Asad Kamal
- School of Electronics and Information Technology, Xi’an Jiaotong University, Xian, China
- Youtian Du
- School of Electronics and Information Technology, Xi’an Jiaotong University, Xian, China
- Majdi Khalid
- Department of Computer Science and Artificial Intelligence, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
- Majed Farrash
- Department of Computer Science and Artificial Intelligence, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
5
Hashimoto N, Hanada H, Miyoshi H, Nagaishi M, Sato K, Hontani H, Ohshima K, Takeuchi I. Multimodal Gated Mixture of Experts Using Whole Slide Image and Flow Cytometry for Multiple Instance Learning Classification of Lymphoma. J Pathol Inform 2024; 15:100359. [PMID: 38322152 PMCID: PMC10844119 DOI: 10.1016/j.jpi.2023.100359]
Abstract
In this study, we present a deep-learning-based multimodal classification method for lymphoma diagnosis in digital pathology, which utilizes a whole slide image (WSI) as the primary image data and flow cytometry (FCM) data as auxiliary information. In the pathological diagnosis of malignant lymphoma, FCM serves as valuable auxiliary information, offering useful insights for predicting the major class (superclass) of subtypes. By incorporating both images and FCM data into the classification process, we can develop a method that mimics the diagnostic process of pathologists, enhancing explainability. To incorporate the hierarchical structure between superclasses and their subclasses, the proposed method utilizes a network structure that effectively combines mixture of experts (MoE) and multiple instance learning (MIL) techniques, where MIL is widely recognized for its effectiveness in handling WSIs in digital pathology. The MoE network in the proposed method consists of a gating network for superclass classification and multiple expert networks for (sub)class classification, each specialized for one superclass. To evaluate the effectiveness of our method, we conducted experiments on a six-class classification task using 600 lymphoma cases. The proposed method achieved a classification accuracy of 72.3%, surpassing the 69.5% obtained through the straightforward combination of FCM and images, as well as the 70.2% achieved by the method using only images. Moreover, the combination of multiple weights in the MoE and MIL allows for the visualization of specific cellular and tumor regions, resulting in a highly explainable model that cannot be attained with conventional methods. We anticipate that, by targeting a larger number of classes and increasing the number of expert networks, the proposed method could be effectively applied to the real problem of lymphoma diagnosis.
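The gating idea — a superclass gate whose weights mix superclass-specific expert outputs into one subclass distribution — can be sketched as follows. This is a toy, single-sample version with hypothetical logit values; in the actual model both the gate and the experts compute their logits from WSI and FCM features via MIL networks.

```python
import math

def softmax(logits):
    m = max(logits)                          # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def moe_predict(gate_logits, expert_logits):
    """Gated mixture of experts: the gate scores each superclass, and
    the final subclass distribution is the gate-weighted mixture of
    every superclass expert's subclass distribution."""
    gate = softmax(gate_logits)              # one weight per superclass expert
    n_classes = len(expert_logits[0])
    mixed = [0.0] * n_classes
    for g, logits in zip(gate, expert_logits):
        for k, p in enumerate(softmax(logits)):
            mixed[k] += g * p
    return gate, mixed
```

Because the output is a convex combination of expert distributions, a confident gate effectively routes the sample to the expert for its predicted superclass, which is what makes the gate weights themselves interpretable.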
Affiliation(s)
- Noriaki Hashimoto
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroyuki Hanada
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroaki Miyoshi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Miharu Nagaishi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Kensaku Sato
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Hidekata Hontani
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 4668555, Japan
- Koichi Ohshima
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
6
Kumar S, Chatterjee S. HistoSPACE: Histology-inspired spatial transcriptome prediction and characterization engine. Methods 2024; 232:107-114. [PMID: 39521362 DOI: 10.1016/j.ymeth.2024.11.002]
Abstract
Spatial transcriptomics (ST) enables the visualization of gene expression within the context of tissue morphology. This emerging discipline has the potential to serve as a foundation for developing tools to design precision medicines. However, due to the higher costs and expertise required for such experiments, its translation into regular clinical practice may be challenging. Although modern deep learning has been applied to enhance the information obtained from histological images, efforts have been constrained by the limited diversity of available information. In this paper, we developed a model, HistoSPACE, that exploits the diversity of histological images available with ST data to extract molecular insights from tissue images. Our approach further allows us to link the predicted expression with disease pathology. We built an image encoder derived from a universal image autoencoder, connected it to convolution blocks to form the final model, and fine-tuned it with ST data. The model has few parameters and requires less system memory and training time, making it lightweight compared with traditional histological models. It demonstrates significant efficiency compared to contemporary algorithms, achieving a correlation of 0.56 in leave-one-out cross-validation. Finally, its robustness was validated on an independent dataset, showing predictions consistent with predefined disease pathology. Our code is available at https://github.com/samrat-lab/HistoSPACE.
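The leave-one-out correlation of 0.56 quoted above is the standard way ST prediction models are scored: predicted spot-level expression is correlated against the measured values, gene by gene. A minimal sketch of that metric (plain Pearson correlation; nothing model-specific):

```python
import math

def pearson_r(predicted, measured):
    """Pearson correlation between predicted and measured expression
    values for one gene across tissue spots."""
    n = len(predicted)
    mp = sum(predicted) / n
    mm = sum(measured) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(predicted, measured))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    sm = math.sqrt(sum((m - mm) ** 2 for m in measured))
    return cov / (sp * sm)
```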
Affiliation(s)
- Shivam Kumar
- Complex Analysis Group, Translational Health Science and Technology Institute, NCR Biotech Science Cluster, Faridabad-Gurgaon Expressway, Faridabad, 121001, India
- Samrat Chatterjee
- Complex Analysis Group, Translational Health Science and Technology Institute, NCR Biotech Science Cluster, Faridabad-Gurgaon Expressway, Faridabad, 121001, India
7
Katayama A, Aoki Y, Watanabe Y, Horiguchi J, Rakha EA, Oyama T. Current status and prospects of artificial intelligence in breast cancer pathology: convolutional neural networks to prospective Vision Transformers. Int J Clin Oncol 2024; 29:1648-1668. [PMID: 38619651 DOI: 10.1007/s10147-024-02513-3]
Abstract
Breast cancer is the most prevalent cancer among women, and its diagnosis requires the accurate identification and classification of histological features for effective patient management. Artificial intelligence, particularly through deep learning, represents the next frontier in cancer diagnosis and management. Notably, convolutional neural networks and the emerging Vision Transformers (ViT) have been reported to automate pathologists' tasks, including tumor detection and classification, in addition to improving the efficiency of pathology services. Deep learning applications have also been extended to the prediction of protein expression, molecular subtype, mutation status, therapeutic efficacy, and outcome directly from hematoxylin and eosin-stained slides, bypassing the need for immunohistochemistry or genetic testing. This review explores the current status and prospects of deep learning in breast cancer diagnosis, with a focus on whole-slide image analysis. Artificial intelligence is increasingly applied to many tasks in breast pathology, ranging from disease diagnosis to outcome prediction, and thus serves as a valuable tool for assisting pathologists and supporting breast cancer management.
Affiliation(s)
- Ayaka Katayama
- Diagnostic Pathology, Gunma University Graduate School of Medicine, 3-39-22 Showamachi, Maebashi, Gunma, 371-8511, Japan
- Yuki Aoki
- Center for Mathematics and Data Science, Gunma University, Maebashi, Japan
- Yukako Watanabe
- Clinical Training Center, Gunma University Hospital, Maebashi, Japan
- Jun Horiguchi
- Department of Breast Surgery, International University of Health and Welfare, Narita, Japan
- Emad A Rakha
- Department of Histopathology, School of Medicine, University of Nottingham, University Park, Nottingham, UK
- Department of Pathology, Hamad Medical Corporation, Doha, Qatar
- Tetsunari Oyama
- Diagnostic Pathology, Gunma University Graduate School of Medicine, 3-39-22 Showamachi, Maebashi, Gunma, 371-8511, Japan
8
Mooghal M, Nasir S, Arif A, Khan W, Rashid YA, Vohra LM. Innovations in Artificial Intelligence-Driven Breast Cancer Survival Prediction: A Narrative Review. Cancer Inform 2024; 23:11769351241272389. [PMID: 39483314 PMCID: PMC11526191 DOI: 10.1177/11769351241272389]
Abstract
This narrative review explores the burgeoning field of Artificial Intelligence (AI)-driven Breast Cancer (BC) survival prediction, emphasizing the transformative impact on patient care. From machine learning to deep neural networks, diverse models demonstrate the potential to refine prognosis accuracy and tailor treatment strategies. The literature underscores the need for clinician integration and addresses challenges of model generalizability and ethical considerations. Crucially, AI's promise extends to Low- and Middle-Income Countries (LMICs), presenting an opportunity to bridge healthcare disparities. Collaborative efforts in research, technology transfer, and education are essential to empower healthcare professionals in LMICs. As we navigate this frontier, AI emerges not only as a technological advancement but as a guiding light toward personalized, accessible BC care, marking a significant stride in the global fight against this formidable disease.
Affiliation(s)
- Mehwish Mooghal
- Section Breast Surgery, Department of Surgery, Aga Khan University Hospital Karachi, Sindh, Pakistan
- Saad Nasir
- Department of Medicine, Aga Khan University Hospital Karachi, Sindh, Pakistan
- Aiman Arif
- Department of Surgery, Aga Khan University Hospital Karachi, Sindh, Pakistan
- Wajiha Khan
- Department of Surgery & Medicine, Dow University of Health Sciences, Sindh, Pakistan
- Yasmin Abdul Rashid
- Section Medical Oncology, Department of Medicine, Aga Khan University Hospital Karachi, Sindh, Pakistan
- Lubna M Vohra
- Section Breast Surgery, Department of Surgery, Aga Khan University Hospital Karachi, Sindh, Pakistan
9
Li M, Hou X, Yan W, Wang D, Yu R, Li X, Li F, Chen J, Wei L, Liu J, Wang H, Zeng Q. Identification of Bipolar Disorder and Schizophrenia Based on Brain CT and Deep Learning Methods. J Imaging Inform Med 2024. [PMID: 39327378 DOI: 10.1007/s10278-024-01279-4]
Abstract
With the increasing prevalence of mental illness, accurate clinical diagnosis is crucial. Compared with MRI, CT has the advantages of wide availability, low cost, short scanning time, and high patient cooperation. This study aims to construct a deep learning (DL) model based on CT images to identify bipolar disorder (BD) and schizophrenia (SZ). A total of 506 patients (BD = 227, SZ = 279) and 179 healthy controls (HC) were collected from January 2022 to May 2023 at two hospitals and divided into an internal training set and an internal validation set at a ratio of 4:1. An additional 65 patients (BD = 35, SZ = 30) and 40 HC were recruited from different hospitals and served as an external test set. All subjects underwent conventional brain CT examination. The DenseMD model, which identifies BD and SZ using multiple instance learning, was developed and compared with other classical DL models. DenseMD performed best with an accuracy of 0.745 in the internal validation set, whereas the ResNet-18, ResNeXt-50, and DenseNet-121 models achieved accuracies of 0.672, 0.664, and 0.679, respectively. On the external test set, DenseMD again outperformed the other models with an accuracy of 0.724, compared with 0.657, 0.638, and 0.676 for ResNet-18, ResNeXt-50, and DenseNet-121, respectively. These results establish the potential of DL models for identifying BD and SZ from brain CT images and show that the DenseMD model identifies them better than other classical DL models.
Affiliation(s)
- Meilin Li
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, 250000, China
- Shandong First Medical University, Jinan, 250000, China
- Xingyu Hou
- Department of Psychiatry, Shandong Mental Health Center, Shandong University, Jinan, 250000, China
- Wanying Yan
- Infervision Medical Technology Co., Ltd, Beijing, 100000, China
- Dawei Wang
- Infervision Medical Technology Co., Ltd, Beijing, 100000, China
- Ruize Yu
- Infervision Medical Technology Co., Ltd, Beijing, 100000, China
- Xixiang Li
- Department of Radiology, Zaozhuang Mental Health Center (Zaozhuang Municipal No. 2 Hospital), Zaozhuang, 277000, China
- Fuyan Li
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, 250000, China
- Jinming Chen
- Department of Radiology, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan, 250000, China
- Lingzhen Wei
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, 250000, China
- School of Clinical Medicine, Jining Medical University, Jining, 272000, China
- Jiahao Liu
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, 250000, China
- Shandong First Medical University, Jinan, 250000, China
- Huaizhen Wang
- The First Clinical Medical College, Shandong University of Traditional Chinese Medicine, Jinan, 250000, China
- Qingshi Zeng
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, Jinan, 250000, China
10
Verma R, Alban TJ, Parthasarathy P, Mokhtari M, Toro Castano P, Cohen ML, Lathia JD, Ahluwalia M, Tiwari P. Sexually dimorphic computational histopathological signatures prognostic of overall survival in high-grade gliomas via deep learning. Sci Adv 2024; 10:eadi0302. [PMID: 39178259 PMCID: PMC11343024 DOI: 10.1126/sciadv.adi0302]
Abstract
High-grade glioma (HGG) is an aggressive brain tumor, and sex is an important factor that differentially affects survival outcomes in HGG. We used an end-to-end deep learning approach on hematoxylin and eosin (H&E) scans to (i) identify sex-specific histopathological attributes of the tumor microenvironment (TME) and (ii) create sex-specific risk profiles to prognosticate overall survival. Surgically resected H&E-stained tissue slides were analyzed in a two-stage approach using ResNet18 deep learning models: first, to segment the viable tumor regions and, second, to build sex-specific prognostic models for predicting overall survival. Our mResNet-Cox model yielded C-indices of 0.696, 0.736, 0.731, and 0.729 for the female cohort and 0.729, 0.738, 0.724, and 0.696 for the male cohort across the training cohort and three independent validation cohorts, respectively. End-to-end deep learning approaches using routine H&E-stained slides, trained separately on male and female patients with HGG, may allow for identifying sex-specific histopathological attributes of the TME associated with survival and, ultimately, building patient-centric prognostic risk assessment models.
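The C-index values quoted above measure how often the model's predicted risk agrees with the observed ordering of survival times. A minimal sketch of Harrell's concordance index (the standard definition, not the authors' exact evaluation code):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: over all comparable pairs (the patient with
    the shorter time had an observed event, event flag 1), count how
    often the higher predicted risk belongs to the patient who failed
    earlier; risk ties count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A value of 0.5 is chance level, so the roughly 0.70-0.74 values reported here indicate moderately strong risk stratification in both cohorts.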
Affiliation(s)
- Ruchika Verma
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Tyler J. Alban
- Center for Immunotherapy and Precision Immuno-Oncology, Cleveland Clinic Foundation, Cleveland, OH, USA
- Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Prerana Parthasarathy
- Center for Immunotherapy and Precision Immuno-Oncology, Cleveland Clinic Foundation, Cleveland, OH, USA
- Mojgan Mokhtari
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Mark L. Cohen
- Department of Pathology, University Hospitals Case Medical Center, Cleveland, OH, USA
- Justin D. Lathia
- Lerner Research Institute, Cleveland Clinic, Cleveland, OH, USA
- Rose Ella Burkhardt Brain Tumor and Neuro-Oncology Center, Cleveland Clinic, Cleveland, OH, USA
- Case Comprehensive Cancer Center, Case Western Reserve University, Cleveland, OH, USA
- Manmeet Ahluwalia
- Miami Cancer Institute, Miami, FL, USA
- Herbert Wertheim College of Medicine, Florida International University, University Park, FL, USA
- Pallavi Tiwari
- Departments of Radiology and Biomedical Engineering, University of Wisconsin–Madison, Madison, WI, USA
- Carbone Cancer Center, Madison, WI, USA
- William S. Middleton Memorial Veterans Affairs Healthcare, Madison, WI, USA
11
Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. J Imaging Inform Med 2024; 37:1728-1751. [PMID: 38429563 PMCID: PMC11300721 DOI: 10.1007/s10278-024-01049-2]
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. This paper explains survival analysis and feature extraction methods for analyzing cancer patients. It also compiles essential acronyms related to cancer precision medicine. Furthermore, it is noteworthy that this is the inaugural review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review article to guide future research directions in the field of cancer research.
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
12
Darbandsari A, Farahani H, Asadi M, Wiens M, Cochrane D, Khajegili Mirabadi A, Jamieson A, Farnell D, Ahmadvand P, Douglas M, Leung S, Abolmaesumi P, Jones SJM, Talhouk A, Kommoss S, Gilks CB, Huntsman DG, Singh N, McAlpine JN, Bashashati A. AI-based histopathology image analysis reveals a distinct subset of endometrial cancers. Nat Commun 2024; 15:4973. [PMID: 38926357 PMCID: PMC11208496 DOI: 10.1038/s41467-024-49017-2]
Abstract
Endometrial cancer (EC) has four molecular subtypes with strong prognostic value and therapeutic implications. The most common subtype (NSMP; No Specific Molecular Profile) is assigned after exclusion of the defining features of the other three molecular subtypes and includes patients with heterogeneous clinical outcomes. In this study, we employ artificial intelligence (AI)-powered histopathology image analysis to differentiate between p53abn and NSMP EC subtypes and consequently identify a sub-group of NSMP EC patients with markedly inferior progression-free and disease-specific survival (termed 'p53abn-like NSMP') in a discovery cohort of 368 patients and two independent validation cohorts of 290 and 614 patients from other centers. Shallow whole genome sequencing reveals a higher burden of copy number abnormalities in the 'p53abn-like NSMP' group compared to NSMP, suggesting that this group is biologically distinct from other NSMP ECs. Our work demonstrates the power of AI to detect prognostically distinct and otherwise unrecognizable subsets of EC where conventional molecular or pathologic criteria fall short, refining image-based tumor classification. The findings of this study apply exclusively to females.
Affiliation(s)
- Amirali Darbandsari
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Hossein Farahani
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Maryam Asadi
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Matthew Wiens
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Dawn Cochrane
- Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Amy Jamieson
- Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- David Farnell
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- Pouya Ahmadvand
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Maxwell Douglas
- Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Samuel Leung
- Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Steven J M Jones
- Michael Smith Genome Sciences Center, British Columbia Cancer Research Center, Vancouver, BC, Canada
- Aline Talhouk
- Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- Stefan Kommoss
- Department of Women's Health, Tübingen University Hospital, Tübingen, Germany
- C Blake Gilks
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- David G Huntsman
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Naveena Singh
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Vancouver General Hospital, Vancouver, BC, Canada
- Jessica N McAlpine
- Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- Ali Bashashati
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
13
Boissin C, Wang Y, Sharma A, Weitz P, Karlsson E, Robertson S, Hartman J, Rantalainen M. Deep learning-based risk stratification of preoperative breast biopsies using digital whole slide images. Breast Cancer Res 2024; 26:90. PMID: 38831336; PMCID: PMC11145850; DOI: 10.1186/s13058-024-01840-7.
Abstract
BACKGROUND Nottingham histological grade (NHG) is a well-established prognostic factor in breast cancer histopathology but has high inter-assessor variability, with many tumours classified as intermediate grade, NHG2. Here, we evaluate whether DeepGrade, a previously developed model for risk stratification of resected tumour specimens, can be applied to risk-stratify tumour biopsy specimens. METHODS A total of 11,955,755 tiles from 1169 whole slide images of preoperative biopsies from 896 patients diagnosed with breast cancer in Stockholm, Sweden, were included. DeepGrade, a deep convolutional neural network model, was applied for the prediction of low- and high-risk tumours. It was evaluated against clinically assigned grades NHG1 and NHG3 on the biopsy specimen, and also against the grades assigned to the corresponding resection specimen, using the area under the receiver operating characteristic curve (AUC). The prognostic value of the DeepGrade model in the biopsy setting was evaluated using time-to-event analysis. RESULTS Based on preoperative biopsy images, the DeepGrade model predicted resected tumour cases of clinical grades NHG1 and NHG3 with an AUC of 0.908 (95% CI 0.88-0.93). Furthermore, of the 432 resected clinically assigned NHG2 tumours, 281 (65%) were classified as DeepGrade-low and 151 (35%) as DeepGrade-high. Using a multivariable Cox proportional hazards model, the hazard ratio between the DeepGrade low- and high-risk groups was estimated as 2.01 (95% CI 1.06-3.79). CONCLUSIONS DeepGrade predicted tumour grades NHG1 and NHG3 on the resection specimen using only the biopsy specimen. The results demonstrate that the DeepGrade model can provide decision support to identify high-risk tumours based on preoperative biopsies, thus improving early treatment decisions.
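The AUC used to evaluate DeepGrade is the standard area under the ROC curve, which can be computed directly as a rank statistic (its Mann-Whitney interpretation: the probability that a randomly chosen positive is scored above a randomly chosen negative). The sketch below is a generic illustration, not the authors' evaluation code.

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney statistic over all positive/negative
    pairs; ties between scores count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The quadratic pairwise loop is fine for illustration; production implementations sort once and use rank sums instead.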
Affiliation(s)
- Constance Boissin
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Yinxi Wang
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Abhinav Sharma
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Philippe Weitz
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Emelie Karlsson
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Johan Hartman
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
- MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
- Mattias Rantalainen
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- MedTechLabs, BioClinicum, Karolinska University Hospital, Stockholm, Sweden
14
van Diest PJ, Flach RN, van Dooijeweert C, Makineli S, Breimer GE, Stathonikos N, Pham P, Nguyen TQ, Veta M. Pros and cons of artificial intelligence implementation in diagnostic pathology. Histopathology 2024; 84:924-934. PMID: 38433288; DOI: 10.1111/his.15153.
Abstract
The rapid introduction of digital pathology has greatly facilitated the development of artificial intelligence (AI) models in pathology that have shown great promise in assisting morphological diagnostics and quantitation of therapeutic targets. We are now at a tipping point where companies have started to bring algorithms to market, raising the question of whether the pathology community is ready to implement AI in routine workflow, as well as concerns about its use. This article reviews the pros and cons of introducing AI in diagnostic pathology.
Affiliation(s)
- Paul J van Diest
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Rachel N Flach
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
- Seher Makineli
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Department of Surgical Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
- Gerben E Breimer
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Nikolas Stathonikos
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Paul Pham
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Tri Q Nguyen
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Mitko Veta
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
15
Wang Z, Gao J, Li M, Zuo E, Chen C, Chen C, Liang F, Lv X, Ma Y. DIEANet: an attention model for histopathological image grading of lung adenocarcinoma based on dimensional information embedding. Sci Rep 2024; 14:6209. PMID: 38485967; PMCID: PMC10940683; DOI: 10.1038/s41598-024-56355-0.
Abstract
Efficient and rapid auxiliary diagnosis of different grades of lung adenocarcinoma helps doctors accelerate individualized diagnosis and treatment, thus improving patient prognosis. Pathological images of lung adenocarcinoma tissues at different grades often exhibit large intra-class differences and small inter-class differences. If attention mechanisms such as Coordinate Attention (CA) are applied directly to lung adenocarcinoma grading, they tend to compress feature information excessively and overlook information dependencies within the same dimension. We therefore propose a Dimension Information Embedding Attention Network (DIEANet) for lung adenocarcinoma grading. Specifically, we combine different pooling methods to automatically select local regions of key growth patterns such as lung adenocarcinoma cells, enhancing the model's focus on local information. Additionally, we employ an interactive fusion approach to concentrate feature information within the same dimension and across dimensions, thereby improving model performance. Extensive experiments show that, at equal computational cost, DIEANet with a ResNet34 backbone reaches an accuracy of 88.19%, with an AUC of 96.61%, MCC of 81.71%, and Kappa of 81.16%, achieving state-of-the-art objective metrics compared to seven other attention mechanisms. It also aligns more closely with the visual attention of pathology experts under subjective visual assessment.
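Among the metrics DIEANet reports, the Matthews correlation coefficient (MCC) has a compact closed form. The sketch below computes it for the binary case from confusion-matrix counts; the multiclass variant used for grading generalises this, and the code is an illustration, not the authors' implementation.

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1):
    +1 = perfect, 0 = chance-level, -1 = total disagreement."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # convention: return 0 when any marginal is empty
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike accuracy, MCC stays informative under class imbalance, which is why grading papers report it alongside accuracy and kappa.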
Affiliation(s)
- Zexin Wang
- College of Software, Xinjiang University, Urumqi, 830046, China
- Jing Gao
- Xinjiang Key Laboratory of Clinical Genetic Testing and Biomedical Information, Karamay, 834099, China
- Xinjiang Clinical Research Center for Precision Medicine of Digestive System Tumor, Karamay, 834099, China
- Department of Pathology, Karamay Central Hospital, Karamay, 834099, China
- Min Li
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China
- Enguang Zuo
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
- Chen Chen
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
- Cheng Chen
- College of Software, Xinjiang University, Urumqi, 830046, China
- Fei Liang
- Xinjiang Key Laboratory of Clinical Genetic Testing and Biomedical Information, Karamay, 834099, China
- Xinjiang Clinical Research Center for Precision Medicine of Digestive System Tumor, Karamay, 834099, China
- Department of Pathology, Karamay Central Hospital, Karamay, 834099, China
- Xiaoyi Lv
- College of Software, Xinjiang University, Urumqi, 830046, China
- College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Urumqi, 830046, China
- Xinjiang Cloud Computing Application Laboratory, Karamay, 834099, China
- Yuhua Ma
- Xinjiang Key Laboratory of Clinical Genetic Testing and Biomedical Information, Karamay, 834099, China
- Xinjiang Clinical Research Center for Precision Medicine of Digestive System Tumor, Karamay, 834099, China
- Department of Pathology, Karamay Central Hospital, Karamay, 834099, China
16
Bilal A, Liu X, Shafiq M, Ahmed Z, Long H. NIMEQ-SACNet: A novel self-attention precision medicine model for vision-threatening diabetic retinopathy using image data. Comput Biol Med 2024; 171:108099. PMID: 38364659; DOI: 10.1016/j.compbiomed.2024.108099.
Abstract
In the realm of precision medicine, the potential of deep learning is progressively harnessed to facilitate intricate clinical decision-making, especially when navigating multifaceted datasets encompassing omics, clinical, image, device, social, and environmental dimensions. This study accentuates the criticality of image data, given its instrumental role in detecting and classifying vision-threatening diabetic retinopathy (VTDR), a predominant global contributor to vision impairment. The timely identification of VTDR is a linchpin for efficacious interventions and the mitigation of vision loss. Addressing this, this study introduces NIMEQ-SACNet, a novel hybrid model that combines the Enhanced Quantum-Inspired Binary Grey Wolf Optimizer (EQI-BGWO) with a self-attention capsule network (SACNet). The proposed approach is characterized by two pivotal advancements: first, the augmentation of binary grey wolf optimization with quantum computing methodologies; and second, the deployment of EQI-BGWO to calibrate the SACNet's parameters, culminating in a notable uplift in VTDR classification accuracy. The model handles binary, 5-stage, and 7-stage VTDR classifications. Rigorous assessments on the fundus image dataset, using metrics such as accuracy, sensitivity, specificity, precision, F1-score, and MCC, demonstrate that NIMEQ-SACNet outperforms prevailing algorithms and classification frameworks.
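For readers unfamiliar with binary grey wolf optimisation, the sketch below runs a plain BGWO loop on a toy one-max objective, binarising the continuous position update with a sigmoid transfer function. The quantum-inspired enhancements of the paper's EQI-BGWO are not modelled here; every detail below (population size, transfer function, decay schedule, toy fitness) is an illustrative assumption rather than the authors' method.

```python
import math
import random

def fitness(bits):
    """Toy objective: maximize the number of ones."""
    return sum(bits)

def bgwo(dim=20, wolves=8, iters=30, seed=0):
    """Minimal binary grey wolf optimizer: positions are bit vectors,
    and each wolf moves toward the three best wolves (alpha, beta,
    delta) through a sigmoid transfer of the averaged continuous step."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pop.sort(key=fitness, reverse=True)
        alpha, beta, delta = pop[0], pop[1], pop[2]
        a = 2 - 2 * t / iters              # exploration factor decays 2 -> 0
        new_pop = []
        for w in pop:
            child = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)
                    C = 2 * rng.random()
                    dist = abs(C * leader[d] - w[d])
                    x += leader[d] - A * dist
                x /= 3.0                   # average pull of the three leaders
                prob = 1 / (1 + math.exp(-10 * (x - 0.5)))  # sigmoid transfer
                child.append(1 if rng.random() < prob else 0)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = bgwo()
```

In the paper's setting, the bit vector would encode hyperparameter choices for the capsule network and the fitness would be validation performance, which is far costlier than this toy objective.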
Affiliation(s)
- Anas Bilal
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Xiaowen Liu
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Muhammad Shafiq
- School of Information Engineering, Qujing Normal University, Sichuan, China
- Zohaib Ahmed
- Department of Criminology and Forensic Sciences, Lahore Garrison University, Lahore, Pakistan
- Haixia Long
- College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
17
Sharma A, Weitz P, Wang Y, Liu B, Vallon-Christersson J, Hartman J, Rantalainen M. Development and prognostic validation of a three-level NHG-like deep learning-based model for histological grading of breast cancer. Breast Cancer Res 2024; 26:17. PMID: 38287342; PMCID: PMC10823657; DOI: 10.1186/s13058-024-01770-4.
Abstract
BACKGROUND Histological grade is a well-known prognostic factor that is routinely assessed in breast tumours. However, manual assessment of Nottingham Histological Grade (NHG) has high inter-assessor and inter-laboratory variability, causing uncertainty in grade assignments. To address this challenge, we developed and validated a three-level NHG-like deep learning-based histological grade model (predGrade). The primary performance evaluation focuses on prognostic performance. METHODS This observational study is based on two patient cohorts (SöS-BC-4, N = 2421 (training and internal test); SCAN-B-Lund, N = 1262 (test)) that include routine histological whole-slide images (WSIs) together with patient outcomes. A deep convolutional neural network (CNN) model with an attention mechanism was optimised for the classification of the three-level histological grade (NHG) from haematoxylin and eosin-stained WSIs. The prognostic performance was evaluated by time-to-event analysis of recurrence-free survival and compared to clinical NHG grade assignments in the internal test set as well as in the fully independent external test cohort. RESULTS We observed effect sizes (hazard ratios) for grade 3 versus 1 for the conventional NHG method (HR = 2.60, 95% CI 1.18-5.70, p-value = 0.017) and the deep learning model (HR = 2.27, 95% CI 1.07-4.82, p-value = 0.033) in the internal test set after adjusting for established clinicopathological risk factors. In the external test set, the unadjusted HR for clinical NHG 2 versus 1 was estimated to be 2.59 (p-value = 0.004), and for clinical NHG 3 versus 1 to be 3.58 (p-value < 0.001). For predGrade, the unadjusted HR was estimated to be 2.52 (p-value = 0.030) for predGrade 2 versus 1 and 4.07 (p-value = 0.001) for predGrade 3 versus 1 in the independent external test set. In multivariable analysis, HR estimates for neither clinical NHG nor predGrade were found to be significant (p-value > 0.05). We tested for differences in HR estimates between NHG and predGrade in the independent test set and found no significant difference between the two classification models (p-value > 0.05), confirming similar prognostic performance. CONCLUSION Routine histopathology assessment of NHG has a high degree of inter-assessor variability, motivating the development of model-based decision support to improve reproducibility in histological grading. We found that the proposed model (predGrade) provides prognostic performance similar to clinical NHG. The results indicate that deep CNN-based models can be applied for breast cancer histological grading.
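Hazard ratios like those above are obtained by exponentiating Cox model log-coefficients, and a Wald confidence interval follows from the coefficient's standard error. A minimal helper (generic statistics, not code from the paper; the `hazard_ratio` name is ours):

```python
import math

def hazard_ratio(beta, se, z=1.96):
    """Hazard ratio and Wald 95% CI from a Cox model log-coefficient
    `beta` and its standard error `se` (z = 1.96 for 95% coverage)."""
    hr = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return hr, lower, upper
```

An HR of 2.27 for grade 3 versus 1, as reported above, means the modelled instantaneous event rate is 2.27 times higher in the grade-3 group; a CI excluding 1 corresponds to significance at the matching alpha level.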
Affiliation(s)
- Abhinav Sharma
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Philippe Weitz
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Yinxi Wang
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Bojing Liu
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Division of Precision Medicine, Department of Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Johan Hartman
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden
- MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden
- Mattias Rantalainen
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden
18
Kang J, Lafata K, Kim E, Yao C, Lin F, Rattay T, Nori H, Katsoulakis E, Lee CI. Artificial intelligence across oncology specialties: current applications and emerging tools. BMJ Oncology 2024; 3:e000134. PMID: 39886165; PMCID: PMC11203066; DOI: 10.1136/bmjonc-2023-000134.
Abstract
Oncology is becoming increasingly personalised through advancements in precision diagnostics and therapeutics, with more and more data available on both ends to create individualised plans. The depth and breadth of these data are outpacing our natural ability to interpret them. Artificial intelligence (AI) provides a solution to ingest and digest this data deluge to improve detection, prediction and skill development. In this review, we provide multidisciplinary perspectives on oncology applications touched by AI (imaging, pathology, patient triage, radiotherapy, genomics-driven therapy and surgery) and on integration with existing tools (natural language processing, digital twins and clinical informatics).
Affiliation(s)
- John Kang
- Department of Radiation Oncology, University of Washington, Seattle, Washington, USA
- Kyle Lafata
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Radiology, Duke University, Durham, North Carolina, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina, USA
- Ellen Kim
- Department of Radiation Oncology, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Christopher Yao
- Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, Ontario, Canada
- Frank Lin
- Kinghorn Centre for Clinical Genomics, Garvan Institute of Medical Research, Darlinghurst, New South Wales, Australia
- NHMRC Clinical Trials Centre, Camperdown, New South Wales, Australia
- Faculty of Medicine, St Vincent's Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Tim Rattay
- Department of Genetics and Genome Biology, University of Leicester Cancer Research Centre, Leicester, UK
- Harsha Nori
- Microsoft Research, Redmond, Washington, USA
- Evangelia Katsoulakis
- Department of Radiation Oncology, University of South Florida, Tampa, Florida, USA
- Veterans Affairs Informatics and Computing Infrastructure, Salt Lake City, Utah, USA
19
Wahab N, Toss M, Miligy IM, Jahanifar M, Atallah NM, Lu W, Graham S, Bilal M, Bhalerao A, Lashen AG, Makhlouf S, Ibrahim AY, Snead D, Minhas F, Raza SEA, Rakha E, Rajpoot N. AI-enabled routine H&E image based prognostic marker for early-stage luminal breast cancer. NPJ Precis Oncol 2023; 7:122. PMID: 37968376; PMCID: PMC10651910; DOI: 10.1038/s41698-023-00472-y.
Abstract
Breast cancer (BC) grade is a well-established subjective prognostic indicator of tumour aggressiveness. Tumour heterogeneity and subjective assessment result in a high degree of inter-observer variability in BC grading. Here we propose an objective Haematoxylin & Eosin (H&E) image-based prognostic marker for early-stage luminal/HER2-negative BReAst CancEr that we term the BRACE marker. The proposed BRACE marker is derived from AI-based assessment of heterogeneity in BC at a detailed level using the power of deep learning. The prognostic ability of the marker is validated in two well-annotated cohorts (Cohort-A/Nottingham: n = 2122 and Cohort-B/Coventry: n = 311) of early-stage luminal/HER2-negative BC patients treated with endocrine therapy and with long-term follow-up. The BRACE marker is able to stratify patients for both distant metastasis-free survival (p = 0.001, C-index: 0.73) and BC-specific survival (p < 0.0001, C-index: 0.84), showing prediction accuracy comparable to the Nottingham Prognostic Index and Magee scores, both of which are derived from manual histopathological assessment, and identifying luminal BC patients who may be likely to benefit from adjuvant chemotherapy.
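The C-index used to validate the BRACE marker is Harrell's concordance index: the fraction of comparable patient pairs in which the higher predicted risk corresponds to the earlier observed event. A minimal sketch with right-censoring handled in the usual pairwise way (illustrative only; the function and its tie-handling convention are assumptions, not the authors' code):

```python
def concordance_index(times, events, scores):
    """Harrell's C-index. A pair (i, j) is comparable when subject i
    has an observed event (events[i] == 1) strictly before time j;
    higher `scores` are taken to mean higher predicted risk."""
    wins = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    wins += 1
                elif scores[i] == scores[j]:
                    wins += 0.5       # tied risk counts as half
    return wins / comparable
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect concordance, so the reported 0.73 and 0.84 indicate substantially better-than-random risk ordering.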
Affiliation(s)
- Noorul Wahab
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Michael Toss
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Histopathology, Sheffield Teaching Hospitals NHS Trust, Sheffield, UK
- Islam M Miligy
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Koum, Egypt
- Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Nehal M Atallah
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Koum, Egypt
- Wenqi Lu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Simon Graham
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
- Mohsin Bilal
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Abhir Bhalerao
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Ayat G Lashen
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Koum, Egypt
- Shorouk Makhlouf
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Assiut University, Asyut, Egypt
- Asmaa Y Ibrahim
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- David Snead
- Histofy Ltd, Birmingham, UK
- The Alan Turing Institute, London, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Emad Rakha
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham, Nottingham, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
- The Alan Turing Institute, London, UK
20
Mohanty S, Shivanna DB, Rao RS, Astekar M, Chandrashekar C, Radhakrishnan R, Sanjeevareddygari S, Kotrashetti V, Kumar P. Building Automation Pipeline for Diagnostic Classification of Sporadic Odontogenic Keratocysts and Non-Keratocysts Using Whole-Slide Images. Diagnostics (Basel) 2023; 13:3384. PMID: 37958281; PMCID: PMC10648794; DOI: 10.3390/diagnostics13213384.
Abstract
The microscopic diagnostic differentiation of odontogenic cysts from other cysts is intricate and may cause perplexity for both clinicians and pathologists. Of particular interest is the odontogenic keratocyst (OKC), a developmental cyst with unique histopathological and clinical characteristics, distinguished by its aggressive nature and high tendency for recurrence. Clinicians encounter challenges in dealing with this frequently encountered jaw lesion, as there is no consensus on surgical treatment. Accurate and early diagnosis of such cysts will therefore benefit clinicians in treatment management and spare subjects the mental agony of suffering from aggressive OKCs, which impact their quality of life. The objective of this research is to develop an automated OKC diagnostic system that can function as a decision support tool for pathologists, whether they are working locally or remotely, providing additional data and insights to enhance their decision-making. This research provides an automation pipeline to classify whole-slide images (WSIs) of OKCs and non-keratocysts (non-KCs: dentigerous and radicular cysts). OKC diagnosis and prognosis through deep-learning-based histopathological analysis of WSIs is an emerging research area; WSIs have the unique advantage of magnifying tissue at high resolution without losing information. The contribution of this research is a novel, efficient, deep-learning-based algorithm that reduces the trainable parameters and, in turn, the memory footprint. This is achieved using principal component analysis (PCA) and the ReliefF feature selection algorithm in a convolutional neural network named P-C-ReliefF. The proposed model reduces the trainable parameters compared to a standard CNN, achieving 97% classification accuracy.
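The ReliefF component of P-C-ReliefF scores features by contrasting each sample's nearest same-class neighbour (hit) with its nearest different-class neighbour (miss). The sketch below implements a simplified single-neighbour Relief variant for binary labels; the full ReliefF used in the paper averages over k neighbours and handles multiple classes, and this function is an illustrative assumption, not the authors' pipeline.

```python
import math
import random

def relief_scores(X, y, n_samples=50, seed=0):
    """Simplified Relief feature weights: reward features that differ
    on the nearest miss and agree on the nearest hit."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    w = [0.0] * d

    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    for _ in range(n_samples):
        i = rng.randrange(n)
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: dist(X[i], X[j]))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: dist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[miss][f]) - abs(X[i][f] - X[hit][f])
    return [v / n_samples for v in w]
```

Features with high scores separate the classes locally; in a pipeline like P-C-ReliefF, low-scoring features can be pruned to shrink the trainable parameter count.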
Affiliation(s)
- Samahit Mohanty
- Department of Computer Science and Engineering, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India
- Divya B. Shivanna
- Department of Computer Science and Engineering, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India
- Roopa S. Rao
- Department of Oral Pathology and Microbiology, Faculty of Dental Sciences, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India
- Madhusudan Astekar
- Department of Oral Pathology, Institute of Dental Sciences, Bareilly 243006, India
- Chetana Chandrashekar
- Department of Oral & Maxillofacial Pathology & Microbiology, Manipal College of Dental Sciences, Manipal 576104, India
- Raghu Radhakrishnan
- Department of Oral & Maxillofacial Pathology & Microbiology, Manipal College of Dental Sciences, Manipal 576104, India
- Vijayalakshmi Kotrashetti
- Department of Oral & Maxillofacial Pathology & Microbiology, Maratha Mandal’s Nathajirao G Halgekar Institute of Dental Science & Research Centre, Belgaum 590010, India
- Prashant Kumar
- Department of Oral & Maxillofacial Pathology, Nijalingappa Institute of Dental Science & Research, Gulbarga 585105, India
21
Lee J, Han C, Kim K, Park GH, Kwak JT. CaMeL-Net: Centroid-aware metric learning for efficient multi-class cancer classification in pathology images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 241:107749. [PMID: 37579551 DOI: 10.1016/j.cmpb.2023.107749] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 07/25/2023] [Accepted: 08/05/2023] [Indexed: 08/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Cancer grading in pathology image analysis is a major task due to its importance in patient care, treatment, and management. Recent developments in artificial neural networks for computational pathology have demonstrated great potential to improve the accuracy and quality of cancer diagnosis. These improvements are generally attributable to advances in network architecture, which often come at the cost of increased computation and resources. In this work, we propose an efficient convolutional neural network designed to conduct multi-class cancer classification in an accurate and robust manner via metric learning. METHODS We propose a centroid-aware metric learning network for improved cancer grading in pathology images. The proposed network utilizes centroids of different classes within the feature embedding space to optimize the relative distances between pathology images, which manifest the innate similarities and dissimilarities between them. For improved optimization, we introduce a new loss function and a training strategy tailored to the proposed network and metric learning. RESULTS We evaluated the proposed approach on multiple datasets of colorectal and gastric cancers. For colorectal cancer, two datasets collected under different acquisition settings were employed. The proposed method achieved an accuracy, F1-score, and quadratic weighted kappa of 88.7%, 0.849, and 0.946 on the first dataset and 83.3%, 0.764, and 0.907 on the second dataset, respectively. For gastric cancer, the proposed method obtained an accuracy of 85.9%, an F1-score of 0.793, and a quadratic weighted kappa of 0.939. We also found that the proposed method outperforms other competing models and is computationally efficient. CONCLUSIONS The experimental results demonstrate that the predictions of the proposed network are both accurate and reliable. The proposed network not only outperformed other related methods in cancer classification but also achieved superior computational efficiency during training and inference. Future work will entail further development of the proposed method and its application to other problems and domains.
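The centroid idea above can be made concrete with a toy objective: embeddings are pulled toward their own class centroid and hinged away from the others. This is a hedged NumPy sketch of the general technique only, not the paper's actual loss function or training strategy:

```python
import numpy as np

def centroid_metric_loss(emb, labels, margin=1.0):
    """Toy centroid-aware metric objective: pull each embedding toward
    its own class centroid (squared distance) and apply a hinge that
    pushes it at least `margin` away from every other class centroid."""
    classes = np.unique(labels)                     # sorted class ids
    cents = np.stack([emb[labels == c].mean(axis=0) for c in classes])
    total = 0.0
    for x, y in zip(emb, labels):
        d = np.linalg.norm(cents - x, axis=1)       # distance to each centroid
        own = int(np.searchsorted(classes, y))      # index of own class
        total += d[own] ** 2                        # pull term
        total += np.maximum(0.0, margin - np.delete(d, own)).sum()  # push term
    return total / len(emb)
```

Well-separated class clusters drive both terms toward zero, which is the property such a loss optimizes during training.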
Affiliation(s)
- Jaeung Lee
- School of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Chiwon Han
- Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Kyungeun Kim
- Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Gi-Ho Park
- Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Jin Tae Kwak
- School of Electrical Engineering, Korea University, Seoul, Republic of Korea
22
Asif A, Rajpoot K, Graham S, Snead D, Minhas F, Rajpoot N. Unleashing the potential of AI for pathology: challenges and recommendations. J Pathol 2023; 260:564-577. [PMID: 37550878 PMCID: PMC10952719 DOI: 10.1002/path.6168] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 06/21/2023] [Accepted: 06/22/2023] [Indexed: 08/09/2023]
Abstract
Computational pathology is currently witnessing a surge in the development of AI techniques, offering promise for achieving breakthroughs and significantly impacting the practices of pathology and oncology. These AI methods have the potential to revolutionize diagnostic pipelines as well as treatment planning and overall patient care. Numerous peer-reviewed studies reporting remarkable performance across diverse tasks testify to the potential of AI in the field. However, widespread adoption of these methods in clinical and pre-clinical settings remains a challenge. In this review article, we present a detailed analysis of the major obstacles encountered during the development of effective models and their deployment in practice. We aim to provide readers with an overview of the latest developments, insight into specific challenges that may require resolution, and recommendations on potential future research directions. © 2023 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, Birmingham, UK
- Simon Graham
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- David Snead
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham Business Park, Birmingham, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
23
Alhussan AA, Abdelhamid AA, Towfek SK, Ibrahim A, Abualigah L, Khodadadi N, Khafaga DS, Al-Otaibi S, Ahmed AE. Classification of Breast Cancer Using Transfer Learning and Advanced Al-Biruni Earth Radius Optimization. Biomimetics (Basel) 2023; 8:270. [PMID: 37504158 PMCID: PMC10377265 DOI: 10.3390/biomimetics8030270] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/21/2023] [Accepted: 06/24/2023] [Indexed: 07/29/2023] Open
Abstract
Breast cancer is one of the most common cancers in women, with an estimated 287,850 new cases identified in 2022 and 43,250 deaths attributed to the disease. The high death rate associated with this type of cancer can be reduced through early detection. Nonetheless, a skilled professional is always required to manually diagnose this malignancy from mammography images. Many researchers have proposed approaches based on artificial intelligence, but these still face several obstacles, such as overlapping cancerous and noncancerous regions, extraction of irrelevant features, and inadequately trained models. In this paper, we developed a novel, automated framework for categorizing breast cancer, in which classification is boosted by a new optimization approach based on the Advanced Al-Biruni Earth Radius (ABER) optimization algorithm. The stages of the proposed framework include data augmentation, feature extraction using AlexNet with transfer learning, and optimized classification using a convolutional neural network (CNN). Using transfer learning and an optimized CNN for classification improved accuracy compared with recent approaches. Two publicly available datasets are utilized to evaluate the proposed framework, with an average classification accuracy of 97.95%. To confirm the statistical significance of the improvements, additional tests were conducted, including analysis of variance (ANOVA) and the Wilcoxon test, alongside various statistical metrics. The results of these tests emphasize the effectiveness and statistically significant advantage of the proposed methodology over current methods.
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Abdelaziz A Abdelhamid
- Department of Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra 11961, Saudi Arabia
- Department of Computer Science, Faculty of Computer and Information Sciences, Ain Shams University, Cairo 11566, Egypt
- S K Towfek
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Department of Communications and Electronics, Delta Higher Institute of Engineering and Technology, Mansoura 35111, Egypt
- Abdelhameed Ibrahim
- Computer Engineering and Control Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Laith Abualigah
- Computer Science Department, Prince Hussein Bin Abdullah Faculty for Information Technology, Al al-Bayt University, Mafraq 25113, Jordan
- Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
- MEU Research Unit, Middle East University, Amman 11831, Jordan
- School of Computer Sciences, Universiti Sains Malaysia, Pulau Pinang 11800, Malaysia
- Nima Khodadadi
- Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL 33146, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Shaha Al-Otaibi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ayman Em Ahmed
- Faculty of Engineering, King Salman International University, El-Tor 8701301, Egypt
24
Zarean Shahraki S, Azizmohammad Looha M, Mohammadi kazaj P, Aria M, Akbari A, Emami H, Asadi F, Akbari ME. Time-related survival prediction in molecular subtypes of breast cancer using time-to-event deep-learning-based models. Front Oncol 2023; 13:1147604. [PMID: 37342184 PMCID: PMC10277681 DOI: 10.3389/fonc.2023.1147604] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 05/19/2023] [Indexed: 06/22/2023] Open
Abstract
Background Breast cancer (BC) survival prediction can be a helpful tool for identifying important prognostic factors, selecting effective treatment, and reducing mortality rates. This study aims to predict the time-related survival probability of BC patients in different molecular subtypes over 30 years of follow-up. Materials and methods This study retrospectively analyzed 3580 patients diagnosed with invasive breast cancer from 1991 to 2021 at the Cancer Research Center of Shahid Beheshti University of Medical Sciences. The dataset contained 18 predictor variables and two dependent variables: the survival status of patients and the time patients survived from diagnosis. Feature importance was assessed using the random forest algorithm to identify significant prognostic factors. Time-to-event deep-learning-based models, including Nnet-survival, DeepHit, DeepSurv, N-MTLR, and Cox-Time, were developed using a grid search approach, first with all variables and then with only the most important variables selected by feature importance. The performance metrics used to determine the best-performing model were the C-index and the integrated Brier score (IBS). Additionally, the dataset was clustered by molecular receptor status (i.e., luminal A, luminal B, HER2-enriched, and triple-negative), and the best-performing prediction model was used to estimate survival probability for each molecular subtype. Results The random forest method identified tumor stage, age at diagnosis, and lymph node status as the best subset of variables for predicting BC survival probabilities. All models yielded very similar performance, with Nnet-survival (C-index = 0.77, IBS = 0.13) slightly ahead whether using all 18 variables or only the three most important ones. The results showed that the luminal A subtype had the highest predicted BC survival probabilities, while triple-negative and HER2-enriched had the lowest predicted survival probabilities over time. Additionally, the luminal B subtype followed a trend similar to luminal A for the first five years, after which its predicted survival probability decreased steadily over the 10- and 15-year intervals. Conclusion This study provides valuable insight into the survival probability of patients based on their molecular receptor status, particularly for HER2-positive patients. This information can be used by healthcare providers to make informed decisions regarding the appropriateness of medical interventions for high-risk patients. Future clinical trials should further explore the response of different molecular subtypes to treatment in order to optimize the efficacy of breast cancer treatments.
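The C-index used to rank these models is Harrell's concordance index. A minimal reference implementation (the O(n²) pairwise form, assuming binary event indicators and higher score = higher risk) looks like:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event), the fraction where the higher predicted risk failed
    first; ties in predicted risk count as 0.5."""
    time, event, risk = map(np.asarray, (time, event, risk))
    num, den = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                      # censored subjects cannot anchor a pair
        later = time > time[i]            # subjects known to survive longer
        den += int(later.sum())
        num += (risk[i] > risk[later]).sum() + 0.5 * (risk[i] == risk[later]).sum()
    return num / den
```

A value of 0.77, as reported for Nnet-survival above, means 77% of comparable patient pairs were ranked correctly by predicted risk.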
Affiliation(s)
- Saba Zarean Shahraki
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mehdi Azizmohammad Looha
- Basic and Molecular Epidemiology of Gastrointestinal Disorders Research Center, Research Institute for Gastroenterology and Liver Diseases, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Pooya Mohammadi kazaj
- Geographic Information Systems Department, Faculty of Geodesy and Geomatics Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Mehrad Aria
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tehran, Iran
- Atieh Akbari
- Cancer Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hassan Emami
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farkhondeh Asadi
- Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
25
Sun K, Chen Y, Bai B, Gao Y, Xiao J, Yu G. Automatic Classification of Histopathology Images across Multiple Cancers Based on Heterogeneous Transfer Learning. Diagnostics (Basel) 2023; 13:diagnostics13071277. [PMID: 37046497 PMCID: PMC10093253 DOI: 10.3390/diagnostics13071277] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 03/07/2023] [Accepted: 03/23/2023] [Indexed: 03/31/2023] Open
Abstract
Background: Current artificial intelligence (AI) in histopathology typically specializes in a single task, resulting in a heavy workload of collecting and labeling a sufficient number of images for each type of cancer. Heterogeneous transfer learning (HTL) is expected to alleviate this data bottleneck and establish models with performance comparable to supervised learning (SL). Methods: An accurate source-domain model was trained using 28,634 colorectal cancer (CRC) patches. Additionally, 1000 sentinel lymph node patches and 1008 breast patches were used to train two target-domain models. The feature distribution difference between sentinel lymph node metastasis or breast cancer and CRC was reduced by heterogeneous domain adaptation, and the maximum mean discrepancy between subdomains was used for knowledge transfer to achieve accurate classification across multiple cancers. Results: HTL on 1000 sentinel lymph node patches (L-HTL-1000) outperforms SL on 1000 sentinel lymph node patches (L-SL-1-1000) (average area under the curve (AUC) ± standard deviation: 0.949 ± 0.004 vs. 0.931 ± 0.008, p = 0.008). There is no significant difference between L-HTL-1000 and SL on 7104 patches (L-SL-2-7104) (0.949 ± 0.004 vs. 0.948 ± 0.008, p = 0.742). Similar results are observed for breast cancer: B-HTL-1008 vs. B-SL-1-1008, 0.962 ± 0.017 vs. 0.943 ± 0.018, p = 0.008; B-HTL-1008 vs. B-SL-2-5232, 0.962 ± 0.017 vs. 0.951 ± 0.023, p = 0.148. Conclusions: HTL can build accurate AI models for similar cancers from a small amount of data by leveraging a large dataset for one type of cancer. HTL holds great promise for accelerating the development of AI in histopathology.
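The distribution-distance term that domain adaptation of this kind minimizes can be illustrated with a generic RBF-kernel maximum mean discrepancy estimator in NumPy. This is a sketch of the general quantity only (the biased V-statistic form; the kernel and gamma are illustrative choices, not the paper's settings):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.5):
    """Squared maximum mean discrepancy between samples X and Y under an
    RBF kernel: mean within-X similarity + mean within-Y similarity
    - 2 * mean cross similarity. Near zero when the distributions match."""
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared dists
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

Driving such a term toward zero between source and target (sub)domain features is what aligns the feature distributions during knowledge transfer.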
26
Jabeen K, Khan MA, Balili J, Alhaisoni M, Almujally NA, Alrashidi H, Tariq U, Cha JH. BC2NetRF: Breast Cancer Classification from Mammogram Images Using Enhanced Deep Learning Features and Equilibrium-Jaya Controlled Regula Falsi-Based Features Selection. Diagnostics (Basel) 2023; 13:1238. [PMID: 37046456 PMCID: PMC10093018 DOI: 10.3390/diagnostics13071238] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 03/13/2023] [Accepted: 03/23/2023] [Indexed: 03/29/2023] Open
Abstract
One of the most frequent cancers in women is breast cancer; in 2022, approximately 287,850 new cases were diagnosed, and 43,250 women died from the disease. Early diagnosis can help reduce the mortality rate. However, manual diagnosis from mammogram images is not an easy process and always requires an expert. Several AI-based techniques have been suggested in the literature, but they still face challenges such as similarities between cancerous and non-cancerous regions, irrelevant feature extraction, and weak training models. In this work, we propose a new automated computerized framework for breast cancer classification. The proposed framework improves contrast using a novel enhancement technique called haze-reduced local-global. The enhanced images are then employed for dataset augmentation, a step aimed at increasing the diversity of the dataset and improving the training capability of the selected deep learning model. After that, a pre-trained model named EfficientNet-b0 was employed and fine-tuned with a few new layers. The fine-tuned model was trained separately on original and enhanced images using deep transfer learning with static hyperparameter initialization. Deep features were then extracted from the average pooling layer and fused using a new serial-based approach. The fused features were optimized using a feature selection algorithm known as Equilibrium-Jaya controlled Regula Falsi, in which Regula Falsi serves as the termination function. The selected features were finally classified using several machine learning classifiers. The experimental process was conducted on two publicly available datasets, CBIS-DDSM and INbreast, achieving average accuracies of 95.4% and 99.7%, respectively. A comparison with state-of-the-art (SOTA) methods shows that the proposed framework improves accuracy. Moreover, confidence-interval-based analysis shows that the proposed framework produces consistent results.
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Jamel Balili
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Higher Institute of Applied Science and Technology of Sousse (ISSATS), Cité Taffala (Ibn Khaldoun), University of Sousse, Sousse 4000, Tunisia
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 81451, Saudi Arabia
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Huda Alrashidi
- Faculty of Information Technology and Computing, Arab Open University, Ardiya 92400, Kuwait
- Usman Tariq
- Department of Management, CoBA, Prince Sattam bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Jae-Hyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Republic of Korea
27
FRE-Net: Full-region enhanced network for nuclei segmentation in histopathology images. Biocybern Biomed Eng 2023. [DOI: 10.1016/j.bbe.2023.02.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023]
28
Survival Analysis of Oncological Patients Using Machine Learning Method. Healthcare (Basel) 2022; 11:healthcare11010080. [PMID: 36611540 PMCID: PMC9818920 DOI: 10.3390/healthcare11010080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2022] [Revised: 12/19/2022] [Accepted: 12/23/2022] [Indexed: 12/29/2022] Open
Abstract
Currently, a considerable volume of information is collected and stored by large health institutions. These data come from medical records and hospital records; the Hospital Cancer Registry is a database that integrates data from hospitals throughout Iraq. Data mining (DM) techniques surface knowledge previously not visible in the database and can be used to predict trends or describe characteristics of the past. DM methods include classification, generalisation, characterisation, clustering, association, evolution, pattern discovery, data visualisation, and rule-guided mining techniques to perform survival analyses that take into account all the variables in a patient's medical record. The database of patients treated by the Baghdad Teaching Hospital between 2018 and 2021 was examined using a classification of the most crucial variables for event prediction, and a distinctive pattern was found; for four of the eleven groups examined, prediction accuracy was relatively high. Machine learning techniques allow a global assessment of the available data and produce results that can be interpreted as significant information for epidemiological studies, even when the sample is small and information is lacking for several variables.
29
Cigdem O, Chen S, Zhang C, Cho K, Kijowski R, Deniz CM. Estimating time-to-total knee replacement on radiographs and MRI: a multimodal approach using self-supervised deep learning. RADIOLOGY ADVANCES 2022; 1:umae030. [PMID: 39744045 PMCID: PMC11687945 DOI: 10.1093/radadv/umae030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/26/2024] [Revised: 10/18/2024] [Accepted: 11/11/2024] [Indexed: 01/07/2025]
Abstract
Purpose Accurately predicting the expected time until total knee replacement (time-to-TKR) is crucial for patient management and health care planning. Predicting when surgery may be needed, especially within shorter windows such as 3 years, allows clinicians to plan timely interventions and health care systems to allocate resources more effectively. Existing models lack the precision for such time-based predictions. A survival analysis model for predicting time-to-TKR was developed using features from medical images and clinical measurements. Methods From the Osteoarthritis Initiative dataset, all knees with clinical variables, MRI scans, radiographs, and quantitative and semiquantitative image assessments were identified. This yielded 895 knees that underwent TKR within the 9-year follow-up period specified by the Osteoarthritis Initiative study design and 786 control knees that did not undergo TKR (right-censored, indicating their status beyond the 9-year follow-up is unknown). These knees were used for model training and testing. Additionally, 518 subjects from the Multi-Center Osteoarthritis Study and 164 subjects from internal hospital data were used for external testing. Deep learning models were utilized to extract features from radiographs and MR scans. The extracted features, clinical variables, and image assessments were used in survival analysis with Lasso Cox feature selection and a random survival forest model to predict time-to-TKR. Results The proposed model exhibited strong discriminative power by integrating self-supervised deep learning features with clinical variables (e.g., age, body mass index, pain score) and image assessment measurements (e.g., Kellgren-Lawrence grade, joint space narrowing, bone marrow lesion size, cartilage morphology) from multiple modalities. The model achieved an area under the curve of 94.5 (95% CI, 94.0-95.1) for predicting time-to-TKR. Conclusions The proposed model demonstrates the potential of self-supervised learning and multimodal data fusion to accurately predict time-to-TKR, which may assist physicians in developing personalized treatment strategies.
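An AUC with a 95% CI, as reported above, can be computed in spirit with a Mann-Whitney AUC plus a percentile bootstrap. This is a generic sketch under that assumption (the abstract does not state the exact CI procedure used):

```python
import numpy as np

def auc(y, score):
    """Mann-Whitney AUC: probability a random positive outranks a random
    negative, with ties counted as 0.5."""
    diff = score[y == 1][:, None] - score[y == 0][None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def bootstrap_auc_ci(y, score, n_boot=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample with replacement
        yb = y[idx]
        if yb.min() == yb.max():
            continue                            # resample lacked both classes
        stats.append(auc(yb, score[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The narrow interval reported (94.0-95.1 around 94.5) indicates the discrimination estimate is stable across resamples of the test set.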
Affiliation(s)
- Ozkan Cigdem
- Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, United States
- Shengjia Chen
- Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, United States
- Chaojie Zhang
- Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, United States
- Kyunghyun Cho
- Center of Data Science, New York University, New York, NY 10011, United States
- Courant Institute of Mathematical Sciences, New York University, New York, NY 10012-1185, United States
- Richard Kijowski
- Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, United States
- Cem M Deniz
- Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, United States