1
Ganie SM, Dutta Pramanik PK. Interpretable lung cancer risk prediction using ensemble learning and XAI based on lifestyle and demographic data. Comput Biol Chem 2025; 117:108438. [PMID: 40174511 DOI: 10.1016/j.compbiolchem.2025.108438] [Received: 02/06/2025] [Revised: 03/16/2025] [Accepted: 03/18/2025] [Indexed: 04/04/2025]
Abstract
Lung cancer is a leading cause of cancer-related death worldwide. The early and accurate detection of lung cancer is crucial for improving patient outcomes. Traditional predictive models often lack the accuracy and interpretability required in clinical settings. This study aims to enhance lung cancer prediction accuracy using ensemble learning methods while integrating explainable AI (XAI) techniques to ensure model interpretability. Advanced ensemble learning techniques, such as Voting and Stacking, have been implemented to improve predictive accuracy compared to traditional models. The models are implemented on three real lung cancer datasets comprising patients' lifestyle data and assessed using various performance metrics, highlighting their reliability in clinical diagnosis. XAI methods are incorporated to ensure the models are interpretable, fostering trust among clinicians. SHAP (SHapley Additive exPlanations) values are utilized to identify and prioritize clinical and demographic factors influencing risk predictions. The ensemble models demonstrate superior performance metrics, significantly improving lung cancer prediction accuracy. Specifically, the Stacking ensemble model achieves an average prediction accuracy of 99.59%, precision of 100%, recall of 97.64%, F1-score of 98.65%, AUC of 100%, Kappa of 98.40%, and MCC of 98.44% across the three datasets. We employed the Friedman aligned ranks test and Holm post hoc analysis to validate performance, showing that the Stacking ensemble consistently outperformed the others with higher accuracy and reliable predictions. Feature importance analysis reveals critical risk factors, providing insights into their interconnectivity and enhancing risk assessment frameworks. Integrating XAI techniques ensures the models are interpretable, promoting their potential adoption in clinical practice.
The findings support the development of targeted interventions and effective risk management strategies, aiming to improve patient outcomes in lung cancer diagnosis and treatment.
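The stacking scheme this abstract describes can be illustrated in a few lines. The sketch below uses scikit-learn on synthetic data with stand-in base learners; it is not the paper's actual model set or lifestyle datasets.

```python
# Minimal stacking-ensemble sketch (scikit-learn, synthetic stand-in data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Base learners produce out-of-fold predictions that feed a meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
print(f"stacking accuracy on synthetic data: {acc:.3f}")
```

In the paper's setting, SHAP values would then be computed on the fitted ensemble to rank the lifestyle and demographic features.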
Affiliation(s)
- Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, Woxsen University, Hyderabad, Telangana 502345, India.
- Pijush Kanti Dutta Pramanik
- School of Computer Science and Engineering, Galgotias University, Greater Noida, Uttar Pradesh 203201, India.
2
Xu H, Lv R. Rapid diagnosis of lung cancer by multi-modal spectral data combined with deep learning. Spectrochim Acta A Mol Biomol Spectrosc 2025; 335:125997. [PMID: 40073660 DOI: 10.1016/j.saa.2025.125997] [Received: 12/09/2024] [Revised: 02/21/2025] [Accepted: 03/04/2025] [Indexed: 03/14/2025]
Abstract
Lung cancer is a malignant tumor that poses a serious threat to human health. Existing lung cancer diagnostic techniques face the challenges of high cost and slow diagnosis. Early and rapid diagnosis and treatment are essential to improve the outcome of lung cancer. In this study, a deep learning-based multi-modal spectral information fusion (MSIF) network is proposed for lung adenocarcinoma cell detection. First, multi-modal data comprising Fourier transform infrared spectra, UV-vis absorbance spectra, and fluorescence spectra of normal and patient cells were collected. Subsequently, the spectral text data were efficiently processed by a one-dimensional convolutional neural network. The global and local features of the spectral images are deeply mined by a hybrid model of ResNet and Transformer. An adaptive depth-wise convolution (ADConv) is introduced for feature extraction, overcoming the shortcomings of conventional convolution. To enable feature learning across modalities, a cross-modal interaction fusion (CMIF) module is designed. This module fuses the extracted spectral image and text features through multi-faceted interaction, enabling full utilization of multi-modal features via feature sharing. The method demonstrated excellent performance on the test sets of Fourier transform infrared, UV-vis absorbance, and fluorescence spectra, achieving 95.83%, 97.92%, and 100% accuracy, respectively. In addition, experiments validate the superiority of multi-modal spectral data and the robustness of the model's generalization capability. This study not only provides strong technical support for the early diagnosis of lung cancer but also opens a new chapter for the application of multi-modal data fusion in spectroscopy.
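As a point of contrast with the learned CMIF module described above, the simplest multi-modal baseline is late fusion: averaging per-modality class probabilities. Everything in the sketch below (modality count, weights, random data) is an illustrative assumption, not the paper's architecture.

```python
# Late-fusion sketch: combine per-modality class-probability outputs.
import numpy as np

def fuse_probabilities(prob_maps, weights=None):
    """prob_maps: list of (n_samples, n_classes) softmax outputs, one per modality."""
    stacked = np.stack(prob_maps)                   # (n_modalities, n, c)
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    fused = np.tensordot(weights, stacked, axes=1)  # weighted average over modalities
    return fused / fused.sum(axis=1, keepdims=True) # renormalize rows to sum to 1

# Three modalities (stand-ins for FTIR, UV-vis, fluorescence), 2 classes, 4 samples.
rng = np.random.default_rng(0)
mods = [rng.dirichlet(np.ones(2), size=4) for _ in range(3)]
fused = fuse_probabilities(mods)
print(fused)
```

A learned interaction module like CMIF replaces the fixed weights with cross-modal attention trained end-to-end.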
Affiliation(s)
- Han Xu
- State Key Laboratory of Electromechanical Integrated Manufacturing of High-performance Electronic Equipment, School of Electro-Mechanical Engineering, Xidian University, Xi'an, Shaanxi 710071, China
- Ruichan Lv
- State Key Laboratory of Electromechanical Integrated Manufacturing of High-performance Electronic Equipment, School of Electro-Mechanical Engineering, Xidian University, Xi'an, Shaanxi 710071, China.
3
Akter SB, Akter S, Hasan R, Hasan MM, Eisenberg D, Azim R, Fresneda Fernandez J, Pias TS. Optimizing stability of heart disease prediction across imbalanced learning with interpretable Grow Network. Comput Methods Programs Biomed 2025; 265:108702. [PMID: 40147157 DOI: 10.1016/j.cmpb.2025.108702] [Received: 08/13/2024] [Revised: 01/29/2025] [Accepted: 02/28/2025] [Indexed: 03/29/2025]
Abstract
BACKGROUND AND OBJECTIVES: Heart disease prediction models often face stability challenges when applied to public datasets due to significant class imbalances, unlike the more balanced benchmark datasets. These imbalances can adversely affect various stages of prediction, including feature selection, sampling, and modeling, leading to skewed performance in which one class is favored over another.
METHODS: To enhance stability, this study proposes a Grow Network (GrowNet) architecture, which configures itself dynamically based on the data's characteristics. To further stabilize GrowNet, the study pairs it with TriDyn Dependence feature selection and Adaptive Refinement sampling, which select relevant features across imbalanced data and manage class imbalance during training.
RESULTS: When evaluated on the benchmark UCI heart disease dataset, GrowNet outperformed other models, achieving a specificity of 92%, sensitivity of 88%, precision of 90%, and F1 score of 90%. Further evaluation on three public datasets from the Behavioral Risk Factor Surveillance System (BRFSS), where heart disease cases constitute only about 6% of the data, demonstrated GrowNet's ability to maintain balanced performance, with an average specificity, sensitivity, and AUC-ROC of 77.67%, 81.67%, and 89.67%, respectively, while other models exhibited instability. This represents a 22.8% improvement in handling class imbalance compared to prior studies. Additional tests on two public datasets from the National Health Interview Survey (NHIS) confirmed GrowNet's robustness and generalizability, with an average specificity, sensitivity, and AUC-ROC of 80.5%, 82.5%, and 90%, respectively, while other models continued to demonstrate instability.
DISCUSSION: To enhance transparency, this study incorporates SHapley Additive exPlanations (SHAP) analysis, enabling healthcare professionals to understand the decision-making process and identify key risk factors for heart disease, such as bronchitis in midlife, renal dysfunction in the elderly, and depressive disorders in individuals aged 35-44.
CONCLUSION: This study presents a robust, interpretable model to assist healthcare professionals in cost-effective, early heart disease detection by focusing on key risk factors, ultimately improving patient outcomes.
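The paper's TriDyn Dependence selection and Adaptive Refinement sampling are bespoke, but the imbalance problem they target can be demonstrated with the standard cost-sensitive baseline below (scikit-learn, synthetic data with roughly the 6% prevalence quoted above).

```python
# Class-imbalance sketch: cost-sensitive training vs. a plain model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# ~6% positives, mirroring the BRFSS prevalence quoted in the abstract.
X, y = make_classification(n_samples=4000, n_features=10,
                           weights=[0.94, 0.06], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
weighted = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(X_tr, y_tr)

# Recall on the minority (disease) class is what imbalance typically hurts.
rec_plain = recall_score(y_te, plain.predict(X_te))
rec_weighted = recall_score(y_te, weighted.predict(X_te))
print(f"minority recall: plain={rec_plain:.2f}, class-weighted={rec_weighted:.2f}")
```

Methods like Adaptive Refinement sampling aim at the same goal as `class_weight="balanced"` here: keeping minority-class sensitivity from collapsing.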
Affiliation(s)
- Simon Bin Akter
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, 07102, NJ, USA; Department of Computer Science and Engineering, Northern University Bangladesh, Dhaka, Bangladesh
- Sumya Akter
- Martin Tuchman School of Management, New Jersey Institute of Technology, Newark, 07102, NJ, USA; Department of Computer Science and Engineering, Northern University Bangladesh, Dhaka, Bangladesh
- Rakibul Hasan
- Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh; Department of Computer Science and Engineering, Northern University Bangladesh, Dhaka, Bangladesh
- Md Mahadi Hasan
- Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh; Department of Computer Science and Engineering, Northern University Bangladesh, Dhaka, Bangladesh
- David Eisenberg
- Department of Information Management and Business Analytics, Montclair State University, Feliciano School of Business, NJ, USA
- Riasat Azim
- Department of Computer Science and Engineering, United International University, Dhaka, Bangladesh
4
Dakhli R, Barhoumi W. Improving skin lesion classification through saliency-guided loss functions. Comput Biol Med 2025; 192:110299. [PMID: 40375427 DOI: 10.1016/j.compbiomed.2025.110299] [Received: 02/16/2025] [Revised: 04/19/2025] [Accepted: 04/28/2025] [Indexed: 05/18/2025]
Abstract
Deep learning has significantly advanced computer-aided diagnosis, particularly in skin lesion classification. However, achieving high classification performance and providing explainable model predictions remain challenging in medical imaging. To tackle both performance and explainability challenges, we propose an effective method to enhance the performance of deep learning classifiers by integrating saliency scores directly into the loss function. Presuming that the choice of loss function can significantly impact model performance, the proposed method integrates a penalization weight derived from the saliency scores into the loss function, resulting in a custom loss function for each XAI method. To evaluate the effectiveness of the proposed method, we performed experiments on the challenging HAM10000 and PH2 datasets using the Inception-ResNet-v2, EfficientNet-B3, and ResNeXt classifiers with different XAI methods. The results demonstrate substantial enhancements over the baseline and relevant state-of-the-art methods. In fact, the proposed method achieved an accuracy of 94.3% and 98% on the HAM10000 and PH2 datasets, respectively, improving on the standard loss function by 7% and 6% in accuracy with an LRP-guided loss function. Thus, the designed integration improves model performance and reliability while implicitly providing a quantitative assessment of the XAI techniques through their ability to enhance classification.
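One way to read "integrating saliency scores directly into the loss function" is as a per-sample penalty on saliency mass that falls outside the lesion region. The sketch below is an illustrative form of such a loss (the function name, penalty shape, and `alpha` are assumptions, not the paper's exact formulation).

```python
# Sketch of a saliency-guided loss: scale per-sample cross-entropy by a
# penalty derived from how much saliency mass falls outside the lesion mask.
import numpy as np

def saliency_guided_ce(p_true, saliency, lesion_mask, alpha=1.0, eps=1e-12):
    """p_true: (n,) predicted probability of the true class.
    saliency, lesion_mask: (n, H, W) maps; the penalty grows when
    saliency concentrates outside the mask."""
    sal = saliency / (saliency.sum(axis=(1, 2), keepdims=True) + eps)
    outside = (sal * (1 - lesion_mask)).sum(axis=(1, 2))  # off-lesion fraction
    weights = 1.0 + alpha * outside                       # weights >= 1
    return float(np.mean(-weights * np.log(p_true + eps)))

rng = np.random.default_rng(0)
sal = rng.random((4, 8, 8))
mask = np.zeros((4, 8, 8))
mask[:, 2:6, 2:6] = 1  # toy lesion region
loss = saliency_guided_ce(np.array([0.9, 0.8, 0.7, 0.6]), sal, mask)
print(f"saliency-guided CE: {loss:.3f}")
```

The penalty never falls below plain cross-entropy, so a model can only reduce it by both classifying correctly and attending to the lesion.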
Affiliation(s)
- Rym Dakhli
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06, Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Abou Rayhane Bayrouni, Ariana 2080, Tunisia.
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06, Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Abou Rayhane Bayrouni, Ariana 2080, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, Tunis-Carthage 2035, Tunisia.
5
Chadha S, Mukherjee S, Sanyal S. Advancements and implications of artificial intelligence for early detection, diagnosis and tailored treatment of cancer. Semin Oncol 2025; 52:152349. [PMID: 40345002 DOI: 10.1016/j.seminoncol.2025.152349] [Received: 01/06/2025] [Revised: 03/20/2025] [Accepted: 04/04/2025] [Indexed: 05/11/2025]
Abstract
The complexity and heterogeneity of cancer make early detection and effective treatment crucial for enhancing patient survival and quality of life. The intrinsic creative ability of artificial intelligence (AI) offers improvements in patient screening, diagnosis, and individualized care. Advanced technologies, like computer vision, machine learning, deep learning, and natural language processing, can analyze large datasets and identify patterns that permit early cancer detection, diagnosis, and management and support conclusive treatment plans, improving patients' quality of life by personalizing care and minimizing unnecessary interventions. Genomics, transcriptomics, and proteomics data can be combined with AI algorithms to unveil an extensive overview of cancer biology, assisting in its detailed understanding and helping to identify new drug targets and develop effective therapies. This can also help to identify personalized molecular signatures, facilitating tailored interventions that address the unique aspects of each patient. AI-driven transcriptomics, proteomics, and genomics represent a revolutionary strategy to improve patient outcomes by offering precise diagnosis and tailored therapy. The inclusion of AI in oncology may boost efficiency, reduce errors, and save costs, but it cannot take the role of medical professionals. While clinicians and doctors have the final say in all matters, AI might serve as their faithful assistant.
Affiliation(s)
- Sonia Chadha
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India.
- Sayali Mukherjee
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
- Somali Sanyal
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
6
Peimankar A, Garvik OS, Nørgård BM, Søndergaard J, Jarbøl DE, Wehberg S, Sheikh SP, Ebrahimi A, Wiil UK, Iachina M. Prescription data and demographics: An explainable machine learning exploration of colorectal cancer risk factors based on data from Danish national registries. Comput Methods Programs Biomed 2025; 267:108774. [PMID: 40287990 DOI: 10.1016/j.cmpb.2025.108774] [Received: 12/22/2024] [Revised: 02/23/2025] [Accepted: 04/10/2025] [Indexed: 04/29/2025]
Abstract
OBJECTIVES: Despite substantial advancements in both treatment and prevention, colorectal cancer continues to be a leading cause of global morbidity and mortality. This study investigated the potential of using demographics and prescribed drug information to predict the risk of colorectal cancer using a machine learning approach.
METHODS: Five machine learning algorithms, including Logistic Regression, XGBoost, Random Forests, kNN, and a Voting Classifier, were developed and evaluated for their predictive capabilities across various time horizons (3, 6, 12, and 36 months). To enhance transparency and interpretability, explainable techniques were employed to understand the models' predictions and identify the relative contributions of factors like age, sex, social status, and prescribed medications, promoting trust and clinical insight. While all developed models, including simpler ones such as Logistic Regression, demonstrated comparable performance, the Voting Classifier, as an ensemble model, was selected for further investigation due to its inherent diversity and generalizability. This ensemble model combines predictions from multiple base models, reducing the risk of overfitting and improving the robustness of the final prediction.
RESULTS: The model demonstrated consistent performance across these time horizons, achieving a precision consistently above 0.99, indicating a strong ability to identify patients at risk. However, the recall remained relatively low (around 0.6), highlighting the model's limitations in comprehensively identifying all at-risk patients despite its high precision. This warrants additional investigation in future studies to further enhance the performance of the proposed model.
CONCLUSION: Machine learning models can identify individuals at higher risk of developing colorectal cancer, enabling earlier interventions and personalized risk management strategies. However, further studies are needed before implementation in clinical practice.
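The high-precision, modest-recall profile reported above is characteristic of a conservative decision threshold. The sketch below (scikit-learn, synthetic data, assumed thresholds) shows how raising the threshold trades recall for precision.

```python
# Precision/recall trade-off sketch: vary the decision threshold of a
# probabilistic classifier and watch the two metrics move in opposition.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]  # predicted P(y=1)

for thr in (0.5, 0.9):
    pred = (scores >= thr).astype(int)
    p, r = precision_score(y, pred), recall_score(y, pred)
    print(f"threshold={thr}: precision={p:.2f}, recall={r:.2f}")
```

A stricter threshold flags fewer patients, so precision rises while recall falls, which mirrors the >0.99 precision / ~0.6 recall pattern in the abstract.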
Affiliation(s)
- Abdolrahman Peimankar
- SDU Health Informatics and Technology, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230 Odense, Denmark.
- Olav Sivertsen Garvik
- Center for Clinical Epidemiology, Odense University Hospital, 5230 Odense, Denmark; Research Unit of Clinical Epidemiology, University of Southern Denmark, 5230 Odense, Denmark
- Bente Mertz Nørgård
- Center for Clinical Epidemiology, Odense University Hospital, 5230 Odense, Denmark; Research Unit of Clinical Epidemiology, University of Southern Denmark, 5230 Odense, Denmark
- Jens Søndergaard
- Research Unit of General Practice, Department of Public Health, University of Southern Denmark, 5230 Odense, Denmark
- Dorte Ejg Jarbøl
- Research Unit of General Practice, Department of Public Health, University of Southern Denmark, 5230 Odense, Denmark
- Sonja Wehberg
- Research Unit of General Practice, Department of Public Health, University of Southern Denmark, 5230 Odense, Denmark
- Søren Paludan Sheikh
- Center for Regenerative Medicine, Odense University Hospital, 5230 Odense, Denmark
- Ali Ebrahimi
- SDU Health Informatics and Technology, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230 Odense, Denmark
- Uffe Kock Wiil
- SDU Health Informatics and Technology, The Maersk Mc-Kinney Moller Institute, University of Southern Denmark, 5230 Odense, Denmark
- Maria Iachina
- Center for Clinical Epidemiology, Odense University Hospital, 5230 Odense, Denmark; Research Unit of Clinical Epidemiology, University of Southern Denmark, 5230 Odense, Denmark
7
Li P, Ru J, Fei Q, Chen Z, Wang B. Interpretable capsule networks via self attention routing on spatially invariant feature surfaces. Sci Rep 2025; 15:13026. [PMID: 40234510 PMCID: PMC12000548 DOI: 10.1038/s41598-025-96903-w] [Received: 01/29/2025] [Accepted: 04/01/2025] [Indexed: 04/17/2025]
Abstract
The accurate and efficient evaluation and classification of situational images is fundamental to making informed and effective decisions. However, current classification approaches based on convolutional neural networks often suffer from limited generalization and robustness, particularly when processing data characterized by abstract class features and pronounced spatial attributes. Additionally, the "black-box" nature of deep neural network architectures poses significant challenges to their application in fields with stringent security requirements. To address these limitations, this paper introduces a novel Spatially Invariant Self-Attention Capsule Network (SISA-CapsNet), designed to encode interpretable spatial features for classification tasks. SISA-CapsNet employs capsules to encode spatial features from specific image regions and classifies these features through a self-attention routing mechanism. Specifically, spatially invariant feature surfaces with dimensions identical to the input image are generated and stacked to form feature capsules, each encoding spatial features from distinct regions. The self-attention mechanism calculates coupling coefficients, clustering feature capsules into class capsules. This architecture integrates a spatially invariant feature extraction structure, facilitating pixel-level encoding of regional spatial features, and leverages self-attention to effectively capture the relative importance of different spatial regions for classification. Together, these two mechanisms constitute an interpretable classification framework. Experimental validation on benchmark datasets and battlefield situational image datasets with pronounced spatial characteristics demonstrates that the proposed method not only achieves superior classification performance but also offers interpretability closely aligned with human cognitive processes. 
Furthermore, comparative analyses with existing visual interpretability methods underscore the enhanced interpretability of SISA-CapsNet.
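At its core, the self-attention routing described above computes coupling coefficients as a softmax over agreement scores between feature capsules and class capsules. The sketch below illustrates only that computation, with random stand-in capsule vectors rather than the trained SISA-CapsNet layers.

```python
# Routing-by-attention sketch: coupling coefficients from capsule agreement.
import numpy as np

def attention_routing(feature_caps, class_caps):
    """feature_caps: (n_feat, d); class_caps: (n_class, d).
    Returns coupling coefficients (n_feat, n_class) whose rows sum to 1."""
    d = feature_caps.shape[1]
    logits = feature_caps @ class_caps.T / np.sqrt(d)  # scaled agreement scores
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)      # softmax over classes

rng = np.random.default_rng(0)
coupling = attention_routing(rng.normal(size=(6, 16)), rng.normal(size=(3, 16)))
print(coupling.shape, coupling.sum(axis=1))
```

In the full network these coefficients cluster feature capsules into class capsules; their magnitudes are what makes the regional contributions inspectable.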
Affiliation(s)
- Peizhang Li
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Jiyuan Ru
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Qing Fei
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China.
- Zhen Chen
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Bo Wang
- China Shipbuilding Zhihai Innovation Research Institute, Beijing, China
8
Boubnovski Martell M, Linton-Reid K, Chen M, Aboagye EO. Radiomics for lung cancer diagnosis, management, and future prospects. Clin Radiol 2025; 86:106926. [PMID: 40344812 DOI: 10.1016/j.crad.2025.106926] [Received: 12/24/2024] [Revised: 03/29/2025] [Accepted: 04/04/2025] [Indexed: 05/11/2025]
Abstract
Lung cancer remains the leading cause of cancer-related mortality worldwide, with its early detection and effective treatment posing significant clinical challenges. Radiomics, the extraction of quantitative features from medical imaging, has emerged as a promising approach for enhancing diagnostic accuracy, predicting treatment responses, and personalising patient care. This review explores the role of radiomics in lung cancer diagnosis and management, with methods ranging from handcrafted radiomics to deep learning techniques that can capture biological intricacies. The key applications are highlighted across various stages of lung cancer care, including nodule detection, histology prediction, and disease staging, where artificial intelligence (AI) models demonstrate superior specificity and sensitivity. The article also examines future directions, emphasising the integration of large language models, explainable AI (XAI), and super-resolution imaging techniques as transformative developments. By merging diverse data sources and incorporating interpretability into AI models, radiomics stands poised to redefine clinical workflows, offering more robust and reliable tools for lung cancer diagnosis, treatment planning, and outcome prediction. These advancements underscore radiomics' potential in supporting precision oncology and improving patient outcomes through data-driven insights.
Affiliation(s)
- K Linton-Reid
- Imperial College London Hammersmith Campus, London, W12 0NN, United Kingdom.
- M Chen
- Imperial College London Hammersmith Campus, London, W12 0NN, United Kingdom.
- E O Aboagye
- Imperial College London Hammersmith Campus, London, W12 0NN, United Kingdom.
9
Musa A, Prasad R, Hernandez M. Addressing cross-population domain shift in chest X-ray classification through supervised adversarial domain adaptation. Sci Rep 2025; 15:11383. [PMID: 40181036 PMCID: PMC11968948 DOI: 10.1038/s41598-025-95390-3] [Received: 01/08/2025] [Accepted: 03/20/2025] [Indexed: 04/05/2025]
Abstract
Medical image analysis, empowered by artificial intelligence (AI), plays a crucial role in modern healthcare diagnostics. However, the effectiveness of machine learning models hinges on their ability to generalize to diverse patient populations, presenting domain shift challenges. This study explores the domain shift problem in chest X-ray classification, focusing on cross-population variations, especially in underrepresented groups. We analyze the impact of domain shift across three source population datasets, with a Nigerian chest X-ray dataset as the target. Model performance is evaluated to assess disparities between source and target populations, revealing large discrepancies when models trained on a source domain were applied to the target domain. To address the evident domain shift among the populations, we propose a supervised adversarial domain adaptation (ADA) technique. The feature extractor is first trained on the source domain using a supervised loss function. It is then frozen, and an adversarial domain discriminator is introduced to distinguish between the source and target domains. Adversarial training fine-tunes the feature extractor, making features from both domains indistinguishable and thereby creating domain-invariant features. The technique was evaluated on the Nigerian dataset, showing significant improvements in chest X-ray classification performance. The proposed model achieved 90.08% accuracy and a 96% AUC score, outperforming existing approaches such as multi-task learning (MTL) and continual learning (CL). This research highlights the importance of developing domain-aware models in AI-driven healthcare, offering a solution to cross-population domain shift challenges in medical imaging.
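A common proxy for the domain-invariance goal described above is a domain classifier: if it cannot distinguish source features from target features (accuracy near 0.5), the domains are aligned. The sketch below uses synthetic Gaussian features as stand-ins, not the paper's chest X-ray embeddings or its full ADA training loop.

```python
# Domain-shift probe: train a classifier to tell source from target features.
# High accuracy = strong shift; chance-level accuracy = domain-invariant.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(300, 32))    # source-domain features
target = rng.normal(1.5, 1.0, size=(300, 32))    # shifted target (pre-adaptation)
aligned = rng.normal(0.0, 1.0, size=(300, 32))   # stand-in for adapted features

def domain_acc(a, b):
    X = np.vstack([a, b])
    y = np.r_[np.zeros(len(a)), np.ones(len(b))]  # domain labels
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print(f"shifted pair:  domain accuracy = {domain_acc(source, target):.2f}")
print(f"aligned pair:  domain accuracy = {domain_acc(source, aligned):.2f}")
```

In the ADA scheme above, the adversarial discriminator plays exactly this role during training, and the feature extractor is updated to drive its accuracy toward chance.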
Affiliation(s)
- Aminu Musa
- Department of Computer Science, African University of Science and Technology, Abuja, 900107, Nigeria.
- Department of Computer Science, Federal University Dutse, Dutse, Nigeria.
- Rajesh Prasad
- Department of Computer Science, African University of Science and Technology, Abuja, 900107, Nigeria
- Department of Computer Science and Engineering, Ajay Kumar Garg Engineering College, Ghaziabad, 201015, India
- Monica Hernandez
- Department of Computer Science, University of Zaragoza, Zaragoza, 50018, Spain
10
Finzel B. Current methods in explainable artificial intelligence and future prospects for integrative physiology. Pflugers Arch 2025; 477:513-529. [PMID: 39994035 PMCID: PMC11958383 DOI: 10.1007/s00424-025-03067-7] [Received: 07/01/2024] [Revised: 01/14/2025] [Accepted: 01/15/2025] [Indexed: 02/26/2025]
Abstract
Explainable artificial intelligence (XAI) is gaining importance in physiological research, where artificial intelligence is now used as an analytical and predictive tool for many medical research questions. The primary goal of XAI is to make AI models understandable to human decision-makers. This can be achieved in particular by providing inherently interpretable AI methods or by making opaque models and their outputs transparent through post hoc explanations. This review introduces core XAI topics and provides a selective overview of current XAI methods in physiology. It further illustrates solved challenges and discusses open ones in XAI research, using practical examples from the medical field. The article gives an outlook on two possible future prospects: (1) using XAI methods to provide trustworthy AI for integrative physiological research and (2) integrating physiological expertise about human explanation into XAI method development for useful and beneficial human-AI partnerships.
Affiliation(s)
- Bettina Finzel
- Cognitive Systems, University of Bamberg, Weberei 5, 96047, Bamberg, Germany.
11
Chaddad A, Jiang Y, Daqqaq TS, Kateb R. EAMAPG: Explainable Adversarial Model Analysis via Projected Gradient Descent. Comput Biol Med 2025; 188:109788. [PMID: 39946791 DOI: 10.1016/j.compbiomed.2025.109788] [Received: 09/12/2024] [Revised: 01/23/2025] [Accepted: 01/30/2025] [Indexed: 03/05/2025]
Abstract
Despite the outstanding performance of deep learning (DL) models, their interpretability remains a challenging topic. In this study, we address the transparency of DL models in medical image analysis by introducing a novel interpretability method using projected gradient descent (PGD) to generate adversarial examples. We use adversarial generation to analyze images. By introducing perturbations that cause misclassification, we identify key features influencing the model decisions. This method is tested on Brain Tumor, Eye Disease, and COVID-19 datasets using six common convolutional neural networks (CNN) models. We selected the top-performing models for interpretability analysis. DenseNet121 achieved an AUC of 1.00 on Brain Tumor; InceptionV3, 0.99 on Eye Disease; and ResNet101, 1.00 on COVID-19. To test their robustness, we performed an adversarial attack. The p-values from t-tests comparing original and adversarial loss distributions were all < 0.05. This indicates that the adversarial perturbations significantly increased the loss, confirming successful adversarial generation. Our approach offers a distinct solution to bridge the gap between the capabilities of artificial intelligence and its practical use in clinical settings, providing a more intuitive understanding for radiologists. Our code is available at https://anonymous.4open.science/r/EAMAPG.
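The PGD loop itself is compact. The sketch below runs it against an analytically differentiable logistic model with made-up weights rather than the paper's trained CNNs, but the step structure, gradient ascent on the loss followed by projection onto an L-infinity ball, is the same.

```python
# PGD sketch on a logistic model: perturb x within an L-infinity ball of
# radius eps so as to maximize the classification loss.
import numpy as np

def pgd_attack(w, b, x, y, eps=1.0, alpha=0.2, steps=10):
    """Gradient ascent on logistic loss w.r.t. the input, with projection."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))  # predicted P(y=1)
        grad = (p - y) * w                          # d(loss)/dx for logistic loss
        x_adv = x_adv + alpha * np.sign(grad)       # ascent step (FGSM-style)
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project back into the ball
    return x_adv

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0        # clean point, correctly classified
x_adv = pgd_attack(w, b, x, y)
print("clean margin:", x @ w + b, "-> adversarial margin:", x_adv @ w + b)
```

The direction in which the perturbation pushes each input coordinate is what the paper mines for interpretability: coordinates that flip the decision fastest are the features the model leans on.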
Affiliation(s)
- Ahmad Chaddad
- Artificial Intelligence for Personalized Medicine, School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, 541004, China; Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, H3C 1K3, Canada.
| | - Yuchen Jiang
- Artificial Intelligence for Personalized Medicine, School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin, 541004, China
| | - Tareef S Daqqaq
- College of Medicine, Taibah University, Al-Madinah, 42361, Saudi Arabia; Department of Radiology, Prince Mohammed Bin Abdulaziz Hospital, Al-Madinah, 42324, Saudi Arabia
| | - Reem Kateb
- College of Computer Science and Engineering, Cyber Security Department, Taibah University, Al-Madinah, 42353, Saudi Arabia; College of Computer Science and Engineering, Jeddah University, Jeddah, 23445, Saudi Arabia
| |
Collapse
12
Li Y, Deng J, Ma X, Li W, Wang Z. Diagnostic accuracy of CT and PET/CT radiomics in predicting lymph node metastasis in non-small cell lung cancer. Eur Radiol 2025; 35:1966-1979. [PMID: 39223336 DOI: 10.1007/s00330-024-11036-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2024] [Revised: 06/09/2024] [Accepted: 08/07/2024] [Indexed: 09/04/2024]
Abstract
OBJECTIVES This study evaluates the accuracy of radiomics in predicting lymph node metastasis in non-small cell lung cancer, which is crucial for patient management and prognosis. METHODS Adhering to PRISMA and AMSTAR guidelines, we systematically reviewed literature from March 2012 to December 2023 using databases including PubMed, Web of Science, and Embase. Radiomics studies utilizing computed tomography (CT) and positron emission tomography (PET)/CT imaging were included. The quality of studies was appraised with QUADAS-2 and RQS tools, and the TRIPOD checklist assessed model transparency. Sensitivity, specificity, and AUC values were synthesized to determine diagnostic performance, with subgroup and sensitivity analyses probing heterogeneity and a Fagan plot evaluating clinical applicability. RESULTS Our analysis incorporated 42 cohorts from 22 studies. CT-based radiomics demonstrated a sensitivity of 0.84 (95% CI: 0.79-0.88, p < 0.01) and specificity of 0.82 (95% CI: 0.75-0.87, p < 0.01), with an AUC of 0.90 (95% CI: 0.87-0.92), indicating no publication bias (p-value = 0.54 > 0.05). PET/CT radiomics showed a sensitivity of 0.82 (95% CI: 0.76-0.86, p < 0.01) and specificity of 0.86 (95% CI: 0.81-0.90, p < 0.01), with an AUC of 0.90 (95% CI: 0.87-0.93), with a slight publication bias (p-value = 0.03 < 0.05). Despite high clinical utility, subgroup analysis did not clarify heterogeneity sources, suggesting influences from possible factors like lymph node location and small subgroup sizes. CONCLUSIONS Radiomics models show accuracy in predicting lung cancer lymph node metastasis, yet further validation with larger, multi-center studies is necessary. CLINICAL RELEVANCE STATEMENT Radiomics models using CT and PET/CT imaging may improve the prediction of lung cancer lymph node metastasis, aiding personalized treatment strategies. 
RESEARCH REGISTRATION UNIQUE IDENTIFYING NUMBER (UIN) International Prospective Register of Systematic Reviews (PROSPERO), CRD42023494701. This study has been registered on the PROSPERO platform with a registration date of 18 December 2023. https://www.crd.york.ac.uk/prospero/ KEY POINTS: The study explores radiomics for lung cancer lymph node metastasis detection, impacting surgery and prognosis. Radiomics improves the accuracy of lymph node metastasis prediction in lung cancer. Radiomics can aid in the prediction of lymph node metastasis in lung cancer and personalized treatment.
Affiliation(s)
- Yuepeng Li
- Department of Respiratory and Critical Care Medicine, Frontiers Science Center for Disease-related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, China
- Junyue Deng
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Xuelei Ma
- Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China.
- Weimin Li
- Department of Respiratory and Critical Care Medicine, Frontiers Science Center for Disease-related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, China
- Institute of Respiratory Health, West China Hospital, Sichuan University, Chengdu, China
- Precision Medicine Center, Precision Medicine Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China
- The Research Units of West China, Chinese Academy of Medical Sciences, West China Hospital, Chengdu, China
- Zhoufeng Wang
- Department of Respiratory and Critical Care Medicine, Frontiers Science Center for Disease-related Molecular Network, State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, China.
- Institute of Respiratory Health, West China Hospital, Sichuan University, Chengdu, China.
- Precision Medicine Center, Precision Medicine Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, China.
- The Research Units of West China, Chinese Academy of Medical Sciences, West China Hospital, Chengdu, China.
13
Rasool N, Wani NA, Bhat JI, Saharan S, Sharma VK, Alsulami BS, Alsharif H, Lytras MD. CNN-TumorNet: leveraging explainability in deep learning for precise brain tumor diagnosis on MRI images. Front Oncol 2025; 15:1554559. [PMID: 40206584 PMCID: PMC11979982 DOI: 10.3389/fonc.2025.1554559] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2025] [Accepted: 02/27/2025] [Indexed: 04/11/2025] Open
Abstract
Introduction The early identification of brain tumors is essential for optimal treatment and patient prognosis. Advancements in MRI technology have markedly enhanced tumor detection yet necessitate accurate classification for appropriate therapeutic approaches. This underscores the necessity for sophisticated diagnostic instruments that are precise and comprehensible to healthcare practitioners. Methods Our research presents CNN-TumorNet, a convolutional neural network for categorizing MRI images into tumor and non-tumor categories. Although deep learning models exhibit great accuracy, their complexity frequently restricts clinical application due to inadequate interpretability. To address this, we employed the LIME technique, augmenting model transparency and offering explicit insights into its decision-making process. Results CNN-TumorNet attained a 99% accuracy rate in differentiating tumors from non-tumor MRI scans, underscoring its reliability and efficacy as a diagnostic instrument. Incorporating LIME guarantees that the model's judgments are comprehensible, enhancing its clinical adoption. Discussion Despite the efficacy of CNN-TumorNet, the overarching challenge of deep learning interpretability persists. These models may function as "black boxes," complicating doctors' ability to trust and accept them without comprehending their rationale. By integrating LIME, CNN-TumorNet achieves elevated accuracy alongside enhanced transparency, facilitating its application in clinical environments and improving patient care in neuro-oncology.
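As background on the LIME technique named above: LIME fits a proximity-weighted linear surrogate to a black box around a single input, and the surrogate's coefficients serve as the explanation. The sketch below illustrates the idea on tabular data with a hypothetical black-box function `f`; it is not the CNN-TumorNet image pipeline, which perturbs image regions rather than raw features:

```python
import numpy as np

def lime_explain(f, x, n_samples=500, kernel_width=1.0, seed=0):
    """Minimal LIME-style local surrogate around x for a black-box f.

    Samples perturbations of x, weights them by proximity to x, and fits
    a weighted least-squares linear model whose coefficients act as
    local feature attributions.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbed inputs
    y = np.apply_along_axis(f, 1, Z)                         # black-box outputs
    d = np.linalg.norm(Z - x, axis=1)
    sw = np.exp(-(d ** 2) / kernel_width ** 2)               # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # intercept column
    Aw = A * sw[:, None]                                     # apply weights
    coef = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)[0]
    return coef[:-1], coef[-1]                               # (weights, intercept)

# A toy black box that is exactly linear, so the surrogate recovers it.
f = lambda z: 2.0 * z[0] - 3.0 * z[1] + 0.5
weights, intercept = lime_explain(f, np.array([1.0, 1.0]))
assert np.allclose(weights, [2.0, -3.0], atol=1e-6)
```

Because the toy black box is exactly linear, the weighted fit recovers its coefficients; for a nonlinear model the coefficients instead describe the local behavior around `x`.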
Affiliation(s)
- Novsheena Rasool
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, Kashmir, India
- Niyaz Ahmad Wani
- School of Computer Science and Engineering, Institute of Integrated Learning in Management University (IILM), Greater Noida, Uttar Pradesh, India
- Javaid Iqbal Bhat
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, Kashmir, India
- Sandeep Saharan
- School of Computer Science Engineering and Technology, Bennett University, Greater Noida, Uttar Pradesh, India
- Vishal Kumar Sharma
- Senior Project Engineer, AI Research Centre - Woxsen University, Hyderabad, Telangana, India
- Bassma Saleh Alsulami
- Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Hind Alsharif
- Computer Science and Artificial Intelligence Department, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
- Miltiadis D. Lytras
- Immersive Virtual Reality Research Group, King Abdulaziz University, Jeddah, Saudi Arabia
- Department of Computer Science and Engineering, American College of Greece, Athens, Greece
14
Ghadi YY, Saqib SM, Mazhar T, Almogren A, Waheed W, Altameem A, Hamam H. Explainable AI analysis for smog rating prediction. Sci Rep 2025; 15:8070. [PMID: 40055474 PMCID: PMC11889241 DOI: 10.1038/s41598-025-92788-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2024] [Accepted: 03/03/2025] [Indexed: 05/13/2025] Open
Abstract
Smog poses a direct threat to human health and the environment. Addressing this issue requires understanding how smog is formed. While major contributors include industries, fossil fuels, crop burning, and ammonia from fertilizers, vehicles play a significant role. Individually, a vehicle's contribution to smog may be small, but collectively, the vast number of vehicles has a substantial impact. Manually assessing the contribution of each vehicle to smog is impractical. However, advancements in machine learning make it possible to quantify this contribution. By creating a dataset with features such as vehicle model, year, fuel consumption (city), and fuel type, a predictive model can classify vehicles based on their smog impact, rating them on a scale from 1 (poor) to 8 (excellent). This study proposes a novel approach using Random Forest and Explainable Boosting Classifier models, along with SMOTE (Synthetic Minority Oversampling Technique), to predict the smog contribution of individual vehicles. The results outperform previous studies, with the proposed model achieving an accuracy of 86%. Key performance metrics include a Mean Squared Error of 0.2269, R-Squared (R2) of 0.9624, Mean Absolute Error of 0.2104, Explained Variance Score of 0.9625, and a Max Error of 4.3500. These results incorporate explainable AI techniques, using both agnostic and specific models, to provide clear and actionable insights. This work represents a significant step forward, as the dataset was last updated only five months ago, underscoring the timeliness and relevance of the research.
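For reference, the SMOTE step mentioned above creates synthetic minority-class samples by interpolating between a minority sample and one of its k nearest minority-class neighbors. A minimal NumPy sketch of that idea (not any particular library's implementation) looks like this:

```python
import numpy as np

def smote_sample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples (minimal SMOTE sketch).

    Each synthetic point lies on the segment between a randomly chosen
    minority sample and one of its k nearest minority-class neighbors.
    """
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbors)
        gap = rng.random()                   # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

X_min = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1], [1.1, 1.2]])
X_new = smote_sample(X_min, n_new=5)
# Convex combinations stay inside the bounding box of the minority class.
assert (X_new >= X_min.min(axis=0)).all() and (X_new <= X_min.max(axis=0)).all()
```

Oversampling the rare smog-rating classes this way balances the training set before fitting the classifier, which is why the abstract pairs SMOTE with Random Forest and the Explainable Boosting Classifier.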
Affiliation(s)
- Yazeed Yasin Ghadi
- Department of Computer Science and Software Engineering, Al Ain University, 12555, Abu Dhabi, United Arab Emirates
- Sheikh Muhammad Saqib
- Department of Computing and Information Technology, Gomal University, Dera Ismail Khan, 29050, Pakistan.
- Tehseen Mazhar
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan.
- Department of Computer Science and Information Technology, School Education Department, Government of Punjab, Layyah, 31200, Pakistan.
- Ahmad Almogren
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, 11633, Riyadh, Saudi Arabia
- Wajahat Waheed
- Department of Electrical and Computer Engineering, Purdue University, Indiana, 46323, USA
- Ayman Altameem
- Department of Natural and Engineering Sciences, College of Applied Studies and Community Services, King Saud University, 11543, Riyadh, Saudi Arabia
- Habib Hamam
- Faculty of Engineering, Uni de Moncton, Moncton, NB, E1A3E9, Canada
- School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa
- International Institute of Technology and Management (IITG), Av, Grandes Ecoles, BP 1989, Libreville, Gabon
- Bridges for Academic Excellence, Spectrum, Tunis, Tunisia
15
Song B, Liang R. Integrating artificial intelligence with smartphone-based imaging for cancer detection in vivo. Biosens Bioelectron 2025; 271:116982. [PMID: 39616900 PMCID: PMC11789447 DOI: 10.1016/j.bios.2024.116982] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2024] [Revised: 11/19/2024] [Accepted: 11/20/2024] [Indexed: 01/03/2025]
Abstract
Cancer is a major global health challenge, accounting for nearly one in six deaths worldwide. Early diagnosis significantly improves survival rates and patient outcomes, yet in resource-limited settings the scarcity of medical resources often leads to late-stage diagnosis. Integrating artificial intelligence (AI) with smartphone-based imaging systems offers a promising solution by providing portable, cost-effective, and widely accessible tools for early cancer detection. This paper introduces advanced smartphone-based imaging systems that utilize various imaging modalities for in vivo detection of different cancer types and highlights the advancements of AI for in vivo cancer detection in smartphone-based imaging. However, these compact smartphone systems face challenges such as low imaging quality and restricted computing power; advanced AI algorithms offer promising ways to address these optical and computational limitations. AI-based cancer detection also faces challenges of its own: transparency and reliability are critical to gaining clinicians' trust and acceptance of AI algorithms, and explainable, uncertainty-aware AI breaks open the black box and will shape future AI development in early cancer detection. The challenges and solutions for improving AI accuracy, transparency, and reliability are general issues in AI applications; the AI technologies, limitations, and potentials discussed in this paper therefore apply to a wide range of biomedical imaging diagnostics beyond smartphone- or cancer-specific applications. Smartphone-based multimodal imaging systems and deep learning algorithms for multimodal data analysis are also growing trends, as this approach can provide comprehensive information about the tissue being examined. Future opportunities for AI-integrated smartphone imaging systems lie in making cutting-edge diagnostic tools more affordable and accessible, ultimately enabling early cancer detection for a broader population.
Affiliation(s)
- Bofan Song
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ, 85721, USA.
- Rongguang Liang
- Wyant College of Optical Sciences, The University of Arizona, Tucson, AZ, 85721, USA.
16
Dawood H, Nawaz M, Ilyas MU, Nazir T, Javed A. Attention-guided CenterNet deep learning approach for lung cancer detection. Comput Biol Med 2025; 186:109613. [PMID: 39753023 DOI: 10.1016/j.compbiomed.2024.109613] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2024] [Revised: 12/13/2024] [Accepted: 12/21/2024] [Indexed: 02/20/2025]
Abstract
Lung cancer remains a significant health concern worldwide, prompting ongoing research efforts to enhance early detection and diagnosis. Prior studies have identified key challenges in existing approaches, including limitations in feature extraction, interpretability, and computational efficiency. In response, this study introduces a novel deep learning (DL) framework, termed the Improved CenterNet approach, tailored specifically for lung cancer detection. The primary importance of this work lies in its innovative integration of ResNet-34 with an attention mechanism within the CenterNet architecture, addressing critical limitations identified in previous studies. By augmenting the base network with an attention mechanism, our framework offers improved feature extraction capabilities, enabling the model to learn relevant patterns associated with lung cancer amidst complex backgrounds and varying environmental conditions. This enhancement facilitates more accurate and interpretable predictions while reducing computational complexity and inference times. Through extensive experimental evaluations conducted on standard datasets, our proposed approach demonstrates promising results, highlighting its potential to advance the field of lung cancer detection and diagnosis. Specifically, we achieved precision, recall, and F1-scores of 99.89 %, 99.82 %, and 99.85 % on the LUNA-16 dataset, and 98.33 %, 98.02 %, and 98.17 % on the Kaggle data sample, respectively, demonstrating the efficacy of our approach. One limitation of the work is that it cannot effectively locate samples with intense light variations; future research will therefore focus on overcoming this challenge.
Affiliation(s)
- Hussain Dawood
- School of Computing, Skyline University College, Sharjah, United Arab Emirates
- Marriam Nawaz
- Department of Software Engineering, University of Engineering and Technology-Taxila, 47050, Punjab, Pakistan
- Muhammad U Ilyas
- School of Computer Science, University of Birmingham, Dubai, United Arab Emirates
- Tahira Nazir
- Department of Software Engineering and Computer Science, Riphah International University, Gulberg Green Campus Islamabad, Pakistan
- Ali Javed
- Department of Software Engineering, University of Engineering and Technology-Taxila, 47050, Punjab, Pakistan.
17
Seven İ, Bayram D, Arslan H, Köş FT, Gümüşlü K, Aktürk Esen S, Şahin M, Şendur MAN, Uncu D. Predicting hepatocellular carcinoma survival with artificial intelligence. Sci Rep 2025; 15:6226. [PMID: 39979406 PMCID: PMC11842547 DOI: 10.1038/s41598-025-90884-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2024] [Accepted: 02/17/2025] [Indexed: 02/22/2025] Open
Abstract
Despite the extensive research on hepatocellular carcinoma (HCC) exploring various treatment strategies, survival outcomes have remained unsatisfactory. The aim of this research was to evaluate the ability of machine learning (ML) methods to predict the survival probability of HCC patients. The study retrospectively analyzed cases of patients with stage 1-4 HCC. Demographic, clinical, pathological, and laboratory data served as input variables. The researchers employed various feature selection techniques to identify the key predictors of patient mortality. Additionally, the study utilized a range of machine learning methods to model patient survival rates. The study included 393 individuals with HCC. For early-stage patients (stages 1-2), the models reached recall values of up to 91% for 6-month survival prediction. For advanced-stage patients (stage 4), the models achieved accuracy values of up to 92% for 3-year overall survival prediction. To predict whether patients were deceased (exitus) or not, the accuracy was 87.5% when using all 28 features without feature selection, with the best performance coming from weighted KNN. A further improvement in accuracy, reaching 87.8%, was achieved by applying feature selection methods and using a medium Gaussian SVM. This study demonstrates that machine learning techniques can reliably predict survival probabilities for HCC patients across all disease stages. The research also shows that AI models can accurately identify a high proportion of surviving individuals when assessing various clinical and pathological factors.
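The distance-weighted k-NN that performed best above can be illustrated with a short sketch. This is a generic formulation in which each neighbor votes with weight 1/distance; the authors' exact configuration is not specified in the abstract, and the data below are invented:

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Distance-weighted k-NN vote (generic sketch, not the paper's setup).

    Each of the k nearest neighbors votes for its class with weight
    1/distance, so closer neighbors count more than distant ones.
    """
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = {}
    for i in nearest:
        w = 1.0 / (d[i] + 1e-12)  # inverse-distance weight (guard against 0)
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical 2-feature toy data with two outcome classes.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = ["alive", "alive", "deceased", "deceased"]
print(weighted_knn_predict(X, y, np.array([0.9, 0.9])))  # -> deceased
```

With 28 clinical features the same voting rule applies unchanged; only the dimensionality of the distance computation grows.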
Affiliation(s)
- İsmet Seven
- Ankara Bilkent City Hospital, Medical Oncology Clinic, Ankara, Turkey.
- Doğan Bayram
- Ankara Bilkent City Hospital, Medical Oncology Clinic, Ankara, Turkey
- Hilal Arslan
- Computer Engineering Department, Ankara Yıldırım Beyazıt University, Ankara, Turkey
- Fahriye Tuğba Köş
- Ankara Bilkent City Hospital, Medical Oncology Clinic, Ankara, Turkey
- Kübranur Gümüşlü
- Computer Engineering Department, Ankara Yıldırım Beyazıt University, Ankara, Turkey
- Selin Aktürk Esen
- Ankara Bilkent City Hospital, Medical Oncology Clinic, Ankara, Turkey
- Mücella Şahin
- Department of Internal Medicine, Ankara Bilkent City Hospital, Ankara, Turkey
- Doğan Uncu
- Ankara Bilkent City Hospital, Medical Oncology Clinic, Ankara, Turkey
18
Saadati S, Sepahvand A, Razzazi M. Cloud and IoT based smart agent-driven simulation of human gait for detecting muscles disorder. Heliyon 2025; 11:e42119. [PMID: 39906796 PMCID: PMC11791118 DOI: 10.1016/j.heliyon.2025.e42119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2024] [Revised: 01/17/2025] [Accepted: 01/18/2025] [Indexed: 02/06/2025] Open
Abstract
Motion disorders affect a significant portion of the global population. While some symptoms can be managed with medications, these treatments often impact all muscles uniformly, not just the affected ones, leading to potential side effects including involuntary movements, confusion, and decreased short-term memory. Currently, there is no dedicated application for differentiating healthy muscles from abnormal ones. Existing analysis applications, designed for other purposes, often lack essential software engineering features such as a user-friendly interface, infrastructure independence, usability and learnability, cloud computing capabilities, and AI-based assistance. This research proposes a computer-based methodology to analyze human motion and differentiate between healthy and unhealthy muscles. First, an IoT-based approach is proposed to digitize human motion using smartphones instead of wearable sensors and markers, which are not easily accessible. The motion data are then simulated to analyze the neuromusculoskeletal system. An agent-driven modeling method ensures the naturalness, accuracy, and interpretability of the simulation, incorporating neuromuscular details such as Henneman's size principle, action potentials, motor units, and biomechanical principles. The results are then provided to medical and clinical experts to aid in differentiating between healthy and unhealthy muscles and for further investigation. Additionally, a deep learning-based ensemble framework is proposed to assist in the analysis of the simulation results, offering both accuracy and interpretability. A user-friendly graphical interface enhances the application's usability. Being fully cloud-based, the application is infrastructure-independent and can be accessed on smartphones, PCs, and other devices without installation. This strategy not only addresses the current challenges in treating motion disorders but also paves the way for other clinical simulations by considering both scientific and computational requirements.
Affiliation(s)
- Sina Saadati
- Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
- Abdolah Sepahvand
- Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
- Mohammadreza Razzazi
- Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
19
Abas Mohamed Y, Ee Khoo B, Shahrimie Mohd Asaari M, Ezane Aziz M, Rahiman Ghazali F. Decoding the black box: Explainable AI (XAI) for cancer diagnosis, prognosis, and treatment planning-A state-of-the art systematic review. Int J Med Inform 2025; 193:105689. [PMID: 39522406 DOI: 10.1016/j.ijmedinf.2024.105689] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2024] [Revised: 10/28/2024] [Accepted: 10/31/2024] [Indexed: 11/16/2024]
Abstract
OBJECTIVE Explainable Artificial Intelligence (XAI) is increasingly recognized as a crucial tool in cancer care, with significant potential to enhance diagnosis, prognosis, and treatment planning. However, the holistic integration of XAI across all stages of cancer care remains underexplored. This review addresses this gap by systematically evaluating the role of XAI in these critical areas, identifying key challenges and emerging trends. MATERIALS AND METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across Scopus and Web of Science, focusing on publications from January 2020 to May 2024. After rigorous screening and quality assessment, 69 studies were selected for in-depth analysis. RESULTS The review identified critical gaps in the application of XAI within cancer care, notably the exclusion of clinicians in 83% of studies, which raises concerns about real-world applicability and may lead to explanations that are technically sound but clinically irrelevant. Additionally, 87% of studies lacked rigorous evaluation of XAI explanations, compromising their reliability in clinical practice. The dominance of post-hoc visual methods like SHAP, LIME and Grad-CAM reflects a trend toward explanations that may be inherently flawed due to specific input perturbations and simplifying assumptions. The lack of formal evaluation metrics and standardization constrains broader XAI adoption in clinical settings, creating a disconnect between AI development and clinical integration. Moreover, translating XAI insights into actionable clinical decisions remains challenging due to the absence of clear guidelines for integrating these tools into clinical workflows. 
CONCLUSION This review highlights the need for greater clinician involvement, standardized XAI evaluation metrics, clinician-centric interfaces, context-aware XAI systems, and frameworks for integrating XAI into clinical workflows for informed clinical decision-making and improved outcomes in cancer care.
Affiliation(s)
- Yusuf Abas Mohamed
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Bee Ee Khoo
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia.
- Mohd Shahrimie Mohd Asaari
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Mohd Ezane Aziz
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
- Fattah Rahiman Ghazali
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
20
Ozdemir B, Aslan E, Pacal I. Attention Enhanced InceptionNeXt-Based Hybrid Deep Learning Model for Lung Cancer Detection. IEEE ACCESS 2025; 13:27050-27069. [DOI: 10.1109/access.2025.3539122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2025]
Affiliation(s)
- Burhanettin Ozdemir
- Department of Operations and Project Management, College of Business, Alfaisal University, Riyadh, Saudi Arabia
- Emrah Aslan
- Department of Computer Engineering, Faculty of Engineering and Architecture, Mardin Artuklu University, Mardin, Türkiye
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, Iğdır, Türkiye
21
Hu X, Zhu M, Feng Z, Stanković L. Manifold-based Shapley explanations for high dimensional correlated features. Neural Netw 2024; 180:106634. [PMID: 39191125 DOI: 10.1016/j.neunet.2024.106634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Revised: 07/16/2024] [Accepted: 08/13/2024] [Indexed: 08/29/2024]
Abstract
Explainable artificial intelligence (XAI) holds significant importance in enhancing the reliability and transparency of network decision-making. SHapley Additive exPlanations (SHAP) is a game-theoretic approach to network interpretation that attributes contributions to input features to measure their importance. However, SHAP often relies on the flawed assumption that the model's features are independent, leading to incorrect results when dealing with correlated features. In this paper, we introduce a novel manifold-based Shapley explanation method, termed Latent SHAP. Latent SHAP transforms high-dimensional data into low-dimensional manifolds to capture correlations among features. We compute Shapley values on the data manifold and devise three distinct gradient-based mapping methods to transfer them back to the high-dimensional space. Our primary objectives are: (1) correcting misinterpretations by SHAP in certain samples; (2) addressing the challenge of feature correlations in high-dimensional data interpretation; and (3) reducing algorithmic complexity through Manifold SHAP for application in complex network interpretations. Code is available at https://github.com/Teriri1999/Latent-SHAP.
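For context, the Shapley value that SHAP approximates can be computed exactly for small feature counts by enumerating coalitions. The additive toy game below is illustrative only and is unrelated to the Latent SHAP manifold machinery; it simply instantiates the standard Shapley formula:

```python
from itertools import combinations
from math import factorial

def shapley_values(n, value):
    """Exact Shapley values for n players given a coalition value function.

    phi_i = sum over coalitions S not containing i of
            |S|! * (n - |S| - 1)! / n! * (value(S + {i}) - value(S)).
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy value function: v(S) = sum of feature weights in S (an additive game).
weights = [3.0, 1.0, 2.0]
v = lambda S: sum(weights[i] for i in S)
phi = shapley_values(3, v)
# For an additive game, each player's Shapley value equals its own weight.
assert all(abs(p - w) < 1e-9 for p, w in zip(phi, weights))
```

The exact computation is exponential in the number of features, which is one motivation for manifold-based shortcuts like the method this abstract proposes.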
Affiliation(s)
- Xuran Hu
- School of Electronic Engineering, Xidian University, Xi'an, China; Kunshan Innovation Institute of Xidian University, School of Electronic Engineering, Xidian University, Xi'an, China
- Mingzhe Zhu
- School of Electronic Engineering, Xidian University, Xi'an, China; Kunshan Innovation Institute of Xidian University, School of Electronic Engineering, Xidian University, Xi'an, China.
- Zhenpeng Feng
- School of Electronic Engineering, Xidian University, Xi'an, China
- Ljubiša Stanković
- Faculty of Electrical Engineering, University of Montenegro, Podgorica, Montenegro
22
Saadati S, Amirmazlaghani M. Revolutionizing endometriosis treatment: automated surgical operation through artificial intelligence and robotic vision. J Robot Surg 2024; 18:383. [PMID: 39460835 DOI: 10.1007/s11701-024-02139-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2024] [Accepted: 10/07/2024] [Indexed: 10/28/2024]
Abstract
Clinical limitations linked to poverty significantly impact the lives and health of many individuals globally. This challenge can, however, be addressed with modern technologies, particularly robotics and artificial intelligence. This study applies advanced techniques in robotic surgery and artificial intelligence, proposing a method to fully automate endometriosis robotic surgery with a focus on interpretability, accuracy, and reliability. A methodology for fully automatic endometriosis surgery is introduced. Given the complexity of endometriosis lesion detection, lesions are categorized by their anatomical location to improve system interpretability. Three ensemble U-Net frameworks are then designed to detect and localize common types of endometriosis lesions intraoperatively. A cross-training approach is employed, exploring U-Net models with diverse backbone architectures (ResNet50, ResNet101, VGG19, InceptionV3, MobileNet, and EfficientNetB7) to develop U-Net ensemble models for precise endometriosis lesion segmentation. A novel image augmentation technique is also introduced, enhancing the segmentation models' accuracy and reliability. Furthermore, two U-Net models are developed to localize the ovaries and uterus, mitigating unexpected noise and bolstering the method's accuracy and reliability. The image segmentation models, assessed using the Intersection over Union (IoU) metric, achieved outstanding results: 97.57% for ovarian, 96.35% for uterine, and 92.58% for peritoneal endometriosis. This study proposes a fully automatic method for some common types of endometriosis surgery, including ovarian endometriomas and superficial endometriosis. The method is centered on three ensemble U-Net frameworks and a noise-reduction technique using two additional U-Nets to localize the ovaries and uterus. This approach has the potential to significantly improve the accuracy and reliability of robotic surgeries, potentially reducing healthcare costs and improving outcomes for patients worldwide.
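For context on the segmentation scores reported above, the Intersection over Union (IoU) metric can be sketched in a few lines of NumPy. This is a minimal illustration on toy binary masks, not the authors' implementation:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:  # both masks empty: define IoU as a perfect 1.0
        return 1.0
    return np.logical_and(pred, target).sum() / union

# Toy 4x4 masks: the prediction covers 2 of the 3 ground-truth pixels
gt = np.zeros((4, 4)); gt[0, :3] = 1
pred = np.zeros((4, 4)); pred[0, :2] = 1
print(f"{iou(pred, gt):.4f}")  # intersection 2 / union 3 = 0.6667
```

An IoU of 97.57%, as reported for ovarian lesions, means the predicted and ground-truth masks overlap almost completely relative to their combined area.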
Affiliation(s)
- Sina Saadati
- Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran
- Maryam Amirmazlaghani
- Department of Computer Engineering, Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran

23
Saarela M, Podgorelec V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. APPLIED SCIENCES 2024; 14:8884. [DOI: 10.3390/app14198884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
Abstract
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. 
These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
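For readers unfamiliar with SHAP, the Shapley values it approximates can be computed exactly when a model has only a handful of features. The brute-force sketch below uses a hypothetical three-feature linear risk model; production SHAP libraries use far more efficient approximations:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for one prediction by enumerating all feature
    coalitions (feasible only for a few features). 'Absent' features are
    replaced by their baseline value."""
    n = len(x)

    def eval_coalition(S):
        z = baseline.copy()
        for j in S:
            z[j] = x[j]
        return f(z)

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (eval_coalition(S + (i,)) - eval_coalition(S))
    return phi

# Hypothetical linear model over three features (weights are illustrative)
w = np.array([0.5, -0.2, 0.3])
f = lambda z: float(w @ z)
x = np.array([2.0, 1.0, 4.0])         # instance to explain
baseline = np.array([1.0, 1.0, 1.0])  # population-mean reference
phi = shapley_values(f, x, baseline)
# For a linear model the closed form is phi_i = w_i * (x_i - baseline_i)
print(np.round(phi, 6))
```

The values satisfy the efficiency property: they sum to the difference between the model's output for the instance and for the baseline, which is what makes SHAP attributions additive.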
Affiliation(s)
- Mirka Saarela
- Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
- Vili Podgorelec
- Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia

24
Li M, Cai Y, Zhang M, Deng S, Wang L. NNBGWO-BRCA marker: Neural Network and binary grey wolf optimization based Breast cancer biomarker discovery framework using multi-omics dataset. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 254:108291. [PMID: 38909399 DOI: 10.1016/j.cmpb.2024.108291] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2023] [Revised: 05/09/2024] [Accepted: 06/16/2024] [Indexed: 06/25/2024]
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is a multifaceted disease with diverse features and a substantial mortality rate, underscoring the need for timely detection and intervention. The use of multi-omics data to identify biomarkers and classify breast cancer subtypes has gained significant traction in recent years, and this part-to-whole research approach is likely to become a broader trend in life-science research. Deep learning can integrate and analyze multi-omics data to predict cancer subtypes, which can in turn drive targeted therapies. However, few studies leverage the nature of deep learning for feature selection. This paper therefore proposes a Neural Network and Binary Grey Wolf Optimization based BReast CAncer bioMarker (NNBGWO-BRCAMarker) discovery framework that uses multi-omics data to obtain a set of biomarkers for precise classification of breast cancer subtypes. METHODS NNBGWO-BRCAMarker consists of two phases: in the first, relevant genes are selected using the weights of a trained feedforward neural network; in the second, the binary grey wolf optimization algorithm further screens the selected genes, yielding a set of potential breast cancer biomarkers. RESULTS An SVM classifier with an RBF kernel achieved a classification accuracy of 0.9242 ± 0.03 when trained on the 80 biomarkers identified by NNBGWO-BRCAMarker. Comprehensive gene set, prognostic, and druggability analyses unveiled 25 druggable genes, 16 enriched pathways strongly linked to specific breast cancer subtypes, and 8 genes linked to prognostic outcomes. CONCLUSIONS The proposed framework successfully identified 80 biomarkers from the multi-omics data, enabling accurate classification of breast cancer subtypes. This discovery may offer novel insights for clinicians to pursue in further studies.
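The first phase described in the abstract, ranking genes by the weights of a trained feedforward network, can be sketched compactly. The weight matrix below is randomly generated purely for illustration; in the actual framework it would come from a network trained on the multi-omics data, and phase 2 would then refine the selection with binary grey wolf optimization:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the first-layer weight matrix of a trained feedforward
# network: 10 genes (input features) x 8 hidden units.
W1 = rng.normal(size=(10, 8))

# Phase-1 idea (sketch): score each gene by the aggregate magnitude of
# its outgoing first-layer weights, then keep the top-k genes as
# candidates for the phase-2 metaheuristic screening.
scores = np.abs(W1).sum(axis=1)
top_k = 5
selected_genes = np.argsort(scores)[::-1][:top_k]
print(sorted(selected_genes.tolist()))
```

The intuition is that a gene whose connections to the hidden layer are all near zero contributes little to the network's predictions and can be dropped before the more expensive combinatorial search.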
Affiliation(s)
- Min Li
- School of Information Engineering, Nanchang Institute of Technology, No. 289 Tianxiang Road, Nanchang Jiangxi, PR China
- Yuheng Cai
- School of Information Engineering, Nanchang Institute of Technology, No. 289 Tianxiang Road, Nanchang Jiangxi, PR China
- Mingzhuang Zhang
- School of Information Engineering, Nanchang Institute of Technology, No. 289 Tianxiang Road, Nanchang Jiangxi, PR China
- Shaobo Deng
- School of Information Engineering, Nanchang Institute of Technology, No. 289 Tianxiang Road, Nanchang Jiangxi, PR China
- Lei Wang
- School of Information Engineering, Nanchang Institute of Technology, No. 289 Tianxiang Road, Nanchang Jiangxi, PR China

25
Chen Z, Liang N, Li H, Zhang H, Li H, Yan L, Hu Z, Chen Y, Zhang Y, Wang Y, Ke D, Shi N. Exploring explainable AI features in the vocal biomarkers of lung disease. Comput Biol Med 2024; 179:108844. [PMID: 38981214 DOI: 10.1016/j.compbiomed.2024.108844] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2024] [Revised: 05/15/2024] [Accepted: 06/04/2024] [Indexed: 07/11/2024]
Abstract
This review delves into the burgeoning field of explainable artificial intelligence (XAI) in the detection and analysis of lung diseases through vocal biomarkers. Lung diseases, often elusive in their early stages, pose a significant public health challenge. Recent advancements in AI have ushered in innovative methods for early detection, yet the black-box nature of many AI models limits their clinical applicability. XAI emerges as a pivotal tool, enhancing transparency and interpretability in AI-driven diagnostics. This review synthesizes current research on the application of XAI in analyzing vocal biomarkers for lung diseases, highlighting how these techniques elucidate the connections between specific vocal features and lung pathology. We critically examine the methodologies employed, the types of lung diseases studied, and the performance of various XAI models. The potential for XAI to aid in early detection, monitor disease progression, and personalize treatment strategies in pulmonary medicine is emphasized. Furthermore, this review identifies current challenges, including data heterogeneity and model generalizability, and proposes future directions for research. By offering a comprehensive analysis of explainable AI features in the context of lung disease detection, this review aims to bridge the gap between advanced computational approaches and clinical practice, paving the way for more transparent, reliable, and effective diagnostic tools.
Collapse
Affiliation(s)
- Zhao Chen
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Ning Liang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Haoyuan Li
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Haili Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Huizhen Li
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Lijiao Yan
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Ziteng Hu
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yaxin Chen
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yujing Zhang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Yanping Wang
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China
| | - Dandan Ke
- Special Disease Clinic, Huaishuling Branch of Beijing Fengtai Hospital of Integrated Traditional Chinese and Western Medicine, Beijing, China.
| | - Nannan Shi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, Beijing, China.
| |
Collapse
|
26
|
Kumaran S Y, Jeya JJ, R MT, Khan SB, Alzahrani S, Alojail M. Explainable lung cancer classification with ensemble transfer learning of VGG16, Resnet50 and InceptionV3 using grad-cam. BMC Med Imaging 2024; 24:176. [PMID: 39030496 PMCID: PMC11264852 DOI: 10.1186/s12880-024-01345-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2024] [Accepted: 06/24/2024] [Indexed: 07/21/2024] Open
Abstract
Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations, such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach that combines the pre-trained models VGG16, ResNet50, and InceptionV3 within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection, using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. The proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques such as SMOTE and Gaussian blur are applied to address class imbalance, enhancing model training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, collected at the Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases over a period of three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with notably high precision and recall across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
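Of the techniques mentioned, the class-imbalance step lends itself to a compact illustration. Below is a minimal SMOTE-style oversampler in plain NumPy, a sketch of the core interpolation idea rather than the library implementation the authors presumably used:

```python
import numpy as np

def smote_sketch(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples. Each synthetic point
    lies on the line segment between a minority sample and one of its
    k nearest minority-class neighbours (the core idea of SMOTE)."""
    rng = np.random.default_rng(seed)
    synthetic = np.empty((n_new, X_min.shape[1]))
    for t in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                       # interpolation factor in [0, 1)
        synthetic[t] = X_min[i] + lam * (X_min[j] - X_min[i])
    return synthetic

rng = np.random.default_rng(42)
X_minority = rng.normal(size=(8, 4))  # 8 minority-class samples, 4 features
X_synth = smote_sketch(X_minority, n_new=16)
print(X_synth.shape)  # (16, 4)
```

Because every synthetic point is a convex combination of two real minority samples, the new samples stay inside the region the minority class already occupies, which is what distinguishes SMOTE from naive duplication.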
Affiliation(s)
- Yogesh Kumaran S
- Department of Computer Science & Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- J Jospin Jeya
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- Mahesh T R
- Department of Computer Science & Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, UK
- Saeed Alzahrani
- Management Information System Department, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Mohammed Alojail
- Management Information System Department, College of Business Administration, King Saud University, Riyadh, Saudi Arabia

27
Wang W, He J, Liu H, Yuan W. MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer. SENSORS (BASEL, SWITZERLAND) 2024; 24:4056. [PMID: 39000834 PMCID: PMC11244347 DOI: 10.3390/s24134056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/17/2024] [Revised: 06/19/2024] [Accepted: 06/19/2024] [Indexed: 07/16/2024]
Abstract
The fusion of multi-modal medical images has great significance for comprehensive diagnosis and treatment. However, the large differences between the various modalities of medical images make multi-modal medical image fusion a great challenge. This paper proposes a novel multi-scale fusion network based on multi-dimensional dynamic convolution and a residual hybrid transformer, which has better capability for feature extraction and context modeling and improves fusion performance. Specifically, the proposed network exploits multi-dimensional dynamic convolution, introducing four attention mechanisms corresponding to four different dimensions of the convolutional kernel to extract more detailed information. Meanwhile, a residual hybrid transformer is designed that activates more pixels to participate in the fusion process through channel attention, window attention, and overlapping cross-attention, thereby strengthening the long-range dependencies between different modalities and enhancing the connection of global context information. A loss function combining perceptual loss and structural similarity loss is designed: the former enhances the visual realism and perceptual detail of the fused image, while the latter helps the model learn structural textures. The whole network adopts a multi-scale architecture and uses an unsupervised end-to-end method to realize multi-modal image fusion. Finally, our method is tested qualitatively and quantitatively on mainstream datasets. The fusion results indicate that our method achieves high scores on most quantitative indicators and satisfactory performance in visual qualitative analysis.
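The structural-similarity term of such a loss can be sketched for single-channel images. Below is a global, window-free SSIM assuming intensities scaled to [0, 1]; the paper's perceptual term requires features from a pretrained network and is omitted here, and real SSIM implementations typically use local sliding windows:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global (window-free) SSIM between two images with values in [0, 1].
    c1 and c2 are the standard stabilising constants."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_loss(fused, reference):
    """Structural-similarity loss term: 1 - SSIM, minimised during training."""
    return 1.0 - ssim_global(fused, reference)

img = np.linspace(0, 1, 64).reshape(8, 8)
print(round(ssim_loss(img, img), 6))  # identical images -> loss 0.0
```

Minimizing 1 − SSIM pushes the fused image toward the reference in luminance, contrast, and structure simultaneously, which is why it complements a pixel-wise or perceptual term rather than replacing it.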
Affiliation(s)
- Wenqing Wang
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi'an University of Technology, Xi'an 710048, China
- Ji He
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Han Liu
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi'an University of Technology, Xi'an 710048, China
- Wei Yuan
- School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China

28
Pathan RK, Shorna IJ, Hossain MS, Khandaker MU, Almohammed HI, Hamd ZY. The efficacy of machine learning models in lung cancer risk prediction with explainability. PLoS One 2024; 19:e0305035. [PMID: 38870229 PMCID: PMC11175504 DOI: 10.1371/journal.pone.0305035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Accepted: 05/22/2024] [Indexed: 06/15/2024] Open
Abstract
Among the many types of cancer, lung cancer remains to date one of the deadliest worldwide. Researchers, scientists, doctors, and practitioners from other fields continuously contribute to its early prediction and diagnosis. One of the significant problems in prediction is the black-box nature of machine learning models: although detection rates are comparatively satisfactory, users cannot see how a model reached its decision, causing trust issues among patients and healthcare workers. This work applies multiple machine learning models to a numerical dataset of lung cancer-relevant parameters and compares their performance and accuracy. After comparison, each model is explained using different methods. The main contribution of this research is to give logical explanations of why a model reached a particular decision, in order to build trust. The research is also compared with a previous study that used a similar dataset and took expert opinions on its proposed model; using hyperparameter tuning, our approach achieved better results than that model and the specialist opinion, with accuracy approaching 100% in all four models.
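The hyperparameter-tuning step credited with the accuracy gain can be illustrated with a hand-rolled grid search. The example below tunes k for a toy k-nearest-neighbour classifier on synthetic data; the study's actual models, dataset, and search grids are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a numeric risk-factor dataset: 200 samples,
# 2 features, labels driven by a noisy linear rule
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

def knn_predict(X_train, y_train, X_test, k):
    """Majority-vote k-nearest-neighbour prediction."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) > 0.5).astype(int)

# Grid search over k on a held-out split: keep the best-scoring value
grid = (1, 3, 5, 7, 9)
scores = {k: float((knn_predict(X_tr, y_tr, X_te, k) == y_te).mean())
          for k in grid}
best_k = max(scores, key=scores.get)
print(best_k, scores[best_k])
```

In practice the same loop would run with cross-validation rather than a single split, and over each model family's own hyperparameters, but the select-by-held-out-score logic is the same.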
Affiliation(s)
- Refat Khan Pathan
- Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Md. Sayem Hossain
- School of Computing Science, Faculty of Innovation and Technology, Taylor’s University Lakeside Campus, Selangor, Malaysia
- Mayeen Uddin Khandaker
- Applied Physics and Radiation Technologies Group, CCDCU, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Faculty of Graduate Studies, Daffodil International University, Daffodil Smart City, Savar, Dhaka, Bangladesh
- Huda I. Almohammed
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Zuhal Y. Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia