1
Tun HM, Rahman HA, Naing L, Malik OA. Artificial intelligence utilization in cancer screening program across ASEAN: a scoping review. BMC Cancer 2025; 25:703. PMID: 40234807; PMCID: PMC12001681; DOI: 10.1186/s12885-025-14026-x.
Abstract
BACKGROUND Cancer remains a significant health challenge in the ASEAN region, highlighting the need for effective screening programs. However, approaches, target demographics, and intervals vary across ASEAN member states, necessitating a comprehensive understanding of these variations to assess program effectiveness. Additionally, while artificial intelligence (AI) holds promise as a tool for cancer screening, its utilization in the ASEAN region is unexplored. PURPOSE This study aims to identify and evaluate different cancer screening programs across ASEAN, with a focus on assessing the integration and impact of AI in these programs. METHODS A scoping review was conducted using PRISMA-ScR guidelines to provide a comprehensive overview of cancer screening programs and AI usage across ASEAN. Data were collected from government health ministries, official guidelines, literature databases, and relevant documents. The review of AI use in cancer screening involved searches of PubMed, Scopus, and Google Scholar, with inclusion limited to studies that utilized data from the ASEAN region and were published between January 2019 and May 2024. RESULTS The findings reveal diverse cancer screening approaches in ASEAN. Countries such as Myanmar, Laos, Cambodia, Vietnam, Brunei, the Philippines, Indonesia, and Timor-Leste primarily adopt opportunistic screening, while Singapore, Malaysia, and Thailand focus on organized programs. Cervical cancer screening is widespread, using both opportunistic and organized methods. Fourteen studies were included in the scoping review, covering breast (5 studies), cervical (2 studies), colon (4 studies), hepatic (1 study), lung (1 study), and oral (1 study) cancers. The studies represented different stages of AI integration for cancer screening: prospective clinical evaluation (50%), silent trial (36%), and exploratory model development (14%), with promising results in enhancing cancer screening accuracy and efficiency. CONCLUSION Cancer screening programs in the ASEAN region require more organized approaches targeting appropriate age groups at regular intervals to meet the WHO's 2030 screening targets. Efforts to integrate AI in Singapore, Malaysia, Vietnam, Thailand, and Indonesia show promise in optimizing screening processes, reducing costs, and improving early detection. AI technology integration enhances cancer identification accuracy during screening, improving early detection and cancer management across the ASEAN region.
Affiliation(s)
- Hein Minn Tun: PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei; School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei
- Hanif Abdul Rahman: PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei; School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei
- Lin Naing: PAPRSB Institute of Health Science, Universiti Brunei Darussalam, Bandar Seri Begawan, Brunei
- Owais Ahmed Malik: School of Digital Science, Universiti Brunei Darussalam, Lebuhraya Tungku, Bandar Seri Begawan, Brunei
2
Kumar KA, Vanmathi C. A hybrid parallel convolutional spiking neural network for enhanced skin cancer detection. Sci Rep 2025; 15:11137. PMID: 40169652; PMCID: PMC11962159; DOI: 10.1038/s41598-025-85627-6.
Abstract
Skin cancer is one of the most widespread kinds of cancer, affecting millions of lives. As the condition worsens, the chance of survival is reduced, which makes early and accurate detection of skin cancer both essential and difficult. Hence, this paper introduces a new model, known as the Parallel Convolutional Spiking Neural Network (PCSN-Net), for detecting skin cancer. Initially, the input skin cancer image is pre-processed using a Medav filter to remove noise from the image. Next, the affected region is segmented using DeepSegNet, which is formed by integrating SegNet and deep joint segmentation, where the RV coefficient is used to fuse the outputs. The segmented image is then augmented using processes such as geometric transformation, colorspace transformation, image mixing by pixel averaging (mixup), and overlaying crops (CutMix). Then, textural, statistical, Discrete Wavelet Transform (DWT)-based Local Direction Pattern (LDP) with entropy, and Local Normal Derivative Pattern (LNDP) features are extracted. Finally, skin cancer detection is performed using PCSN-Net, which is formed by fusing a Parallel Convolutional Neural Network (PCNN) and a Deep Spiking Neural Network (DSNN). The proposed PCSN-Net system shows high accuracy and reliability in identifying skin cancer. The experimental findings show that PCSN-Net achieves an accuracy of 95.7%, a sensitivity of 94.7%, and a specificity of 92.6%. These parameters demonstrate the model's capacity to properly discriminate between malignant and benign skin lesions. Furthermore, the system has a false positive rate (FPR) of 10.7% and a positive predictive value (PPV) of 90.8%, demonstrating its capacity to reduce incorrect diagnoses while prioritizing true positive instances. PCSN-Net outperforms several complex architectures, including EfficientNet, DenseNet, and Inception-ResNet-V2, while preserving efficient training and inference times. The results obtained show the feasibility of the model for real-time clinical use, strengthening its capacity for quick and accurate skin cancer detection.
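As an editorial illustration of the augmentation step mentioned above, the sketch below shows generic mixup (pixel-averaged mixing) and CutMix (crop overlaying) operations in NumPy. It is not the authors' PCSN-Net code; the mixing ratio, image sizes, and random inputs are assumptions chosen only for demonstration.

```python
import numpy as np

def mixup(img_a: np.ndarray, img_b: np.ndarray, lam: float = 0.7) -> np.ndarray:
    """Pixel-averaged mixing of two equally sized images (mixup)."""
    return lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)

def cutmix(img_a: np.ndarray, img_b: np.ndarray, lam: float = 0.7, seed: int = 0) -> np.ndarray:
    """Overlay a random crop of img_b onto img_a; crop area is roughly (1 - lam) of the image."""
    rng = np.random.default_rng(seed)
    h, w = img_a.shape[:2]
    cut_h, cut_w = int(h * np.sqrt(1.0 - lam)), int(w * np.sqrt(1.0 - lam))
    y0 = rng.integers(0, h - cut_h + 1)
    x0 = rng.integers(0, w - cut_w + 1)
    out = img_a.copy()
    out[y0:y0 + cut_h, x0:x0 + cut_w] = img_b[y0:y0 + cut_h, x0:x0 + cut_w]
    return out

# Toy usage with random stand-ins for segmented lesion images of shape (224, 224, 3).
a = np.random.rand(224, 224, 3).astype(np.float32)
b = np.random.rand(224, 224, 3).astype(np.float32)
mixed, cut = mixup(a, b), cutmix(a, b)
print(mixed.shape, cut.shape)
```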
Affiliation(s)
- K Anup Kumar: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamilnadu, India
- C Vanmathi: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamilnadu, India
3
Nawaz K, Zanib A, Shabir I, Li J, Wang Y, Mahmood T, Rehman A. Skin cancer detection using dermoscopic images with convolutional neural network. Sci Rep 2025; 15:7252. PMID: 40021731; PMCID: PMC11871080; DOI: 10.1038/s41598-025-91446-6.
Abstract
Skin malignant melanoma is a high-risk tumor with low incidence but high mortality rates. Early detection and treatment are crucial for a cure. Machine learning studies have focused on classifying melanoma tumors, but these methods are cumbersome and fail to extract deeper features. This limits their ability to distinguish subtle variations in skin lesions accurately, hindering effective early diagnosis. The study introduces a deep learning-based network specifically designed for skin lesion detection and for enhancing the data in the melanoma dataset. It leverages a novel FCDS-CNN architecture to address class-imbalance problems and improve data quality. Specifically, FCDS-CNN incorporates data augmentation and class weighting techniques to mitigate the impact of imbalanced classes. It also presents a practical, large-scale solution that allows seamless, real-world incorporation to support dermatologists in their early screening processes. The proposed robust model incorporates data augmentation and class weighting to improve performance across all lesions. The dataset used includes 10,015 images of seven classes of skin lesions, available on Kaggle. To overcome the dominance of one class over the others, methods such as data augmentation and class weighting are used. The FCDS-CNN showed improved accuracy, with an average accuracy of 96%, outperforming pre-trained models such as ResNet, EfficientNet, Inception, and MobileNet in the precision, recall, F1-score, and area-under-the-curve parameters. These pre-trained models are more effective for general image classification and struggle with the nuanced features and class imbalances inherent in medical image datasets. The FCDS-CNN demonstrated practical effectiveness by outperforming the compared pre-trained models on these distinct parameters. This work is a testament to the importance of specificity in medical image analysis regarding skin cancer detection.
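A minimal sketch of the class-weighting idea described above, assuming scikit-learn's balanced weighting and a weighted PyTorch cross-entropy loss; the synthetic seven-class label vector and its class proportions are placeholders, not the FCDS-CNN training code.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.utils.class_weight import compute_class_weight

# Synthetic, imbalanced labels standing in for seven skin-lesion classes.
labels = np.random.choice(7, size=10015, p=[0.67, 0.11, 0.11, 0.05, 0.03, 0.02, 0.01])

classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
print(dict(zip(classes.tolist(), np.round(weights, 2).tolist())))

# Weighted cross-entropy: minority classes contribute more to the loss.
criterion = nn.CrossEntropyLoss(weight=torch.tensor(weights, dtype=torch.float32))

# Toy forward pass: logits for a batch of 8 images over 7 classes.
logits = torch.randn(8, 7)
targets = torch.randint(0, 7, (8,))
print(criterion(logits, targets).item())
```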
Affiliation(s)
- Khadija Nawaz: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Department of Computer Science, University of Education, Vehari Campus, Vehari, 61161, Pakistan
- Atika Zanib: Department of Computer Science, University of Education, Vehari Campus, Vehari, 61161, Pakistan
- Iqra Shabir: Department of Computer Science, University of Education, Vehari Campus, Vehari, 61161, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing, 100124, China
- Yu Wang: Shandong Research Institute of Industrial Technology, Shandong, China
- Tariq Mahmood: Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia; Faculty of Information Sciences, University of Education, Lahore, 54000, Pakistan
- Amjad Rehman: Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
4
Kaur R, GholamHosseini H, Lindén M. Advanced Deep Learning Models for Melanoma Diagnosis in Computer-Aided Skin Cancer Detection. Sensors (Basel) 2025; 25:594. PMID: 39943236; PMCID: PMC11821218; DOI: 10.3390/s25030594.
Abstract
The most deadly type of skin cancer is melanoma. A visual examination does not provide an accurate diagnosis of melanoma during its early to middle stages. Therefore, an automated model could be developed that assists with early skin cancer detection. It is possible to limit the severity of melanoma by detecting it early and treating it promptly. This study aims to develop efficient approaches for the various phases of melanoma computer-aided diagnosis (CAD), such as preprocessing, segmentation, and classification. The first step of the CAD pipeline includes the proposed hybrid method, which uses morphological operations and context aggregation-based deep neural networks to remove hairlines and improve poor contrast in dermoscopic skin cancer images. An image segmentation network based on deep learning is then used to extract lesion regions for detailed analysis and calculate the optimized classification features. Lastly, a deep neural network is used to distinguish melanoma from benign lesions. The proposed approaches use a benchmark dataset named International Skin Imaging Collaboration (ISIC) 2020. In this work, two forms of evaluation are performed with the classification model. The first experiment involves the incorporation of the results from the preprocessing and segmentation stages into the classification model. The second experiment involves the evaluation of the classifier without employing these stages, i.e., using raw images. From the study results, it can be concluded that a classification model using segmented and cleaned images performs better, achieving an accurate classification rate of 93.40% with a 1.3 s test time on a single image.
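The hair-removal preprocessing step can be illustrated with a common OpenCV recipe (black-hat morphology plus inpainting, followed by CLAHE contrast enhancement). This is a generic sketch, not the context-aggregation network proposed in the paper; the file name, kernel size, and threshold are placeholder assumptions.

```python
import cv2

# Hypothetical input path; replace with a real dermoscopic image.
img = cv2.imread("lesion.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Black-hat morphology highlights thin dark structures such as hairs.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)

# Threshold the hair mask and inpaint the masked pixels from their surroundings.
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
clean = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)

# Simple contrast improvement on the hair-free image (CLAHE on the L channel).
lab = cv2.cvtColor(clean, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
enhanced = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("lesion_clean.jpg", enhanced)
```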
Affiliation(s)
- Ranpreet Kaur: Department of Software Engineering & AI, Media Design School, Auckland 1010, New Zealand
- Hamid GholamHosseini: School of Engineering, Computer, and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Maria Lindén: Division of Intelligent Future Technologies, Mälardalen University, 721 23 Västerås, Sweden
5
Khullar V, Kaur P, Gargrish S, Mishra AM, Singh P, Diwakar M, Bijalwan A, Gupta I. Minimal sourced and lightweight federated transfer learning models for skin cancer detection. Sci Rep 2025; 15:2605. PMID: 39837883; PMCID: PMC11750969; DOI: 10.1038/s41598-024-82402-x.
Abstract
Skin cancer is one of the most fatal diseases that affect people. Because nevus and melanoma lesions are so similar, there is a high likelihood of false negative diagnoses, which poses challenges in hospitals. The aim of this paper is to propose and develop a technique to classify the type of skin cancer with high accuracy using minimal resources and lightweight federated transfer learning models. Here, minimal-resource pre-trained deep learning models, including EfficientNetV2S, EfficientNetB3, ResNet50, and NasNetMobile, have been used to apply transfer learning on data of shape [Formula: see text]. To compare with the applied minimal-resource transfer learning, the same methodology has been applied using the best identified model, i.e., EfficientNetV2S, for images of shape [Formula: see text]. The identified minimal and lightweight resource-based EfficientNetV2S with images of shape [Formula: see text] has then been applied in a federated learning ecosystem. Both identically and non-identically distributed datasets of shape [Formula: see text] have been applied and analyzed through federated learning implementations. The results have been analyzed to show the impact of low-pixel images with non-identical distributions over clients using parameters such as accuracy, precision, recall, and categorical losses. The classification of skin cancer shows an accuracy of 89.83% for IID and 90.64% for non-IID data.
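A minimal federated-averaging (FedAvg) round, sketched to illustrate the federated learning ecosystem described above. The tiny CNN, the three clients, and the random local batches are placeholders rather than the paper's EfficientNetV2S setup; only the weight-averaging mechanics are shown, with equal client weighting assumed.

```python
import copy
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Placeholder model standing in for the lightweight client network."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(8, num_classes),
        )
    def forward(self, x):
        return self.net(x)

def local_update(model, data, targets, epochs=1):
    """One client's local training; returns its updated weights."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    """Average parameters element-wise across clients."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

global_model = TinyCNN()
clients = [(torch.randn(4, 3, 32, 32), torch.randint(0, 2, (4,))) for _ in range(3)]
client_states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(fed_avg(client_states))
print("federated round complete")
```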
Affiliation(s)
- Vikas Khullar: Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Prabhjot Kaur: Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Shubham Gargrish: Chitkara University Institute of Engineering Technology, Chitkara University, Rajpura, Punjab, India
- Anand Muni Mishra: Chandigarh Engineering College, Chandigarh Group of Colleges, Jhanjeri, Mohali, India
- Prabhishek Singh: School of Computer Science Engineering and Technology, Bennett University, Greater Noida, Uttar Pradesh, India
- Manoj Diwakar: CSE Department, Graphic Era Deemed to be University, Dehradun, Uttrakhand, India; Graphic Era Hill University, Dehradun, Uttrakhand, India
- Anchit Bijalwan: Faculty of Electrical and Computer Engineering, Arba Minch University, Arba Minch, Ethiopia
- Indrajeet Gupta: School of Computer Science and AI, SR University, Warangal, Telangana, India
6
Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Archives of Computational Methods in Engineering 2025. DOI: 10.1007/s11831-024-10219-y.
7
Natha P, Tera SP, Chinthaginjala R, Rab SO, Narasimhulu CV, Kim TH. Boosting skin cancer diagnosis accuracy with ensemble approach. Sci Rep 2025; 15:1290. PMID: 39779772; PMCID: PMC11711234; DOI: 10.1038/s41598-024-84864-5.
Abstract
Skin cancer is common and deadly; hence, a correct diagnosis at an early stage is essential. Effective therapy depends on precise classification of the several skin cancer forms, each with special traits. Because dermoscopy and other sophisticated imaging methods produce detailed lesion images, early detection has been enhanced. It is still difficult, though, to analyze the images to differentiate benign from malignant tumors. Better predictive modeling methods are needed, since the diagnostic procedures used now frequently produce inaccurate and inconsistent results. In dermatology, machine learning (ML) models are becoming essential for the automatic detection and classification of skin cancer lesions from image data. With an ensemble model, which mixes several ML approaches to make use of their advantages and lessen their disadvantages, this work seeks to improve skin cancer predictions. We introduce a new method, the Max Voting method, for optimization of skin cancer classification. On the HAM10000 and ISIC 2018 datasets, we trained and assessed three distinct ML models: Random Forest (RF), Multi-layer Perceptron Neural Network (MLPN), and Support Vector Machine (SVM). Overall performance was increased by the combined predictions made with the Max Voting technique. Moreover, feature vectors that were optimally produced from the image data by a Genetic Algorithm (GA) were given to the ML models. We demonstrate that the Max Voting method greatly improves predictive performance, reaching an accuracy of 94.70% and producing the best results for F1-measure, recall, and precision. Max Voting turned out to be the most dependable and robust approach, combining the benefits of numerous pre-trained ML models to provide a new and efficient method for classifying skin cancer lesions.
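The Max Voting idea (a majority vote across the RF, MLPN, and SVM predictions) corresponds to scikit-learn's hard-voting ensemble. The sketch below uses synthetic feature vectors and untuned hyperparameters instead of the paper's GA-optimized features, so it is illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for GA-selected feature vectors from dermoscopic images.
X, y = make_classification(n_samples=600, n_features=40, n_classes=3,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Hard voting = "max voting": each model casts one vote, the majority class wins.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print("majority-vote accuracy:", ensemble.score(X_test, y_test))
```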
Affiliation(s)
- Priya Natha: Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, 522302, India
- Sivarama Prasad Tera: Department of Electronics and Electrical Engineering, Indian Institute of Technology, Guwahati, Assam, 781039, India
- Ravikumar Chinthaginjala: School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India
- Safia Obaidur Rab: Department of Clinical Laboratory Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia
- C Venkata Narasimhulu: Department of Electronics and Communication Engineering, Chaitanya Bharati Institute of Technology, Hyderabad, 500075, India
- Tae Hoon Kim: School of Information and Electronic Engineering and Zhejiang Key Laboratory of Biomedical Intelligent Computing Technology, Zhejiang University of Science and Technology, No. 318, Hangzhou, Zhejiang, China
8
Efat AH, Hasan SMM, Uddin MP, Emon FH. Inverse Gini indexed averaging: A multi-leveled ensemble approach for skin lesion classification using attention-integrated customized ResNet variants. Digit Health 2025; 11:20552076241312936. PMID: 39839960; PMCID: PMC11748089; DOI: 10.1177/20552076241312936.
Abstract
Objective: To improve the accuracy and explainability of skin lesion detection and classification, particularly for several types of skin cancers, through a novel approach based on convolutional neural networks with attention-integrated customized ResNet variants (CRVs) and an optimized ensemble learning (EL) strategy. Methods: Our approach utilizes all ResNet variants combined with three attention mechanisms: channel attention, soft attention, and squeeze-excitation attention. These attention-integrated ResNet variants are aggregated through a unique multi-level EL strategy. We propose an innovative weight optimization method, inverse Gini indexed averaging (IGIA), which is further extended to multi-leveled IGIA (ML-IGIA) to determine the optimal weights for each model within multiple ensemble levels. For interpretability, we employ gradient class activation mapping (Grad-CAM) to highlight the regions responsible for classification dominance, enhancing the model's transparency. Results: Our method was evaluated on the HAM10000 (Human Against Machine with 10000 training images) dataset, achieving a superior accuracy of 94.52% with the ML-IGIA approach, outperforming existing methods. Conclusions: The proposed CRV-based ensemble model with ML-IGIA demonstrates robust performance in skin lesion classification, offering both high accuracy and enhanced interpretability. This approach addresses the current research gap in effective weight optimization in EL and supports timely, automated skin disease detection.
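The weighting idea behind IGIA, giving more ensemble weight to models whose predictive distributions have lower Gini impurity, can be sketched as follows. The exact IGIA/ML-IGIA formulation is not reproduced here; this single-level NumPy example assumes weights proportional to the inverse of each model's mean Gini impurity, with random softmax outputs standing in for the attention-integrated ResNet variants.

```python
import numpy as np

def mean_gini_impurity(probs):
    """Mean Gini impurity of a model's per-sample class-probability vectors."""
    return float(np.mean(1.0 - np.sum(probs ** 2, axis=1)))

def inverse_gini_weights(model_probs):
    """Weights proportional to 1 / Gini impurity, normalized to sum to one (assumed scheme)."""
    inv = np.array([1.0 / (mean_gini_impurity(p) + 1e-12) for p in model_probs])
    return inv / inv.sum()

def weighted_ensemble(model_probs):
    w = inverse_gini_weights(model_probs)
    fused = sum(wi * p for wi, p in zip(w, model_probs))
    return np.argmax(fused, axis=1)

# Synthetic softmax outputs of three models on 5 samples over 7 classes.
rng = np.random.default_rng(0)
models = []
for _ in range(3):
    logits = rng.normal(size=(5, 7))
    models.append(np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True))

print("weights:", np.round(inverse_gini_weights(models), 3))
print("fused predictions:", weighted_ensemble(models))
```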
Affiliation(s)
- Anwar Hossain Efat: Computer Science and Engineering Department, IUBAT - International University of Business Agriculture and Technology, Dhaka, Bangladesh
- SM Mahedy Hasan: Computer Science and Engineering Department, Rajshahi University of Engineering & Technology, Kazla, Rajshahi, Bangladesh
- Md Palash Uddin: Computer Science and Engineering Department, Hajee Mohammad Danesh Science and Technology University, Dinajpur, Rangpur, Bangladesh
- Faysal Hossain Emon: Civil Engineering Department, Daffodil International University, Dhaka, Bangladesh
9
Hamim SA, Tamim MUI, Mridha MF, Safran M, Che D. SmartSkin-XAI: An Interpretable Deep Learning Approach for Enhanced Skin Cancer Diagnosis in Smart Healthcare. Diagnostics (Basel) 2024; 15:64. PMID: 39795592; PMCID: PMC11720047; DOI: 10.3390/diagnostics15010064.
Abstract
Background: Skin cancer, particularly melanoma, poses significant challenges due to the heterogeneity of skin images and the demand for accurate and interpretable diagnostic systems. Early detection and effective management are crucial for improving patient outcomes. Traditional AI models often struggle with balancing accuracy and interpretability, which are critical for clinical adoption. Methods: The SmartSkin-XAI methodology incorporates a fine-tuned DenseNet121 model combined with XAI techniques to interpret predictions. This approach improves early detection and patient management by offering a transparent decision-making process. The model was evaluated using two datasets: the ISIC dataset and the Kaggle dataset. Performance metrics such as classification accuracy, precision, recall, and F1 score were compared against benchmark models, including DenseNet121, InceptionV3, and ResNet50. Results: SmartSkin-XAI achieved a classification accuracy of 97% on the ISIC dataset and 98% on the Kaggle dataset. The model demonstrated high stability in precision, recall, and F1 score measures, outperforming the benchmark models. These results underscore the robustness and applicability of SmartSkin-XAI for real-world healthcare scenarios. Conclusions: SmartSkin-XAI addresses critical challenges in melanoma diagnosis by integrating state-of-the-art architecture with XAI methods, providing both accuracy and interpretability. This approach enhances clinical decision-making, fosters trust among healthcare professionals, and represents a significant advancement in incorporating AI-driven diagnostics into medicine, particularly for bedside applications.
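The fine-tuned DenseNet121 backbone at the core of this kind of pipeline can be set up with a few lines of torchvision code. This is only the generic transfer-learning scaffold implied by the abstract, not the authors' SmartSkin-XAI training code; freezing the feature extractor and the two-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet121 backbone.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Optionally freeze the convolutional features and train only the new head.
for param in model.features.parameters():
    param.requires_grad = False

# DenseNet121's classifier input size is 1024; replace it with a 2-class head.
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Toy forward pass on a dummy dermoscopy batch (3x224x224).
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 2])
```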
Affiliation(s)
- Sultanul Arifeen Hamim: Department of Computer Science, American International University-Bangladesh, Dhaka 1229, Bangladesh
- Mubasshar U. I. Tamim: Department of Computer Science, American International University-Bangladesh, Dhaka 1229, Bangladesh
- M. F. Mridha: Department of Computer Science, American International University-Bangladesh, Dhaka 1229, Bangladesh
- Mejdl Safran: Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Dunren Che: Department of Electrical Engineering and Computer Science, Texas A&M University-Kingsville, Kingsville, TX 78363, USA
10
Pacal I, Alaftekin M, Zengul FD. Enhancing Skin Cancer Diagnosis Using Swin Transformer with Hybrid Shifted Window-Based Multi-head Self-attention and SwiGLU-Based MLP. Journal of Imaging Informatics in Medicine 2024; 37:3174-3192. PMID: 38839675; PMCID: PMC11612041; DOI: 10.1007/s10278-024-01140-8.
Abstract
Skin cancer is one of the most frequently occurring cancers worldwide, and early detection is crucial for effective treatment. Dermatologists often face challenges such as heavy data demands, potential human errors, and strict time limits, which can negatively affect diagnostic outcomes. Deep learning-based diagnostic systems offer quick, accurate testing and enhanced research capabilities, providing significant support to dermatologists. In this study, we enhanced the Swin Transformer architecture by implementing the hybrid shifted window-based multi-head self-attention (HSW-MSA) in place of the conventional shifted window-based multi-head self-attention (SW-MSA). This adjustment enables the model to more efficiently process areas of skin cancer overlap, capture finer details, and manage long-range dependencies, while maintaining memory usage and computational efficiency during training. Additionally, the study replaces the standard multi-layer perceptron (MLP) in the Swin Transformer with a SwiGLU-based MLP, an upgraded version of the gated linear unit (GLU) module, to achieve higher accuracy, faster training speeds, and better parameter efficiency. The modified Swin-Base model was evaluated using the publicly accessible ISIC 2019 skin dataset with eight classes and was compared against popular convolutional neural networks (CNNs) and cutting-edge vision transformer (ViT) models. In an exhaustive assessment on the unseen test dataset, the proposed Swin-Base model demonstrated exceptional performance, achieving an accuracy of 89.36%, a recall of 85.13%, a precision of 88.22%, and an F1-score of 86.65%, surpassing all previously reported research and deep learning models documented in the literature.
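The SwiGLU-based MLP that replaces the Transformer's standard MLP can be written compactly in PyTorch. The block below follows the common SwiGLU formulation (a SiLU-gated linear unit between projections); the hidden-dimension ratio and token shapes are illustrative assumptions, and the code is not taken from the authors' modified Swin implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """SwiGLU feed-forward block: (SiLU(x W1) * x W2) W3, a gated alternative to the plain MLP."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden_dim)   # gate projection
        self.w2 = nn.Linear(dim, hidden_dim)   # value projection
        self.w3 = nn.Linear(hidden_dim, dim)   # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w3(F.silu(self.w1(x)) * self.w2(x))

# Toy usage: a batch of 2 sequences, 49 tokens, embedding dim 96 (a Swin-like stage width).
x = torch.randn(2, 49, 96)
mlp = SwiGLUMLP(dim=96, hidden_dim=256)
print(mlp(x).shape)  # torch.Size([2, 49, 96])
```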
Affiliation(s)
- Ishak Pacal: Department of Computer Engineering, Igdir University, 76000, Igdir, Turkey
- Melek Alaftekin: Department of Computer Engineering, Igdir University, 76000, Igdir, Turkey
- Ferhat Devrim Zengul: Department of Health Services Administration, The University of Alabama at Birmingham, Birmingham, AL, USA; Center for Integrated System, School of Engineering, The University of Alabama at Birmingham, Birmingham, AL, USA; Department of Biomedical Informatics and Data Science, School of Medicine, The University of Alabama, Birmingham, USA
11
Efat AH, Hasan SMM, Uddin MP, Mamun MA. A Multi-level ensemble approach for skin lesion classification using Customized Transfer Learning with Triple Attention. PLoS One 2024; 19:e0309430. PMID: 39446759; PMCID: PMC11500880; DOI: 10.1371/journal.pone.0309430.
Abstract
Skin lesions encompass a variety of skin abnormalities, including skin diseases that affect structure and function, and skin cancer, which can be fatal and arise from abnormal cell growth. Early detection of lesions and automated prediction is crucial, yet accurately identifying responsible regions post-dominance dispersion remains a challenge in current studies. Thus, we propose a Convolutional Neural Network (CNN)-based approach employing a Customized Transfer Learning (CTL) model and Triple Attention (TA) modules in conjunction with Ensemble Learning (EL). While Ensemble Learning has become an integral component of both Machine Learning (ML) and Deep Learning (DL) methodologies, a specific technique ensuring optimal allocation of weights for each model's prediction is currently lacking. Consequently, the primary objective of this study is to introduce a novel method for determining optimal weights to aggregate the contributions of models for achieving desired outcomes. We term this approach "Information Gain Proportioned Averaging (IGPA)," further refining it to "Multi-Level Information Gain Proportioned Averaging (ML-IGPA)," which specifically involves the utilization of IGPA at multiple levels. Empirical evaluation on the HAM10000 dataset demonstrates that our approach achieves 94.93% accuracy with ML-IGPA, surpassing state-of-the-art methods. Given previous studies' failure to elucidate the exact focus of black-box models on specific regions, we utilize the Gradient Class Activation Map (GradCAM) to identify responsible regions and enhance explainability. Our study enhances both accuracy and interpretability, facilitating early diagnosis and preventing the consequences of neglecting skin lesion detection, thereby addressing issues related to time, accessibility, and costs.
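The IGPA idea, proportioning each model's ensemble weight to how informative its predictions are about the true labels, can be illustrated with mutual information on a held-out set. This single-level sketch is a simplification of ML-IGPA built on assumptions: it uses scikit-learn's `mutual_info_score` as the information-gain measure and synthetic model predictions in place of the customized transfer-learning models.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)
n_val, n_classes = 200, 7
y_val = rng.integers(0, n_classes, size=n_val)

# Synthetic validation predictions from three models of varying quality.
def noisy_predictions(y, flip_prob):
    pred = y.copy()
    flip = rng.random(len(y)) < flip_prob
    pred[flip] = rng.integers(0, n_classes, size=flip.sum())
    return pred

model_preds = [noisy_predictions(y_val, p) for p in (0.1, 0.3, 0.6)]

# Information-gain-proportioned weights (assumed scheme): better models earn larger weights.
gains = np.array([mutual_info_score(y_val, pred) for pred in model_preds])
weights = gains / gains.sum()
print("weights:", np.round(weights, 3))

# Fuse per-class votes with the learned weights for one new sample's predictions.
sample_preds = np.array([2, 2, 5])  # each model's predicted class for one test image
votes = np.zeros(n_classes)
for w, c in zip(weights, sample_preds):
    votes[c] += w
print("weighted-vote class:", int(votes.argmax()))
```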
Affiliation(s)
- Anwar Hossain Efat: Department of Computer Science and Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- S. M. Mahedy Hasan: Department of Computer Science and Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Md. Palash Uddin: Department of Computer Science and Engineering, Hajee Mohammad Danesh Science and Technology University, Dinajpur, Bangladesh
- Md. Al Mamun: Department of Computer Science and Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
12
Mateen M, Hayat S, Arshad F, Gu YH, Al-antari MA. Hybrid Deep Learning Framework for Melanoma Diagnosis Using Dermoscopic Medical Images. Diagnostics (Basel) 2024; 14:2242. PMID: 39410645; PMCID: PMC11476274; DOI: 10.3390/diagnostics14192242.
Abstract
Background: Melanoma is a dangerous form of skin cancer that is a major cause of the demise of thousands of people around the world. Methods: In recent years, deep learning has become more popular for analyzing and detecting these medical issues. In this paper, a hybrid deep learning approach has been proposed based on U-Net for image segmentation, Inception-ResNet-v2 for feature extraction, and the Vision Transformer model with a self-attention mechanism for refining the features for early and accurate diagnosis and classification of skin cancer. Furthermore, in the proposed approach, hyperparameter tuning helps to obtain more accurate and optimized results for image classification. Results: Dermoscopic images gathered from the International Skin Imaging Collaboration (ISIC 2020) challenge dataset are used in the proposed research work, achieving 98.65% accuracy, 99.20% sensitivity, and 98.03% specificity, which outperforms the other existing approaches for skin cancer classification. Furthermore, the HAM10000 dataset is used for ablation studies to compare and validate the performance of the proposed approach. Conclusions: The achieved outcome suggests that the proposed approach would be able to serve as a valuable tool for assisting dermatologists in the early detection of melanoma.
Affiliation(s)
- Muhammad Mateen: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Shaukat Hayat: Department of Software Engineering, International Islamic University, Islamabad 44000, Pakistan
- Fizzah Arshad: Department of Computer Science, Air University Multan Campus, Multan 61000, Pakistan
- Yeong-Hyeon Gu: Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari: Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
13
Saghir U, Singh SK, Hasan M. Skin Cancer Image Segmentation Based on Midpoint Analysis Approach. Journal of Imaging Informatics in Medicine 2024; 37:2581-2596. PMID: 38627267; PMCID: PMC11522265; DOI: 10.1007/s10278-024-01106-w.
Abstract
Skin cancer affects people of all ages and is a common disease. The death toll from skin cancer rises with a late diagnosis. An automated mechanism for early-stage skin cancer detection is required to diminish the mortality rate. Visual examination with scanning or imaging screening is a common mechanism for detecting this disease, but because of its similarity to other skin conditions, this mechanism shows limited accuracy. This article introduces an innovative segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections. The main objective of the research is to segment lesions from dermoscopic skin images. The suggested framework is completed in two steps. The first step is to pre-process the image; for this, a bottom-hat filter is applied for hair removal, and the image is enhanced by applying DCT and a color coefficient. In the next phase, a background subtraction method with midpoint analysis is applied for segmentation to extract the region of interest, achieving an accuracy of 95.30%. The segmentation is validated by comparing the segmented images with the ground-truth validation data provided with the ISIC dataset.
Affiliation(s)
- Uzma Saghir: Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Shailendra Kumar Singh: Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Moin Hasan: Dept. of Computer Science & Engineering, Jain Deemed-to-be-University, Bengaluru, 562112, India
14
Medeiros da Silva F, Pena Modesto R, Cávoli Lira MC, Libanio Reis Santos E, Oliveira-Lima JD. Effects of benzophenone-3 on the liver and thyroid of adult zebrafish. Xenobiotica 2024; 54:840-846. PMID: 39535153; DOI: 10.1080/00498254.2024.2429724.
Abstract
Benzophenone-3 (BP-3), commonly known as oxybenzone, is an organic compound that acts as a sunscreen, protecting the skin from UVA and UVB rays. Thus, the objective of this study was to investigate the effects of BP-3 on the liver and thyroid using morphological and biochemical approaches. Adult male zebrafish were randomly assigned to three groups, each with three repetitions (n = 10 per group): water control, solvent control (0.01% ethanol), and 1 μg/L of BP-3, using a static exposure system for 96 h. After the experiment, histopathological analyses of the liver and thyroid were performed, along with histochemical analyses (glycogen) and biochemical evaluations of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT). Exposure to BP-3 resulted in significant histopathological changes in the liver of Danio rerio, increasing the frequency of circulatory disturbances, progressive changes, inflammatory responses, and regressive changes. On the other hand, the thyroid gland did not show any morphological changes during exposure to BP-3, maintaining its typical structure with follicles. There was a significant increase in SOD activity, while CAT showed no changes after 96 h of exposure. The results obtained demonstrate that exposure to BP-3 causes significant morphophysiological changes in the liver of D. rerio, highlighting not only the negative impacts on the health of these organisms but also the ecotoxicological potential of the substance and its consequences for aquatic biota in contaminated environments.
Affiliation(s)
- Renan Pena Modesto: Faculty of Medicine of Universidade de Gurupi (UnirG), Rua Pará, Paraíso do Tocantins, Tocantins, Brazil
- Eduardo Libanio Reis Santos: Department of General and Applied Biology, Institute of Biosciences of Universidade Estadual Paulista 'Júlio de Mesquita Filho' (Unesp), São Paulo, Brazil
- Jeffesson de Oliveira-Lima: Faculty of Medicine of Universidade de Gurupi (UnirG), Rua Pará, Paraíso do Tocantins, Tocantins, Brazil
15
Sriraman H, Badarudeen S, Vats S, Balasubramanian P. A Systematic Review of Real-Time Deep Learning Methods for Image-Based Cancer Diagnostics. J Multidiscip Healthc 2024; 17:4411-4425. PMID: 39281299; PMCID: PMC11397255; DOI: 10.2147/jmdh.s446745.
Abstract
Deep Learning (DL) drives academics to create models for cancer diagnosis using medical image processing because of its innate ability to recognize difficult-to-detect patterns in complex, noisy, and massive data. The use of deep learning algorithms for real-time cancer diagnosis is explored in depth in this work. Real-time medical diagnosis determines the illness or condition that accounts for a patient's symptoms and outward physical manifestations within a predetermined time frame. With a waiting period of anywhere between 5 days and 30 days, there are currently several ways, including screening tests, biopsies, and other prospective methods, that can assist in discovering a problem, particularly cancer. This article conducts a thorough literature review to understand how DL affects the length of this waiting period. In addition, the accuracy and turnaround time of different imaging modalities are evaluated for DL-based cancer diagnosis. Convolutional neural networks are critical for real-time cancer diagnosis, with models achieving up to 99.3% accuracy. The effectiveness and cost of the infrastructure required for real-time image-based medical diagnostics are evaluated. According to the review, generalization problems, data variability, and explainable DL are some of the most significant barriers to using DL in clinical trials. Explainable DL will be key to making DL applicable for cancer diagnosis.
Affiliation(s)
- Harini Sriraman: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Saleena Badarudeen: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Saransh Vats: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
- Prakash Balasubramanian: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India
16
Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. PMID: 38925085; DOI: 10.1016/j.compbiomed.2024.108798.
Abstract
Skin cancer (SC) significantly impacts many individuals' health all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNN), can effectively address this issue with outstanding outcomes. Nevertheless, such black box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions that were made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD", which is utilized for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features out of a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies the principal component analysis (PCA) dimensionality reduction approach to minimise the dimensions of pooling layer features. This also reduces the complexity of the training procedure compared to using deep features from a CNN that has a substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of entirely depending on the features of a single CNN architecture. In the end, it utilizes a feature selection step to determine the most important deep attributes. This helps to decrease the general size of the feature set and streamline the classification process. Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets are employed to validate the efficiency of Skin-CAD, which are the Skin Cancer: Malignant vs. Benign and HAM10000 datasets. The maximum accuracy achieved using Skin-CAD is 97.2% and 96.5% for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets respectively. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
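The dual-layer feature strategy described above, harvesting both pooled and fully connected CNN features, compressing the pooled features with PCA, then fusing them, can be sketched with PyTorch forward hooks. This simplified single-backbone example uses ResNet18, eight PCA components, and a random batch as placeholders; it is not the four-CNN Skin-CAD system itself.

```python
import numpy as np
import torch
from sklearn.decomposition import PCA
from torchvision import models

model = models.resnet18(weights=None).eval()
captured = {}

# Capture the global-average-pooling output (the "pooling layer" features).
def pool_hook(module, inputs, output):
    captured["pool"] = torch.flatten(output, 1).detach()

model.avgpool.register_forward_hook(pool_hook)

# Dummy batch standing in for preprocessed dermoscopic images.
with torch.no_grad():
    fc_features = model(torch.randn(16, 3, 224, 224))  # fully connected layer output

pool_features = captured["pool"].numpy()   # shape (16, 512)
fc_features = fc_features.numpy()          # shape (16, 1000)

# Reduce the pooling features with PCA, then fuse with the FC features.
pca = PCA(n_components=8)
pool_reduced = pca.fit_transform(pool_features)
fused = np.concatenate([pool_reduced, fc_features], axis=1)
print(fused.shape)  # (16, 1008) -> would then go to feature selection and a classifier
```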
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
17
Xu S, Peng H, Yang L, Zhong W, Gao X, Song J. An Automatic Grading System for Orthodontically Induced External Root Resorption Based on Deep Convolutional Neural Network. Journal of Imaging Informatics in Medicine 2024; 37:1800-1811. PMID: 38393620; PMCID: PMC11300848; DOI: 10.1007/s10278-024-01045-6.
Abstract
Orthodontically induced external root resorption (OIERR) is a common complication of orthodontic treatments. Accurate OIERR grading is crucial for clinical intervention. This study aimed to evaluate six deep convolutional neural networks (CNNs) for performing OIERR grading on tooth slices to construct an automatic grading system for OIERR. A total of 2146 tooth slices of different OIERR grades were collected and preprocessed. Six pre-trained CNNs (EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, and MobileNet-V3) were trained and validated on the pre-processed images based on four different cross-validation methods. The performances of the CNNs on a test set were evaluated and compared with those of orthodontists. The gradient-weighted class activation mapping (Grad-CAM) technique was used to explore the area of maximum impact on the model decisions in the tooth slices. The six CNN models performed remarkably well in OIERR grading, with a mean accuracy of 0.92, surpassing that of the orthodontists (mean accuracy of 0.82). EfficientNet-B4 trained with fivefold cross-validation emerged as the final OIERR grading system, with a high accuracy of 0.94. Grad-CAM revealed that the apical region had the greatest effect on the OIERR grading system. The six CNNs demonstrated excellent OIERR grading and outperformed orthodontists. The proposed OIERR grading system holds potential as a reliable diagnostic support for orthodontists in clinical practice.
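The Grad-CAM step used to localize the image regions driving the grading decision can be reproduced with a short hook-based routine. The sketch below uses an untrained torchvision EfficientNet-B4 and a random input as placeholders; only the Grad-CAM mechanics (weighting the last convolutional feature map by its pooled gradients) follow the technique named above.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.efficientnet_b4(weights=None).eval()
store = {}

target_layer = model.features[-1]  # last convolutional block
target_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
target_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.randn(1, 3, 380, 380, requires_grad=True)  # dummy tooth-slice image
logits = model(x)
cls = int(logits.argmax(dim=1))
model.zero_grad()
logits[0, cls].backward()

# Grad-CAM: weight channels by their average gradient, then ReLU and normalize.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # (1, 1, 380, 380) heat map over the input
```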
Affiliation(s)
- Shuxi Xu: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
- Houli Peng: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
- Lanxin Yang: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
- Wenjie Zhong: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
- Xiang Gao: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
- Jinlin Song: College of Stomatology, Chongqing Medical University, Chongqing, 401147, China; Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, 401147, China; Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, 401147, China
18
Suleiman TA, Anyimadu DT, Permana AD, Ngim HAA, Scotto di Freca A. Two-step hierarchical binary classification of cancerous skin lesions using transfer learning and the random forest algorithm. Vis Comput Ind Biomed Art 2024; 7:15. PMID: 38884841; PMCID: PMC11183002; DOI: 10.1186/s42492-024-00166-7.
Abstract
Skin lesion classification plays a crucial role in the early detection and diagnosis of various skin conditions. Recent advances in computer-aided diagnostic techniques have been instrumental in timely intervention, thereby improving patient outcomes, particularly in rural communities lacking specialized expertise. Despite the widespread adoption of convolutional neural networks (CNNs) in skin disease detection, their effectiveness has been hindered by the limited size and data imbalance of publicly accessible skin lesion datasets. In this context, a two-step hierarchical binary classification approach is proposed utilizing hybrid machine and deep learning (DL) techniques. Experiments conducted on the International Skin Imaging Collaboration (ISIC 2017) dataset demonstrate the effectiveness of the hierarchical approach in handling large class imbalances. Specifically, employing DenseNet121 (DNET) as a feature extractor and random forest (RF) as a classifier yielded the most promising results, achieving a balanced multiclass accuracy (BMA) of 91.07% compared to the pure deep-learning model (end-to-end DNET) with a BMA of 88.66%. The RF ensemble exhibited significantly greater efficiency than other machine-learning classifiers in aiding DL to address the challenge of learning with limited data. Furthermore, the implemented predictive hybrid hierarchical model demonstrated enhanced performance while significantly reducing computational time, indicating its potential efficiency in real-world applications for the classification of skin lesions.
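The hybrid step, a frozen DenseNet121 (DNET) feature extractor feeding a random forest classifier, can be sketched as follows. The random images and binary labels are synthetic placeholders for one level of the two-step hierarchy; this is a generic illustration rather than the authors' trained pipeline.

```python
import torch
import torch.nn.functional as F
from sklearn.ensemble import RandomForestClassifier
from torchvision import models

# Frozen DenseNet121 backbone used purely as a feature extractor.
backbone = models.densenet121(weights=None).eval()

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """Return 1024-D pooled features from DenseNet121's final feature map."""
    with torch.no_grad():
        fmap = backbone.features(images)
        pooled = F.adaptive_avg_pool2d(F.relu(fmap), 1)
        return torch.flatten(pooled, 1)

# Synthetic stand-ins for one binary level of the hierarchy (e.g., lesion class A vs the rest).
images = torch.randn(40, 3, 224, 224)
labels = torch.randint(0, 2, (40,))

features = extract_features(images).numpy()
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(features, labels.numpy())
print("training accuracy:", rf.score(features, labels.numpy()))
```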
Affiliation(s)
- Taofik Ahmed Suleiman: Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, 03043, Italy
- Daniel Tweneboah Anyimadu: Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, 03043, Italy
- Andrew Dwi Permana: Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, 03043, Italy
- Hsham Abdalgny Abdalwhab Ngim: Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, 03043, Italy
- Alessandra Scotto di Freca: Department of Electrical and Information Engineering, University of Cassino and Southern Lazio, Cassino, 03043, Italy
19
Metta C, Beretta A, Pellungrini R, Rinzivillo S, Giannotti F. Towards Transparent Healthcare: Advancing Local Explanation Methods in Explainable Artificial Intelligence. Bioengineering (Basel) 2024; 11:369. PMID: 38671790; PMCID: PMC11048122; DOI: 10.3390/bioengineering11040369.
Abstract
This paper focuses on the use of local Explainable Artificial Intelligence (XAI) methods, particularly the Local Rule-Based Explanations (LORE) technique, within healthcare and medical settings. It emphasizes the critical role of interpretability and transparency in AI systems for diagnosing diseases, predicting patient outcomes, and creating personalized treatment plans. While acknowledging the complexities and inherent trade-offs between interpretability and model performance, our work underscores the significance of local XAI methods in enhancing decision-making processes in healthcare. By providing granular, case-specific insights, local XAI methods like LORE enhance physicians' and patients' understanding of machine learning models and their outcome. Our paper reviews significant contributions to local XAI in healthcare, highlighting its potential to improve clinical decision making, ensure fairness, and comply with regulatory standards.
Affiliation(s)
- Carlo Metta: Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Andrea Beretta: Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Roberto Pellungrini: Faculty of Sciences, Scuola Normale Superiore, P.za dei Cavalieri 7, 56126 Pisa, Italy
- Salvatore Rinzivillo: Institute of Information Science and Technologies (ISTI-CNR), Via Moruzzi 1, 56127 Pisa, Italy
- Fosca Giannotti: Faculty of Sciences, Scuola Normale Superiore, P.za dei Cavalieri 7, 56126 Pisa, Italy
20
Metta C, Beretta A, Guidotti R, Yin Y, Gallinari P, Rinzivillo S, Giannotti F. Advancing Dermatological Diagnostics: Interpretable AI for Enhanced Skin Lesion Classification. Diagnostics (Basel) 2024; 14:753. PMID: 38611666; PMCID: PMC11011805; DOI: 10.3390/diagnostics14070753.
Abstract
A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model's ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model's latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.
Affiliation(s)
- Carlo Metta: Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Andrea Beretta: Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Riccardo Guidotti: Department of Computer Science, Universitá di Pisa, 56124 Pisa, Italy
- Yuan Yin: Laboratoire d'Informatique de Paris 6, Sorbonne Université, 75005 Paris, France
- Patrick Gallinari: Laboratoire d'Informatique de Paris 6, Sorbonne Université, 75005 Paris, France
- Salvatore Rinzivillo: Institute of Information Science and Technologies (ISTI-CNR), 56124 Pisa, Italy
- Fosca Giannotti: Faculty of Sciences, Scuola Normale Superiore di Pisa, 56126 Pisa, Italy
21
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. PMID: 38446274; DOI: 10.1007/s00403-024-02828-1.
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods to skin cancer diagnostics. Skin cancer is one of the most common types of cancer; it is estimated that in the USA alone, one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both the identification and prevention of skin cancer. Early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. The paper also contains information on the most common skin cancer types, as well as mortality and epidemiological data for Poland, Europe, Canada and the USA. It covers the most efficient and modern image recognition methods based on artificial intelligence that are currently applied for diagnostic purposes, presenting both professional, sophisticated solutions and inexpensive ones. As a review, it covers solutions and statistics from 2017 to 2022; the authors decided to focus on the latest data, mostly due to rapid technological development and the growing number of new methods, which positively affects diagnosis and prognosis.
Collapse
Affiliation(s)
- Maria Myslicka
- Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland.
| | - Aleksandra Kawala-Sterniuk
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
| | - Anna Bryniarska
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Adam Sudol
- Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland
| | - Michal Podpora
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Rafal Gasz
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
| | - Radek Martinek
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Radana Kahankova Vilimkova
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
| | - Mariusz Pelc
- Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland
- School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
| | - Dariusz Mikolajewski
- Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland
- Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland
| |
Collapse
|
22
|
Desale RP, Patil PS. An efficient multi-class classification of skin cancer using optimized vision transformer. Med Biol Eng Comput 2024; 62:773-789. [PMID: 37996627 DOI: 10.1007/s11517-023-02969-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Accepted: 11/07/2023] [Indexed: 11/25/2023]
Abstract
Skin cancer is a pervasive and deadly disease, prompting a surge in research efforts towards utilizing computer-based techniques to analyze skin lesion images and identify malignancies. This paper introduces an optimized vision transformer approach for effectively classifying skin tumors. The methodology begins with a pre-processing step aimed at preserving color constancy, eliminating hair artifacts, and reducing image noise; a combination of techniques such as piecewise linear bottom-hat filtering, adaptive median filtering, Gaussian filtering, and an enhanced gradient intensity method is used here. Afterwards, the segmentation phase is initiated using the self-sparse watershed algorithm on the pre-processed image. Subsequently, the segmented image is passed through a feature extraction stage where the hybrid Walsh-Hadamard Karhunen-Loeve expansion technique is employed. The final step involves the application of an improved vision transformer for skin cancer classification. The entire methodology is implemented in the Python programming language, and the International Skin Imaging Collaboration (ISIC) 2019 database is utilized for experimentation. The experimental results demonstrate remarkable performance across the different metrics: accuracy 99.81%, precision 96.65%, sensitivity 98.21%, F-measure 97.42%, specificity 99.88%, recall 98.21%, Jaccard coefficient 98.54%, and Matthews correlation coefficient (MCC) 98.89%. The proposed methodology outperforms existing methods.
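As a rough illustration of the pre-processing stage described above, the sketch below chains hair-artifact removal, median filtering and Gaussian smoothing with OpenCV. It uses an ordinary black-hat transform and a plain median filter as stand-ins for the paper's piecewise linear bottom-hat and adaptive median filters, so the kernel sizes, thresholds and file path are assumptions rather than the published settings; the watershed segmentation and vision-transformer stages are not shown.

```python
import cv2

def preprocess_lesion(path):
    """Rough pre-processing: black-hat hair removal with inpainting,
    median filtering and Gaussian smoothing (stand-ins for the paper's
    piecewise linear bottom-hat and adaptive median filters)."""
    img = cv2.imread(path)                                          # BGR dermoscopy image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)   # dark hair strands
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    no_hair = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)     # fill hair pixels
    denoised = cv2.medianBlur(no_hair, 5)                           # noise reduction
    return cv2.GaussianBlur(denoised, (5, 5), 0)                    # final smoothing

clean = preprocess_lesion("ISIC_0000000.jpg")   # hypothetical ISIC 2019 image path
```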
Collapse
Affiliation(s)
- R P Desale
- E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India.
| | - P S Patil
- E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India
| |
Collapse
|
23
|
Kumar Lilhore U, Simaiya S, Sharma YK, Kaswan KS, Rao KBVB, Rao VVRM, Baliyan A, Bijalwan A, Alroobaea R. A precise model for skin cancer diagnosis using hybrid U-Net and improved MobileNet-V3 with hyperparameters optimization. Sci Rep 2024; 14:4299. [PMID: 38383520 PMCID: PMC10881962 DOI: 10.1038/s41598-024-54212-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 02/09/2024] [Indexed: 02/23/2024] Open
Abstract
Skin cancer is a frequently occurring and potentially deadly disease that necessitates prompt and precise diagnosis to ensure efficacious treatment. This paper introduces an innovative approach for accurately identifying skin cancer by utilizing a Convolutional Neural Network architecture and optimizing hyperparameters. The proposed approach aims to increase the precision and efficacy of skin cancer recognition and consequently enhance patients' experiences. This investigation tackles several significant challenges in skin cancer recognition, encompassing feature extraction, model architecture design, and hyperparameter optimization. The proposed model utilizes advanced deep-learning methodologies to extract complex features and patterns from skin cancer images. We enhance the learning procedure by integrating a standard U-Net and an improved MobileNet-V3 with optimization techniques, allowing the model to differentiate malignant and benign skin cancers. We also substituted the cross-entropy loss function of the MobileNet-V3 framework with a bias loss function to enhance accuracy. The model's squeeze-and-excitation component was replaced with a practical channel attention component to achieve parameter reduction. Cross-layer connections among Mobile modules were introduced to leverage synthetic features effectively, and dilated convolutions were incorporated into the model to enlarge the receptive field. The optimization of hyperparameters is of utmost importance in improving the efficiency of deep learning models; to fine-tune the model's hyperparameters, we employ Bayesian optimization using the pre-trained CNN architecture MobileNet-V3. The proposed model is compared with existing models, i.e., MobileNet, VGG-16, MobileNet-V2, Resnet-152v2 and VGG-19, on the HAM-10000 Melanoma Skin Cancer dataset. The empirical findings illustrate that the proposed optimized hybrid MobileNet-V3 model outperforms existing skin cancer detection and segmentation techniques, with a precision of 97.84%, sensitivity of 96.35%, accuracy of 98.86% and specificity of 97.32%. The enhanced performance of this research could result in timelier and more precise diagnoses, potentially contributing to life-saving outcomes and mitigating healthcare expenditures.
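The Bayesian hyperparameter search described above can be sketched with Optuna and a torchvision MobileNetV3 backbone. This is only an assumed, minimal setup: the search space (learning rate and dropout), the placeholder tensors standing in for HAM-10000 batches, and the tiny number of trials are illustrative, and the U-Net segmentation branch, bias loss and channel-attention modifications from the paper are omitted.

```python
import optuna
import torch
import torch.nn as nn
from torchvision import models

def build_model(dropout, num_classes=2):
    """MobileNetV3 backbone with a fresh classification head; a stand-in
    for the paper's hybrid U-Net + improved MobileNet-V3 model."""
    net = models.mobilenet_v3_large(weights=None)
    in_feats = net.classifier[-1].in_features
    net.classifier[-1] = nn.Sequential(nn.Dropout(dropout),
                                       nn.Linear(in_feats, num_classes))
    return net

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    model = build_model(dropout)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(8, 3, 224, 224)            # placeholder image batch
    y = torch.randint(0, 2, (8,))              # placeholder benign/malignant labels
    for _ in range(3):                         # a few dummy training steps
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()                         # real code would return validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=5)
print(study.best_params)
```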
Collapse
Affiliation(s)
- Umesh Kumar Lilhore
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
| | - Sarita Simaiya
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
| | - Yogesh Kumar Sharma
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfield, Vaddeswaram, Guntur, AP, India
| | - Kuldeep Singh Kaswan
- School of Computing Science and Engineering, Galgotias University, Greater Noida, Uttar Pradesh, India
| | - K B V Brahma Rao
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfield, Vaddeswaram, Guntur, AP, India
| | - V V R Maheswara Rao
- Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), Bhimavaram, India
| | - Anupam Baliyan
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
| | | | - Roobaea Alroobaea
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, 21944, Taif, Saudi Arabia
| |
Collapse
|
24
|
Sama NU, Zen K, Jhanjhi NZ, Humayun M. Computational Intelligence Ethical Issues in Health Care. STUDIES IN COMPUTATIONAL INTELLIGENCE 2024:349-362. [DOI: 10.1007/978-981-99-8853-2_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/03/2025]
|
25
|
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. [PMID: 38201399 PMCID: PMC10795598 DOI: 10.3390/diagnostics14010089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2023] [Revised: 12/21/2023] [Accepted: 12/22/2023] [Indexed: 01/12/2024] Open
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway for further enhancing diagnostic accuracy. This study introduces an approach employing the Max Voting Ensemble Technique for robust skin cancer classification on the ISIC 2018: Task 1-2 dataset. We incorporate a range of state-of-the-art pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages the synergistic capabilities of these models by combining their complementary features to further elevate classification performance. In our approach, input images undergo preprocessing for model compatibility. The ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction. These are subsequently aggregated using the max voting ensemble technique to yield the final classification, with the majority-voted class serving as the conclusive prediction. Through comprehensive testing on a diverse dataset, our ensemble outperformed individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, thus demonstrating superior diagnostic reliability and accuracy. We evaluated the effectiveness of our proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
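Max (hard) voting itself is straightforward to express: each model casts one vote per image, namely its argmax class, and the most-voted class becomes the final prediction. The sketch below assumes the per-model softmax outputs are already stacked into one array; the toy data and class count are placeholders.

```python
import numpy as np

def max_voting(per_model_probs):
    """Hard max-voting over models. `per_model_probs` has shape
    (n_models, n_images, n_classes); returns one class index per image."""
    votes = np.argmax(per_model_probs, axis=-1)              # (n_models, n_images)
    n_classes = per_model_probs.shape[-1]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)  # (n_classes, n_images)
    return counts.argmax(axis=0)                             # majority-voted class per image

# Toy example: 3 models, 4 images, 2 classes (benign / malignant).
probs = np.random.default_rng(1).random((3, 4, 2))
print(max_voting(probs))
```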
Collapse
Affiliation(s)
- Md. Mamun Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Md. Moazzem Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Most. Binoee Arefin
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Fahima Akhtar
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - John Blake
- School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
| |
Collapse
|
26
|
Azeem M, Kiani K, Mansouri T, Topping N. SkinLesNet: Classification of Skin Lesions and Detection of Melanoma Cancer Using a Novel Multi-Layer Deep Convolutional Neural Network. Cancers (Basel) 2023; 16:108. [PMID: 38201535 PMCID: PMC10778045 DOI: 10.3390/cancers16010108] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2023] [Revised: 12/20/2023] [Accepted: 12/22/2023] [Indexed: 01/12/2024] Open
Abstract
Skin cancer is a widespread disease that typically develops on the skin due to frequent exposure to sunlight. Although cancer can appear on any part of the human body, skin cancer accounts for a significant proportion of all new cancer diagnoses worldwide. There are substantial obstacles to the precise diagnosis and classification of skin lesions because of morphological variety and indistinguishable characteristics across skin malignancies. Recently, deep learning models have been used in the field of image-based skin-lesion diagnosis and have demonstrated diagnostic efficiency on par with that of dermatologists. To increase classification efficiency and accuracy for skin lesions, a cutting-edge multi-layer deep convolutional neural network termed SkinLesNet was built in this study. The dataset used in this study was extracted from the PAD-UFES-20 dataset and was augmented. The PAD-UFES-20-Modified dataset includes three common forms of skin lesions: seborrheic keratosis, nevus, and melanoma. To comprehensively assess SkinLesNet's performance, its evaluation was expanded beyond the PAD-UFES-20-Modified dataset. Two additional datasets, HAM10000 and ISIC2017, were included, and SkinLesNet was compared to the widely used ResNet50 and VGG16 models. This broader evaluation confirmed SkinLesNet's effectiveness, as it consistently outperformed both benchmarks across all datasets.
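The published SkinLesNet layer configuration is not reproduced in the abstract, so the sketch below only illustrates the general shape of a small multi-layer CNN for the three PAD-UFES-20-Modified classes (seborrheic keratosis, nevus, melanoma); every layer size and hyperparameter here is an assumption for illustration, not the authors' architecture.

```python
from tensorflow.keras import layers, models

def small_lesion_cnn(input_shape=(224, 224, 3), num_classes=3):
    """Illustrative multi-layer CNN for three skin-lesion classes."""
    m = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),
    ])
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m

model = small_lesion_cnn()
model.summary()
```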
Collapse
Affiliation(s)
- Muhammad Azeem
- School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK; (K.K.); (T.M.); (N.T.)
| | | | | | | |
Collapse
|
27
|
Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254 DOI: 10.1007/s00432-023-05216-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 07/26/2023] [Indexed: 08/05/2023]
Abstract
PURPOSE Millions of people lose their lives to several types of fatal diseases. Cancer is one of the most fatal, and may be linked to obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth inside the body that may spread to body parts other than where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment. Manual diagnosis and diagnostic errors may also cause the death of many patients; hence, much research is ongoing into the automatic and accurate detection of cancer at an early stage. METHODS In this paper, we have performed a comparative analysis of the diagnosis of, and recent advancements in the detection of, various cancer types using traditional machine learning (ML) and deep learning (DL) models. This study covers four types of cancer (brain, lung, skin, and breast) and their detection using ML and DL techniques. The extensive review includes a total of 130 pieces of literature, of which 56 describe ML-based and 74 describe DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent 5-year span (2018-2023) have been included, analysed against the parameters year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques for cancer detection separately and used accuracy as the performance evaluation metric to maintain homogeneity while verifying classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques have been presented. A comparative analysis between the best- and worst-performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various parameters, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed for the widespread implementation of these techniques in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Collapse
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
| | - Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
| |
Collapse
|
28
|
Bakasa W, Viriri S. VGG16 Feature Extractor with Extreme Gradient Boost Classifier for Pancreas Cancer Prediction. J Imaging 2023; 9:138. [PMID: 37504815 PMCID: PMC10381878 DOI: 10.3390/jimaging9070138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 06/19/2023] [Accepted: 07/04/2023] [Indexed: 07/29/2023] Open
Abstract
The prognosis of patients with pancreatic ductal adenocarcinoma (PDAC) is greatly improved by an early and accurate diagnosis. Several studies have created automated methods to forecast PDAC development utilising various medical imaging modalities; these papers give a general overview of the classification, segmentation, or grading of many cancer types, including pancreatic cancer, utilising conventional machine learning techniques and hand-engineered characteristics. This study uses cutting-edge deep learning techniques to identify PDAC from computerised tomography (CT) medical imaging. This work proposes the hybrid model VGG16-XGBoost (VGG16 backbone feature extractor and Extreme Gradient Boosting classifier) for PDAC images. According to the experiments, the proposed hybrid model performs better, obtaining an accuracy of 0.97 and a weighted F1 score of 0.97 for the dataset under study. The experimental validation of the VGG16-XGBoost model uses The Cancer Imaging Archive (TCIA) public access dataset, which contains pancreas CT images. The results of this study can be extremely helpful for PDAC diagnosis from CT pancreas images, categorising them into five class labels of the tumour (T) component of the tumour, node, metastasis (TNM) staging system: T0, T1, T2, T3, and T4.
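The VGG16-XGBoost idea (a frozen VGG16 backbone producing deep features that an XGBoost classifier then separates into TNM-style stage labels) can be sketched as follows. The random arrays stand in for pre-processed pancreas CT slices, and the XGBoost hyperparameters are illustrative guesses, not the paper's settings.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from xgboost import XGBClassifier

# Frozen VGG16 backbone used as a fixed feature extractor (global-average pooled).
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in 0-255."""
    return backbone.predict(preprocess_input(images.copy()), verbose=0)

# Placeholder data standing in for pre-processed pancreas CT slices.
X_img = np.random.rand(32, 224, 224, 3).astype("float32") * 255.0
y = np.random.randint(0, 5, size=32)          # e.g. T0-T4 stage labels

features = extract_features(X_img)            # (32, 512) deep features
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(features, y)
print(clf.predict(features[:5]))
```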
Collapse
|
29
|
Exarchos KP, Gkrepi G, Kostikas K, Gogali A. Recent Advances of Artificial Intelligence Applications in Interstitial Lung Diseases. Diagnostics (Basel) 2023; 13:2303. [PMID: 37443696 DOI: 10.3390/diagnostics13132303] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 07/02/2023] [Accepted: 07/05/2023] [Indexed: 07/15/2023] Open
Abstract
Interstitial lung diseases (ILDs) comprise a rather heterogeneous group of diseases varying in pathophysiology, presentation, epidemiology, diagnosis, treatment and prognosis. Even though they have been recognized for several years, there are still areas of research debate. In the majority of ILDs, imaging modalities and especially high-resolution Computed Tomography (CT) scans have been the cornerstone in patient diagnostic approach and follow-up. The intricate nature of ILDs and the accompanying data have led to an increasing adoption of artificial intelligence (AI) techniques, primarily on imaging data but also in genetic data, spirometry and lung diffusion, among others. In this literature review, we describe the most prominent applications of AI in ILDs presented approximately within the last five years. We roughly stratify these studies in three categories, namely: (i) screening, (ii) diagnosis and classification, (iii) prognosis.
Collapse
Affiliation(s)
- Konstantinos P Exarchos
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
| | - Georgia Gkrepi
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
| | - Konstantinos Kostikas
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
| | - Athena Gogali
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
| |
Collapse
|
30
|
Naqvi M, Gilani SQ, Syed T, Marques O, Kim HC. Skin Cancer Detection Using Deep Learning-A Review. Diagnostics (Basel) 2023; 13:1911. [PMID: 37296763 PMCID: PMC10252190 DOI: 10.3390/diagnostics13111911] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 05/25/2023] [Accepted: 05/26/2023] [Indexed: 06/12/2023] Open
Abstract
Skin cancer is one the most dangerous types of cancer and is one of the primary causes of death worldwide. The number of deaths can be reduced if skin cancer is diagnosed early. Skin cancer is mostly diagnosed using visual inspection, which is less accurate. Deep-learning-based methods have been proposed to assist dermatologists in the early and accurate diagnosis of skin cancers. This survey reviewed the most recent research articles on skin cancer classification using deep learning methods. We also provided an overview of the most common deep-learning models and datasets used for skin cancer classification.
Collapse
Affiliation(s)
- Maryam Naqvi
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
| | - Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Tehreem Syed
- Department of Electrical Engineering and Computer Engineering, Technische Universität Dresden, 01069 Dresden, Germany
| | - Oge Marques
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
| | - Hee-Cheol Kim
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
| |
Collapse
|
31
|
Alwakid G, Gouda W, Humayun M, Jhanjhi NZ. Diagnosing Melanomas in Dermoscopy Images Using Deep Learning. Diagnostics (Basel) 2023; 13:diagnostics13101815. [PMID: 37238299 DOI: 10.3390/diagnostics13101815] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2023] [Revised: 05/04/2023] [Accepted: 05/16/2023] [Indexed: 05/28/2023] Open
Abstract
When it comes to skin tumors and cancers, melanoma ranks among the most prevalent and deadly. With the advancement of deep learning and computer vision, it is now possible to quickly and accurately determine whether or not a patient has malignancy. This is significant since a prompt identification greatly decreases the likelihood of a fatal outcome. Artificial intelligence has the potential to improve healthcare in many ways, including melanoma diagnosis. In a nutshell, this research employed an Inception-V3 and InceptionResnet-V2 strategy for melanoma recognition. The feature extraction layers that were previously frozen were fine-tuned after the newly added top layers were trained. This study used data from the HAM10000 dataset, which included an unrepresentative sample of seven different forms of skin cancer. To fix the discrepancy, we utilized data augmentation. The proposed models outperformed the results of the previous investigation with an effectiveness of 0.89 for Inception-V3 and 0.91 for InceptionResnet-V2.
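The two-phase transfer-learning recipe described above (train a new classification head on a frozen backbone, then unfreeze and fine-tune the feature-extraction layers at a much lower learning rate) looks roughly like the Keras sketch below for Inception-V3. The head sizes, learning rates and commented-out dataset objects (train_ds, val_ds) are assumptions, and the InceptionResNet-V2 branch would be handled identically.

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 7   # the seven HAM10000 lesion categories

base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))
base.trainable = False                      # phase 1: freeze the backbone

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # train the new head

base.trainable = True                       # phase 2: fine-tune feature layers
model.compile(optimizer=optimizers.Adam(1e-5),             # much smaller learning rate
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```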
Collapse
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakakah 72341, Saudi Arabia
| | - Walaa Gouda
- Department of Electrical Engineering, Shoubra Faculty of Engineering, Benha University, Cairo 11672, Egypt
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakakah 72341, Saudi Arabia
| | - N Z Jhanjhi
- School of Computer Science (SCS), Taylor's University, Subang Jaya 47500, Malaysia
| |
Collapse
|
32
|
Multi-Models of Analyzing Dermoscopy Images for Early Detection of Multi-Class Skin Lesions Based on Fused Features. Processes (Basel) 2023. [DOI: 10.3390/pr11030910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023] Open
Abstract
Melanoma is a cancer that threatens life and leads to death. Effective detection of skin lesion types from images is a challenging task. Dermoscopy is an effective technique for detecting skin lesions, and early diagnosis of skin cancer is essential for proper treatment. Skin lesions are similar in their early stages, so manual diagnosis is difficult. Artificial intelligence techniques can therefore analyze images of skin lesions and discover hidden features not seen by the naked eye. This study developed hybrid techniques based on fused features to effectively analyse dermoscopic images and classify two skin lesion datasets, HAM10000 and PH2. The images were optimized for all techniques, and the problem of imbalance between the two datasets was resolved. The HAM10000 and PH2 datasets were classified by pre-trained MobileNet and ResNet101 models. For effective detection of early-stage skin lesions, the hybrid techniques SVM-MobileNet, SVM-ResNet101 and SVM-MobileNet-ResNet101 were applied, which showed better performance than the pre-trained CNN models owing to the effectiveness of the handcrafted features describing color, texture and shape. The handcrafted features were then combined with the features of the MobileNet and ResNet101 models to form high-accuracy feature vectors. Finally, the MobileNet-handcrafted and ResNet101-handcrafted features were sent to an ANN for classification with high accuracy. For the HAM10000 dataset, the ANN with MobileNet and handcrafted features achieved an AUC of 97.53%, accuracy of 98.4%, sensitivity of 94.46%, precision of 93.44% and specificity of 99.43%. Using the same technique, the PH2 dataset achieved 100% for all metrics.
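A minimal sketch of the deep-plus-handcrafted fusion described above: pooled MobileNet features are concatenated with a simple colour histogram (standing in for the paper's colour, texture and shape descriptors) and passed to a small neural-network classifier. The descriptor choice, placeholder images and classifier settings are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
import cv2
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.neural_network import MLPClassifier

backbone = MobileNet(weights="imagenet", include_top=False, pooling="avg",
                     input_shape=(224, 224, 3))

def color_histogram(img, bins=16):
    """Simple handcrafted colour descriptor (stand-in for colour/texture/shape features)."""
    hist = [cv2.calcHist([img], [c], None, [bins], [0, 256]).ravel() for c in range(3)]
    h = np.concatenate(hist)
    return h / (h.sum() + 1e-8)

def fused_features(images_uint8):
    deep = backbone.predict(preprocess_input(images_uint8.astype("float32")), verbose=0)
    hand = np.stack([color_histogram(im) for im in images_uint8])
    return np.hstack([deep, hand])            # deep + handcrafted feature fusion

X = (np.random.rand(20, 224, 224, 3) * 255).astype("uint8")   # placeholder lesion images
y = np.random.randint(0, 2, size=20)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300).fit(fused_features(X), y)
```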
Collapse
|
33
|
Maurya S, Tiwari S, Mothukuri MC, Tangeda CM, Nandigam RNS, Addagiri DC. A review on recent developments in cancer detection using Machine Learning and Deep Learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
34
|
The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer. Healthcare (Basel) 2023; 11:healthcare11030415. [PMID: 36766989 PMCID: PMC9914395 DOI: 10.3390/healthcare11030415] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 01/28/2023] [Accepted: 01/29/2023] [Indexed: 02/04/2023] Open
Abstract
Machine learning (ML) can enhance a dermatologist's work, from diagnosis to customized care. The development of ML algorithms in dermatology has lately been supported by links to digital data processing (e.g., electronic medical records, image archives, omics), quicker computing and cheaper data storage. This article describes the fundamentals of ML-based implementations, as well as future limits and concerns for the production of skin cancer detection and classification systems. We also explored fields of dermatology using deep learning applications: (1) the classification of diseases from clinical photos, (2) the dermatopathological visual classification of cancer, and (3) the measurement of skin diseases using smartphone applications and personal tracking systems. This analysis aims to provide dermatologists with a guide that helps demystify the basics of ML and its different applications so that they can correctly identify its possible challenges. This paper surveyed studies on skin cancer detection using deep learning to assess the features and advantages of the different techniques. Moreover, it defined the basic requirements for creating a skin cancer detection application, which revolve around two main issues: full segmentation of the image and tracking of the lesion on the skin using deep learning. Most of the techniques found in this survey address these two problems. Some of the methods also categorize the type of cancer.
Collapse
|
35
|
An Ensemble of Transfer Learning Models for the Prediction of Skin Cancers with Conditional Generative Adversarial Networks. Diagnostics (Basel) 2022; 12:diagnostics12123145. [PMID: 36553152 PMCID: PMC9777332 DOI: 10.3390/diagnostics12123145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/04/2022] [Accepted: 12/07/2022] [Indexed: 12/15/2022] Open
Abstract
Skin cancer is one of the most severe forms of cancer, and it can spread to other parts of the body if not detected early. Therefore, diagnosing and treating skin cancer patients at an early stage is crucial. Manual skin cancer diagnosis is both time-consuming and expensive, and incorrect diagnoses occur due to the high similarity between the various skin cancers. Improved categorization of multiclass skin cancers requires the development of automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset undergoes data augmentation using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent class imbalance issues that may lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. Thereafter, traditional augmentation methods are used to augment our existing training set to improve the performance of pre-trained deep models on the skin cancer classification task. This improved performance is then compared to that of models developed using the unbalanced dataset. In addition, we formed an ensemble of finely tuned transfer learning models, which we trained on balanced and unbalanced datasets and used to make predictions about the data. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101. The ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of model performance concluded that this method possibly leads to enhanced performance in skin cancer categorization compared to past efforts.
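A sketch of two pieces of the recipe above: traditional augmentation used to oversample minority lesion classes, and averaging the softmax outputs of the fine-tuned VGG16/ResNet50/ResNet101 models to form the ensemble prediction. The CGAN synthesis step is not shown, and the augmentation parameters and averaging rule are assumptions rather than the published configuration.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Traditional augmentation used to rebalance minority lesion classes before training.
augmenter = ImageDataGenerator(rotation_range=30, zoom_range=0.2,
                               horizontal_flip=True, vertical_flip=True)

def oversample_minority(images, n_needed):
    """Generate `n_needed` augmented copies from a minority-class image array."""
    out, flow = [], augmenter.flow(images, batch_size=1, shuffle=True)
    while len(out) < n_needed:
        out.append(next(flow)[0])
    return np.stack(out)

def ensemble_predict(trained_models, x):
    """Average softmax outputs of the fine-tuned VGG16/ResNet50/ResNet101 models."""
    probs = np.mean([m.predict(x, verbose=0) for m in trained_models], axis=0)
    return probs.argmax(axis=1)

minority = (np.random.rand(5, 224, 224, 3) * 255).astype("float32")  # placeholder images
extra = oversample_minority(minority, n_needed=20)                   # 20 augmented copies
```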
Collapse
|
36
|
A Survey on Computer-Aided Intelligent Methods to Identify and Classify Skin Cancer. INFORMATICS 2022. [DOI: 10.3390/informatics9040099] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
Melanoma is one of the most dangerous types of skin cancer, as it easily spreads to other parts of the body. An early diagnosis is necessary for a higher survival rate. Computer-aided diagnosis (CAD) is suitable for providing precise findings before the critical stage. The computer-aided diagnostic process includes preprocessing, segmentation, feature extraction, and classification. This study discusses the advantages and disadvantages of various computer-aided algorithms. It also discusses current approaches, problems, and the various types of datasets for skin images. Possible future work is also highlighted. The inferences derived from this survey will be useful for researchers carrying out research in skin cancer image analysis.
Collapse
|