1
Pandurangan V, Sarojam SP, Narayanan P, Velayutham M. Hybrid deep learning-based skin cancer classification with RPO-SegNet for skin lesion segmentation. Network (Bristol, England) 2025; 36:221-248. [PMID: 39628058] [DOI: 10.1080/0954898x.2024.2428705]
Abstract
Skin melanin lesions are typically identified as tiny patches on the skin caused by the overgrowth of melanocyte cells. The number of people with skin cancer is increasing worldwide. Accurate and timely skin cancer identification is critical to reducing mortality rates, and an incorrect diagnosis can be fatal to the patient. To tackle these issues, this article proposes the Recurrent Prototypical Object Segmentation Network (RPO-SegNet) for the segmentation of skin lesions and a hybrid Deep Learning (DL)-based skin cancer classification. The RPO-SegNet is formed by integrating the Recurrent Prototypical Networks (RP-Net) and Object Segmentation Networks (O-SegNet). At first, the input image is taken from a database and forwarded to image pre-processing. Then, the segmentation of skin lesions is accomplished using the proposed RPO-SegNet. After the segmentation, feature extraction is performed. Finally, skin cancer classification and detection are accomplished by employing the Fuzzy-based Shepard Convolutional Maxout Network (FSCMN), which combines the Deep Maxout Network (DMN) and the Shepard Convolutional Neural Network (ShCNN). The established RPO-SegNet+FSCMN attained an improved accuracy, True Negative Rate (TNR), True Positive Rate (TPR), dice coefficient, Jaccard coefficient, and segmentation analysis of 91.985%, 92.735%, 93.485%, 90.902%, 90.164%, and 91.734%, respectively.
Affiliation(s)
- Visu Pandurangan
- Department of Artificial Intelligence and Data Science, Velammal Engineering College, Chennai, Tamil Nadu, India
- Smitha Ponnayyan Sarojam
- Department of Computer Science and Engineering, Velammal Engineering College, Chennai, Tamil Nadu, India

2
Kumar KA, Vanmathi C. A hybrid parallel convolutional spiking neural network for enhanced skin cancer detection. Sci Rep 2025; 15:11137. [PMID: 40169652] [PMCID: PMC11962159] [DOI: 10.1038/s41598-025-85627-6]
Abstract
Skin cancer is one of the most widespread kinds of cancer, affecting millions of lives. As the illness worsens, the chance of survival is reduced, which makes timely detection of skin cancer extremely challenging. Hence, this paper introduces a new model, known as the Parallel Convolutional Spiking Neural Network (PCSN-Net), for detecting skin cancer. Initially, the input skin cancer image is pre-processed by employing a Medav filter to remove noise from the image. Next, the affected region is segmented utilizing DeepSegNet, which is formed by integrating SegNet and deep joint segmentation, with the RV coefficient used to fuse the outputs. The segmented image is then augmented using processes such as geometric transformation, color-space transformation, pixel-averaged image mixing (mixup), and crop overlaying (CutMix). Then textural, statistical, Discrete Wavelet Transform (DWT)-based Local Direction Pattern (LDP) with entropy, and Local Normal Derivative Pattern (LNDP) features are extracted. Finally, skin cancer detection is executed using PCSN-Net, which is formed by fusing a Parallel Convolutional Neural Network (PCNN) and a Deep Spiking Neural Network (DSNN). The suggested PCSN-Net system shows high accuracy and reliability in identifying skin cancer. The experimental findings suggest that PCSN-Net has an accuracy of 95.7%, a sensitivity of 94.7%, and a specificity of 92.6%. These parameters demonstrate the model's capacity to discriminate properly between malignant and benign skin lesions. Furthermore, the system has a false positive rate (FPR) of 10.7% and a positive predictive value (PPV) of 90.8%, demonstrating its capacity to reduce misdiagnoses while prioritizing true positive instances. PCSN-Net outperforms various complex algorithms, including EfficientNet, DenseNet, and Inception-ResNet-V2, while preserving effective training and inference times. The results show the feasibility of the model for real-time clinical use, strengthening its capacity for quick and accurate skin cancer detection.
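For readers who want to see what the mixup and CutMix augmentations mentioned above look like in practice, a minimal NumPy sketch follows; the Beta-distribution parameters and the rectangle-sampling scheme are illustrative assumptions, not the settings used by PCSN-Net.

```python
import numpy as np

def mixup(img_a, img_b, label_a, label_b, alpha=0.4):
    """Blend two images and their one-hot labels with a Beta-distributed weight."""
    lam = np.random.beta(alpha, alpha)
    image = lam * img_a + (1.0 - lam) * img_b
    label = lam * label_a + (1.0 - lam) * label_b
    return image, label

def cutmix(img_a, img_b, label_a, label_b, alpha=1.0):
    """Paste a random rectangle of img_b onto img_a; mix labels by the area ratio."""
    h, w = img_a.shape[:2]
    lam = np.random.beta(alpha, alpha)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    lam_adj = 1.0 - ((y2 - y1) * (x2 - x1) / (h * w))  # actual area kept from img_a
    return mixed, lam_adj * label_a + (1.0 - lam_adj) * label_b
```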
Affiliation(s)
- K Anup Kumar
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamilnadu, India
- C Vanmathi
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, Tamilnadu, India.

3
Khan A, Sajid MZ, Khan NA, Youssef A, Abbas Q. CAD-Skin: A Hybrid Convolutional Neural Network-Autoencoder Framework for Precise Detection and Classification of Skin Lesions and Cancer. Bioengineering (Basel) 2025; 12:326. [PMID: 40281686] [PMCID: PMC12025204] [DOI: 10.3390/bioengineering12040326]
Abstract
Skin cancer is a class of disorder defined by the growth of abnormal cells on the body. Accurately identifying and diagnosing skin lesions is difficult because skin malignancies share many common characteristics and exhibit a wide range of morphologies. To address this challenge, deep learning algorithms have been proposed. In recent research articles, deep learning algorithms have shown diagnostic efficacy comparable to dermatologists in image-based skin lesion diagnosis. This work proposes a novel deep learning algorithm to detect skin cancer. The proposed CAD-Skin system detects and classifies skin lesions using deep convolutional neural networks and autoencoders to improve the classification efficiency of skin cancer. The CAD-Skin system was designed and developed using a modern preprocessing approach that combines multi-scale retinex, gamma correction, unsharp masking, and contrast-limited adaptive histogram equalization. In this work, we implemented a data augmentation strategy to deal with unbalanced datasets. This step improves the model's resilience to different pigmented skin conditions and avoids overfitting. Additionally, a Quantum Support Vector Machine (QSVM) algorithm is integrated for final-stage classification. Our proposed CAD-Skin enhances category recognition for different skin disease severities, including actinic keratosis, malignant melanoma, and other skin cancers. The proposed system was tested using the PAD-UFES-20-Modified, ISIC-2018, and ISIC-2019 datasets. The system reached accuracy rates of 98%, 99%, and 99%, respectively, which is higher than state-of-the-art work in the literature. The minimum accuracy achieved for individual skin disorders was 97.43%. Our study demonstrates that the proposed CAD-Skin provides precise diagnosis and timely detection of skin abnormalities, diversifying options for doctors and enhancing patient satisfaction during medical practice.
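Three of the preprocessing steps named in this abstract (gamma correction, unsharp masking, and CLAHE) can be sketched with OpenCV as below; the parameter values are illustrative assumptions and multi-scale retinex is omitted for brevity.

```python
import cv2
import numpy as np

def gamma_correction(img, gamma=1.2):
    # Map pixel intensities through a power-law lookup table.
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    return cv2.LUT(img, table)

def unsharp_mask(img, sigma=2.0, amount=1.5):
    # Sharpen by adding back the difference between the image and its blurred copy.
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1 + amount, blurred, -amount, 0)

def clahe_luminance(img, clip=2.0, grid=8):
    # Contrast-limited adaptive histogram equalization on the L channel only.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(grid, grid))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# image = cv2.imread("lesion.jpg")  # hypothetical input path
# preprocessed = clahe_luminance(unsharp_mask(gamma_correction(image)))
```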
Affiliation(s)
- Abdullah Khan
- Department of Computer Software Engineering, Military College of Signals, National University of Science and Technology, Islamabad 44000, Pakistan; (A.K.); (M.Z.S.); (N.A.K.)
- Muhammad Zaheer Sajid
- Department of Computer Software Engineering, Military College of Signals, National University of Science and Technology, Islamabad 44000, Pakistan; (A.K.); (M.Z.S.); (N.A.K.)
- Nauman Ali Khan
- Department of Computer Software Engineering, Military College of Signals, National University of Science and Technology, Islamabad 44000, Pakistan; (A.K.); (M.Z.S.); (N.A.K.)
- Ayman Youssef
- Department of Computers and Systems, Electronics Research Institute, Cairo 12622, Egypt
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia

4
Khan AR, Mujahid M, Alamri FS, Saba T, Ayesha N. Early-Stage Melanoma Cancer Diagnosis Framework for Imbalanced Data From Dermoscopic Images. Microsc Res Tech 2025; 88:797-809. [PMID: 39573895] [DOI: 10.1002/jemt.24736]
Abstract
Skin problems are a serious condition that affects people all over the world. Prolonged exposure to ultraviolet rays damages melanocyte cells, leading to the uncontrolled proliferation of melanoma, a form of skin cancer. However, the dearth of qualified expertise increases the processing time and cost of diagnosis. Early detection of melanoma in dermoscopy images significantly enhances the chance of survival. Pathologists benefit substantially from precise and efficient melanoma diagnosis using automated methods. Nevertheless, the diagnosis of melanoma has consistently been a challenging procedure due to imbalanced images and limited data. Our objective was to employ a novel deep learning method to diagnose melanoma from dermoscopic images automatically. The research proposes a novel framework for detecting skin malignancies. The proposed design, which includes a CNN, DenseNet, a batch normalization layer, max pooling, and a ReLU activation function, effectively addresses the overfitting problem. Furthermore, we used a large number of samples for testing and employed data augmentation to prevent issues related to class imbalance. The Adam optimizer was chosen because its design is well suited to the challenges associated with large datasets, such as lengthy processing times. Experiments show that the proposed framework achieved 95.70% micro-average accuracy on the ISIC-2019 dataset and 93.24% accuracy on the HAM-10000 dataset. Comprehensive evaluation and analysis were used to assess the framework's performance. The results show that, with cross-validation, the proposed approach achieves 94.8% accuracy, outperforming the most sophisticated deep learning-based techniques. In future studies, medical professionals may employ the proposed model to identify skin cancer in its early stages.
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab. CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Muhammad Mujahid
- Artificial Intelligence & Data Analytics Lab. CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Faten S Alamri
- Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab. CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Noor Ayesha
- Center of Excellence in Cyber Security (CYBEX), Prince Sultan University, Riyadh, Saudi Arabia

5
Ju X, Lin CH, Lee S, Wei S. Melanoma classification using generative adversarial network and proximal policy optimization. Photochem Photobiol 2025; 101:434-457. [PMID: 39080818] [DOI: 10.1111/php.14006]
Abstract
In oncology, melanoma is a serious concern, often arising from DNA changes caused mainly by ultraviolet radiation. This cancer is known for its aggressive growth, highlighting the necessity of early detection. Our research introduces a novel deep learning framework for melanoma classification, trained and validated using the extensive SIIM-ISIC Melanoma Classification Challenge-ISIC-2020 dataset. The framework features three dilated convolution layers that extract critical feature vectors for classification. A key aspect of our model is incorporating the Off-policy Proximal Policy Optimization (Off-policy PPO) algorithm, which effectively handles data imbalance in the training set by rewarding the accurate classification of underrepresented samples. In this framework, the model is visualized as an agent making a series of decisions, where each sample represents a distinct state. Additionally, a Generative Adversarial Network (GAN) augments training data to improve generalizability, paired with a new regularization technique to stabilize GAN training and prevent mode collapse. The model achieved an F-measure of 91.836% and a geometric mean of 91.920%, surpassing existing models and demonstrating its practical utility in clinical environments. These results demonstrate its potential in enhancing early melanoma detection and informing more accurate treatment approaches, marking a significant advance in combating this aggressive cancer.
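The two headline metrics reported here, the F-measure and the geometric mean of sensitivity and specificity, can be reproduced from binary predictions with a few lines of scikit-learn; the label arrays below are placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])   # hypothetical ground-truth labels
y_pred = np.array([0, 1, 0, 0, 1, 0, 1, 1])   # hypothetical model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                   # true positive rate
specificity = tn / (tn + fp)                   # true negative rate
g_mean = np.sqrt(sensitivity * specificity)
f_measure = f1_score(y_true, y_pred)
print(f"F-measure: {f_measure:.3f}, geometric mean: {g_mean:.3f}")
```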
Affiliation(s)
- Xiangui Ju
- Beijing Jinzhituo Technology Co., Ltd., Beijing, China
- School of Computer Science, Semyung University, 65 Semyung-ro, Jecheon-si, Korea
- Chi-Ho Lin
- School of Computer Science, Semyung University, 65 Semyung-ro, Jecheon-si, Korea
- Suan Lee
- School of Computer Science, Semyung University, 65 Semyung-ro, Jecheon-si, Korea
- Sizheng Wei
- School of Computer Science, Semyung University, 65 Semyung-ro, Jecheon-si, Korea
- School of Finance, Xuzhou University of Technology, Xuzhou City, China

6
Eliwa EHI. Enhancing Skin Cancer Diagnosis Through Fine-Tuning of Pretrained Models: A Two-Phase Transfer Learning Approach. Int J Breast Cancer 2025; 2025:4362941. [PMID: 39996140] [PMCID: PMC11850074] [DOI: 10.1155/ijbc/4362941]
Abstract
Skin cancer is among the most prevalent types of cancer worldwide, and early detection is crucial for improving treatment outcomes and patient survival rates. Traditional diagnostic methods, often reliant on visual examination and manual evaluation, can be subjective and time-consuming, leading to variability in accuracy. Recent developments in machine learning, particularly using pretrained models and fine-tuning techniques, offer promising advancements in automating and improving skin cancer classification. This paper explores the application of a two-phase model using the HAM10000 dataset, which comprises a wide range of skin lesion images. The first phase employs transfer learning with frozen layers, followed by fine-tuning all layers in the second phase to adapt the models more specifically to the dataset. I evaluate nine pretrained models, including VGG16, VGG19, InceptionV3, Xception (extreme inception), and DenseNet121, assessing their performance based on accuracy, precision, recall, and F1 score metrics. The VGG16 model, after fine-tuning, achieved the highest test set accuracy of 99.3%, highlighting its potential for highly accurate skin cancer classification. This study provides important insights for clinicians and researchers, demonstrating the efficacy of advanced machine learning models in enhancing diagnostic accuracy and supporting clinical decision-making in dermatology.
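The two-phase scheme described above (train a new head on a frozen backbone, then unfreeze everything and fine-tune) can be sketched in Keras as follows; the head size, learning rates, and epoch counts are illustrative assumptions rather than the paper's exact settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(7, activation="softmax"),   # seven HAM10000 lesion classes
])

# Phase 1: transfer learning with the pretrained backbone frozen.
base.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

# Phase 2: unfreeze all layers and fine-tune at a much lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```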
Affiliation(s)
- Entesar Hamed I. Eliwa
- Department of Mathematics and Statistics, College of Science, King Faisal University, Al-Ahsa, Saudi Arabia

7
Alemu MG, Zimale FA. Integration of remote sensing and machine learning algorithm for agricultural drought early warning over Genale Dawa river basin, Ethiopia. Environ Monit Assess 2025; 197:243. [PMID: 39904802] [DOI: 10.1007/s10661-025-13708-0]
Abstract
Drought remains a menace in the Horn of Africa, and Ethiopia's Genale Dawa River Basin is among the areas most vulnerable to agricultural drought. Hence, this study integrates remote sensing and a machine learning algorithm for early warning through the assessment and prediction of index-based agricultural drought over the basin. To track the severity of drought in the basin from 2003 to 2023, a range of high-resolution satellite-derived indices was used, including the Vegetation Condition Index (VCI), Thermal Condition Index (TCI), and Vegetation Health Index (VHI). Additionally, an Artificial Neural Network machine learning technique was used to predict agricultural drought (VHI) for 2028 and 2033. Results show that during 2023, severe drought affected 25% and extreme drought 18% of the lower part of the basin, around the Dolo Ado and Chereti regions. The TCI results indicated that around 23.24% of the area was under extreme drought, with precipitation lower than 3.57 mm per month recorded in the areas of Moyale, Dolo Ado, Dolobay, Afder, and Bure. Similarly, the VHI predictions suggest that severe drought may increase from 24.26% to 24.58% and extreme drought from 16.53% to 16.58% over the 2028 and 2033 periods, respectively, in the areas of Mada Wolabu, Dolo Ado, Dodola, Gore, Gidir, and Rayitu. The findings of this study are particularly valuable for institutions located in the basin, as they support drought-coping mechanisms and decision-making.
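The three indices used in this study follow standard remote-sensing definitions, which can be sketched in NumPy as below; the equal 0.5 weighting of VCI and TCI in the VHI is the common convention and is assumed here rather than taken from the paper.

```python
import numpy as np

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index (0-100), from per-pixel NDVI extremes."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Thermal Condition Index (0-100); higher land-surface temperature lowers it."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_arr, tci_arr, alpha=0.5):
    """Vegetation Health Index as a weighted combination of VCI and TCI."""
    return alpha * vci_arr + (1.0 - alpha) * tci_arr
```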
Affiliation(s)
- Mikhael G Alemu
- Department of Climate Change Engineering, Pan African University Institute for Water and Energy Sciences -Including Climate Change (PAUWES), Tlemcen, Algeria.
- Action for Human Rights and Development, PO Box 1551, Adama, Ethiopia.
- Fasikaw A Zimale
- Faculty of Civil and Water Resources Engineering, Bahir Dar Institute of Technology, Bahir Dar University, Bahir Dar, Ethiopia

8
Jeyageetha K, Vijayalakshmi K, Suresh S, Bhuvanesh A. Multi-Skin disease classification using hybrid deep learning model. Technol Health Care 2025:9287329241312628. [PMID: 39973858] [DOI: 10.1177/09287329241312628]
Abstract
Among the many cancers that people face today, skin cancer is one of the deadliest and most dangerous. As a result, improving patients' chances of survival requires skin cancer to be identified and classified early. Therefore, it is critical to assist radiologists in detecting skin cancer through the development of Computer Aided Diagnosis (CAD) techniques. The diagnostic procedure currently makes heavy use of Deep Learning (DL) techniques for disease identification. In addition, skin lesion extraction and improved classification performance are achieved through Region Growing (RG)-based segmentation. At the outset of this study, noise is reduced using an Adaptive Wiener Filter (AWF), and hair is removed using Maximum Gradient Intensity (MGI). Segmentation is then performed by an optimized RG, obtained by integrating RG with the Modified Honey Badger Optimiser (MHBO). Finally, several forms of skin cancer are classified using the DL model MobileSkinNetV2. The experiments were conducted on the ISIC dataset, and the results show that accuracy and precision improved to 99.01% and 98.6%, respectively. In comparison to existing models, the experimental results show that the proposed model performs competitively, which is promising for dermatologists treating skin cancer.
Affiliation(s)
- K Jeyageetha
- Department of Computer Science and Engineering, Ramco Institute of Technology, Rajapalayam, India
- K Vijayalakshmi
- Department of Computer Science and Engineering, Ramco Institute of Technology, Rajapalayam, India
- S Suresh
- Department of Electronics and Communication Engineering, Sri Eshwar College of Engineering, Coimbatore, India
- A Bhuvanesh
- Department of Electrical and Electronics Engineering, PSN College of Engineering and Technology, Tirunelveli, India

9
Muthukrishnan R, Balasubramaniam A, Krishnasamy V, Ravichandran SK. An Efficient Lightweight Multi Head Attention Gannet Convolutional Neural Network Based Mammograms Classification. Int J Med Robot 2025; 21:e70043. [PMID: 39921233] [DOI: 10.1002/rcs.70043]
Abstract
BACKGROUND: This research aims to use deep learning to create automated systems for better breast cancer detection and categorisation in mammogram images, helping medical professionals overcome challenges such as time consumption, feature extraction issues and limited training models.
METHODS: This research introduced a Lightweight Multihead attention Gannet Convolutional Neural Network (LMGCNN) to classify mammogram images effectively. It used Wiener filtering, unsharp masking, and adaptive histogram equalisation to enhance images and remove noise, followed by the Grey-Level Co-occurrence Matrix (GLCM) for feature extraction. Ideal feature selection is done by a self-adaptive quantum equilibrium optimiser with an artificial bee colony.
RESULTS: The approach was assessed on two datasets, CBIS-DDSM and MIAS, achieving impressive accuracy rates of 98.2% and 99.9%, respectively, which highlight the superior performance of the LMGCNN model in accurately detecting breast cancer compared with previous models.
CONCLUSION: This method illustrates potential in aiding initial and accurate breast cancer detection, possibly leading to improved patient outcomes.
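The GLCM feature-extraction step mentioned in the METHODS can be sketched with scikit-image as below; the chosen distances, angles, and texture properties are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img, levels=256):
    """Return a small vector of GLCM texture statistics for one 8-bit image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# patch = a 2-D uint8 array cropped from an enhanced mammogram
# feature_vector = glcm_features(patch)
```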
Affiliation(s)
- Ramkumar Muthukrishnan
- Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
- Ashok Balasubramaniam
- Electrical and Electronics Engineering, Karpagam College of Engineering, Coimbatore, India
- Vijaipriya Krishnasamy
- Electronics and Communication Engineering, Sri Sai Ranganathan Engineering College, Coimbatore, India
- Sarath Kumar Ravichandran
- Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India

10
Abdulredah AA, Fadhel MA, Alzubaidi L, Duan Y, Kherallah M, Charfi F. Towards unbiased skin cancer classification using deep feature fusion. BMC Med Inform Decis Mak 2025; 25:48. [PMID: 39891245] [PMCID: PMC11786435] [DOI: 10.1186/s12911-025-02889-w]
Abstract
This paper introduces SkinWiseNet (SWNet), a deep convolutional neural network designed for the detection and automatic classification of potentially malignant skin cancer conditions. SWNet optimizes feature extraction through multiple pathways, emphasizing network width augmentation to enhance efficiency. The proposed model addresses potential biases associated with skin conditions, particularly in individuals with darker skin tones or excessive hair, by incorporating feature fusion to assimilate insights from diverse datasets. Extensive experiments were conducted using publicly accessible datasets to evaluate SWNet's effectiveness. This study utilized four datasets (Mnist-HAM10000, ISIC2019, ISIC2020, and Melanoma Skin Cancer) comprising skin cancer images categorized into benign and malignant classes. Explainable Artificial Intelligence (XAI) techniques, specifically Grad-CAM, were employed to enhance the interpretability of the model's decisions. Comparative analysis was performed with three pre-existing deep learning networks: EfficientNet, MobileNet, and Darknet. The results demonstrate SWNet's superiority, achieving an accuracy of 99.86% and an F1 score of 99.95%, underscoring its efficacy in gradient propagation and feature capture across various levels. This research highlights the significant potential of SWNet in advancing skin cancer detection and classification, providing a robust tool for accurate and early diagnosis. The integration of feature fusion enhances accuracy and mitigates biases associated with hair and skin tones. The outcomes of this study contribute to improved patient outcomes and healthcare practices, showcasing SWNet's exceptional capabilities in skin cancer detection and classification.
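A condensed sketch of the Grad-CAM explanation step mentioned above is shown below for a Keras model; the layer name passed as last_conv is a placeholder that must match the final convolutional layer of whichever network is being inspected.

```python
import tensorflow as tf

def grad_cam(model, image, last_conv="last_conv_layer"):
    """Return a normalized heatmap of the regions driving the top prediction."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_idx = int(tf.argmax(preds[0]))
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # channel importance weights
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8) # keep positive, normalize
    return cam.numpy()
```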
Affiliation(s)
- Ali Atshan Abdulredah
- National School of Electronics and Telecoms of Sfax, University of Sfax, Sfax, Tunisia
- Mohammed A Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi-Qar, Iraq
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, Australia.
- Ye Duan
- School of Computing, Clemson University, Clemson, SC, USA
- Monji Kherallah
- Faculty of Science of Sfax, University of Sfax, Sfax, Tunisia
- Faiza Charfi
- Faculty of Science of Sfax, University of Sfax, Sfax, Tunisia

11
Naseri H, Safaei AA. Diagnosis and prognosis of melanoma from dermoscopy images using machine learning and deep learning: a systematic literature review. BMC Cancer 2025; 25:75. [PMID: 39806282] [PMCID: PMC11727731] [DOI: 10.1186/s12885-024-13423-y]
Abstract
BACKGROUND: Melanoma is a highly aggressive skin cancer, where early and accurate diagnosis is crucial to improve patient outcomes. Dermoscopy, a non-invasive imaging technique, aids in melanoma detection but can be limited by subjective interpretation. Recently, machine learning and deep learning techniques have shown promise in enhancing diagnostic precision by automating the analysis of dermoscopy images.
METHODS: This systematic review examines recent advancements in machine learning (ML) and deep learning (DL) applications for melanoma diagnosis and prognosis using dermoscopy images. We conducted a thorough search across multiple databases, ultimately reviewing 34 studies published between 2016 and 2024. The review covers a range of model architectures, including DenseNet and ResNet, and discusses datasets, methodologies, and evaluation metrics used to validate model performance.
RESULTS: Our results highlight that certain deep learning architectures, such as DenseNet and DCNN, demonstrated outstanding performance, achieving over 95% accuracy on the HAM10000, ISIC and other datasets for melanoma detection from dermoscopy images. The review provides insights into the strengths, limitations, and future research directions of machine learning and deep learning methods in melanoma diagnosis and prognosis. It emphasizes the challenges related to data diversity, model interpretability, and computational resource requirements.
CONCLUSION: This review underscores the potential of machine learning and deep learning methods to transform melanoma diagnosis through improved diagnostic accuracy and efficiency. Future research should focus on creating accessible, large datasets and enhancing model interpretability to increase clinical applicability. By addressing these areas, machine learning and deep learning models could play a central role in advancing melanoma diagnosis and patient care.
Affiliation(s)
- Hoda Naseri
- Department of Data Science, Faculty of Interdisciplinary Science and Technology, Tarbiat Modares University, Tehran, Iran
- Ali A Safaei
- Department of Data Science, Faculty of Interdisciplinary Science and Technology, Tarbiat Modares University, Tehran, Iran.
- Department of Medical Informatics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran.

12
Raju ASN, Venkatesh K, Padmaja B, Kumar CHNS, Patnala PRM, Lasisi A, Islam S, Razak A, Khan WA. Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition. Sci Rep 2024; 14:30052. [PMID: 39627293] [PMCID: PMC11614869] [DOI: 10.1038/s41598-024-81456-1]
Abstract
Early detection of colorectal carcinoma (CRC), one of the most prevalent forms of cancer worldwide, significantly enhances the prognosis of patients. This research presents a new method for improving CRC detection using a deep learning ensemble within a Computer Aided Diagnosis (CADx) framework. The method combines pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges associated with imbalanced datasets and the necessity of sophisticated feature extraction in medical image analysis. Initially, the CKHK-22 dataset comprised 24 classes. However, we refined it to 14 classes, which led to an improvement in data balance and quality. This improvement enabled more precise feature extraction and improved classification results. We created two ensemble models: the first used Vision Transformers to capture long-range spatial relationships in the images, while the second combined CNNs with XGBoost to facilitate structured data classification. We implemented DCGAN-based augmentation to enhance the dataset's diversity. The tests showed substantial improvements in performance, with the ADaDR-22 + Vision Transformer ensemble achieving the best results: a testing accuracy of 93.4% and an AUC of 98.8%. In comparison, the ADaDR-22 + XGBoost model had an AUC of 97.8% and an accuracy of 92.2%. These findings demonstrate the efficacy of the proposed ensemble models in detecting CRC and highlight the importance of using well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.
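The CNN-plus-XGBoost idea in this abstract boils down to using a convolutional backbone as a feature extractor whose outputs feed a gradient-boosted classifier; the sketch below uses a generic DenseNet121 backbone and illustrative XGBoost settings as stand-ins, since the paper's own fine-tuned models are not specified here.

```python
import tensorflow as tf
from xgboost import XGBClassifier

# Pretrained backbone with global average pooling gives one vector per image.
backbone = tf.keras.applications.DenseNet121(weights="imagenet",
                                             include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), already preprocessed."""
    return backbone.predict(images, verbose=0)

# X_train_img, y_train, X_test_img, y_test are assumed to exist.
# X_train = extract_features(X_train_img)
# X_test = extract_features(X_test_img)
# clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
# clf.fit(X_train, y_train)
# print("test accuracy:", clf.score(X_test, y_test))
```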
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India
- B Padmaja
- Department of Computer Science and Engineering-AI&ML, Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, India
- C H N Santhosh Kumar
- Department of Computer Science and Engineering, Anurag Engineering College, Kodada, Telangana, 508206, India
- Ayodele Lasisi
- Department of Computer Science, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Saiful Islam
- Civil Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Abdul Razak
- Department of Mechanical Engineering, P. A. College of Engineering (Affiliated to Visvesvaraya Technological University, Belagavi), Mangaluru, India
- Wahaj Ahmad Khan
- School of Civil Engineering & Architecture, Institute of Technology, Dire-Dawa University, 1362, Dire Dawa, Ethiopia.

13
Priyeshkumar AT, Shyamala G, Vasanth T, Ponniyin Selvan V. Transforming Skin Cancer Diagnosis: A Deep Learning Approach with the Ham10000 Dataset. Cancer Invest 2024; 42:801-814. [PMID: 39523747] [DOI: 10.1080/07357907.2024.2422602]
Abstract
Skin cancer (SC) is one of the three most common cancers worldwide. Among all SCs, melanoma has the deadliest potential to spread to other parts of the body. For SC treatments to be effective, early detection is essential. The high degree of similarity between tumors and non-tumors makes SC diagnosis difficult even for experienced doctors. To address this issue, the authors developed a novel Deep Learning (DL) system capable of automatically classifying skin lesions into seven groups: actinic keratosis (AKIEC), melanoma (MEL), benign keratosis (BKL), melanocytic nevi (NV), basal cell carcinoma (BCC), dermatofibroma (DF), and vascular (VASC) skin lesions. The authors introduced the Multi-Grained Enhanced Deep Cascaded Forest (Mg-EDCF) as a novel DL model. In this model, researchers first utilized subsampled multigrained scanning (Mg-sc) to acquire micro features. Second, the authors employed two types of Random Forest (RF) to create input features. Finally, the Enhanced Deep Cascaded Forest (EDCF) was utilized for classification. The HAM10000 dataset was used for implementing, training, and evaluating the proposed and Transfer Learning (TL) models such as ResNet, AlexNet, and VGG16. During the validation and training stages, the performance of the four networks was evaluated by comparing their accuracy and loss. The proposed method outperformed the competing models with an average accuracy score of 98.19%. The proposed methodology was validated against existing state-of-the-art algorithms from recent publications, resulting in consistently greater accuracies than those of the classifiers.
Affiliation(s)
- Priyeshkumar A T
- Department of Biomedical Engineering, Mahendra College of Engineering, Minnampalli, Salem, India
- Shyamala G
- Department of Biomedical Engineering, Mahendra College of Engineering, Minnampalli, Salem, India
- Vasanth T
- Department of Biomedical Engineering, Mahendra College of Engineering, Minnampalli, Salem, India
- Ponniyin Selvan V
- Department of Electronics and Communication Engineering, Mahendra College of Engineering, Minnampalli, Salem, India

14
Vardasca R, Mendes JG, Magalhaes C. Skin Cancer Image Classification Using Artificial Intelligence Strategies: A Systematic Review. J Imaging 2024; 10:265. [PMID: 39590729] [PMCID: PMC11595075] [DOI: 10.3390/jimaging10110265]
Abstract
The increasing incidence of and resulting deaths associated with malignant skin tumors are a public health problem that can be minimized if detection strategies are improved. Currently, diagnosis is heavily based on physicians' judgment and experience, which can occasionally lead to the worsening of the lesion or needless biopsies. Several non-invasive imaging modalities, e.g., confocal scanning laser microscopy or multiphoton laser scanning microscopy, have been explored for skin cancer assessment, which have been aligned with different artificial intelligence (AI) strategies to assist in the diagnostic task, based on several image features, thus making the process more reliable and faster. This systematic review concerns the implementation of AI methods for skin tumor classification with different imaging modalities, following the PRISMA guidelines. In total, 206 records were retrieved and qualitatively analyzed. Diagnostic potential was found for several techniques, particularly for dermoscopy images, with strategies yielding classification results close to perfection. Learning approaches based on support vector machines and artificial neural networks seem to be preferred, with a recent focus on convolutional neural networks. Still, detailed descriptions of training/testing conditions are lacking in some reports, hampering reproduction. The use of AI methods in skin cancer diagnosis is an expanding field, with future work aiming to construct optimal learning approaches and strategies. Ultimately, early detection could be optimized, improving patient outcomes, even in areas where healthcare is scarce.
Affiliation(s)
- Ricardo Vardasca
- ISLA Santarem, Rua Teixeira Guedes 31, 2000-029 Santarem, Portugal
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal; (J.G.M.); (C.M.)
- Joaquim Gabriel Mendes
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal; (J.G.M.); (C.M.)
- Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal
- Carolina Magalhaes
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Universidade do Porto, 4099-002 Porto, Portugal; (J.G.M.); (C.M.)
- Faculdade de Engenharia, Universidade do Porto, 4099-002 Porto, Portugal

15
Saghir U, Singh SK, Hasan M. Skin Cancer Image Segmentation Based on Midpoint Analysis Approach. J Imaging Inform Med 2024; 37:2581-2596. [PMID: 38627267] [PMCID: PMC11522265] [DOI: 10.1007/s10278-024-01106-w]
Abstract
Skin cancer affects people of all ages and is a common disease. The death toll from skin cancer rises with a late diagnosis. An automated mechanism for early-stage skin cancer detection is required to diminish the mortality rate. Visual examination with scanning or imaging screening is a common mechanism for detecting this disease, but because it resembles other diseases, this mechanism offers limited accuracy. This article introduces an innovative segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections. The main objective of the research is to segment lesions from dermoscopic skin images. The suggested framework is completed in two steps. The first step is to pre-process the image; for this, we applied a bottom-hat filter for hair removal and enhanced the image using DCT and color coefficients. In the next phase, a background subtraction method with midpoint analysis is applied for segmentation to extract the region of interest, achieving an accuracy of 95.30%. The segmentation is validated by comparing the segmented images with the ground-truth data provided with the ISIC dataset.
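The bottom-hat hair-removal step described above is closely related to the classic DullRazor idea; a compact OpenCV sketch follows, with the kernel size and threshold as illustrative assumptions.

```python
import cv2

def remove_hair(bgr_img, kernel_size=17, threshold=10):
    """Suppress thin dark hairs by blackhat filtering, masking, and inpainting."""
    gray = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)  # highlights hairs
    _, mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(bgr_img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# clean = remove_hair(cv2.imread("isic_sample.jpg"))  # hypothetical input path
```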
Affiliation(s)
- Uzma Saghir
- Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Shailendra Kumar Singh
- Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India.
- Moin Hasan
- Dept. of Computer Science & Engineering, Jain Deemed-to-be-University, Bengaluru, 562112, India

16
Wang Z, Wang C, Peng L, Lin K, Xue Y, Chen X, Bao L, Liu C, Zhang J, Xie Y. Radiomic and deep learning analysis of dermoscopic images for skin lesion pattern decoding. Sci Rep 2024; 14:19781. [PMID: 39187551] [PMCID: PMC11347612] [DOI: 10.1038/s41598-024-70231-x]
Abstract
This study aims to explore the efficacy of a hybrid deep learning and radiomics approach, supplemented with patient metadata, in the noninvasive dermoscopic imaging-based diagnosis of skin lesions. We analyzed dermoscopic images from the International Skin Imaging Collaboration (ISIC) dataset, spanning 2016-2020, encompassing a variety of skin lesions. Our approach integrates deep learning with a comprehensive radiomics analysis, utilizing a vast array of quantitative image features to precisely quantify skin lesion patterns. The dataset includes cases of three, four, and eight different skin lesion types. Our methodology was benchmarked against seven classification methods from the ISIC 2020 challenge and prior research using a binary decision framework. The proposed hybrid model demonstrated superior performance in distinguishing benign from malignant lesions, achieving area under the receiver operating characteristic curve (AUROC) scores of 99%, 95%, and 96%, and multiclass decoding AUROCs of 98.5%, 94.9%, and 96.4%, with sensitivities of 97.6%, 93.9%, and 96.0% and specificities of 98.4%, 96.7%, and 96.9% in the internal ISIC 2018 challenge, as well as in the external Jinan and Longhua datasets, respectively. Our findings suggest that the integration of radiomics and deep learning, utilizing dermoscopic images, effectively captures the heterogeneity and pattern expression of skin lesions.
Affiliation(s)
- Zheng Wang
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Chong Wang
- Department of Dermatology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
- Li Peng
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Kaibin Lin
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Yang Xue
- School of Computer Science, Hunan First Normal University, Changsha, 410205, China
- Xiao Chen
- Department of Dermatology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
- Linlin Bao
- Department of Dermatology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China
- Chao Liu
- Department of Dermatology, Longhua People's Hospital Affiliated to Southern Medical University, Shenzhen, 518109, Guangdong, China
- Jianglin Zhang
- Department of Dermatology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, 518020, Guangdong, China.
- Yang Xie
- Department of Dermatology, The Third Affiliated Hospital of Sun Yat-Sen University, Guangzhou, 510630, Guangdong, China.

17
Saleh N, Hassan MA, Salaheldin AM. Skin cancer classification based on an optimized convolutional neural network and multicriteria decision-making. Sci Rep 2024; 14:17323. [PMID: 39068205] [PMCID: PMC11283527] [DOI: 10.1038/s41598-024-67424-9]
Abstract
Skin cancer is a disease in which abnormal alterations in skin characteristics can be detected. It can be treated if it is detected early. Many artificial intelligence-based models have been developed for skin cancer detection and classification. Developing numerous models according to various scenarios and then selecting the optimum model was rarely considered in previous works. This study aimed to develop various models for skin cancer classification and select the optimum model. Convolutional neural networks (CNNs) in the form of AlexNet, Inception V3, MobileNet V2, and ResNet 50 were used for feature extraction. Feature reduction was carried out using two algorithms of the grey wolf optimizer (GWO) in addition to using the original features. Skin cancer images were classified into four classes based on six machine learning (ML) classifiers. As a result, 51 models were developed with different combinations of CNN algorithms, without GWO algorithms, with two GWO algorithms, and with six ML classifiers. To select the optimum model with the best results, the multicriteria decision-making approach of ranking the alternatives by perimeter similarity (RAPS) was utilized. Model training and testing were conducted using the International Skin Imaging Collaboration (ISIC) 2017 dataset. Based on nine evaluation metrics and according to the RAPS method, the AlexNet algorithm with a classical GWO yielded the optimum model, achieving a classification accuracy of 94.5%. This work presents the first study on benchmarking skin cancer classification with many models. Feature reduction not only reduces the time spent on training but also improves classification accuracy. The RAPS method has proven its robustness in the problem of selecting the best model for skin cancer classification.
Affiliation(s)
- Neven Saleh
- Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt.
- Electrical Communication and Electronic Systems Engineering Department, Engineering Faculty, October University for Modern Sciences and Arts, Giza, Egypt.
- Mohammed A Hassan
- Biomedical Engineering Department, Faculty of Engineering, Helwan University, Cairo, Egypt
- Ahmed M Salaheldin
- Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt

18
Cui Y, Li Y, Miedema JR, Edmiston SN, Farag SW, Marron JS, Thomas NE. Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images-Nevus and Melanoma. Cancers (Basel) 2024; 16:2616. [PMID: 39123344] [PMCID: PMC11311050] [DOI: 10.3390/cancers16152616]
Abstract
Automated region of interest detection in histopathological image analysis is a challenging and important topic with tremendous potential impact on clinical practice. The deep learning methods used in computational pathology may help us to reduce costs and increase the speed and accuracy of cancer diagnosis. We started with the UNC Melanocytic Tumor Dataset cohort which contains 160 hematoxylin and eosin whole slide images of primary melanoma (86) and nevi (74). We randomly assigned 80% (134) as a training set and built an in-house deep learning method to allow for classification, at the slide level, of nevi and melanoma. The proposed method performed well on the other 20% (26) test dataset; the accuracy of the slide classification task was 92.3% and our model also performed well in terms of predicting the region of interest annotated by the pathologists, showing excellent performance of our model on melanocytic skin tumors. Even though we tested the experiments on a skin tumor dataset, our work could also be extended to other medical image detection problems to benefit the clinical evaluation and diagnosis of different tumors.
Affiliation(s)
- Yi Cui
- Department of Economics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Yao Li
- Department of Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.L.); (J.S.M.)
- Jayson R. Miedema
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Sharon N. Edmiston
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Sherif W. Farag
- Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- James Stephen Marron
- Department of Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.L.); (J.S.M.)
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Nancy E. Thomas
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA;
- Department of Dermatology, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA

19
Khan MA, Hamza A, Shabaz M, Kadry S, Rubab S, Bilal MA, Akbar MN, Kesavan SM. RETRACTED ARTICLE: Multiclass skin lesion classification using deep learning networks optimal information fusion. Discover Applied Sciences 2024; 6:300. [DOI: 10.1007/s42452-024-05998-9]
Abstract
A serious, all-encompassing, and deadly cancer that can affect any part of the body is skin cancer. The most prevalent causes of skin lesions are UV radiation, which can damage human skin, and moles. If skin cancer is discovered early, it may be adequately treated. In order to diagnose skin lesions with less effort, dermatologists are increasingly turning to machine learning (ML) techniques and computer-aided diagnostic (CAD) systems. This paper proposes a computerized method for multiclass lesion classification using a fusion of optimal deep-learning model features. The dataset used in this work, ISIC2018, is imbalanced; therefore, augmentation is performed based on a few mathematical operations. After that, two pre-trained deep learning models (DarkNet-19 and MobileNet-V2) have been fine-tuned and trained on the selected dataset. After training, features are extracted from the average pool layer and optimized using a hybrid firefly optimization technique. The selected features are fused in two ways: (i) an original serial approach and (ii) the proposed threshold approach. Finally, machine learning classifiers are used to classify the chosen features. Using the ISIC2018 dataset, the experimental procedure produced an accuracy of 89.0%, with a sensitivity of 87.34, a precision of 87.57, and an F1 score of 87.45. A comparison with recent techniques shows that the proposed method achieves improved accuracy along with other performance measures.

20
Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, Dandoush A. Performance evaluation of E-VGG19 model: Enhancing real-time skin cancer detection and classification. Heliyon 2024; 10:e31488. [PMID: 38826726] [PMCID: PMC11141372] [DOI: 10.1016/j.heliyon.2024.e31488]
Abstract
Skin cancer is a pervasive and potentially life-threatening disease. Early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhanced the pre-trained VGG19 model with max pooling and a dense layer for the prediction of skin cancer. Moreover, we also explored pre-trained models such as Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152v2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin lesions dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including Linear Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. Moreover, we have also compared the performance of baseline classifiers and pre-trained models using the metrics recall, F1 score, precision, sensitivity, and accuracy. The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to the ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential skin cancer cases at an early stage, ultimately leading to more timely and effective treatments.
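The pipeline described here, a VGG19 backbone extended with pooling and a dense layer whose output features feed classical classifiers, can be sketched as follows; the layer sizes and the SVM configuration are assumptions for illustration only.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.svm import SVC

base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
feature_model = models.Sequential([
    base,
    layers.GlobalMaxPooling2D(),          # max pooling over spatial positions
    layers.Dense(256, activation="relu"), # extra dense layer on top of VGG19
])

# X_train_img / X_test_img are preprocessed image batches; y_* are benign/malignant labels.
# train_feats = feature_model.predict(X_train_img, verbose=0)
# test_feats = feature_model.predict(X_test_img, verbose=0)
# svm = SVC(kernel="linear").fit(train_feats, y_train)
# print("accuracy:", svm.score(test_feats, y_test))
```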
Affiliation(s)
- Irfan Ali Kandhro
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Selvakumar Manickam
- National Advanced IPv6 Centre (NAv6), Universiti Sains Malaysia, Gelugor, Penang, 11800, Malaysia
- Kanwal Fatima
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Mueen Uddin
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
- Urooj Malik
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Anum Naz
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Abdulhalim Dandoush
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar

21
Kanchana K, Kavitha S, Anoop KJ, Chinthamani B. Enhancing Skin Cancer Classification using Efficient Net B0-B7 through Convolutional Neural Networks and Transfer Learning with Patient-Specific Data. Asian Pac J Cancer Prev 2024; 25:1795-1802. [PMID: 38809652] [PMCID: PMC11318802] [DOI: 10.31557/apjcp.2024.25.5.1795]
Abstract
BACKGROUND: Skin cancer diagnosis challenges dermatologists due to its complex visual variations across diagnostic categories. Convolutional neural networks (CNNs), specifically the EfficientNet B0-B7 series, have shown superiority in multiclass skin cancer classification. This study addresses the limitations of visual examination by presenting a tailored preprocessing pipeline designed for EfficientNet models. Leveraging transfer learning with pre-trained ImageNet weights, the research aims to enhance diagnostic accuracy in an imbalanced multiclass classification context.
METHODS: The study develops a specialized image preprocessing pipeline involving image scaling, dataset augmentation, and artifact removal tailored to the nuances of EfficientNet models. Transfer learning fine-tunes the EfficientNet B0-B7 CNNs with pre-trained ImageNet weights. Rigorous evaluation employs key metrics such as precision, recall, accuracy, F1 score, and confusion matrices to assess the impact of transfer learning and fine-tuning on each EfficientNet variant's performance in classifying diverse skin cancer categories.
RESULTS: The research showcases the effectiveness of the tailored preprocessing pipeline for EfficientNet models. Transfer learning and fine-tuning significantly enhance the models' ability to discern diverse skin cancer categories. The evaluation of eight EfficientNet models (B0-B7) for skin cancer classification reveals distinct performance patterns across the various cancer classes. While the majority class, benign keratosis, achieves high accuracy (>87%), challenges arise in accurately classifying the eczema classes. Melanoma, despite its minority representation (2.42% of images), attains an average accuracy of 80.51% across all models. However, suboptimal performance is observed in predicting warts molluscum (90.7%) and psoriasis (84.2%) instances, highlighting the need for targeted improvements in accurately identifying specific skin cancer types.
CONCLUSION: The study on skin cancer classification utilizes EfficientNets B0-B7 with transfer learning from ImageNet weights. The pinnacle performance is observed with EfficientNet-B7, achieving a top-1 accuracy of 84.4% and a top-5 accuracy of 97.1%. Remarkably efficient, it is 8.4 times smaller than the leading CNN. Detailed per-class classification accuracies from confusion matrices affirm its proficiency, signaling the potential of EfficientNets for precise dermatological image analysis.
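The per-class evaluation this abstract relies on (precision, recall, F1, and confusion matrices for each EfficientNet variant) can be produced with scikit-learn as sketched below; the class names and prediction arrays are placeholders drawn loosely from the categories mentioned above.

```python
from sklearn.metrics import classification_report, confusion_matrix

class_names = ["benign_keratosis", "eczema", "melanoma",
               "psoriasis", "warts_molluscum"]   # placeholder subset of classes
# y_true and y_pred are integer class indices from one fine-tuned EfficientNet model.
# print(confusion_matrix(y_true, y_pred))
# print(classification_report(y_true, y_pred, target_names=class_names, digits=3))
```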
Collapse
Affiliation(s)
- Kanchana K
- Department of Electrical and Electronics Engineering, Saveetha Engineering College, Tamil Nadu, India.
| | - Kavitha S
- Department of Electrical and Electronics Engineering, Saveetha Engineering College, Tamil Nadu, India.
| | - Anoop K J
- Department of Electrical and Electronics Engineering, VISAT Engineering College, Kerala, India.
| | - Chinthamani B
- Department of Electronics and Instrumentation Engineering, Easwari Engineering College, Tamil Nadu, India.
| |
Collapse
|
22
|
Quishpe-Usca A, Cuenca-Dominguez S, Arias-Viñansaca A, Bosmediano-Angos K, Villalba-Meneses F, Ramírez-Cando L, Tirado-Espín A, Cadena-Morejón C, Almeida-Galárraga D, Guevara C. The effect of hair removal and filtering on melanoma detection: a comparative deep learning study with AlexNet CNN. PeerJ Comput Sci 2024; 10:e1953. [PMID: 38660169 PMCID: PMC11041978 DOI: 10.7717/peerj-cs.1953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Accepted: 03/03/2024] [Indexed: 04/26/2024]
Abstract
Melanoma is the most aggressive and prevalent form of skin cancer globally, with a higher incidence in men and individuals with fair skin. Early detection of melanoma is essential for successful treatment and the prevention of metastasis. In this context, deep learning methods have emerged, distinguished by their ability to perform automated, detailed analysis and to extract melanoma-specific features. These approaches excel at large-scale analysis, save time, and provide accurate diagnoses, contributing to timelier treatment than conventional diagnostic methods. The present study offers a methodology to assess the effectiveness of an AlexNet-based convolutional neural network (CNN) in identifying early-stage melanomas. The model is trained on a balanced dataset of 10,605 dermoscopic images and on modified datasets where hair, a potential obstructive factor, was detected and removed, allowing an assessment of how hair removal affects the model's overall performance. To perform hair removal, we propose a morphological algorithm combined with different filtering techniques for comparison: Fourier, Wavelet, average blur, and low-pass filters. The model is evaluated through 10-fold cross-validation using accuracy, recall, precision, and the F1 score. The results demonstrate that the proposed model performs best on the dataset where both the Wavelet filter and the hair-removal algorithm were applied, with an accuracy of 91.30%, a recall of 87%, a precision of 95.19%, and an F1 score of 90.91%.
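The paper's exact morphological hair-removal algorithm is not reproduced here, but a commonly used baseline from the same family (blackhat filtering followed by inpainting, in the spirit of DullRazor) can be sketched with OpenCV as below; the kernel shape, kernel size, and threshold are illustrative assumptions.

import cv2
import numpy as np

def remove_hair(image_bgr, kernel_size=17, threshold=10):
    """Detect dark hair strands with a blackhat filter and inpaint them."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # A cross-shaped structuring element highlights thin, elongated structures.
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Binary mask of pixels likely belonging to hairs.
    _, mask = cv2.threshold(blackhat, threshold, 255, cv2.THRESH_BINARY)
    # Fill the masked pixels from their surroundings.
    return cv2.inpaint(image_bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

# Usage: cleaned = remove_hair(cv2.imread("lesion.jpg"))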
Collapse
Affiliation(s)
- Angélica Quishpe-Usca
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Stefany Cuenca-Dominguez
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Araceli Arias-Viñansaca
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Karen Bosmediano-Angos
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Fernando Villalba-Meneses
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Lenin Ramírez-Cando
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Andrés Tirado-Espín
- School of Mathematical and Computational Sciences, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Carolina Cadena-Morejón
- School of Mathematical and Computational Sciences, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Diego Almeida-Galárraga
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
| | - Cesar Guevara
- Quantitative Methods Department, CUNEF Universidad, Madrid, Madrid, Spain
| |
Collapse
|
23
|
Naeem A, Anees T. DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images. PLoS One 2024; 19:e0297667. [PMID: 38507348 PMCID: PMC10954125 DOI: 10.1371/journal.pone.0297667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Accepted: 01/11/2024] [Indexed: 03/22/2024] Open
Abstract
Skin cancer is a common cancer affecting millions of people annually. Skin cells that grow in unusual patterns are a sign of this invasive disease; these cells can spread to other organs and tissues through the lymph nodes and destroy them. Lifestyle changes and increased solar exposure contribute to the rising incidence of skin cancer, and early identification and staging are essential due to its high mortality rate. In this study, we present a deep learning-based method named DVFNet for the detection of skin cancer from dermoscopy images. To detect skin cancer, images are pre-processed using anisotropic diffusion to remove artifacts and noise, which enhances image quality. A combination of the VGG19 architecture and the Histogram of Oriented Gradients (HOG) is used for discriminative feature extraction. SMOTE Tomek is used to resolve the problem of imbalanced images across the multiple classes of the publicly available ISIC 2019 dataset. The study uses segmentation to pinpoint areas of significantly damaged skin cells. A feature vector map is created by combining the HOG and VGG19 features, and multiclass classification is performed by a CNN on these feature vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset, and an analysis of variance (ANOVA) test is used to validate the model's accuracy. The DVFNet model can help healthcare experts detect skin cancer at an early clinical stage.
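A minimal sketch of the kind of feature fusion and class rebalancing described above, concatenating HOG descriptors with VGG19 features and then applying SMOTE Tomek, is given below; the image sizes, pooling choice, and feature dimensions are assumptions about the general approach rather than the paper's exact configuration.

import numpy as np
import tensorflow as tf
from skimage.feature import hog
from skimage.transform import resize
from imblearn.combine import SMOTETomek

# Frozen VGG19 backbone used as a fixed feature extractor (global average pooling output).
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                  pooling="avg", input_shape=(224, 224, 3))

def fused_features(images):
    """Concatenate HOG descriptors with VGG19 pooled features for each image."""
    hog_feats = np.array([
        hog(resize(img, (128, 128)), channel_axis=-1) for img in images])
    cnn_in = tf.keras.applications.vgg19.preprocess_input(
        np.array([resize(img, (224, 224)) * 255.0 for img in images]))
    cnn_feats = vgg.predict(cnn_in, verbose=0)
    return np.concatenate([hog_feats, cnn_feats], axis=1)

# X: fused feature matrix, y: class labels; rebalance the classes before training.
# X_res, y_res = SMOTETomek(random_state=42).fit_resample(fused_features(X_imgs), y)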
Collapse
Affiliation(s)
- Ahmad Naeem
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
| | - Tayyaba Anees
- Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
| |
Collapse
|
24
|
Hermosilla P, Soto R, Vega E, Suazo C, Ponce J. Skin Cancer Detection and Classification Using Neural Network Algorithms: A Systematic Review. Diagnostics (Basel) 2024; 14:454. [PMID: 38396492 PMCID: PMC10888121 DOI: 10.3390/diagnostics14040454] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2023] [Revised: 02/07/2024] [Accepted: 02/10/2024] [Indexed: 02/25/2024] Open
Abstract
In recent years, there has been growing interest in the use of computer-assisted technology for the early detection of skin cancer through the analysis of dermatoscopic images. However, the accuracy reported for state-of-the-art approaches depends on several factors, such as the quality of the images and the interpretation of the results by medical experts. This systematic review critically assesses the efficacy and challenges of this research field, explains its usability and limitations, and highlights potential future lines of work for the scientific and clinical community. The analysis covered 45 contemporary studies extracted from databases such as Web of Science and Scopus. Several computer vision techniques related to image and video processing for early skin cancer diagnosis were identified, with a focus on the algorithms employed, the reported accuracy, and the validation metrics. The reviewed results show significant advancements in cancer detection using deep learning and machine learning algorithms. Lastly, this review establishes a foundation for future research, highlighting potential contributions and opportunities to improve the effectiveness of skin cancer detection through machine learning.
Collapse
Affiliation(s)
- Pamela Hermosilla
- Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile (E.V.); (C.S.); (J.P.)
| | | | | | | | | |
Collapse
|
25
|
Song X, Guo S, Han L, Zhao Y, Cekderi AB, Wang G. Dermoscopic image colour correction based on gamma correction and multi-scale image fusion. J Eur Acad Dermatol Venereol 2024; 38:e172-e174. [PMID: 37708569 DOI: 10.1111/jdv.19516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 09/08/2023] [Indexed: 09/16/2023]
Affiliation(s)
- Xiaowei Song
- School of Automation, Beijing Institute of Technology University, Beijing, China
| | - Shuli Guo
- School of Automation, Beijing Institute of Technology University, Beijing, China
| | - Lina Han
- Department of Cardiology, The Second Medical Center, National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
| | - Yuanyuan Zhao
- School of Automation, Beijing Institute of Technology University, Beijing, China
| | - Anil Baris Cekderi
- School of Automation, Beijing Institute of Technology University, Beijing, China
| | - Guowei Wang
- School of Automation, Beijing Institute of Technology University, Beijing, China
| |
Collapse
|
26
|
Riaz S, Naeem A, Malik H, Naqvi RA, Loh WK. Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:8457. [PMID: 37896548 PMCID: PMC10611214 DOI: 10.3390/s23208457] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 10/09/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023]
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is challenging and time-consuming due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents the federated learning (FL) and transfer learning (TL) techniques that have been widely applied. The study evaluates FL and TL classifiers in terms of the performance metrics reported in research studies, including true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). The review was assembled and systematized from well-reputed studies published in eminent fora between January 2018 and July 2023, compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR, covering the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research, and future directions and opportunities are identified to help interested researchers in the automated classification of melanoma and nonmelanoma skin cancers.
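Since the review compares classifiers on TPR, TNR, AUC, and accuracy, a small helper that computes these metrics for a binary melanoma/nonmelanoma classifier is sketched below with scikit-learn; it is a generic illustration, not code from any of the reviewed studies.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def binary_metrics(y_true, y_score, threshold=0.5):
    """Return accuracy, TPR, TNR, and AUC for a binary classifier."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "TPR": tp / (tp + fn),   # sensitivity / recall
        "TNR": tn / (tn + fp),   # specificity
        "AUC": roc_auc_score(y_true, y_score),
    }

# Example: binary_metrics([0, 1, 1, 0], [0.2, 0.9, 0.6, 0.4])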
Collapse
Affiliation(s)
- Shafia Riaz
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; (S.R.); (H.M.)
| | - Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan;
| | - Hassaan Malik
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; (S.R.); (H.M.)
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan;
| | - Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
| |
Collapse
|
27
|
Bibi S, Khan MA, Shah JH, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics (Basel) 2023; 13:3063. [PMID: 37835807 PMCID: PMC10572512 DOI: 10.3390/diagnostics13193063] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Revised: 09/19/2023] [Accepted: 09/24/2023] [Indexed: 10/15/2023] Open
Abstract
Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence, and the considerable death rate linked with melanoma makes early detection essential for immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hairs and noise, as well as irregularity of lesion shape and color, irrelevant features, and textures. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified with a residual block at the end and trained through transfer learning, with a genetic algorithm applied to select hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases classification accuracy, but some irrelevant information is also retained; therefore, an algorithm called marine predator optimization (MPA)-controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experiments, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison is conducted with several recent techniques, showing that the proposed framework outperforms them.
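Only the serial (concatenation-based) part of the fusion described above is sketched below; the harmonic-mean weighting and the MPA-controlled Rényi-entropy selection are specific to the paper and are not reproduced, so the L2 normalization shown is a plain illustrative placeholder.

import numpy as np

def serial_fusion(feat_a, feat_b, eps=1e-12):
    """L2-normalise two deep feature matrices and concatenate them serially.

    feat_a, feat_b: arrays of shape (n_samples, d1) and (n_samples, d2),
    e.g. DarkNet-53 and DenseNet-201 embeddings. The paper's additional
    harmonic-mean weighting step is not reproduced here.
    """
    a = feat_a / (np.linalg.norm(feat_a, axis=1, keepdims=True) + eps)
    b = feat_b / (np.linalg.norm(feat_b, axis=1, keepdims=True) + eps)
    return np.concatenate([a, b], axis=1)

# fused = serial_fusion(darknet_feats, densenet_feats)  # shape: (n, d1 + d2)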
Collapse
Affiliation(s)
- Sobia Bibi
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan; (S.B.); (J.H.S.)
| | - Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon;
- Department of CS, HITEC University, Taxila 47080, Pakistan
| | - Jamal Hussain Shah
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan; (S.B.); (J.H.S.)
| | - Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania;
| | - Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia; (A.A.); (M.M.)
| | - Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia; (A.A.); (M.M.)
| | - Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia;
| | - Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
| |
Collapse
|
28
|
El-khatib H, Ștefan AM, Popescu D. Performance Improvement of Melanoma Detection Using a Multi-Network System Based on Decision Fusion. APPLIED SCIENCES 2023; 13:10536. [DOI: 10.3390/app131810536] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
Abstract
The incidence of melanoma cases continues to rise, underscoring the critical need for early detection and treatment. Recent studies highlight the significance of deep learning in melanoma detection, leading to improved accuracy. Computer-assisted detection is being explored extensively, especially in medicine, because its benefit is to save human lives; this direction should therefore be fully exploited and introduced into routine screening to improve patient prognosis, support disease prevention, reduce treatment costs, improve population management, and empower patients. All these aspects were taken into consideration when implementing an EHR system with automated melanoma detection. The first step, as presented in this paper, is to build a system based on the fusion of decisions from multiple neural networks (DarkNet-53, DenseNet-201, GoogLeNet, Inception-V3, InceptionResNet-V2, ResNet-50, and ResNet-101) and to compare this classifier, based on the F1 score, with four other applications for further integration into an EHR platform: Google Teachable Machine, Microsoft Azure Machine Learning, Google Vertex AI, and SalesForce Einstein Vision. We trained all models on two databases, ISIC 2020 and DermIS, to also test their adaptability to a wide range of images. Comparisons with state-of-the-art research and existing applications confirm the promising performance of the proposed system.
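A decision-fusion stage of the kind described, where class probabilities from several backbone networks are combined into a single prediction, can be sketched as weighted soft voting; the per-network weights (for example, validation F1 scores) and the network names in the comments are assumptions for illustration.

import numpy as np

def fuse_decisions(prob_maps, weights=None):
    """Weighted soft voting over per-network class-probability matrices.

    prob_maps: list of arrays, each of shape (n_samples, n_classes),
               e.g. softmax outputs of ResNet-50, DenseNet-201, Inception-V3.
    weights:   optional per-network weights (e.g. validation F1 scores).
    """
    stacked = np.stack(prob_maps, axis=0)           # (n_networks, n_samples, n_classes)
    if weights is None:
        weights = np.ones(stacked.shape[0])
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    fused = np.tensordot(weights, stacked, axes=1)  # (n_samples, n_classes)
    return fused.argmax(axis=1)

# labels = fuse_decisions([p_resnet, p_densenet, p_inception], weights=[0.92, 0.90, 0.88])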
Collapse
Affiliation(s)
- Hassan El-khatib
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
| | - Ana-Maria Ștefan
- Faculty of Electronics and Telecommunications, University Politehnica of Bucharest, 060042 Bucharest, Romania
| | - Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
| |
Collapse
|
29
|
Radhika V, Chandana BS. MSCDNet-based multi-class classification of skin cancer using dermoscopy images. PeerJ Comput Sci 2023; 9:e1520. [PMID: 37705664 PMCID: PMC10495937 DOI: 10.7717/peerj-cs.1520] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 07/18/2023] [Indexed: 09/15/2023]
Abstract
Background Skin cancer is a life-threatening disease, and early detection improves the chances of recovery. Skin cancer detection based on deep learning algorithms has recently grown popular. In this research, a new deep learning-based network model, the automatic Multi-class Skin Cancer Detection Network (MSCD-Net), is presented for classifying multiple skin cancers, including melanoma, benign keratosis, melanocytic nevi, and basal cell carcinoma. Methods The study proposes an efficient semantic segmentation deep learning model, DenseUNet, for skin lesion segmentation; lesions are segmented using a substantially deeper network with fewer trainable parameters. The most relevant features are then selected using the Binary Dragonfly Algorithm (BDA), and classification is performed on the selected features with SqueezeNet. Results The performance of the proposed model is evaluated on the ISIC 2019 dataset. The proposed DenseUNet segmentation model uses DenseNet connections and UNet links, which produce low-level features and provide better segmentation results. The performance of the proposed MSCD-Net model is superior to previous research in terms of effectiveness and efficiency on the standard ISIC 2019 dataset.
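A minimal sketch of the feature-selection step is shown below: a binary mask, such as one produced by the Binary Dragonfly Algorithm, is applied to a feature matrix before classification; the mask and the downstream classifier are assumed inputs, and the BDA search itself is not implemented here.

import numpy as np

def apply_feature_mask(X, mask):
    """Keep only the feature columns whose binary mask entry is 1.

    X:    feature matrix of shape (n_samples, n_features)
    mask: binary vector produced by a wrapper-based selector such as the
          Binary Dragonfly Algorithm (the BDA search itself is not shown).
    """
    return X[:, np.asarray(mask, dtype=bool)]

# X_selected = apply_feature_mask(X_features, bda_mask)
# X_selected is then passed to the downstream classifier (SqueezeNet in the paper).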
Collapse
Affiliation(s)
| | - B. Sai Chandana
- School of Computer Science Engineering, VIT-AP University, Amaravathi, India
| |
Collapse
|
30
|
Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S. SBXception: A Shallower and Broader Xception Architecture for Efficient Classification of Skin Lesions. Cancers (Basel) 2023; 15:3604. [PMID: 37509267 PMCID: PMC10377736 DOI: 10.3390/cancers15143604] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 07/05/2023] [Accepted: 07/08/2023] [Indexed: 07/30/2023] Open
Abstract
Skin cancer is a major public health concern around the world, and its identification is critical for effective treatment and improved outcomes. Deep learning models have shown considerable promise in assisting dermatologists with skin cancer diagnosis. This study proposes SBXception, a shallower and broader variant of the Xception network: it uses Xception as the base model for skin cancer classification and improves its performance by reducing the depth and expanding the breadth of the architecture. We used the HAM10000 dataset, which contains 10,015 dermatoscopic images of skin lesions classified into seven categories, for training and testing the proposed model. Fine-tuned on HAM10000, the new model reached an accuracy of 96.97% on a holdout test set. SBXception also achieved a significant performance enhancement with 54.27% fewer training parameters and reduced training time compared to the base model. Our findings show that reducing the depth and expanding the breadth of the Xception architecture can greatly improve its performance in skin cancer classification.
Collapse
Affiliation(s)
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
| | - Abdoh Jabbari
- College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
| | - Muneer Ahmad
- Department of Human and Digital Interface, Woosong University, Daejeon 34606, Republic of Korea
| | - Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
| |
Collapse
|