1. Ahmed A, Sun G, Bilal A, Li Y, Ebad SA. Precision and efficiency in skin cancer segmentation through a dual encoder deep learning model. Sci Rep 2025;15:4815. PMID: 39924555; PMCID: PMC11808120; DOI: 10.1038/s41598-025-88753-3.
Abstract
Skin cancer is a prevalent health concern, and accurate segmentation of skin lesions is crucial for early diagnosis. Existing methods for skin lesion segmentation often face trade-offs between efficiency and feature extraction capabilities. This paper proposes Dual Skin Segmentation (DuaSkinSeg), a deep-learning model, to address this gap by utilizing dual encoders for improved performance. DuaSkinSeg leverages a pre-trained MobileNetV2 for efficient local feature extraction. Subsequently, a Vision Transformer-Convolutional Neural Network (ViT-CNN) encoder-decoder architecture extracts higher-level features focusing on long-range dependencies. This approach aims to combine the efficiency of MobileNetV2 with the feature extraction capabilities of the ViT encoder for improved segmentation performance. To evaluate DuaSkinSeg's effectiveness, we conducted experiments on three publicly available benchmark datasets: ISIC 2016, ISIC 2017, and ISIC 2018. The results demonstrate that DuaSkinSeg achieves competitive performance compared to existing methods, highlighting the potential of the dual encoder architecture for accurate skin lesion segmentation.
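
For illustration, here is a minimal PyTorch sketch of the dual-encoder idea described in this abstract. It is a schematic, not the authors' DuaSkinSeg code: the second branch and all layer sizes are invented stand-ins for the ViT-CNN encoder, and only the MobileNetV2 branch mirrors the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2  # assumes a recent torchvision

class DualEncoderSeg(nn.Module):
    """Two encoders (local and global context) fused before a small decoder."""
    def __init__(self, num_classes=1):
        super().__init__()
        self.local_enc = mobilenet_v2(weights=None).features       # local features, (B, 1280, H/32, W/32)
        self.global_enc = nn.Sequential(                           # invented stand-in for the ViT-CNN branch
            nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, 3, stride=8, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(1280 + 256, 256, 1)                  # channel-wise fusion of both branches
        self.decoder = nn.Sequential(
            nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, num_classes, 1),
        )

    def forward(self, x):
        f_local = self.local_enc(x)
        f_global = F.interpolate(self.global_enc(x), size=f_local.shape[-2:],
                                 mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([f_local, f_global], dim=1))
        logits = self.decoder(fused)
        # upsample back to the input resolution for a per-pixel lesion mask
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

mask_logits = DualEncoderSeg()(torch.randn(1, 3, 224, 224))        # -> (1, 1, 224, 224)
```
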
Affiliations
- Asaad Ahmed, School of Information Science and Technology, Beijing University of Technology, Beijing, 100124, China
- Guangmin Sun, School of Information Science and Technology, Beijing University of Technology, Beijing, 100124, China
- Anas Bilal, College of Information Science and Technology, Hainan Normal University, Haikou, 571158, China
- Yu Li, School of Information Science and Technology, Beijing University of Technology, Beijing, 100124, China
- Shouki A Ebad, Center for Scientific Research and Entrepreneurship, Northern Border University, Arar, 73213, Saudi Arabia

2. Natha P, Tera SP, Chinthaginjala R, Rab SO, Narasimhulu CV, Kim TH. Boosting skin cancer diagnosis accuracy with ensemble approach. Sci Rep 2025;15:1290. PMID: 39779772; PMCID: PMC11711234; DOI: 10.1038/s41598-024-84864-5.
Abstract
Skin cancer is common and deadly, so a correct diagnosis at an early stage is essential. Effective therapy depends on precise classification of the several forms of skin cancer, each with its own characteristics. Dermoscopy and other advanced imaging methods produce detailed lesion images and have improved early detection, yet analyzing these images to distinguish benign from malignant tumors remains difficult. Better predictive modeling is needed because current diagnostic procedures frequently yield inaccurate and inconsistent results. In dermatology, machine learning (ML) models are becoming essential for the automatic detection and classification of skin cancer lesions from image data. This work seeks to improve skin cancer prediction with an ensemble model that combines several ML approaches to exploit their strengths and offset their weaknesses. We introduce a Max Voting method for optimizing skin cancer classification. On the HAM10000 and ISIC 2018 datasets, we trained and assessed three distinct ML models: Random Forest (RF), Multi-layer Perceptron Neural Network (MLPN), and Support Vector Machine (SVM). Combining their predictions with the Max Voting technique increased overall performance. Moreover, feature vectors optimally produced from the image data by a Genetic Algorithm (GA) were supplied to the ML models. We demonstrate that the Max Voting method greatly improves predictive performance, reaching an accuracy of 94.70% and producing the best results for F1-measure, recall, and precision. Max Voting proved to be the most dependable and robust approach, combining the benefits of several pre-trained ML models into an efficient method for classifying skin cancer lesions.
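
The max-voting step itself is straightforward to reproduce with scikit-learn. The sketch below uses synthetic features in place of the GA-selected image descriptors and default hyperparameters, so it only illustrates the ensemble mechanics, not the reported results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# synthetic stand-in for GA-optimized feature vectors extracted from lesion images
X, y = make_classification(n_samples=1000, n_features=30, n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("mlp", MLPClassifier(max_iter=500, random_state=0)),
                ("svm", SVC(random_state=0))],
    voting="hard",   # max voting: the class predicted by most of the three models wins
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```
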
Affiliations
- Priya Natha, Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Green Fields, Vaddeswaram, Guntur, Andhra Pradesh, 522302, India
- Sivarama Prasad Tera, Department of Electronics and Electrical Engineering, Indian Institute of Technology, Guwahati, Assam, 781039, India
- Ravikumar Chinthaginjala, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India
- Safia Obaidur Rab, Department of Clinical Laboratory Sciences, College of Applied Medical Science, King Khalid University, Abha, Saudi Arabia
- C Venkata Narasimhulu, Department of Electronics and Communication Engineering, Chaitanya Bharati Institute of Technology, Hyderabad, 500075, India
- Tae Hoon Kim, School of Information and Electronic Engineering and Zhejiang Key Laboratory of Biomedical Intelligent Computing Technology, Zhejiang University of Science and Technology, No. 318, Hangzhou, Zhejiang, China

3. Prakash UM, Iniyan S, Dutta AK, Alsubai S, Naga Ramesh JV, Mohanty SN, Dudekula KV. Multi-scale feature fusion of deep convolutional neural networks on cancerous tumor detection and classification using biomedical images. Sci Rep 2025;15:1105. PMID: 39774273; PMCID: PMC11707024; DOI: 10.1038/s41598-024-84949-1.
Abstract
In the present scenario, cancerous tumours are common in humans due to major changes in the surrounding environment. Skin cancer is a considerable disease detected among people; it is the uncontrolled growth of atypical skin cells. It occurs when DNA damage to skin cells, or a genetic defect, causes them to multiply rapidly and form malignant tumours, and many skin cancers arise from DNA changes induced by ultraviolet light affecting skin cells. This disease is a worldwide health problem, so an accurate and timely diagnosis is needed for effective treatment. Recent developments in medical technology, such as intelligent recognition and analysis using machine learning (ML) and deep learning (DL) techniques, have transformed the analysis and treatment of these conditions, and such approaches are highly effective for recognizing skin cancer in biomedical imaging. This study develops a Multi-scale Feature Fusion of Deep Convolutional Neural Networks on Cancerous Tumor Detection and Classification (MFFDCNN-CTDC) model. The main aim of the MFFDCNN-CTDC model is to detect and classify cancerous tumours using biomedical imaging. To eliminate unwanted noise, the method first applies a Sobel filter (SF) in the image preprocessing stage. For segmentation, UNet3+ is employed, providing precise localization of tumour regions. Next, the model performs multi-scale feature fusion by combining ResNet50 and EfficientNet architectures, capitalizing on their complementary strengths in extracting features at different depths and scales of the input images. A convolutional autoencoder (CAE) is used for classification. Finally, parameter tuning is performed with a hybrid fireworks whale optimization algorithm (FWWOA) to enhance the classification performance of the CAE. A wide range of experiments was performed to validate the MFFDCNN-CTDC approach, which achieved superior accuracies of 98.78% and 99.02% over existing techniques on the ISIC 2017 and HAM10000 datasets.
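
The multi-scale fusion of ResNet50 and EfficientNet embeddings can be sketched as below with torchvision backbones. The Sobel preprocessing, UNet3+ segmentation, CAE classifier and FWWOA tuning stages of the full pipeline are not reproduced, and the linear head is an invented placeholder.

```python
import torch
from torchvision.models import resnet50, efficientnet_b0   # assumes a recent torchvision

res = resnet50(weights=None)
res.fc = torch.nn.Identity()            # expose the 2048-d pooled embedding
eff = efficientnet_b0(weights=None)
eff.classifier = torch.nn.Identity()    # expose the 1280-d pooled embedding

x = torch.randn(4, 3, 224, 224)         # a batch of preprocessed lesion images
with torch.no_grad():
    fused = torch.cat([res(x), eff(x)], dim=1)     # (4, 2048 + 1280) fused feature vector
head = torch.nn.Linear(2048 + 1280, 2)              # placeholder classifier over fused features
logits = head(fused)
```
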
Affiliations
- U M Prakash, School of Computing, SRM Institute of Science and Technology, Kaatankulathur, Chennai, 603203, India
- S Iniyan, School of Computing, SRM Institute of Science and Technology, Kaatankulathur, Chennai, 603203, India
- Ashit Kumar Dutta, Department of Computer Science and Information Systems, College of Applied Sciences, AlMaarefa University, 13713, Ad Diriyah, Riyadh, Kingdom of Saudi Arabia
- Shtwai Alsubai, Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, P.O. Box 151, 11942, Al-Kharj, Saudi Arabia
- Janjhyam Venkata Naga Ramesh, Department of Computer Science and Engineering, Graphic Era Hill University, Dehradun, Uttarakhand, India; Department of Computer Science and Engineering, Graphic Era Deemed to Be University, Dehradun, Uttarakhand, India
- Sachi Nandan Mohanty, School of Computer Science Engineering (SCOPE), VIT-AP University, Amravati, Andhra Pradesh, India
- Khasim Vali Dudekula, School of Computer Science Engineering (SCOPE), VIT-AP University, Amravati, Andhra Pradesh, India

4. Alotaibi A, AlSaeed D. Skin Cancer Detection Using Transfer Learning and Deep Attention Mechanisms. Diagnostics (Basel) 2025;15:99. PMID: 39795627; PMCID: PMC11720014; DOI: 10.3390/diagnostics15010099.
Abstract
Background/Objectives: Early and accurate diagnosis of skin cancer improves survival rates; however, dermatologists often struggle with lesion detection due to similar pigmentation. Deep learning and transfer learning models have shown promise in diagnosing skin cancers through image processing. Integrating attention mechanisms (AMs) with deep learning has further enhanced the accuracy of medical image classification. While significant progress has been made, further research is needed to improve the detection accuracy. Previous studies have not explored the integration of attention mechanisms with the pre-trained Xception transfer learning model for binary classification of skin cancer. This study aims to investigate the impact of various attention mechanisms on the Xception model's performance in detecting benign and malignant skin lesions. Methods: We conducted four experiments on the HAM10000 dataset. Three models integrated self-attention (SL), hard attention (HD), and soft attention (SF) mechanisms, while the fourth model used the standard Xception without attention mechanisms. Each mechanism analyzed features from the Xception model uniquely: self-attention examined the input relationships, hard-attention selected elements sparsely, and soft-attention distributed the focus probabilistically. Results: Integrating AMs into the Xception architecture effectively enhanced its performance. The accuracy of the Xception alone was 91.05%. With AMs, the accuracy increased to 94.11% using self-attention, 93.29% with soft attention, and 92.97% with hard attention. Moreover, the proposed models outperformed previous studies in terms of the recall metrics, which are crucial for medical investigations. Conclusions: These findings suggest that AMs can enhance performance in relation to complex medical imaging tasks, potentially supporting earlier diagnosis and improving treatment outcomes.
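
As a concrete reference point, the sketch below shows one common way to bolt a self-attention block onto CNN feature maps (a generic non-local, SAGAN-style layer). It is not the exact attention formulation or Xception integration used in the study; tensor sizes are illustrative.

```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a convolutional feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, HW, C//8)
        k = self.key(x).flatten(2)                            # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)                   # (B, HW, HW) pairwise weights
        v = self.value(x).flatten(2)                          # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                           # residual connection

feats = torch.randn(2, 512, 14, 14)       # e.g. feature maps from a backbone network
refined = SelfAttention2d(512)(feats)
```
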
Affiliations
- Areej Alotaibi, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia

5. Ray A, Sarkar S, Schwenker F, Sarkar R. Decoding skin cancer classification: perspectives, insights, and advances through researchers' lens. Sci Rep 2024;14:30542. PMID: 39695157; DOI: 10.1038/s41598-024-81961-3.
Abstract
Skin cancer is a significant global health concern, with timely and accurate diagnosis playing a critical role in improving patient outcomes. In recent years, computer-aided diagnosis systems have emerged as powerful tools for automated skin cancer classification, revolutionizing the field of dermatology. This survey analyzes 107 research papers published over the last 18 years, providing a thorough evaluation of advancements in classification techniques, with a focus on the growing integration of computer vision and artificial intelligence (AI) in enhancing diagnostic accuracy and reliability. The paper begins by presenting an overview of the fundamental concepts of skin cancer, addressing underlying challenges in accurate classification, and highlighting the limitations of traditional diagnostic methods. Extensive examination is devoted to a range of datasets, including the HAM10000 and the ISIC archive, among others, commonly employed by researchers. The exploration then delves into machine learning techniques coupled with handcrafted features, emphasizing their inherent limitations. Subsequent sections provide a comprehensive investigation into deep learning-based approaches, encompassing convolutional neural networks, transfer learning, attention mechanisms, ensemble techniques, generative adversarial networks, vision transformers, and segmentation-guided classification strategies, detailing various architectures, tailored for skin lesion analysis. The survey also sheds light on the various hybrid and multimodal techniques employed for classification. By critically analyzing each approach and highlighting its limitations, this survey provides researchers with valuable insights into the latest advancements, trends, and gaps in skin cancer classification. Moreover, it offers clinicians practical knowledge on the integration of AI tools to enhance diagnostic decision-making processes. This comprehensive analysis aims to bridge the gap between research and clinical practice, serving as a guide for the AI community to further advance the state-of-the-art in skin cancer classification systems.
Affiliations
- Amartya Ray, Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Sujan Sarkar, Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Friedhelm Schwenker, Institute of Neural Information Processing, Ulm University, 89081, Ulm, Germany
- Ram Sarkar, Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India

6. Munjal G, Bhardwaj P, Bhargava V, Singh S, Nagpal N. SkinSage XAI: An explainable deep learning solution for skin lesion diagnosis. Health Care Science 2024;3:438-455. PMID: 39735286; PMCID: PMC11671215; DOI: 10.1002/hcs2.121.
Abstract
Background Skin cancer poses a significant global health threat, with early detection being essential for successful treatment. While deep learning algorithms have greatly enhanced the categorization of skin lesions, the black-box nature of many models limits interpretability, posing challenges for dermatologists. Methods To address these limitations, SkinSage XAI utilizes advanced explainable artificial intelligence (XAI) techniques for skin lesion categorization. A data set of around 50,000 images from the Customized HAM10000, selected for diversity, serves as the foundation. The Inception v3 model is used for classification, supported by gradient-weighted class activation mapping (Grad-CAM) and local interpretable model-agnostic explanations (LIME) algorithms, which provide clear visual explanations for model outputs. Results SkinSage XAI demonstrated high performance, accurately categorizing seven types of skin lesions: dermatofibroma, benign keratosis, melanocytic nevus, vascular lesion, actinic keratosis, basal cell carcinoma, and melanoma. It achieved an accuracy of 96%, with precision at 96.42%, recall at 96.28%, F1 score at 96.14%, and an area under the curve of 99.83%. Conclusions SkinSage XAI represents a significant advancement in dermatology and artificial intelligence by bridging gaps in accuracy and explainability. The system provides transparent, accurate diagnoses, improving decision-making for dermatologists and potentially enhancing patient outcomes.
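
A generic Grad-CAM pass can be reproduced with forward and backward hooks as sketched below. A ResNet-18 stands in for the Inception v3 backbone, and the LIME part of the pipeline is not shown.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18   # stand-in backbone for illustration

model = resnet18(weights=None).eval()
feats, grads = {}, {}
target_layer = model.layer4                # last convolutional block

target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224, requires_grad=True)    # a preprocessed lesion image
model(x)[0].max().backward()                            # backprop the top class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)            # GAP over the gradients
cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))  # weighted sum of activations
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # heatmap normalised to [0, 1]
```
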
Affiliations
- Geetika Munjal, Amity School of Engineering and Technology, Amity University Noida, Noida, Uttar Pradesh, India
- Paarth Bhardwaj, Amity School of Engineering and Technology, Amity University Noida, Noida, Uttar Pradesh, India
- Vaibhav Bhargava, Amity School of Engineering and Technology, Amity University Noida, Noida, Uttar Pradesh, India
- Shivendra Singh, Amity School of Engineering and Technology, Amity University Noida, Noida, Uttar Pradesh, India
- Nimish Nagpal, Amity School of Engineering and Technology, Amity University Noida, Noida, Uttar Pradesh, India

7. Gómez-Martínez V, Chushig-Muzo D, Veierød MB, Granja C, Soguero-Ruiz C. Ensemble feature selection and tabular data augmentation with generative adversarial networks to enhance cutaneous melanoma identification and interpretability. BioData Min 2024;17:46. PMID: 39478549; PMCID: PMC11526724; DOI: 10.1186/s13040-024-00397-7.
Abstract
BACKGROUND Cutaneous melanoma is the most aggressive form of skin cancer, responsible for most skin cancer-related deaths. Recent advances in artificial intelligence, together with the availability of public dermoscopy image datasets, have made it possible to assist dermatologists in melanoma identification. While image feature extraction holds potential for melanoma detection, it often leads to high-dimensional data. Furthermore, most image datasets present the class imbalance problem, where a few classes have numerous samples whereas others are under-represented. METHODS In this paper, we propose to combine ensemble feature selection (FS) methods and data augmentation with conditional tabular generative adversarial networks (CTGAN) to enhance melanoma identification in imbalanced datasets. We employed dermoscopy images from two public datasets, PH2 and Derm7pt, which contain melanoma and not-melanoma lesions. To capture intrinsic information from skin lesions, we followed two feature extraction (FE) approaches: handcrafted and embedding features. For the former, color, geometric and first-, second-, and higher-order texture features were extracted, whereas for the latter, embeddings were obtained using ResNet-based models. To alleviate the high dimensionality of the extracted features, ensemble FS with filter methods was used and evaluated. For data augmentation, we conducted a progressive analysis of the imbalance ratio (IR), related to the number of synthetic samples created, and evaluated its impact on the predictive results. To gain interpretability of the predictive models, we used SHAP, bootstrap resampling statistical tests and UMAP visualizations. RESULTS The combination of ensemble FS, CTGAN, and linear models achieved the best predictive results, with AUCROC values of 87% (with support vector machine and IR=0.9) and 76% (with LASSO and IR=1.0) for PH2 and Derm7pt, respectively. We also identified that melanoma lesions were mainly characterized by features related to color, while not-melanoma lesions were characterized by texture features. CONCLUSIONS Our results demonstrate the effectiveness of ensemble FS and synthetic data in the development of models that accurately identify melanoma. This research advances skin lesion analysis, contributing to both melanoma detection and the interpretation of the main features for its identification.
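
Minority-class augmentation with CTGAN on tabular lesion features follows the pattern sketched below, assuming the open-source ctgan package. The column names, counts and the SVM head are invented for illustration, and the ensemble feature-selection stage is omitted.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN                  # assumes the `ctgan` package is installed
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300                                   # toy feature table: roughly 15% melanoma rows
df = pd.DataFrame({
    "mean_red": rng.normal(0.5, 0.1, n),
    "contrast": rng.normal(0.3, 0.1, n),
    "label": (rng.random(n) < 0.15).astype(int),
})

minority = df[df["label"] == 1]
gan = CTGAN(epochs=10)                    # short run, for illustration only
gan.fit(minority, discrete_columns=["label"])
synthetic = gan.sample(len(df) - 2 * len(minority))   # roughly balance the two classes

augmented = pd.concat([df, synthetic], ignore_index=True)
clf = SVC(probability=True).fit(augmented[["mean_red", "contrast"]], augmented["label"])
```
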
Affiliations
- Vanesa Gómez-Martínez, Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain
- David Chushig-Muzo, Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain
- Marit B Veierød, Oslo Centre for Biostatistics and Epidemiology, Department of Biostatistics, Institute of Basic Medical Sciences, University of Oslo, Oslo, Norway
- Conceição Granja, Norwegian Centre for E-health Research, University Hospital of North Norway, Tromsø, 9019, Norway
- Cristina Soguero-Ruiz, Department of Signal Theory and Communications, Telematics and Computing Systems, Rey Juan Carlos University, Madrid, 28943, Spain

8. Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024;178:108742. PMID: 38875908; DOI: 10.1016/j.compbiomed.2024.108742.
Abstract
In recent years, there has been a significant improvement in the accuracy of the classification of pigmented skin lesions using artificial intelligence algorithms. Intelligent analysis and classification systems are significantly superior to visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited due to a lack of generalizability and risks of potential misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as further promising areas for potential research development. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were selected from electronic scientific publishers, of which 171 articles were included in the presented systematic review. All selected scientific articles are distributed according to the proposed neural network algorithms from machine learning to multimodal intelligent architectures and are described in the corresponding sections of the manuscript. This research aims to explore automated skin cancer recognition systems, from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, visual transformers (ViT), and generative and spiking neural networks. In addition, as a result of the analysis, future directions of research, prospects, and potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliations
- U A Lyakhova, Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov, Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia

9. Malik FS, Yousaf MH, Sial HA, Viriri S. Exploring dermoscopic structures for melanoma lesions' classification. Front Big Data 2024;7:1366312. PMID: 38590699; PMCID: PMC10999676; DOI: 10.3389/fdata.2024.1366312.
Abstract
Background Melanoma is one of the deadliest skin cancers that originate from melanocytes due to sun exposure, causing mutations. Early detection boosts the cure rate to 90%, but misclassification drops survival to 15-20%. Clinical variations challenge dermatologists in distinguishing benign nevi and melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, emphasizing the need for Artificial Intelligence understanding in dermatology. Objectives In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. The training of AI models faces a challenge known as brittleness, where small changes in input images impact the classification. A study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape. Tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the necessity for additional information, such as dermoscopic structures. Methodology The study utilizes datasets with clinically marked dermoscopic images examined by expert clinicians. Transformers and CNN-based models are employed to classify these images based on dermoscopic structures. Classification results are validated using feature visualization. To assess model susceptibility to image variations, classifiers are evaluated on test sets with original, duplicated, and digitally modified images. Additionally, testing is done on ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: Blue-white veil, dots/globules, and streaks. Results In evaluating model performance, adding convolutions to Vision Transformers proves highly effective for achieving up to 98% accuracy. CNN architectures like VGG-16 and DenseNet-121 reach 50-60% accuracy, performing best with features other than dermoscopic structures. Vision Transformers without convolutions exhibit reduced accuracy on diverse test sets, revealing their brittleness. OpenAI Clip, a pre-trained model, consistently performs well across various test sets. To address brittleness, a mitigation method involving extensive data augmentation during training and 23 transformed duplicates during test time, sustains accuracy. Conclusions This paper proposes a melanoma classification scheme utilizing three dermoscopic structures across Ph2 and Derm7pt datasets. The study addresses AI susceptibility to image variations. Despite a small dataset, future work suggests collecting more annotated datasets and automatic computation of dermoscopic structural features.
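
The test-time mitigation (averaging predictions over transformed duplicates of each image) amounts to the routine below. The backbone, number of classes and the exact set of 23 transforms are placeholders, not the paper's configuration.

```python
import torch
from torchvision.models import resnet18
from torchvision.transforms.functional import hflip, rotate, vflip

model = resnet18(weights=None, num_classes=3).eval()   # e.g. 3 dermoscopic structures
augmentations = [lambda t: t, hflip, vflip,
                 lambda t: rotate(t, 90), lambda t: rotate(t, 180), lambda t: rotate(t, 270)]

x = torch.randn(1, 3, 224, 224)                        # a preprocessed dermoscopic image
with torch.no_grad():
    # average the softmax outputs over all transformed duplicates of the image
    probs = torch.stack([torch.softmax(model(f(x)), dim=1) for f in augmentations]).mean(0)
pred = probs.argmax(dim=1)
```
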
Affiliations
- Fiza Saeed Malik, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Muhammad Haroon Yousaf, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan; School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa
- Serestina Viriri, School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa; School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa

10. Farhatullah, Chen X, Zeng D, Xu J, Nawaz R, Ullah R. Classification of Skin Lesion With Features Extraction Using Quantum Chebyshev Polynomials and Autoencoder From Wavelet-Transformed Images. IEEE Access 2024;12:193923-193936. DOI: 10.1109/access.2024.3502513.
Affiliations
- Farhatullah, School of Computer Science, China University of Geosciences, Wuhan, China
- Xin Chen, School of Automation, China University of Geosciences, Wuhan, China
- Deze Zeng, School of Computer Science, China University of Geosciences, Wuhan, China
- Jiafeng Xu, School of Automation, China University of Geosciences, Wuhan, China
- Rab Nawaz, School of Computer Science and Electronic Engineering, University of Essex, Colchester, U.K.
- Rahmat Ullah, School of Computer Science, China University of Geosciences, Wuhan, China

11. Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023;88:102863. PMID: 37343323; DOI: 10.1016/j.media.2023.102863.
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Affiliations
- Zahra Mirikharaji, Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek, Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Alceu Bissoto, RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Catarina Barata, Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
- Sandra Avila, RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Eduardo Valle, RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
- M Emre Celebi, Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA
- Ghassan Hamarneh, Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada

12. Ahmad N, Shah JH, Khan MA, Baili J, Ansari GJ, Tariq U, Kim YJ, Cha JH. A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI. Front Oncol 2023;13:1151257. PMID: 37346069; PMCID: PMC10281646; DOI: 10.3389/fonc.2023.1151257.
Abstract
Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improved recognition accuracy using computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed in the initial step to increase the dataset size, and then two pretrained deep learning models are employed. Both models have been fine-tuned and trained using deep transfer learning, and both (Xception and ShuffleNet) utilize the global average pooling layer for deep feature extraction. Analysis of this step shows that some important information is missing; therefore, we perform feature fusion. Because the fusion process increases the computational time, we developed an improved Butterfly Optimization Algorithm, which selects only the best features for classification with machine learning classifiers. In addition, a GradCAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets, ISIC2018 and HAM10000, were utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparison of the proposed framework with state-of-the-art methods shows improved accuracy and reduced computational time.
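
The fuse-then-select step can be mimicked as below with a ShuffleNet backbone and a simple mutual-information ranking standing in for the improved Butterfly Optimization Algorithm, which is not reproduced here; the data, feature count k and the SVM are illustrative only.

```python
import numpy as np
import torch
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from torchvision.models import shufflenet_v2_x1_0

backbone = shufflenet_v2_x1_0(weights=None)
backbone.fc = torch.nn.Identity()            # global-average-pooled 1024-d deep features
backbone.eval()

images = torch.randn(40, 3, 224, 224)        # toy stand-in for augmented lesion images
labels = np.random.randint(0, 7, 40)
with torch.no_grad():
    feats = backbone(images).numpy()         # (40, 1024)

# stand-in selector for the paper's improved Butterfly Optimization Algorithm
selector = SelectKBest(mutual_info_classif, k=128).fit(feats, labels)
clf = SVC().fit(selector.transform(feats), labels)
```
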
Affiliations
- Naveed Ahmad, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Jamal Hussain Shah, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Muhammad Attique Khan, Department of Computer Science, HITEC University, Taxila, Pakistan; Department of Informatics, University of Leicester, Leicester, United Kingdom
- Jamel Baili, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Usman Tariq, Department of Management Information Systems, CoBA, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Ye Jin Kim, Department of Computer Science, Hanyang University, Seoul, Republic of Korea
- Jae-Hyuk Cha, Department of Computer Science, Hanyang University, Seoul, Republic of Korea

13. Kasmi R, Hagerty J, Young R, Lama N, Nepal J, Miinch J, Stoecker W, Stanley RJ. SharpRazor: Automatic removal of hair and ruler marks from dermoscopy images. Skin Res Technol 2023;29:e13203. PMID: 37113095; PMCID: PMC10234178; DOI: 10.1111/srt.13203.
Abstract
BACKGROUND The removal of hair and ruler marks is critical in handcrafted image analysis of dermoscopic skin lesions. No other dermoscopic artifacts cause more problems in segmentation and structure detection. PURPOSE The aim of this work is to detect both white and black hair and other artifacts, and then correctly inpaint the image. METHOD We introduce a new algorithm, SharpRazor, to detect hair and ruler marks and remove them from the image. Our multiple-filter approach detects hairs of varying widths within varying backgrounds while avoiding detection of vessels and bubbles. The proposed algorithm utilizes grayscale plane modification, hair enhancement, segmentation using tri-directional gradients, and multiple filters for hair of varying widths. We also develop an alternative entropy-based adaptive thresholding method. White or light-colored hair and ruler marks are detected separately and added to the final hair mask, and a classifier removes noise objects. Finally, a new inpainting technique is presented and used to remove the detected objects from the lesion image. RESULTS The proposed algorithm is tested on two datasets and compared with seven existing methods using accuracy, precision, recall, Dice, and Jaccard scores. SharpRazor is shown to outperform existing methods. CONCLUSION The SharpRazor techniques show promise for removing and inpainting both dark and white hair in a wide variety of lesions.
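
For orientation, a much simplified DullRazor-style hair removal (dark-hair detection with a black-hat filter, then inpainting) looks like the OpenCV sketch below. It omits SharpRazor's multi-filter detection, white-hair handling, ruler-mark detection and noise classifier, and the file path and kernel size are arbitrary.

```python
import cv2
import numpy as np

def remove_dark_hair(bgr: np.ndarray) -> np.ndarray:
    """Detect thin dark structures (hair) and inpaint them."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)   # highlights dark hairs
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)   # binary hair mask
    return cv2.inpaint(bgr, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)

img = cv2.imread("lesion.jpg")     # hypothetical dermoscopy image path
if img is not None:
    clean = remove_dark_hair(img)
```
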
Affiliations
- Reda Kasmi, Faculty of Technology, Laboratoire de Technologie Industrielle et de l'Information (LTII), University of Bejaia, Bejaia, Algeria
- Reagan Young, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- Norsang Lama, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- Januka Nepal, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- Jessica Miinch, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA
- R Joe Stanley, Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA

14. Ahmedt-Aristizabal D, Nguyen C, Tychsen-Smith L, Stacey A, Li S, Pathikulangara J, Petersson L, Wang D. Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging. Comput Methods Programs Biomed 2023;232:107451. PMID: 36893580; DOI: 10.1016/j.cmpb.2023.107451.
Abstract
BACKGROUND AND OBJECTIVES Advanced artificial intelligence and machine learning have great potential to redefine how skin lesions are detected, mapped, tracked and documented. Here, we propose a 3D whole-body imaging system known as 3DSkin-mapper to enable automated detection, evaluation and mapping of skin lesions. METHODS A modular camera rig arranged in a cylindrical configuration was designed to automatically capture images of the entire skin surface of a subject synchronously from multiple angles. Based on the images, we developed algorithms for 3D model reconstruction, data processing and skin lesion detection and tracking based on deep convolutional neural networks. We also introduced a customised, user-friendly, and adaptable interface that enables individuals to interactively visualise, manipulate, and annotate the images. The interface includes built-in features such as mapping 2D skin lesions onto the corresponding 3D model. RESULTS The proposed system is developed for skin lesion screening, the focus of this paper is to introduce the system instead of clinical study. Using synthetic and real images we demonstrate the effectiveness of the proposed system by providing multiple views of a target skin lesion, enabling further 3D geometry analysis and longitudinal tracking. Skin lesions are identified as outliers which deserve more attention from a skin cancer physician. Our detector leverages expert annotated labels to learn representations of skin lesions, while capturing the effects of anatomical variability. It takes only a few seconds to capture the entire skin surface, and about half an hour to process and analyse the images. CONCLUSIONS Our experiments show that the proposed system allows fast and easy whole body 3D imaging. It can be used by dermatological clinics to conduct skin screening, detect and track skin lesions over time, identify suspicious lesions, and document pigmented lesions. The system can potentially save clinicians time and effort significantly. The 3D imaging and analysis has the potential to change the paradigm of whole body photography with many applications in skin diseases, including inflammatory and pigmentary disorders. With reduced time requirements for recording and documenting high-quality skin information, doctors could spend more time providing better-quality treatment based on more detailed and accurate information.
Affiliations
- Chuong Nguyen, Imaging and Computer Vision group, CSIRO Data61, Australia
- Shenghong Li, Imaging and Computer Vision group, CSIRO Data61, Australia
- Lars Petersson, Imaging and Computer Vision group, CSIRO Data61, Australia
- Dadong Wang, Imaging and Computer Vision group, CSIRO Data61, Australia

15. Wang Y, Wang Y, Cai J, Lee TK, Miao C, Wang ZJ. SSD-KD: A self-supervised diverse knowledge distillation method for lightweight skin lesion classification using dermoscopic images. Med Image Anal 2023;84:102693. PMID: 36462373; DOI: 10.1016/j.media.2022.102693.
Abstract
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide. Over the last few years, computer-aided diagnosis has developed rapidly and made great progress in healthcare and medical practice due to advances in artificial intelligence, particularly the adoption of convolutional neural networks. However, most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limited computing resources of portable devices. In this setting, knowledge distillation (KD) has proven to be an efficient tool for improving the adaptability of lightweight models under limited resources while keeping a high-level representation capability. To bridge the gap, this study proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin disease classification. Our method models an intra-instance relational feature representation and integrates it with existing KD research. A dual relational knowledge distillation architecture is trained in a self-supervised manner, while the weighted softened outputs are also exploited to enable the student model to capture richer knowledge from the teacher model. To demonstrate the effectiveness of our method, we conduct experiments on ISIC 2019, a large-scale open-access benchmark of dermoscopic images of skin diseases. Experiments show that our distilled MobileNetV2 can achieve an accuracy as high as 85% for the classification of 8 different skin diseases with minimal parameters and computing requirements. Ablation studies confirm the effectiveness of our intra- and inter-instance relational knowledge integration strategy. Compared with state-of-the-art knowledge distillation techniques, the proposed method demonstrates improved performance. To the best of our knowledge, this is the first deep knowledge distillation application for multi-disease classification on a large-scale dermoscopy database. Our code and models are available at https://github.com/enkiwang/Portable-Skin-Lesion-Diagnosis.
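
The soft-target part of knowledge distillation (the "weighted softened outputs") reduces to the standard loss below. It does not include the paper's self-supervised dual relational branch, and the temperature and weighting values are arbitrary.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted sum of softened teacher-student KL divergence and hard-label cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(8, 8, requires_grad=True)   # student logits, 8 skin-disease classes
teacher = torch.randn(8, 8)                       # logits from a larger teacher network
targets = torch.randint(0, 8, (8,))
distillation_loss(student, teacher, targets).backward()
```
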
Affiliations
- Yongwei Wang, Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore
- Yuheng Wang, School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Cancer Control Research Program, BC Cancer, Vancouver, BC, Canada
- Jiayue Cai, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Tim K Lee, School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Cancer Control Research Program, BC Cancer, Vancouver, BC, Canada
- Chunyan Miao, School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Z Jane Wang, School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada

16. Yue G, Wei P, Zhou T, Jiang Q, Yan W, Wang T. Toward Multicenter Skin Lesion Classification Using Deep Neural Network With Adaptively Weighted Balance Loss. IEEE Trans Med Imaging 2023;42:119-131. PMID: 36063522; DOI: 10.1109/tmi.2022.3204646.
Abstract
Recently, deep neural network-based methods have shown promising advantages in accurately recognizing skin lesions from dermoscopic images. However, most existing works focus more on improving the network framework for better feature representation but ignore the data imbalance issue, limiting their flexibility and accuracy across multiple scenarios in multi-center clinics. Generally, different clinical centers have different data distributions, which presents challenging requirements for the network's flexibility and accuracy. In this paper, we divert the attention from framework improvement to the data imbalance issue and propose a new solution for multi-center skin lesion classification by introducing a novel adaptively weighted balance (AWB) loss to the conventional classification network. Benefiting from AWB, the proposed solution has the following advantages: 1) it is easy to satisfy different practical requirements by only changing the backbone; 2) it is user-friendly with no tuning on hyperparameters; and 3) it adaptively enables small intraclass compactness and pays more attention to the minority class. Extensive experiments demonstrate that, compared with solutions equipped with state-of-the-art loss functions, the proposed solution is more flexible and more competent for tackling the multi-center imbalanced skin lesion classification task with considerable performance on two benchmark datasets. In addition, the proposed solution is proved to be effective in handling the imbalanced gastrointestinal disease classification task and the imbalanced DR grading task. Code is available at https://github.com/Weipeishan2021.
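
The AWB loss itself is not reproduced here, but the baseline it improves upon, a class-weighted cross-entropy with inverse-frequency weights, is shown below; the class counts are illustrative.

```python
import torch
import torch.nn as nn

counts = torch.tensor([6705., 1113., 514., 327., 115.])      # illustrative class frequencies
weights = counts.sum() / (len(counts) * counts)               # inverse-frequency weights
criterion = nn.CrossEntropyLoss(weight=weights)               # simple imbalance-aware baseline

logits = torch.randn(16, 5, requires_grad=True)
labels = torch.randint(0, 5, (16,))
loss = criterion(logits, labels)
```
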

17. A comprehensive analysis of dermoscopy images for melanoma detection via deep CNN features. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104186.

18. An Ensemble of Transfer Learning Models for the Prediction of Skin Cancers with Conditional Generative Adversarial Networks. Diagnostics (Basel) 2022;12:3145. PMID: 36553152; PMCID: PMC9777332; DOI: 10.3390/diagnostics12123145.
Abstract
Skin cancer is one of the most severe forms of cancer, and it can spread to other parts of the body if not detected early; diagnosing and treating patients at an early stage is therefore crucial. Manual skin cancer diagnosis is both time-consuming and expensive, and the high similarity between the various skin cancers often leads to incorrect diagnoses. Improved categorization of multiclass skin cancers requires the development of automated diagnostic systems. Herein, we propose a fully automatic method for classifying several skin cancers by fine-tuning the deep learning models VGG16, ResNet50, and ResNet101. Prior to model creation, the training dataset undergoes data augmentation using traditional image transformation techniques and Generative Adversarial Networks (GANs) to prevent class imbalance issues that may lead to model overfitting. In this study, we investigate the feasibility of creating realistic-looking dermoscopic images using Conditional Generative Adversarial Network (CGAN) techniques. Thereafter, traditional augmentation methods are used to enlarge the existing training set and improve the performance of pre-trained deep models on the skin cancer classification task. This improved performance is then compared with models developed on the unbalanced dataset. In addition, we formed an ensemble of the fine-tuned transfer learning models, trained on both balanced and unbalanced datasets, and used it to make predictions on the data. With appropriate data augmentation, the proposed models attained accuracies of 92% for VGG16, 92% for ResNet50, and 92.25% for ResNet101, and the ensemble of these models increased the accuracy to 93.5%. A comprehensive discussion of model performance concluded that this method likely leads to enhanced performance in skin cancer categorization compared with previous efforts.
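
Averaging the softmax outputs of the three fine-tuned backbones is the core of such an ensemble. The sketch below uses untrained torchvision models with replaced heads, so the CGAN augmentation and fine-tuning steps are assumed to have happened elsewhere.

```python
import torch
from torchvision.models import resnet50, resnet101, vgg16   # assumes a recent torchvision

num_classes = 7
models = [vgg16(weights=None), resnet50(weights=None), resnet101(weights=None)]
models[0].classifier[6] = torch.nn.Linear(4096, num_classes)   # new VGG16 head
for m in models[1:]:
    m.fc = torch.nn.Linear(m.fc.in_features, num_classes)      # new ResNet heads
for m in models:
    m.eval()

x = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in models]).mean(0)
prediction = probs.argmax(dim=1)    # ensemble class = highest averaged probability
```
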

19. SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability. PLoS One 2022;17:e0276836. PMID: 36315487; PMCID: PMC9621459; DOI: 10.1371/journal.pone.0276836.
Abstract
Skin cancer is considered to be the most common human malignancy. Around 5 million new cases of skin cancer are recorded in the United States annually. Early identification and evaluation of skin lesions are of great clinical significance, but the disproportionate dermatologist-patient ratio poses a significant problem in most developing nations. Therefore a novel deep architecture, named as SkiNet, is proposed to provide faster screening solution and assistance to newly trained physicians in the process of clinical diagnosis of skin cancer. The main motive behind SkiNet's design and development is to provide a white box solution, addressing a critical problem of trust and interpretability which is crucial for the wider adoption of Computer-aided diagnosis systems by medical practitioners. The proposed SkiNet is a two-stage pipeline wherein the lesion segmentation is followed by the lesion classification. Monte Carlo dropout and test time augmentation techniques have been employed in the proposed method to estimate epistemic and aleatoric uncertainty. A novel segmentation model named Bayesian MultiResUNet is used to estimate the uncertainty on the predicted segmentation map. Saliency-based methods like XRAI, Grad-CAM and Guided Backprop are explored to provide post-hoc explanations of the deep learning models. The ISIC-2018 dataset is used to perform the experimentation and ablation studies. The results establish the robustness of the proposed model on the traditional benchmarks while addressing the black-box nature of such models to alleviate the skepticism of medical practitioners by incorporating transparency and confidence to the model's prediction.
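
Monte Carlo dropout, as used for epistemic uncertainty, boils down to keeping dropout active at test time and averaging several stochastic passes; the toy classifier head below is invented and is not the Bayesian MultiResUNet of the paper.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(64, 7))

def mc_dropout_predict(model, x, n_samples=30):
    """Average softmax outputs over stochastic forward passes with dropout enabled."""
    model.train()                     # keeps dropout active; freeze batch norm in real models
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)    # predictive mean and per-class variance

features = torch.randn(4, 128)            # e.g. embeddings of four lesion images
mean_probs, uncertainty = mc_dropout_predict(model, features)
```
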

20. Wang Y, Fariah Haq N, Cai J, Kalia S, Lui H, Jane Wang Z, Lee TK. Multi-channel content based image retrieval method for skin diseases using similarity network fusion and deep community analysis. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103893.

21. Aldhyani THH, Verma A, Al-Adhaileh MH, Koundal D. Multi-Class Skin Lesion Classification Using a Lightweight Dynamic Kernel Deep-Learning-Based Convolutional Neural Network. Diagnostics (Basel) 2022;12:2048. PMID: 36140447; PMCID: PMC9497471; DOI: 10.3390/diagnostics12092048.
Abstract
Skin is the body's primary protective layer for the internal organs. Nowadays, due to increasing pollution and multiple other factors, various types of skin disease are growing more common globally. With their variable shapes and multiple types, the classification of skin lesions is a challenging task. Motivated by the growing prevalence of these conditions, a lightweight and efficient model is proposed for highly accurate classification of skin lesions. Dynamic-sized kernels are used across layers to obtain the best results while keeping the number of trainable parameters very small. Further, both ReLU and LeakyReLU activation functions are purposefully used in the proposed model. The model accurately classified all classes of the HAM10000 dataset, achieving an overall accuracy of 97.85%, which is much better than several state-of-the-art heavyweight models. Our work is also compared with popular state-of-the-art and recent existing models.
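
A tiny PyTorch sketch of the general idea (mixed kernel sizes plus both ReLU and LeakyReLU activations in a lightweight stack) is given below; the layer widths are invented and do not reflect the paper's exact architecture.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.LeakyReLU(0.1, inplace=True),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.LeakyReLU(0.1, inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 7),                          # 7 HAM10000 lesion classes
)
logits = model(torch.randn(1, 3, 224, 224))    # -> (1, 7)
```
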
Affiliations
- Theyazn H. H. Aldhyani, Applied College in Abqaiq, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
- Amit Verma, School of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India
- Mosleh Hmoud Al-Adhaileh, Deanship of E-Learning and Distance Education, King Faisal University, P.O. Box 4000, Al-Ahsa 31982, Saudi Arabia
- Deepika Koundal, School of Computer Science, University of Petroleum & Energy Studies, Dehradun 248007, India

22. Zhou J, Wu Z, Jiang Z, Huang K, Guo K, Zhao S. Background selection schema on deep learning-based classification of dermatological disease. Comput Biol Med 2022;149:105966. PMID: 36029748; DOI: 10.1016/j.compbiomed.2022.105966.
Abstract
Skin diseases are one of the most common ailments affecting humans. Artificial intelligence based on deep learning can significantly improve the efficiency of identifying skin disorders and alleviate the scarcity of medical resources. However, the distribution of background information in dermatological datasets is imbalanced, causing generalized deep learning models to perform poorly in skin disease classification. We propose a deep learning schema that combines data preprocessing, data augmentation, and residual networks to study the influence of color-based background selection on a deep model's capacity to learn foreground lesion subject attributes in a skin disease classification problem. First, clinical photographs are annotated by dermatologists, and then the original background information is masked with unique colors to generate several subsets with distinct background colors. Sample-balanced training and test sets are generated using random over/undersampling and data augmentation techniques. Finally, the deep learning networks are independently trained on diverse subsets of backdrop colors to compare the performance of classifiers based on different background information. Extensive experiments demonstrate that color-based background information significantly affects the classification of skin diseases and that classifiers trained on the green subset achieve state-of-the-art performance for classifying black and red skin lesions.
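
The background-masking step, given a lesion annotation, is essentially the NumPy operation below; the image, mask and green colour are illustrative stand-ins for the annotated clinical photographs and the best-performing background subset.

```python
import numpy as np

def mask_background(image: np.ndarray, lesion_mask: np.ndarray, color=(0, 255, 0)):
    """Replace every non-lesion pixel with one uniform colour (green by default)."""
    out = image.copy()
    out[lesion_mask == 0] = color
    return out

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)    # toy clinical photo
mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 1                                           # toy lesion annotation
green_background = mask_background(img, mask)
```
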
Collapse
Affiliation(s)
- Jiancun Zhou
- School of Computer Science and Engineering, Central South University, Changsha 410083, China; College of Information and Electronic Engineering, Hunan City University, Yiyang 413000, China
| | - Zheng Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
| | - Zixi Jiang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
| | - Kai Huang
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China
| | - Kehua Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China.
| | - Shuang Zhao
- Department of Dermatology, Xiangya Hospital, Central South University, Changsha, China; Hunan Engineering Research Center of Skin Health and Disease, Xiangya Hospital, Central South University, Changsha, China; Hunan Key Laboratory of Skin Cancer and Psoriasis, Xiangya Hospital, Central South University, Changsha, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, China.
| |
Collapse
|
23
|
Serrano C, Lazo M, Serrano A, Toledo-Pastrana T, Barros-Tornay R, Acha B. Clinically Inspired Skin Lesion Classification through the Detection of Dermoscopic Criteria for Basal Cell Carcinoma. J Imaging 2022; 8:197. [PMID: 35877641 PMCID: PMC9319034 DOI: 10.3390/jimaging8070197] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 07/05/2022] [Accepted: 07/08/2022] [Indexed: 12/10/2022] Open
Abstract
Background and Objective. Skin cancer is the most common cancer worldwide. One of the most common non-melanoma tumors is basal cell carcinoma (BCC), which accounts for 75% of all skin cancers. There are many benign lesions that can be confused with these types of cancers, leading to unnecessary biopsies. In this paper, a new method to identify the different BCC dermoscopic patterns present in a skin lesion is presented. In addition, this information is applied to classify skin lesions into BCC and non-BCC. Methods. The proposed method combines the information provided by the original dermoscopic image, introduced in a convolutional neural network (CNN), with deep and handcrafted features extracted from color and texture analysis of the image. This color analysis is performed by transforming the image into a uniform color space and into a color appearance model. To demonstrate the validity of the method, a comparison between the classification obtained employing exclusively a CNN with the original image as input and the classification with additional color and texture features is presented. Furthermore, an exhaustive comparison of classification employing different color and texture measures derived from different color spaces is presented. Results. Results show that the classifier with additional color and texture features outperforms a CNN whose input is only the original image. Another important achievement is that a new color cooccurrence matrix, proposed in this paper, improves the results obtained with other texture measures. Finally, sensitivity of 0.99, specificity of 0.94 and accuracy of 0.97 are achieved when lesions are classified into BCC or non-BCC. Conclusions. To the best of our knowledge, this is the first time that a methodology to detect all the possible patterns that can be present in a BCC lesion is proposed. This detection leads to a clinically explainable classification into BCC and non-BCC lesions. In this sense, the classification of the proposed tool is based on the detection of the dermoscopic features that dermatologists employ for their diagnosis.
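A generic color co-occurrence descriptor of the kind mentioned above could be sketched as follows; the quantization level and pixel offset are assumptions, and this is not the specific color co-occurrence matrix proposed in the paper.

```python
import numpy as np

def color_cooccurrence(image_rgb, levels=8, offset=(0, 1)):
    """Generic color co-occurrence matrix over quantized RGB indices: counts how often
    quantized color i appears next to quantized color j at the given pixel offset.
    A simplified stand-in, not the paper's exact descriptor."""
    q = (image_rgb.astype(np.int64) * levels) // 256              # per-channel quantization
    idx = (q[..., 0] * levels + q[..., 1]) * levels + q[..., 2]   # joint color index
    dy, dx = offset
    h, w = idx.shape
    a = idx[0:h - dy, 0:w - dx].ravel()                           # reference pixels
    b = idx[dy:h, dx:w].ravel()                                   # neighbors at the offset
    n = levels ** 3
    mat = np.zeros((n, n), dtype=np.float64)
    np.add.at(mat, (a, b), 1)
    return mat / mat.sum()
```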
Collapse
Affiliation(s)
- Carmen Serrano
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain; (M.L.); (B.A.)
| | - Manuel Lazo
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain; (M.L.); (B.A.)
| | - Amalia Serrano
- Hospital Universitario Virgen Macarena, Calle Dr. Fedriani, 3, 41009 Seville, Spain;
| | - Tomás Toledo-Pastrana
- Hospitales Quironsalud Infanta Luisa y Sagrado Corazón, Calle San Jacinto, 87, 41010 Seville, Spain;
| | | | - Begoña Acha
- Dpto. Teoría de la Señal y Comunicaciones, Universidad de Sevilla, Camino de los Descubrimientos s/n, 41092 Seville, Spain; (M.L.); (B.A.)
| |
Collapse
|
24
|
Oliveira B, Torres HR, Morais P, Baptista A, Fonseca J, Vilaca JL. Classification of Chronic Venous Disorders using an Ensemble Optimization of Convolutional Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:516-519. [PMID: 36086619 DOI: 10.1109/embc48229.2022.9871502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Chronic Venous Disorders (CVD) of the lower limbs are among the most prevalent medical conditions, affecting 35% of adults in Europe and North America. Early diagnosis of CVD is critical; however, diagnosis relies on visual recognition of the various venous disorders, which is time-consuming and dependent on the physician's expertise. Thus, automatic strategies for classifying CVD severity are needed. This paper proposes an automatic ensemble-based strategy of Deep Convolutional Neural Networks (DCNNs) for the classification of CVD severity from medical images. First, a clinical dataset containing 1376 images of patients' legs with CVD at 5 different levels of severity was constructed. The dataset was then randomly split into training, testing, and validation sets. Subsequently, a set of DCNNs was individually applied to the images for classification. Finally, instead of a traditional voting ensemble strategy, the feature vectors extracted from each DCNN were concatenated and fed into a new ensemble optimization network. Experiments showed that the proposed strategy achieved 93.8% accuracy, 93.4% precision, and 92.4% recall. Moreover, compared to the traditional ensemble strategy, an improvement in accuracy of about 2% was registered. The proposed strategy proved accurate and robust for the diagnosis of CVD severity from medical images; nevertheless, further research using an extensive clinical database is required to validate its potential. Clinical relevance: automatic classification of CVD can reduce the probability of underdiagnosis and promote treatment of CVD in its early stages.
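A feature-level fusion of several DCNNs, as opposed to vote-level fusion, might look like the following PyTorch sketch; the choice of ResNet-18 and DenseNet-121 backbones and the size of the fusion head are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class ConcatEnsemble(nn.Module):
    """Sketch of a feature-level ensemble: penultimate features of several CNNs are
    concatenated and fed to a small trainable fusion head (5 CVD severity classes)."""
    def __init__(self, num_classes=5):
        super().__init__()
        r = models.resnet18(weights=None)
        d = models.densenet121(weights=None)
        self.backbone_a = nn.Sequential(*list(r.children())[:-1])   # -> (B, 512, 1, 1)
        self.backbone_b = d.features                                # -> (B, 1024, H', W')
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(nn.Linear(512 + 1024, 256), nn.ReLU(),
                                  nn.Linear(256, num_classes))

    def forward(self, x):
        fa = torch.flatten(self.backbone_a(x), 1)
        fb = torch.flatten(self.pool(self.backbone_b(x)), 1)
        return self.head(torch.cat([fa, fb], dim=1))
```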
Collapse
|
25
|
Elashiri MA, Rajesh A, Nath Pandey S, Kumar Shukla S, Urooj S, Lay-Ekuakille A. Ensemble of weighted deep concatenated features for the skin disease classification model using modified long short term memory. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
26
|
Gouda W, Sama NU, Al-Waakid G, Humayun M, Jhanjhi NZ. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare (Basel) 2022; 10:healthcare10071183. [PMID: 35885710 PMCID: PMC9324455 DOI: 10.3390/healthcare10071183] [Citation(s) in RCA: 36] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2022] [Revised: 06/13/2022] [Accepted: 06/15/2022] [Indexed: 12/12/2022] Open
Abstract
An increasing number of genetic and metabolic anomalies have been found to lead to cancer, which is often fatal. Cancerous cells may spread to any part of the body, where they can be life-threatening. Skin cancer is one of the most common types of cancer, and its frequency is increasing worldwide. The main subtypes of skin cancer are squamous and basal cell carcinomas and melanoma, with melanoma being clinically aggressive and responsible for most deaths. Skin cancer screening is therefore necessary. One of the best methods to identify skin cancer accurately and swiftly is deep learning (DL). In this research, a convolutional neural network (CNN) was used to distinguish the two primary classes of tumors, malignant and benign, using the ISIC2018 dataset. This dataset comprises 3533 skin lesions, including benign, malignant, nonmelanocytic, and melanocytic tumors. The photos were first retouched and enhanced with ESRGAN, then augmented, normalized, and resized during preprocessing. Skin lesion photos were classified with a CNN whose results were aggregated over many training repetitions. Multiple transfer learning models, such as ResNet50, InceptionV3, and Inception-ResNet, were then fine-tuned. In addition to experimenting with several models (the designed CNN, ResNet50, InceptionV3, and Inception-ResNet), this study's innovation and contribution is the use of ESRGAN as a preprocessing step. The designed model showed results comparable to the pretrained models. Simulations on the ISIC 2018 skin lesion dataset showed that the suggested strategy was successful: the CNN achieved an accuracy of 83.2%, compared with ResNet50 (83.7%), InceptionV3 (85.8%), and Inception-ResNet (84%).
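A hedged sketch of the described transfer-learning step is shown below, assuming the ESRGAN enhancement has already been applied to the images and using torchvision's pretrained ResNet50 as one of the fine-tuned backbones; the transform values and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Preprocessing roughly mirroring the described pipeline (resize, augment, normalize);
# the ESRGAN enhancement step is assumed to have been applied to the images beforehand.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Fine-tuning a pretrained backbone for the benign-vs-malignant task.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: benign, malignant
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```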
Collapse
Affiliation(s)
- Walaa Gouda
- Department of Computer Engineering and Network, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Electrical Engineering Department, Faculty of Engineering at Shoubra, Benha University, Cairo 4272077, Egypt
- Correspondence: (W.G.); (M.H.)
| | - Najm Us Sama
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Malaysia;
| | - Ghada Al-Waakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia;
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Correspondence: (W.G.); (M.H.)
| | - Noor Zaman Jhanjhi
- School of Computer Science and Engineering (SCE), Taylor’s University, Subang Jaya 47500, Malaysia;
| |
Collapse
|
27
|
Colored Texture Analysis Fuzzy Entropy Methods with a Dermoscopic Application. ENTROPY 2022; 24:e24060831. [PMID: 35741551 PMCID: PMC9223301 DOI: 10.3390/e24060831] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2022] [Revised: 06/09/2022] [Accepted: 06/11/2022] [Indexed: 02/05/2023]
Abstract
Texture analysis is a subject of intensive research focus due to its significant role in the field of image processing. However, few studies focus on colored texture analysis, and even fewer use information theory concepts. Entropy measures have proven competent for grayscale images; however, to the best of our knowledge, there are as yet no well-established entropy methods that deal with colored images. Therefore, we propose the recent colored bidimensional fuzzy entropy measure, FuzEnC2D, and introduce its new multi-channel approaches, FuzEnV2D and FuzEnM2D, for the analysis of colored images. We investigate their sensitivity to parameters and their ability to identify images with different degrees of irregularity, and therefore different textures. Moreover, we study their behavior with colored Brodatz images in different color spaces. After verifying the results with test images, we employ the three methods for analyzing dermoscopic images of malignant melanoma and benign melanocytic nevi. FuzEnC2D, FuzEnV2D, and FuzEnM2D show good ability to differentiate the two pigmented skin lesions, which are similar in appearance. The results outperform those of a well-known texture analysis measure. Our work provides the first entropy measure studying colored images using both single- and multi-channel approaches.
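The following simplified sketch conveys the idea behind a bidimensional fuzzy entropy (patch similarity scored with an exponential fuzzy membership) together with a mean multi-channel variant; the parameter defaults and the per-channel averaging are assumptions, and this is not the authors' exact FuzEnC2D/FuzEnV2D/FuzEnM2D code.

```python
import numpy as np

def fuzzy_entropy_2d(channel, m=2, r_factor=0.2, n=2):
    """Simplified bidimensional fuzzy entropy of one image channel.

    Compact illustration only: pairwise patch similarity via an exponential fuzzy
    membership. Quadratic in the number of patches, so intended for small crops."""
    u = channel.astype(np.float64)
    r = r_factor * u.std()

    def phi(size):
        h, w = u.shape
        patches = np.array([
            u[i:i + size, j:j + size].ravel()
            for i in range(h - size + 1) for j in range(w - size + 1)
        ])
        patches -= patches.mean(axis=1, keepdims=True)         # remove local baseline
        # Chebyshev distance between every pair of patches
        d = np.abs(patches[:, None, :] - patches[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)                             # fuzzy membership degree
        mask = ~np.eye(len(patches), dtype=bool)                # exclude self-matches
        return sim[mask].mean()

    return np.log(phi(m)) - np.log(phi(m + 1))

# Mean multi-channel variant: average the entropy of the R, G, B channels.
# fuz_mean = np.mean([fuzzy_entropy_2d(img[..., c]) for c in range(3)])
```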
Collapse
|
28
|
Talavera-Martínez L, Bibiloni P, Giacaman A, Taberner R, Hernando LJDP, González-Hidalgo M. A novel approach for skin lesion symmetry classification with a deep learning model. Comput Biol Med 2022; 145:105450. [DOI: 10.1016/j.compbiomed.2022.105450] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 03/02/2022] [Accepted: 03/22/2022] [Indexed: 11/29/2022]
|
29
|
Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput Biol Med 2022; 146:105580. [PMID: 35551012 DOI: 10.1016/j.compbiomed.2022.105580] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Revised: 04/14/2022] [Accepted: 04/30/2022] [Indexed: 02/07/2023]
Abstract
As the second leading cause of mortality worldwide, cancer is a perilous disease in which advanced-stage diagnosis may do little to protect patients from death. Efforts to provide a sustainable architecture with proven cancer prevention estimates and provision for early diagnosis are therefore urgently needed. The advent of machine learning methods has enriched cancer diagnosis with greater efficiency and lower error rates than human assessment. A significant revolution has been witnessed over the past decade in the development of machine learning and deep learning assisted systems for the segmentation and classification of various cancers. This paper reviews the detection of various types of cancer across different data modalities using machine learning and deep learning based methods, along with the feature extraction techniques and benchmark datasets utilized in studies from the past six years. The focus of this study is to review, analyse, classify, and address recent developments in the detection and diagnosis of six types of cancer, i.e., breast, lung, liver, skin, brain, and pancreatic cancer, using machine learning and deep learning techniques. State-of-the-art techniques are clustered into groups, results are examined through key performance indicators such as accuracy, area under the curve, precision, sensitivity, and Dice score on benchmark datasets, and the review concludes with future research challenges.
Collapse
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India.
| | - Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
| | - Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
| |
Collapse
|
30
|
Dual attention based network for skin lesion classification with auxiliary learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
|
31
|
Multiclass Skin Lesion Classification Using a Novel Lightweight Deep Learning Framework for Smart Healthcare. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12052677] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Skin lesion classification has recently attracted significant attention. Physicians typically spend considerable time analyzing skin lesions because of the high similarity between them. An automated classification system using deep learning can assist physicians in detecting the skin lesion type and improve patient health. Skin lesion classification has become a hot research area with the evolution of deep learning architectures. In this study, we propose a novel method using a new segmentation approach and wide-ShuffleNet for skin lesion classification. First, we calculate the entropy-based weighting and first-order cumulative moment (EW-FCM) of the skin image. These values are used to separate the lesion from the background. Then, we feed the segmentation result into a new deep learning structure, wide-ShuffleNet, to determine the skin lesion type. We evaluated the proposed method on two large datasets: HAM10000 and ISIC2019. Our numerical results show that EW-FCM and wide-ShuffleNet achieve higher accuracy than state-of-the-art approaches. Additionally, the proposed method is extremely lightweight and suitable for small systems such as mobile healthcare platforms.
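A rough sketch of the two-stage pipeline is given below; Otsu thresholding stands in for the paper's EW-FCM segmentation and torchvision's ShuffleNetV2 stands in for the custom wide-ShuffleNet, so both components are explicit substitutions for illustration only.

```python
import cv2
import torch.nn as nn
from torchvision import models

def segment_lesion(gray):
    """Generic stand-in for the paper's EW-FCM step: Otsu thresholding on the
    grayscale image, keeping the darker region as the candidate lesion."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

# Stand-in classifier: torchvision's ShuffleNetV2 (the paper uses a custom wide-ShuffleNet).
classifier = models.shufflenet_v2_x1_0(weights=None)
classifier.fc = nn.Linear(classifier.fc.in_features, 7)   # e.g., 7 HAM10000 classes
```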
Collapse
|
32
|
Yu Z, Nguyen J, Nguyen TD, Kelly J, Mclean C, Bonnington P, Zhang L, Mar V, Ge Z. Early Melanoma Diagnosis With Sequential Dermoscopic Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:633-646. [PMID: 34648437 DOI: 10.1109/tmi.2021.3120091] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Dermatologists often diagnose or rule out early melanoma by evaluating the follow-up dermoscopic images of skin lesions. However, existing algorithms for early melanoma diagnosis are developed using single time-point images of lesions. Ignoring the temporal, morphological changes of lesions can lead to misdiagnosis in borderline cases. In this study, we propose a framework for automated early melanoma diagnosis using sequential dermoscopic images. To this end, we construct our method in three steps. First, we align sequential dermoscopic images of skin lesions using estimated Euclidean transformations, extract the lesion growth region by computing image differences among the consecutive images, and then propose a spatio-temporal network to capture the dermoscopic changes from aligned lesion images and the corresponding difference images. Finally, we develop an early diagnosis module to compute probability scores of malignancy for lesion images over time. We collected 179 serial dermoscopic imaging data from 122 patients to verify our method. Extensive experiments show that the proposed model outperforms other commonly used sequence models. We also compared the diagnostic results of our model with those of seven experienced dermatologists and five registrars. Our model achieved higher diagnostic accuracy than clinicians (63.69% vs. 54.33%, respectively) and provided an earlier diagnosis of melanoma (60.7% vs. 32.7% of melanoma correctly diagnosed on the first follow-up images). These results demonstrate that our model can be used to identify melanocytic lesions that are at high-risk of malignant transformation earlier in the disease process and thereby redefine what is possible in the early detection of melanoma.
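The alignment and difference-image steps could be prototyped with OpenCV as sketched below, assuming a Euclidean transform estimated by the ECC algorithm; this is an illustrative stand-in, not the authors' implementation.

```python
import cv2
import numpy as np

def align_and_diff(prev_img, next_img):
    """Align two consecutive dermoscopic frames with a Euclidean (rigid) transform
    estimated by ECC, then return the aligned frame and the absolute difference image."""
    prev_gray = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_img, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(prev_gray, next_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)
    h, w = prev_gray.shape
    aligned = cv2.warpAffine(next_img, warp, (w, h),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    diff = cv2.absdiff(prev_img, aligned)
    return aligned, diff
```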
Collapse
|
33
|
Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks? APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12042092] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Basal Cell Carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is placing a heavy load on dermatology services. It is therefore important to help physicians detect it early. In this paper, we propose a tool for the detection of BCC to support prioritization in teledermatology consultations. Firstly, we analyze whether a prior segmentation of the lesion improves its subsequent classification. Secondly, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. An accuracy of 98% for distinguishing BCC from nevus and 95% for classifying BCC vs. all lesions is obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all metrics. We conclude that when deep neural networks are used for classification, a prior segmentation of the lesion does not improve the classification results. In contrast, an ensemble of different neural network configurations improves classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones.
Collapse
|
34
|
Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. SENSORS 2022; 22:s22031134. [PMID: 35161878 PMCID: PMC8838143 DOI: 10.3390/s22031134] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 01/18/2022] [Accepted: 01/27/2022] [Indexed: 02/01/2023]
Abstract
Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, using a deep learning approach as a machine vision tool can overcome some challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers that are responsible for extracting low to high-level features of the skin images in a unique fashion. Other vital criteria in the design of DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC2017, and ISIC 2020). We evaluated the model based on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with the other state-of-the-art networks. Therefore, this proposed approach could provide a less complex and advanced framework for automating the melanoma diagnostic process and expediting the identification process to save a life.
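A parallel multi-kernel convolutional block of the kind described above (several filter sizes feeding one layer) might be sketched in PyTorch as follows; the branch widths and kernel sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MultiFilterBlock(nn.Module):
    """Illustrative building block using several kernel sizes in parallel, echoing the
    'multiple filters and their sizes' design choice (not the paper's exact layers)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b3 = nn.Conv2d(in_ch, branch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch, kernel_size=5, padding=2)
        self.b7 = nn.Conv2d(in_ch, out_ch - 2 * branch, kernel_size=7, padding=3)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.b3(x), self.b5(x), self.b7(x)], dim=1)
        return self.act(self.bn(y))
```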
Collapse
|
35
|
An Improved and Robust Encoder–Decoder for Skin Lesion Segmentation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-021-06403-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
36
|
Benyahia S, Meftah B, Lézoray O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021; 74:101701. [PMID: 34861582 DOI: 10.1016/j.tice.2021.101701] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 11/22/2021] [Accepted: 11/23/2021] [Indexed: 10/19/2022]
Abstract
Many different feature extraction methods have been investigated so far for the various forms of skin lesion. Indeed, feature extraction is a crucial step in machine learning processes, and in general we can distinguish between handcrafted and deep learning features. In this paper, we investigate the efficiency of 17 commonly used pre-trained convolutional neural network (CNN) architectures as feature extractors and of 24 machine learning classifiers for the classification of skin lesions from two different datasets: ISIC 2019 and PH2. We find that DenseNet201 combined with Fine KNN or Cubic SVM achieves the best accuracy (92.34% and 91.71%, respectively) on the ISIC 2019 dataset. The results also show that the suggested method outperforms other approaches, with an accuracy of 99% on the PH2 dataset.
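The pre-trained-CNN-as-feature-extractor plus classical-classifier pipeline can be sketched as below; the scikit-learn SVC and KNeighborsClassifier settings are rough analogues of the 'Cubic SVM' and 'Fine KNN' configurations and are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Frozen DenseNet201 as a feature extractor (1920-dim after global average pooling).
backbone = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
extractor.eval()

@torch.no_grad()
def deep_features(batch):          # batch: (B, 3, 224, 224) normalized tensor
    return extractor(batch).cpu().numpy()

# The extracted vectors are then handed to classical classifiers, e.g.:
svm = SVC(kernel="poly", degree=3)            # 'Cubic SVM' analogue in scikit-learn
knn = KNeighborsClassifier(n_neighbors=1)     # 'Fine KNN' analogue
# svm.fit(deep_features(train_batch), train_labels)
```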
Collapse
Affiliation(s)
- Samia Benyahia
- Department of Computer Science, Faculty of Exact Sciences, University of Mascara, Mascara, Algeria
| | | | - Olivier Lézoray
- Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Caen, France
| |
Collapse
|
37
|
Oukil S, Kasmi R, Mokrani K, García-Zapirain B. Automatic segmentation and melanoma detection based on color and texture features in dermoscopic images. Skin Res Technol 2021; 28:203-211. [PMID: 34779062 PMCID: PMC9907597 DOI: 10.1111/srt.13111] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 09/25/2021] [Indexed: 02/06/2023]
Abstract
PURPOSE Melanoma is known as the most aggressive form of skin cancer and one of the fastest growing malignant tumors worldwide. Several computer-aided diagnosis systems for melanoma have been proposed; still, the algorithms encounter difficulties with early-stage lesions. This paper aims to discriminate between melanoma and benign skin lesions in dermoscopic images. METHODS The proposed algorithm is based on the color and texture of skin lesions and introduces a novel feature extraction technique. The algorithm uses an automatic segmentation based on k-means, generating a fairly accurate mask for each lesion. The feature extraction consists of existing and novel color and texture attributes measuring how color and texture vary inside the lesion. To find the optimal results, all attributes are extracted from lesions in five different color spaces (RGB, HSV, Lab, XYZ, and YCbCr) and used as inputs for three classifiers (k-nearest neighbors, support vector machine, and artificial neural network). RESULTS The PH2 set is used to assess the performance of the proposed algorithm. The results are compared with those of published articles that used the same dataset, showing that the proposed method outperforms the state of the art by attaining a sensitivity of 99.25%, specificity of 99.58%, and accuracy of 99.51%. CONCLUSION The final results show that color combined with texture provides powerful and relevant attributes for melanoma detection and improves over the state of the art.
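A minimal sketch of the k-means segmentation and multi-color-space feature extraction is given below; keeping the darker cluster as the lesion and using per-channel mean/std features are simplifying assumptions, not the authors' full attribute set.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def kmeans_lesion_mask(image_rgb, k=2):
    """Cluster pixel colors with k-means and keep the darker cluster as the lesion mask,
    a simplified version of the automatic segmentation step described above."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    means = [pixels[labels == i].mean() for i in range(k)]
    lesion_cluster = int(np.argmin(means))                 # darker cluster assumed to be the lesion
    return (labels == lesion_cluster).reshape(image_rgb.shape[:2])

def color_features(image_rgb, mask):
    """Per-channel mean/std of the lesion region in several color spaces."""
    feats = []
    for code in (None, cv2.COLOR_RGB2HSV, cv2.COLOR_RGB2LAB, cv2.COLOR_RGB2YCrCb):
        img = image_rgb if code is None else cv2.cvtColor(image_rgb, code)
        region = img[mask]
        feats.extend(region.mean(axis=0))
        feats.extend(region.std(axis=0))
    return np.array(feats)
```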
Collapse
Affiliation(s)
- S Oukil
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria
| | - R Kasmi
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria.,Electrical Engineering Department, University of Bouira, Bouira, Algeria
| | - K Mokrani
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria
| | | |
Collapse
|
38
|
Skin Lesion Classification Based on Surface Fractal Dimensions and Statistical Color Cluster Features Using an Ensemble of Machine Learning Techniques. Cancers (Basel) 2021; 13:cancers13215256. [PMID: 34771421 PMCID: PMC8582408 DOI: 10.3390/cancers13215256] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 10/18/2021] [Accepted: 10/18/2021] [Indexed: 01/23/2023] Open
Abstract
Simple Summary: This study aimed to investigate the efficacy of novel skin surface fractal dimension features as an auxiliary diagnostic method for melanoma recognition. We therefore examined the skin lesion classification accuracy of the kNN-CV algorithm and of the proposed radial basis function neural network model. We found increased classification accuracy when fractal analysis is added to the classical color distribution analysis. Our results indicate that a reliable classifier offers more opportunities to detect cancerous skin lesions in a timely manner. Abstract: (1) Background: An approach for skin cancer recognition and classification through a novel combination of features and two classifiers is proposed as an auxiliary diagnostic method. (2) Methods: Predictions are made by a k-nearest neighbor algorithm with 5-fold cross validation and a neural network model to assist dermatologists in the diagnosis of cancerous skin lesions. As its main contribution, this work proposes a descriptor that combines skin surface fractal dimension and relevant color area features for skin lesion classification. The surface fractal dimension is computed using a 2D generalization of Higuchi's method. A clustering method allows selection of the relevant color distribution in skin lesion images by determining the average percentage of color areas within the nevus and melanoma lesion areas. In the classification stage, the Higuchi fractal dimensions (HFDs) and the color features are classified separately using a kNN-CV algorithm. In addition, these features serve as prototypes for a radial basis function neural network (RBFNN) classifier. The efficiency of our algorithms was verified on images from the 7-Point, Med-Node, and PH2 databases. (3) Results: Experimental results show that the accuracy of the proposed RBFNN model in skin cancer classification is 95.42% for 7-Point, 94.71% for Med-Node, and 94.88% for PH2, all significantly better than the kNN algorithm. (4) Conclusions: 2D Higuchi surface fractal features have not previously been used for skin lesion classification. We combined fractal features with color features to create an RBFNN classifier that provides high classification accuracy.
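As an illustration of a fractal descriptor, the sketch below estimates a box-counting dimension from a binary lesion mask; note that the paper uses a 2D generalization of Higuchi's method, which differs from plain box counting, so this is only a generic stand-in.

```python
import numpy as np

def box_counting_dimension(binary_mask):
    """Generic box-counting estimate of the fractal dimension of a non-empty binary
    lesion mask; illustrative only, not the paper's 2D Higuchi method."""
    sizes = [2, 4, 8, 16, 32, 64]
    counts = []
    h, w = binary_mask.shape
    for s in sizes:
        count = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary_mask[i:i + s, j:j + s].any():
                    count += 1
        counts.append(count)
    # slope of log(count) vs log(1/size) gives the dimension estimate
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]
```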
Collapse
|
39
|
Pereira PMM, Thomaz LA, Tavora LMN, Assuncao PAA, Fonseca-Pinto RM, Paiva RP, Faria SMMD. Melanoma classification using light-Fields with morlet scattering transform and CNN: Surface depth as a valuable tool to increase detection rate. Med Image Anal 2021; 75:102254. [PMID: 34649195 DOI: 10.1016/j.media.2021.102254] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 07/27/2021] [Accepted: 09/22/2021] [Indexed: 11/15/2022]
Abstract
Medical image classification through learning-based approaches has been used increasingly, notably for the discrimination of melanoma. However, for skin lesion classification in general, such methods commonly rely on dermoscopic or other 2D macro RGB images. This work proposes to go beyond conventional 2D image characteristics by considering a third dimension (depth) that characterises skin surface rugosity, which can be obtained from light-field images such as those available in the SKINL2 dataset. To achieve this goal, a processing pipeline combining a Morlet scattering transform and a CNN model was deployed, allowing a comparison between using 2D information only, 3D information only, or both. Results show that discrimination between melanoma and nevus reaches an accuracy of 84.00%, 74.00%, or 94.00% when using only 2D, only 3D, or both, respectively. An increase of 14.29pp in sensitivity and 8.33pp in specificity is achieved when expanding beyond conventional 2D information by also using depth. When discriminating between melanoma and all other types of lesions (a further imbalanced setting), an increase of 28.57pp in sensitivity and a decrease of 1.19pp in specificity is achieved under the same test conditions. Overall, the results of this work demonstrate significant improvements over conventional approaches.
Collapse
Affiliation(s)
- Pedro M M Pereira
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal.
| | - Lucas A Thomaz
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Luis M N Tavora
- ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Pedro A A Assuncao
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui M Fonseca-Pinto
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui Pedro Paiva
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal
| | - Sergio M M de Faria
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| |
Collapse
|
40
|
Skin Lesion Detection Algorithms in Whole Body Images. SENSORS 2021; 21:s21196639. [PMID: 34640959 PMCID: PMC8513024 DOI: 10.3390/s21196639] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Revised: 09/27/2021] [Accepted: 10/02/2021] [Indexed: 11/29/2022]
Abstract
Melanoma is one of the most lethal and rapidly growing cancers, causing many deaths each year. This cancer can be treated effectively if it is detected quickly. For this reason, many algorithms and systems have been developed to support automatic or semiautomatic detection of neoplastic skin lesions based on the analysis of optical images of individual moles. Recently, full-body systems have gained attention because they enable the analysis of the patient’s entire body based on a set of photos. This paper presents a prototype of such a system, focusing mainly on assessing the effectiveness of algorithms developed for the detection and segmentation of lesions. Three detection algorithms (and their fusion) were analyzed, one implementing deep learning methods and two classic approaches, using local brightness distribution and a correlation method. For fusion of algorithms, detection sensitivity = 0.95 and precision = 0.94 were obtained. Moreover, the values of the selected geometric parameters of segmented lesions were calculated and compared for all algorithms. The obtained results showed a high accuracy of the evaluated parameters (error of area estimation <10%), especially for lesions with dimensions greater than 3 mm, which are the most suspected of being neoplastic lesions.
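One of the classic detectors mentioned above, based on local brightness, could be prototyped as follows; the adaptive-threshold parameters and minimum area are assumptions, and the deep learning and correlation detectors that the paper fuses with it are not shown.

```python
import cv2

def detect_dark_lesions(image_bgr, min_area_px=40):
    """Minimal local-brightness detector: lesions appear as regions darker than the
    surrounding skin, so adaptively threshold the grayscale image and keep
    sufficiently large connected components. Sketch of one classic detector only."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, blockSize=51, C=10)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    boxes = [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area_px]
    return boxes   # each box is (x, y, width, height)
```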
Collapse
|
41
|
A Dermoscopic Skin Lesion Classification Technique Using YOLO-CNN and Traditional Feature Model. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021. [DOI: 10.1007/s13369-021-05571-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
|
42
|
|
43
|
DUMAN E, TOLAN Z. Comparing Popular CNN Models for an Imbalanced Dataset of Dermoscopic Images. COMPUTER SCIENCE 2021. [DOI: 10.53070/bbd.990574] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
44
|
Barhoumi W, Khelifa A. Skin lesion image retrieval using transfer learning-based approach for query-driven distance recommendation. Comput Biol Med 2021; 137:104825. [PMID: 34507152 DOI: 10.1016/j.compbiomed.2021.104825] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 08/11/2021] [Accepted: 08/29/2021] [Indexed: 12/27/2022]
Abstract
Content-Based Dermatological Lesion Retrieval (CBDLR) systems retrieve similar skin lesion images, with pathology-confirmed diagnoses, for a given query image of a skin lesion. By providing intuitive support to both inexperienced and experienced dermatologists, early diagnosis through CBDLR screening can significantly enhance patient survival while reducing treatment costs. To this end, a CBDLR system is proposed in this study. The system integrates a similarity measure recommender that dynamically selects an adequate distance metric for each query image. The main contributions of this work are (i) the adoption of deep-learned features according to their performance in classifying skin lesions into seven classes; and (ii) the automatic generation of ground truth, investigated within a transfer learning framework, to recommend the most appropriate distance for any new query image. The proposed CBDLR system has been exhaustively evaluated on the challenging ISIC2018 and ISIC2019 datasets, and the results show that it can provide a useful decision aid while offering superior performance. Indeed, it outperforms similar CBDLR systems that adopt standard distances by at least 9% in terms of mAP@K.
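Retrieval over deep features with a per-query choice of distance metric can be sketched with scikit-learn as below; here the metric is passed in directly, standing in for the paper's learned similarity-measure recommender, and the function name is hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def retrieve_similar(query_feat, gallery_feats, metric="cosine", k=5):
    """Retrieve the k most similar archived lesions for one query feature vector.
    The metric argument stands in for the per-query distance recommendation: a
    recommender would pick 'cosine', 'euclidean', 'manhattan', etc. for each query."""
    nn = NearestNeighbors(n_neighbors=k, metric=metric).fit(gallery_feats)
    dist, idx = nn.kneighbors(query_feat.reshape(1, -1))
    return idx[0], dist[0]
```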
Collapse
Affiliation(s)
- Walid Barhoumi
- Université de Tunis El Manar, Institut Supérieur d'Informatique, Research Team on Intelligent Systems in Imaging and Artificial Vision (SIIVA), LR16ES06 Laboratoire de recherche en Informatique, Modélisation et Traitement de l'Information et de la Connaissance (LIMTIC), 2 Rue Abou Rayhane Bayrouni, 2080, Ariana, Tunisia; Université de Carthage, Ecole Nationale d'Ingénieurs de Carthage, 45 Rue des Entrepreneurs, 2035, Tunis-Carthage, Tunisia.
| | - Afifa Khelifa
- Higher Institute of Technological Studies of Mahdia, 5111, Hiboun, Mahdia, Tunisia
| |
Collapse
|
45
|
Wang Y, Cai J, Louie DC, Wang ZJ, Lee TK. Incorporating clinical knowledge with constrained classifier chain into a multimodal deep network for melanoma detection. Comput Biol Med 2021; 137:104812. [PMID: 34507158 DOI: 10.1016/j.compbiomed.2021.104812] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 08/25/2021] [Accepted: 08/25/2021] [Indexed: 10/20/2022]
Abstract
In recent years, vast developments in Computer-Aided Diagnosis (CAD) for skin diseases have generated much interest from clinicians and other eventual end-users of this technology. Introducing clinical domain knowledge to these machine learning strategies can help dispel the black box nature of these tools, strengthening clinician trust. Clinical domain knowledge also provides new information channels which can improve CAD diagnostic performance. In this paper, we propose a novel framework for malignant melanoma (MM) detection by fusing clinical and dermoscopic images. The proposed method combines a multi-labeled deep feature extractor with a clinically constrained classifier chain (CC). This allows the 7-point checklist, a clinician diagnostic algorithm, to be included at the decision level while maintaining the clinical importance of the major and minor criteria in the checklist. The proposed framework achieved an average accuracy of 81.3% for detecting all criteria and melanoma when tested on a publicly available 7-point checklist dataset. These are the highest reported results, outperforming state-of-the-art methods in the literature by 6.4% or more. Analyses also show that the proposed system surpasses single-modality systems that use either clinical or dermoscopic images alone, as well as systems that do not adopt the multi-label, clinically constrained classifier chain approach. Our carefully designed system demonstrates a substantial improvement in melanoma detection. By keeping the familiar major and minor criteria of the 7-point checklist and their corresponding weights, the proposed system may be more readily accepted by physicians as a human-interpretable CAD tool for automated melanoma detection.
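A generic multi-label classifier chain over the seven checklist criteria, with the weighted 7-point rule applied at the decision level, might be sketched as follows; scikit-learn's ClassifierChain and the assumed column ordering of the major criteria are illustrative substitutions for the paper's clinically constrained deep chain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

# Multi-label chain over the seven checklist criteria (a generic scikit-learn chain,
# not the clinically constrained deep chain of the paper). X holds fused image features;
# Y holds binary labels for the 7 criteria, one column per criterion.
chain = ClassifierChain(LogisticRegression(max_iter=1000), order="random", random_state=0)
# chain.fit(X_train, Y_train)
# criteria_pred = chain.predict(X_test)            # shape (n_samples, 7)

# Final melanoma decision via the weighted 7-point rule: major criteria score 2,
# minor criteria score 1, and a total of 3 or more is suspicious for melanoma.
weights = np.array([2, 2, 2, 1, 1, 1, 1])          # first three columns assumed to be the major criteria
# melanoma_flag = (criteria_pred @ weights) >= 3
```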
Collapse
Affiliation(s)
- Yuheng Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| | - Jiayue Cai
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
| | - Daniel C Louie
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| | - Z Jane Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Tim K Lee
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| |
Collapse
|
46
|
Jiang S, Li H, Jin Z. A Visually Interpretable Deep Learning Framework for Histopathological Image-Based Skin Cancer Diagnosis. IEEE J Biomed Health Inform 2021; 25:1483-1494. [PMID: 33449890 DOI: 10.1109/jbhi.2021.3052044] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Owing to the high incidence rate and the severe impact of skin cancer, the precise diagnosis of malignant skin tumors is a significant goal, especially considering treatment is normally effective if the tumor is detected early. Limited published histopathological image sets and the lack of an intuitive correspondence between the features of lesion areas and a certain type of skin cancer pose a challenge to the establishment of high-quality and interpretable computer-aided diagnostic (CAD) systems. To solve this problem, a light-weight attention mechanism-based deep learning framework, namely, DRANet, is proposed to differentiate 11 types of skin diseases based on a real histopathological image set collected by us during the last 10 years. The CAD system can output not only the name of a certain disease but also a visualized diagnostic report showing possible areas related to the disease. The experimental results demonstrate that the DRANet obtains significantly better performance than baseline models (i.e., InceptionV3, ResNet50, VGG16, and VGG19) with comparable parameter size and competitive accuracy with fewer model parameters. Visualized results produced by the hidden layers of the DRANet actually highlight part of the class-specific regions of diagnostic points and are valuable for decision making in the diagnosis of skin diseases.
Collapse
|
47
|
Khan MA, Sharif M, Akram T, Damaševičius R, Maskeliūnas R. Skin Lesion Segmentation and Multiclass Classification Using Deep Learning Features and Improved Moth Flame Optimization. Diagnostics (Basel) 2021; 11:811. [PMID: 33947117 PMCID: PMC8145295 DOI: 10.3390/diagnostics11050811] [Citation(s) in RCA: 87] [Impact Index Per Article: 21.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 04/23/2021] [Accepted: 04/26/2021] [Indexed: 11/18/2022] Open
Abstract
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods that can classify multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel Deep Saliency Segmentation method, which uses a custom convolutional neural network (CNN) of ten layers. The generated heat map is converted into a binary image using a thresholding function. Next, the segmented color lesion images are used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using the Kernel Extreme Learning Machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
Collapse
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantonment 47040, Pakistan;
| | - Muhammad Sharif
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantonment 47040, Pakistan;
| | - Tallha Akram
- Department of Electrical Engineering, Wah Campus, COMSATS University Islamabad, Islamabad 45550, Pakistan;
| | - Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| | - Rytis Maskeliūnas
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania;
| |
Collapse
|
48
|
Khan MA, Zhang YD, Sharif M, Akram T. Pixels to Classes: Intelligent Learning Framework for Multiclass Skin Lesion Localization and Classification. COMPUTERS & ELECTRICAL ENGINEERING 2021; 90:106956. [DOI: 10.1016/j.compeleceng.2020.106956] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
|
49
|
Adegun A, Viriri S. Deep learning techniques for skin lesion analysis and melanoma cancer detection: a survey of state-of-the-art. Artif Intell Rev 2021; 54:811-841. [DOI: 10.1007/s10462-020-09865-y] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
|
50
|
Iqbal I, Younus M, Walayat K, Kakar MU, Ma J. Automated multi-class classification of skin lesions through deep convolutional neural network with dermoscopic images. Comput Med Imaging Graph 2021; 88:101843. [PMID: 33445062 DOI: 10.1016/j.compmedimag.2020.101843] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2020] [Revised: 11/13/2020] [Accepted: 12/11/2020] [Indexed: 10/22/2022]
Abstract
As an analytic tool in medicine, deep learning has gained great attention and opened new ways for disease diagnosis. Recent studies validate the effectiveness of deep learning algorithms for binary classification of skin lesions (i.e., melanomas and nevi classes) with dermoscopic images. Nonetheless, those binary classification methods cannot be applied to the general clinical situation of skin cancer screening in which multi-class classification must be taken into account. The main objective of this research is to develop, implement, and calibrate an advanced deep learning model in the context of automated multi-class classification of skin lesions. The proposed Deep Convolutional Neural Network (DCNN) model is carefully designed with several layers, and multiple filter sizes, but fewer filters and parameters to improve efficacy and performance. Dermoscopic images are acquired from the International Skin Imaging Collaboration databases (ISIC-17, ISIC-18, and ISIC-19) for experiments. The experimental results of the proposed DCNN approach are presented in terms of precision, sensitivity, specificity, and other metrics. Specifically, it attains 94 % precision, 93 % sensitivity, and 91 % specificity in ISIC-17. It is demonstrated by the experimental results that this proposed DCNN approach outperforms state-of-the-art algorithms, exhibiting 0.964 area under the receiver operating characteristics (AUROC) in ISIC-17 for the classification of skin lesions and can be used to assist dermatologists in classifying skin lesions. As a result, this proposed approach provides a novel and feasible way for automating and expediting the skin lesion classification task as well as saving effort, time, and human life.
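The reported metrics can be reproduced from model outputs with a short helper like the one below, which derives per-class precision, sensitivity, and specificity from the multiclass confusion matrix and adds a one-vs-rest AUROC; the function name and signature are assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def per_class_metrics(y_true, y_pred, y_score, n_classes):
    """Precision, sensitivity, and specificity per class from a multiclass confusion
    matrix, plus macro one-vs-rest AUROC, mirroring the metrics reported above."""
    cm = confusion_matrix(y_true, y_pred, labels=range(n_classes))
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    precision = tp / np.maximum(tp + fp, 1)
    sensitivity = tp / np.maximum(tp + fn, 1)
    specificity = tn / np.maximum(tn + fp, 1)
    auroc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
    return precision, sensitivity, specificity, auroc
```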
Collapse
Affiliation(s)
- Imran Iqbal
- Department of Information and Computational Sciences, School of Mathematical Sciences and LMAM, Peking University, Beijing, 100871, People's Republic of China.
| | - Muhammad Younus
- State Key Laboratory of Membrane Biology and Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine and Peking-Tsinghua Center for Life Sciences and PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China.
| | - Khuram Walayat
- Faculty of Engineering Technology, Department of Thermal and Fluid Engineering, University of Twente, Enschede, 7500 AE, Netherlands.
| | - Mohib Ullah Kakar
- Beijing Key Laboratory for Separation and Analysis in Biomedicine and Pharmaceuticals, Beijing Institute of Technology, Beijing, 100081, People's Republic of China.
| | - Jinwen Ma
- Department of Information and Computational Sciences, School of Mathematical Sciences and LMAM, Peking University, Beijing, 100871, People's Republic of China.
| |
Collapse
|