1
Adebayo OE, Chatelain B, Trucu D, Eftimie R. Deep Learning Approaches for the Classification of Keloid Images in the Context of Malignant and Benign Skin Disorders. Diagnostics (Basel) 2025; 15:710. PMID: 40150053; PMCID: PMC11940829; DOI: 10.3390/diagnostics15060710.
Abstract
Background/Objectives: Misdiagnosing skin disorders leads to the administration of wrong treatments, sometimes with life-impacting consequences. Deep learning algorithms are increasingly used for diagnosis. While many skin cancer/lesion image classification studies focus on datasets of dermatoscopic images and do not include keloid images, this study focuses on diagnosing keloid disorders among other skin lesions by combining two publicly available datasets of non-dermatoscopic images: one with keloid images and one with images of various other benign and malignant skin lesions (melanoma, basal cell carcinoma, squamous cell carcinoma, actinic keratosis, seborrheic keratosis, and nevus). Methods: Different Convolutional Neural Network (CNN) models are used to classify these disorders as either malignant or benign, to differentiate keloids among different benign skin disorders, and furthermore to differentiate keloids among other similar-looking malignant lesions. To this end, we apply transfer learning to nine base models: VGG16, MobileNet, InceptionV3, DenseNet121, EfficientNetB0, Xception, InceptionResNetV2, EfficientNetV2L, and NASNetLarge. We explore and compare the results of these models using performance metrics such as accuracy, precision, recall, F1-score, and AUC-ROC. Results: We show that the VGG16 model (after fine-tuning) performs best in classifying keloid images among other benign and malignant skin lesion images, with the following keloid-class performance: an accuracy of 0.985, precision of 1.0, recall of 0.857, F1-score of 0.922, and AUC-ROC of 0.996. VGG16 also has the best overall average performance (over all classes) in terms of the AUC-ROC and the other performance metrics. Using this model, we further attempt to classify three new non-dermatoscopic anonymised clinical images as either malignant, benign, or keloid, and in the process identify some issues related to the collection and processing of such images. Finally, we also show that the DenseNet121 model performs best when differentiating keloids from other malignant disorders with similar clinical presentations. Conclusions: The study emphasises the potential (and the drawbacks) of deep learning algorithms for identifying and classifying benign skin disorders such as keloids, which, unlike cancers, are not usually investigated via these approaches, mainly due to a lack of available data.
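The fine-tuning recipe the abstract describes — a frozen pre-trained base with a small trainable classification head — can be sketched framework-free; here a fixed random projection stands in for the frozen CNN backbone, and the data, dimensions, and learning rate are all illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained base (e.g. VGG16 convolutional
# layers): a fixed projection whose weights are never updated.
BASE_W = rng.normal(size=(64, 16))

def frozen_base(x):
    # ReLU features, scaled so the head's gradient steps stay stable
    return np.maximum(x @ BASE_W, 0.0) / np.sqrt(BASE_W.shape[0])

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y):
    return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()

# Toy data: 90 samples, 64 raw "pixels", 3 classes
# (say keloid / other benign / malignant).
X = rng.normal(size=(90, 64))
y = rng.integers(0, 3, size=90)
one_hot = np.eye(3)[y]

feats = frozen_base(X)        # extracted once; the base never trains
head_W = np.zeros((16, 3))    # trainable softmax classification head

losses = []
for _ in range(200):          # train the head only
    p = softmax(feats @ head_W)
    head_W -= 0.1 * feats.T @ (p - one_hot) / len(y)
    losses.append(cross_entropy(softmax(feats @ head_W), y))
```

With a real backbone the same pattern applies: freeze the convolutional layers, train the new head, then optionally unfreeze the top layers at a lower learning rate for the fine-tuning stage the abstract refers to.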
Affiliation(s)
- Olusegun Ekundayo Adebayo
- Laboratoire de Mathématiques de Besançon, Université Marie et Louis Pasteur, F-25000 Besançon, France
- Brice Chatelain
- Service de Chirurgie Maxillo-Faciale, Stomatologie et Odontologie Hospitalière, CHU Besançon, F-25000 Besançon, France
- Dumitru Trucu
- Division of Mathematics, University of Dundee, Dundee DD1 4HN, UK
- Raluca Eftimie
- Laboratoire de Mathématiques de Besançon, Université Marie et Louis Pasteur, F-25000 Besançon, France
- Division of Mathematics, University of Dundee, Dundee DD1 4HN, UK
2
Hasan MZ, Rony MAH, Chowa SS, Bhuiyan MRI, Moustafa AA. GBCHV an advanced deep learning anatomy aware model for accurate classification of gallbladder cancer utilizing ultrasound images. Sci Rep 2025; 15:7120. PMID: 40016258; PMCID: PMC11868569; DOI: 10.1038/s41598-025-89232-5.
Abstract
This study introduces a novel deep learning approach for accurately classifying gallbladder cancer (GBC) into benign, malignant, and normal categories using ultrasound images from the challenging GBC USG (GBCU) dataset. The proposed methodology enhances image quality and delineates gallbladder wall boundaries using image processing techniques such as median filtering and contrast-limited adaptive histogram equalization. Unlike traditional convolutional neural networks, which struggle with complex spatial patterns, the proposed transformer-based model, the GBC Horizontal-Vertical Transformer (GBCHV), incorporates a GBCHV-Trans block with self-attention mechanisms. To make the model anatomy-aware, the transformer's square input patches are transformed into horizontal and vertical strips to capture distinctive spatial relationships within gallbladder tissues; this anatomy-aware mechanism is the main novelty of the model, depicting the spatial relationships and complex anatomical features of the gallbladder more accurately. The proposed model achieved an overall diagnostic accuracy of 96.21%, supported by an ablation study. A performance comparison with seven transfer learning models was further conducted, in which the proposed model consistently outperformed them, showcasing its superior accuracy and robustness. Moreover, the model's decision-making process is explained visually using Gradient-weighted Class Activation Mapping (Grad-CAM). By integrating advanced deep learning and image processing techniques, the GBCHV-Trans model offers a promising solution for precise, early-stage classification of GBC, surpassing conventional methods in accuracy and diagnostic efficacy.
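The strip-based tokenisation at the heart of the anatomy-aware mechanism — full-width horizontal and full-height vertical strips instead of square patches — reduces to a reshape; a minimal NumPy sketch (image and strip sizes are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def square_patches(img, p):
    """Standard ViT-style tokenisation: non-overlapping p x p patches."""
    H, W = img.shape
    return (img.reshape(H // p, p, W // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p * p))

def strip_tokens(img, s):
    """Full-width horizontal (s x W) and full-height vertical (H x s)
    strips, each flattened into one token."""
    H, W = img.shape
    horiz = img.reshape(H // s, s * W)    # each token sees a whole row band
    vert = img.T.reshape(W // s, s * H)   # each token sees a whole column band
    return horiz, vert

img = np.arange(32 * 32, dtype=float).reshape(32, 32)
sq = square_patches(img, 8)     # 16 local tokens of length 64
h, v = strip_tokens(img, 8)     # 4 + 4 global strip tokens of length 256
```

Each strip token spans the full image extent in one direction, which is what lets self-attention over these tokens relate structures across the whole gallbladder wall rather than within a small square neighbourhood.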
Affiliation(s)
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Md Awlad Hossen Rony
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Sadia Sultana Chowa
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Md Rahad Islam Bhuiyan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Ahmed A Moustafa
- School of Psychology, Faculty of Society and Design, Bond University, Gold Coast (City), QLD, Australia
- Department of Human Anatomy and Physiology, The Faculty of Health Sciences, University of Johannesburg, Johannesburg, South Africa
3
Park YW, Eom S, Kim S, Lim S, Park JE, Kim HS, You SC, Ahn SS, Lee SK. Differentiation of glioblastoma from solitary brain metastasis using deep ensembles: Empirical estimation of uncertainty for clinical reliability. Comput Methods Programs Biomed 2024; 254:108288. PMID: 38941861; DOI: 10.1016/j.cmpb.2024.108288.
Abstract
BACKGROUND AND OBJECTIVES: To develop a clinically reliable deep learning model to differentiate glioblastoma (GBM) from solitary brain metastasis (SBM) by providing predictive uncertainty estimates and interpretability. METHODS: A total of 469 patients (300 GBM, 169 SBM) were enrolled in the institutional training set. Deep ensembles based on DenseNet121 were trained on multiparametric MRI. Model performance was validated in an external test set of 143 patients (101 GBM, 42 SBM). Entropy values for each input were evaluated for uncertainty measurement; based on these values, the datasets were split into high- and low-uncertainty groups. In addition, entropy values of out-of-distribution (OOD) data from an unknown class (257 patients with meningioma) were compared to assess the model's uncertainty estimates. Model interpretability was further evaluated via its localization accuracy. RESULTS: On the external test set, the area under the curve (AUC), accuracy, sensitivity, and specificity of the deep ensembles were 0.83 (95% confidence interval [CI] 0.76-0.90), 76.2%, 54.8%, and 85.2%, respectively. Performance was higher in the low-uncertainty group than in the high-uncertainty group, with AUCs of 0.91 (95% CI 0.83-0.98) and 0.58 (95% CI 0.44-0.71), respectively, indicating that entropy-based uncertainty assessment ascertained reliable prediction in the low-uncertainty group. Further, the deep ensembles classified a high proportion (90.7%) of predictions on OOD data as uncertain, showing robustness under dataset shift. Interpretability evaluated by localization accuracy provided further reliability in the "low-uncertainty and high-localization accuracy" subgroup, with an AUC of 0.98 (95% CI 0.95-1.00). CONCLUSIONS: Empirical assessment of uncertainty and interpretability in deep ensembles provides evidence for the robustness of prediction, offering a clinically reliable model for differentiating GBM from SBM.
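The entropy-based uncertainty split described in the methods can be computed directly from ensemble softmax outputs; a minimal sketch (the ensemble size, probabilities, and 0.5 threshold are illustrative assumptions):

```python
import numpy as np

def predictive_entropy(member_probs):
    """Entropy of the ensemble-averaged class probabilities.

    member_probs: (n_members, n_samples, n_classes) softmax outputs.
    """
    p = member_probs.mean(axis=0)                 # average over members
    return -(p * np.log(p + 1e-12)).sum(axis=1)   # per-sample entropy

# Toy ensemble of 5 members on 2 cases (class 0 = GBM, class 1 = SBM):
# case 0: members agree confidently; case 1: members disagree.
agree = np.tile([[0.95, 0.05]], (5, 1, 1))                      # (5, 1, 2)
disagree = np.array([[[0.9, 0.1]], [[0.1, 0.9]], [[0.8, 0.2]],
                     [[0.2, 0.8]], [[0.5, 0.5]]])               # (5, 1, 2)
probs = np.concatenate([agree, disagree], axis=1)               # (5, 2, 2)

ent = predictive_entropy(probs)
low_uncertainty = ent < 0.5   # illustrative cut-off for the reliable group
```

Disagreement between members drives the averaged probabilities toward uniform and thus toward maximum entropy, which is why out-of-distribution inputs (here, meningioma) tend to land in the high-uncertainty group.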
Affiliation(s)
- Yae Won Park
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Sujeong Eom
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea; Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Korea
- Seungwoo Kim
- Artificial Intelligence Graduate School, UNIST, Ulsan, Korea
- Sungbin Lim
- Department of Statistics, Korea University, Seoul, Korea
- Ji Eun Park
- Department of Radiology, University of Ulsan College of Medicine, Seoul, Korea
- Ho Sung Kim
- Department of Radiology, University of Ulsan College of Medicine, Seoul, Korea
- Seng Chan You
- Department of Biomedical Systems Informatics, Yonsei University College of Medicine, Seoul, Korea; Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Korea
- Sung Soo Ahn
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
- Seung-Koo Lee
- Department of Radiology and Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
4
Pal S, Singh RP, Kumar A. Analysis of Hybrid Feature Optimization Techniques Based on the Classification Accuracy of Brain Tumor Regions Using Machine Learning and Further Evaluation Based on the Institute Test Data. J Med Phys 2024; 49:22-32. PMID: 38828069; PMCID: PMC11141750; DOI: 10.4103/jmp.jmp_77_23.
Abstract
Aim: The goal of this study was to obtain optimal brain tumor features from magnetic resonance imaging (MRI) images and classify them into the three tumor regions: peritumoral edema, enhancing core, and necrotic tumor core, using machine learning classification models. Materials and Methods: This study's dataset was obtained from the multimodal brain tumor segmentation challenge. A total of 599 brain MRI studies were employed, all in Neuroimaging Informatics Technology Initiative format. The dataset was divided into training, validation, and testing subsets; the testing subset is the online test dataset (OTD). The dataset includes four types of MRI series, which were combined and processed for intensity normalization using the contrast-limited adaptive histogram equalization method. Radiomics features were extracted with the Python-based library PyRadiomics. Particle swarm optimization (PSO) with varying inertia weights was used for feature optimization; the strategies used to vary the inertia weight were a linearly decreasing strategy (W1), a nonlinear coefficient decreasing strategy (W2), and a logarithmic strategy (W3). The selected features were further optimized using the principal component analysis (PCA) method to further reduce dimensionality, remove noise, and improve the performance and efficiency of subsequent algorithms. Support vector machine (SVM), light gradient boosting (LGB), and extreme gradient boosting (XGB) machine learning classification algorithms were utilized to classify images into the different tumor regions using the optimized features. The proposed method was also tested on institute test data (ITD) comprising 30 patient images. Results: For the OTD, the classification accuracy of SVM was 0.989, of the LGB model (LGBM) 0.992, and of the XGB model (XGBM) 0.994 using the varying inertia-weight PSO optimization method; the classification accuracy of SVM was 0.996, of the LGBM 0.998, and of the XGBM 0.994 using PSO and PCA, a hybrid optimization technique. For the ITD, the classification accuracy of SVM was 0.994, of the LGBM 0.993, and of the XGBM 0.997 using the hybrid optimization technique. Conclusion: The results suggest that the proposed method can be used, as in this study, to classify the tumor region into three groups: peritumoral edema, enhancing core, and necrotic tumor core. This was done by extracting different tumor features, such as shape, gray-level, and gray-level co-occurrence matrix features, and then choosing the best features using hybrid optimal feature-selection techniques, without much human expertise and in much less time than a person would need.
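The three inertia-weight strategies (W1 linearly decreasing, W2 nonlinear coefficient decreasing, W3 logarithmic) can be sketched as schedules over the PSO iteration count; the formulas and bounds below are common textbook forms and are assumptions, since the paper's exact expressions are not given here:

```python
import numpy as np

W_MAX, W_MIN = 0.9, 0.4   # conventional PSO inertia bounds (assumed)

def w_linear(t, T):
    """W1: linearly decreasing inertia weight."""
    return W_MAX - (W_MAX - W_MIN) * t / T

def w_nonlinear(t, T, k=2.0):
    """W2: nonlinear coefficient decreasing strategy (exponent k assumed)."""
    return W_MIN + (W_MAX - W_MIN) * (1.0 - t / T) ** k

def w_logarithmic(t, T):
    """W3: logarithmic decreasing strategy."""
    return W_MAX - (W_MAX - W_MIN) * np.log10(1.0 + 9.0 * t / T)

T = 100                    # total PSO iterations
ts = np.arange(T + 1)
schedules = {"W1": w_linear(ts, T),
             "W2": w_nonlinear(ts, T),
             "W3": w_logarithmic(ts, T)}
```

All three start near W_MAX (broad exploration) and decay to W_MIN (local exploitation); they differ only in how quickly the particle velocities are damped mid-run.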
Affiliation(s)
- Soniya Pal
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Batra Hospital and Medical Research Center, New Delhi, India
- Raj Pal Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Anuj Kumar
- Department of Radiotherapy, S. N. Medical College, Agra, Uttar Pradesh, India
5
Azeem M, Kiani K, Mansouri T, Topping N. SkinLesNet: Classification of Skin Lesions and Detection of Melanoma Cancer Using a Novel Multi-Layer Deep Convolutional Neural Network. Cancers (Basel) 2023; 16:108. PMID: 38201535; PMCID: PMC10778045; DOI: 10.3390/cancers16010108.
Abstract
Skin cancer is a widespread disease that typically develops on the skin due to frequent exposure to sunlight. Although cancer can appear on any part of the human body, skin cancer accounts for a significant proportion of all new cancer diagnoses worldwide. There are substantial obstacles to the precise diagnosis and classification of skin lesions because of morphological variety and indistinguishable characteristics across skin malignancies. Recently, deep learning models have been used in the field of image-based skin-lesion diagnosis and have demonstrated diagnostic efficiency on par with that of dermatologists. To increase classification efficiency and accuracy for skin lesions, a cutting-edge multi-layer deep convolutional neural network termed SkinLesNet was built in this study. The dataset used in this study was extracted from the PAD-UFES-20 dataset and was augmented. The PAD-UFES-20-Modified dataset includes three common forms of skin lesions: seborrheic keratosis, nevus, and melanoma. To comprehensively assess SkinLesNet's performance, its evaluation was expanded beyond the PAD-UFES-20-Modified dataset. Two additional datasets, HAM10000 and ISIC2017, were included, and SkinLesNet was compared to the widely used ResNet50 and VGG16 models. This broader evaluation confirmed SkinLesNet's effectiveness, as it consistently outperformed both benchmarks across all datasets.
Affiliation(s)
- Muhammad Azeem
- School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
6
Antar S, Abd El-Sattar HKH, Abdel-Rahman MH, Ghaleb FFM. COVID-19 infection segmentation using hybrid deep learning and image processing techniques. Sci Rep 2023; 13:22737. PMID: 38123587; PMCID: PMC10733411; DOI: 10.1038/s41598-023-49337-1.
Abstract
The coronavirus disease 2019 (COVID-19) epidemic has become a worldwide problem that continues to affect people's lives daily, and the early diagnosis of COVID-19 is of critical importance for the treatment of infected patients by medical and healthcare organizations. To detect COVID-19 infections, medical imaging techniques, including computed tomography (CT) scans and X-ray images, are among the helpful medical tests that healthcare providers carry out. However, in addition to the difficulty of segmenting contaminated areas from CT scan images, these approaches also offer limited accuracy in identifying the virus. Accordingly, this paper addresses the effectiveness of using deep learning (DL) and image processing techniques, which expand the dataset without the need for any augmentation strategies, and presents a novel approach for detecting COVID-19 infections in lung images, particularly the infection-prediction issue. In the proposed method, the input images are first preprocessed using a threshold and then resized to 128 × 128. A density heat-map tool is then used to color the resized lung images. The three channels (red, green, and blue) are separated from the colored image, further preprocessed through image inversion and histogram equalization, and subsequently fed, in independent directions, into three separate U-Nets with the same architecture for segmentation. Finally, the segmentation results are combined and passed through a convolution layer to obtain the detection. Several evaluation metrics on the CT scan dataset were used to measure the performance of the proposed approach against other state-of-the-art techniques in terms of accuracy, sensitivity, precision, and the Dice coefficient; the proposed approach reached 99.71%, 0.83, 0.87, and 0.85, respectively. These results show that coloring the CT scan images and then dividing each image into its RGB channels can enhance COVID-19 detection, and that merging the channel segmentation results increases the U-Net's segmentation power. In comparison to other existing segmentation techniques employing larger 512 × 512 images, this study is one of the few that can rapidly and correctly detect the COVID-19 virus with high accuracy on smaller 128 × 128 images, as measured by accuracy, sensitivity, precision, and Dice coefficient.
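The preprocessing chain — threshold, resize to 128 × 128, heat-map colouring, RGB channel split, inversion, histogram equalisation — can be sketched with NumPy alone; the nearest-neighbour resize and the crude colormap below are stand-ins for whatever tools the authors used, and the toy input is random:

```python
import numpy as np

def nn_resize(img, size=128):
    """Nearest-neighbour resize (stand-in for a real resizer)."""
    H, W = img.shape
    rows = np.arange(size) * H // size
    cols = np.arange(size) * W // size
    return img[np.ix_(rows, cols)]

def heat_colormap(gray):
    """Crude blue-to-red heat map over a [0, 1] grayscale image."""
    r = np.clip(2.0 * gray - 1.0, 0.0, 1.0)
    g = 1.0 - np.abs(2.0 * gray - 1.0)
    b = np.clip(1.0 - 2.0 * gray, 0.0, 1.0)
    return np.stack([r, g, b], axis=-1)

def hist_equalize(channel):
    """Histogram equalisation of a uint8 channel via its CDF."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf * 255).astype(np.uint8)[channel]

rng = np.random.default_rng(1)
ct = rng.random((256, 256))                 # toy CT slice in [0, 1]

masked = np.where(ct > 0.3, ct, 0.0)        # 1. threshold
small = nn_resize(masked, 128)              # 2. resize to 128 x 128
rgb = heat_colormap(small)                  # 3. density heat-map colouring

channels = []
for c in range(3):                          # 4. split R, G, B
    ch = (rgb[..., c] * 255).astype(np.uint8)
    ch = 255 - ch                           # 5. image inverse
    channels.append(hist_equalize(ch))      # 6. histogram equalisation
# Each of the three channels would now feed its own U-Net for segmentation.
```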
Affiliation(s)
- Samar Antar
- Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Abbassia, Cairo, 11566, Egypt
- Mohammad H Abdel-Rahman
- Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Abbassia, Cairo, 11566, Egypt
- Fayed F M Ghaleb
- Computer Science Division, Department of Mathematics, Faculty of Science, Ain Shams University, Abbassia, Cairo, 11566, Egypt
7
Sufyan M, Shokat Z, Ashfaq UA. Artificial intelligence in cancer diagnosis and therapy: Current status and future perspective. Comput Biol Med 2023; 165:107356. PMID: 37688994; DOI: 10.1016/j.compbiomed.2023.107356.
Abstract
Artificial intelligence (AI) in healthcare plays a pivotal role in combating many fatal diseases, such as skin, breast, and lung cancer. AI is an advanced form of technology that uses mathematically based algorithmic principles, similar to those of the human mind, to cognize complex challenges of the healthcare unit. Cancer is a lethal disease with many etiologies, including numerous genetic and epigenetic mutations. Being a multifactorial disease, cancer is difficult to diagnose at an early stage; genetic variations and other leading factors could therefore be identified in due time through AI and machine learning (ML). AI is a synergetic approach for mining drug targets, their mechanisms of action, and drug-organism interactions from massive raw data. This synergetic approach faces several challenges in data mining, but computational algorithms from different scientific communities for multi-target drug discovery are highly helpful in overcoming the bottlenecks in AI for drug-target discovery. AI and ML could become the epicenter of the medical world for the diagnosis, treatment, and evaluation of almost any disease in the near future. In this comprehensive review, we explore the immense potential of AI and ML when integrated with the biological sciences, specifically in the context of cancer research. Our goal is to illuminate the many ways in which AI and ML are being applied to the study of cancer, from diagnosis to individualized treatment. We highlight the prospective role of AI in supporting oncologists and other medical professionals in making informed decisions and improving patient outcomes by examining the intersection of AI and cancer control. Although AI-based medical therapies show great potential, many challenges must be overcome before they can be implemented in clinical practice. We critically assess the current hurdles and provide insights into the future directions of AI-driven approaches, aiming to pave the way for enhanced cancer interventions and improved patient care.
Affiliation(s)
- Muhammad Sufyan
- Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
- Zeeshan Shokat
- Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
- Usman Ali Ashfaq
- Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
8
Hussain M, Khan MA, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm. Diagnostics (Basel) 2023; 13:2869. PMID: 37761236; PMCID: PMC10527569; DOI: 10.3390/diagnostics13182869.
Abstract
Background: Using artificial intelligence (AI) in a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to accurately classify skin lesions because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we proposed an automated deep learning and best-feature-selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework first performs a preprocessing step for contrast enhancement using a new technique based on dark-channel haze removal and top-bottom filtering. Three pre-trained deep learning models are then fine-tuned and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to reduce the number of parameters, and selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment, with the aim of improving learning performance. After that, a deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach, which reduces the feature vector length but leaves a little redundant information. To address this, we proposed an improved ant-lion optimization algorithm for best-feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experiments were conducted on two publicly available datasets, ISIC2018 and ISIC2019, obtaining accuracies of 96.1% and 99.9%, respectively. A comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, automated hyperparameter selection improved the learning process, and the proposed fusion and improved selection process maintain the best accuracy while shortening the computational time.
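The serial fusion of per-model deep features, followed by pruning of redundant (highly correlated) columns, can be sketched as follows; the correlation cut-off, feature sizes, and greedy filter are illustrative assumptions, and the improved nature-inspired optimization algorithm itself is not reproduced:

```python
import numpy as np

def serial_fuse(feature_sets):
    """Serial fusion: column-wise concatenation of per-model features."""
    return np.concatenate(feature_sets, axis=1)

def drop_correlated(X, cutoff=0.95):
    """Greedy filter: keep a column only if its |correlation| with every
    already-kept column stays below the cutoff."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < cutoff for k in kept):
            kept.append(j)
    return X[:, kept], kept

rng = np.random.default_rng(2)
f1 = rng.normal(size=(50, 4))     # deep features from model 1
f2 = rng.normal(size=(50, 3))     # deep features from model 2
f3 = f1[:, :2] + 1e-6 * rng.normal(size=(50, 2))  # near-duplicate features

fused = serial_fuse([f1, f2, f3])           # (50, 9) fused vector
selected, kept = drop_correlated(fused)     # redundant columns dropped
```

Serial (concatenative) fusion preserves every model's features but inflates the vector length, which is why a selection step over the fused vector is needed afterwards.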
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 13-5053, Lebanon
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
9
Naqvi M, Gilani SQ, Syed T, Marques O, Kim HC. Skin Cancer Detection Using Deep Learning-A Review. Diagnostics (Basel) 2023; 13:1911. PMID: 37296763; PMCID: PMC10252190; DOI: 10.3390/diagnostics13111911.
Abstract
Skin cancer is one of the most dangerous types of cancer and one of the primary causes of death worldwide. The number of deaths can be reduced if skin cancer is diagnosed early. Skin cancer is mostly diagnosed by visual inspection, which is less accurate. Deep-learning-based methods have been proposed to assist dermatologists in the early and accurate diagnosis of skin cancers. This survey reviews the most recent research articles on skin cancer classification using deep learning methods, and provides an overview of the most common deep learning models and datasets used for skin cancer classification.
Affiliation(s)
- Maryam Naqvi
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
- Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
- Tehreem Syed
- Department of Electrical Engineering and Computer Engineering, Technische Universität Dresden, 01069 Dresden, Germany
- Oge Marques
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, FL 33431, USA
- Hee-Cheol Kim
- Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Republic of Korea
10
Taribagil P, Hogg HDJ, Balaskas K, Keane PA. Integrating artificial intelligence into an ophthalmologist's workflow: obstacles and opportunities. Expert Rev Ophthalmol 2023. DOI: 10.1080/17469899.2023.2175672.
Affiliation(s)
- Priyal Taribagil
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- HD Jeffry Hogg
- Medical Retina Department, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Department of Population Health Science, Population Health Science Institute, Newcastle University, Newcastle upon Tyne, UK
- Department of Ophthalmology, Newcastle upon Tyne Hospitals NHS Foundation Trust, Freeman Road, Newcastle upon Tyne, UK
- Konstantinos Balaskas
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
- Pearse A Keane
- NIHR Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Medical Retina, Institute of Ophthalmology, University College of London Institute of Ophthalmology, London, UK
11
A Survey on Computer-Aided Intelligent Methods to Identify and Classify Skin Cancer. Informatics 2022. DOI: 10.3390/informatics9040099.
Abstract
Melanoma is one of the more dangerous types of skin cancer, as it easily spreads to other parts of the human body; an early diagnosis is necessary for a higher survival rate. Computer-aided diagnosis (CAD) is suitable for providing precise findings before the critical stage. The computer-aided diagnostic process includes preprocessing, segmentation, feature extraction, and classification. This study discusses the advantages and disadvantages of various computer-aided algorithms, along with current approaches, problems, and the various types of datasets for skin images. Information about possible future work is also highlighted. The inferences derived from this survey will be useful for researchers working on skin cancer image analysis.