1
Gunasundari B, Thiagarajan R. Evaluation of High-Dimensional Data Classification for Skin Malignancy Detection Using DL-Based Techniques. Cancer Invest 2024:1-25. [PMID: 38767503 DOI: 10.1080/07357907.2024.2345184] [Received: 01/31/2024] [Accepted: 04/16/2024] [Indexed: 05/22/2024]
Abstract
Skin cancer can be detected through visual screening and skin analysis based on biopsy and the pathological state of the body. The survival rate of cancer patients is low, and millions of people are diagnosed annually. Skin malignancy classification is evaluated through several comparative analyses. Using Isomap with a vision transformer, we analyze high-dimensional images via dimensionality reduction. Skin cancer can present with severe, life-threatening symptoms. Overall performance evaluation and classification improve the accuracy achieved on the high-dimensional skin lesion dataset. In deep learning methodologies, the distinct phases of skin malignancy classification are assessed by accuracy, specificity, F1-score, recall, and sensitivity while implementing the classification methodology. Isomap, a nonlinear dimensionality reduction technique, keeps the data's underlying nonlinear relationships intact. This is essential for the categorization of skin malignancies, as the features that separate malignant from benign skin lesions may not be linearly separable. Isomap decreases the data's complexity while maintaining its essential characteristics, making the findings simpler to analyze and explain. High-dimensional skin lesion datasets are evaluated and classified more effectively using Isomap with the vision transformer.
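The Isomap pipeline this abstract relies on (neighborhood graph, geodesic distances, low-dimensional embedding) can be sketched in a few lines of numpy. This is a minimal illustration of the general technique only, not the authors' implementation; the toy data, neighbor count, and output dimensionality are assumptions.

```python
import numpy as np

def isomap(X, n_neighbors=3, n_components=2):
    """Minimal Isomap: k-NN graph -> geodesic distances -> classical MDS."""
    n = X.shape[0]
    # Pairwise Euclidean distances between all samples
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    # Keep only edges to each point's k nearest neighbors
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]  # symmetrize the graph
    # Geodesic (shortest-path) distances via Floyd-Warshall
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Toy "high-dimensional" data lying on a 1-D curve in 3-D
t = np.linspace(0.0, 3.0, 20)
X = np.column_stack([np.cos(t), np.sin(t), t])
emb = isomap(X, n_neighbors=3, n_components=2)
```

Because the points lie on a smooth curve, the geodesic distances unroll it, and the embedding stays finite and low-dimensional while preserving the nonlinear structure.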
Affiliation(s)
- B Gunasundari
- Department of Computer Science and Engineering, Prathyusha Engineering College, Chennai, Tamil Nadu, India
- R Thiagarajan
- Department of Computer Science and Engineering, Prathyusha Engineering College, Chennai, Tamil Nadu, India
2
Tang X, Rashid Sheykhahmad F. Boosted dipper throated optimization algorithm-based Xception neural network for skin cancer diagnosis: An optimal approach. Heliyon 2024; 10:e26415. [PMID: 38449650 PMCID: PMC10915520 DOI: 10.1016/j.heliyon.2024.e26415] [Received: 01/01/2024] [Revised: 02/10/2024] [Accepted: 02/13/2024] [Indexed: 03/08/2024]
Abstract
Skin cancer is a prevalent form of cancer that necessitates prompt and precise detection. However, current diagnostic methods for skin cancer are either invasive, time-consuming, or unreliable. Consequently, there is demand for an innovative and efficient approach that diagnoses skin cancer using non-invasive, automated techniques. In this study, a unique method is proposed for diagnosing skin cancer by employing an Xception neural network optimized with the Boosted Dipper Throated Optimization (BDTO) algorithm. The Xception neural network is a deep learning model capable of extracting high-level features from skin dermoscopy images, while the BDTO algorithm is a bio-inspired optimization technique that can determine the optimal parameters and weights for the Xception network. To enhance the quality and diversity of the images, the ISIC dataset, a widely accepted benchmark for skin cancer diagnosis, is utilized, and various image preprocessing and data augmentation techniques are implemented. Comparison with several contemporary approaches demonstrates that the method outperforms others in detecting skin cancer, achieving an average precision of 94.936%, an average accuracy of 94.206%, and an average recall of 97.092% for skin cancer diagnosis. Additionally, the 5-fold ROC curve and error curve are presented for data validation to showcase the superiority and robustness of the method.
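The BDTO algorithm's update rules are not specified in this abstract, so the sketch below is only a generic population-based hyperparameter search in the same spirit: candidates drawn within bounds are pulled toward the best configuration found so far, with random exploration. The toy objective, the choice of search dimensions (learning-rate exponent and dropout rate), and all constants are assumptions, not the paper's setup.

```python
import numpy as np

def population_search(objective, bounds, pop_size=20, iters=50, seed=0):
    """Generic population-based optimizer (illustrative stand-in for BDTO)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    scores = np.array([objective(p) for p in pop])
    best = pop[scores.argmin()].copy()
    for _ in range(iters):
        # Pull candidates toward the current best, plus random exploration
        step = rng.normal(0.0, 0.1, pop.shape) * (hi - lo)
        pop = np.clip(pop + 0.5 * (best - pop) + step, lo, hi)
        scores = np.array([objective(p) for p in pop])
        if scores.min() < objective(best):
            best = pop[scores.argmin()].copy()
    return best

# Toy objective standing in for the validation loss of a network
# configuration; dimensions: learning-rate exponent, dropout rate.
obj = lambda p: (p[0] + 3) ** 2 + (p[1] - 0.3) ** 2
best = population_search(obj, [(-5, -1), (0.0, 0.6)])
```

In a real setting, `objective` would train the network briefly and return a validation metric; here it is a cheap analytic surrogate so the loop is easy to follow.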
Affiliation(s)
- Xiaofei Tang
- School of Computer Science and Software Engineering, University of Science and Technology Liaoning, Anshan, 114051, Liaoning, China
- Fatima Rashid Sheykhahmad
- Ardabil Branch, Islamic Azad University, Ardabil, Iran
- College of Technical Engineering, The Islamic University, Najaf, Iraq
3
Pan X, Mu Y, Ma C, He Q. TFCNet: A texture-aware and fine-grained feature compensated polyp detection network. Comput Biol Med 2024; 171:108144. [PMID: 38382386 DOI: 10.1016/j.compbiomed.2024.108144] [Received: 09/12/2023] [Revised: 01/14/2024] [Accepted: 02/12/2024] [Indexed: 02/23/2024]
Abstract
PURPOSE: Abnormal tissue detection is a prerequisite for medical image analysis and computer-aided diagnosis and treatment. Using convolutional neural networks (CNNs) to achieve accurate detection of intestinal polyps benefits the early diagnosis and treatment of colorectal cancer. Currently, image detection models using multi-scale feature processing perform well in polyp detection. However, these methods do not fully consider the misalignment of information during feature scale changes, resulting in the loss of fine-grained features and, eventually, missed and false detections. METHOD: To solve this problem, a texture-aware and fine-grained feature compensated polyp detection network (TFCNet) is proposed in this paper. First, a Texture Awareness Module (TAM) is designed to mine rich texture information from the low-level layers and utilize high-level semantic information for background suppression, thereby capturing purer fine-grained features. Second, a Texture Feature Enhancement Module (TFEM) is designed to enhance the low-level texture information in TAM, and the enhanced texture features are fused with the high-level features. By making full use of low-level texture features and multi-scale context information, the semantic consistency and integrity of the features are ensured. Finally, a Residual Pyramid Splittable Attention Module (RPSA) is designed to compensate for the loss of channel information caused by skip connections and further improve the detection performance of the network. RESULTS: Experimental results on four datasets demonstrate that TFCNet outperforms existing methods. In particular, on the large dataset PolypSets, mAP@0.5-0.95 improves to 88.9%. On the small datasets CVC-ClinicDB and Kvasir, mAP@0.5-0.95 increases by 2% and 1.6%, respectively, over the baseline, showing a significant superiority over competing methods.
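The core idea of TAM, using high-level semantics to suppress background in low-level texture features, can be illustrated with a tiny numpy sketch. The exact module design from the paper is not reproduced here; the upsampling-by-repeat, the sigmoid gate, and the tensor shapes are all illustrative assumptions.

```python
import numpy as np

def texture_gate(low_feat, high_feat):
    """Illustrative gating in the spirit of TAM: a high-level semantic map
    suppresses background regions in a low-level texture feature map."""
    # Upsample the coarser high-level map (H/2, W/2) to the low-level
    # resolution by nearest-neighbor repetition
    up = high_feat.repeat(2, axis=0).repeat(2, axis=1)
    gate = 1.0 / (1.0 + np.exp(-up))   # sigmoid -> soft mask in (0, 1)
    return low_feat * gate             # background-suppressed texture

low = np.ones((8, 8))                  # toy low-level texture features
high = np.full((4, 4), 10.0)           # strongly "foreground" everywhere
out = texture_gate(low, high)
```

Where the semantic map is strongly positive (foreground), the gate passes texture through nearly unchanged; strongly negative responses would drive the gate toward zero and suppress background texture.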
Affiliation(s)
- Xiaoying Pan
- Shanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, 710121, China; School of Computer Science & Technology, Xi'an University of Post & Telecommunications, Xi'an, 710121, China.
- Yaya Mu
- Shanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, 710121, China; School of Computer Science & Technology, Xi'an University of Post & Telecommunications, Xi'an, 710121, China
- Chenyang Ma
- Shanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, 710121, China; School of Computer Science & Technology, Xi'an University of Post & Telecommunications, Xi'an, 710121, China
- Qiqi He
- Shanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an, 710121, China; School of Computer Science & Technology, Xi'an University of Post & Telecommunications, Xi'an, 710121, China
4
Kumar Lilhore U, Simaiya S, Sharma YK, Kaswan KS, Rao KBVB, Rao VVRM, Baliyan A, Bijalwan A, Alroobaea R. A precise model for skin cancer diagnosis using hybrid U-Net and improved MobileNet-V3 with hyperparameters optimization. Sci Rep 2024; 14:4299. [PMID: 38383520 PMCID: PMC10881962 DOI: 10.1038/s41598-024-54212-8] [Received: 09/29/2023] [Accepted: 02/09/2024] [Indexed: 02/23/2024]
Abstract
Skin cancer is a frequently occurring and potentially deadly disease that necessitates prompt and precise diagnosis to ensure efficacious treatment. This paper introduces an innovative approach for accurately identifying skin cancer by utilizing a convolutional neural network architecture and optimizing its hyperparameters. The proposed approach aims to increase the precision and efficacy of skin cancer recognition and consequently enhance patients' experiences. This investigation tackles several significant challenges in skin cancer recognition: feature extraction, model architecture design, and hyperparameter optimization. The proposed model utilizes advanced deep-learning methodologies to extract complex features and patterns from skin cancer images. We enhance the learning procedure by integrating a standard U-Net and an improved MobileNet-V3 with optimization techniques, allowing the model to differentiate malignant and benign skin cancers. We also replaced the cross-entropy loss function of the MobileNet-V3 framework with a bias loss function to enhance accuracy, and replaced the model's squeeze-and-excitation component with a practical channel attention component to reduce parameters. Cross-layer connections among the Mobile modules are integrated to leverage synthesized features effectively, and dilated convolutions are incorporated to enlarge the receptive field. Hyperparameter optimization is of utmost importance in improving the efficiency of deep learning models; to fine-tune the model's hyperparameters, we employ Bayesian optimization on the pre-trained MobileNet-V3 CNN architecture. The proposed model is compared with existing models, i.e., MobileNet, VGG-16, MobileNet-V2, ResNet-152v2, and VGG-19, on the HAM-10000 Melanoma Skin Cancer dataset. The empirical findings illustrate that the proposed optimized hybrid MobileNet-V3 model outperforms existing skin cancer detection and segmentation techniques, with a precision of 97.84%, sensitivity of 96.35%, accuracy of 98.86%, and specificity of 97.32%. The enhanced performance results in timelier and more precise diagnoses, potentially contributing to life-saving outcomes and mitigating healthcare expenditures.
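The abstract's "practical channel attention" replacement for squeeze-and-excitation is not detailed, but the parameter-reduction idea can be illustrated with an ECA-style sketch: global average pooling followed by a small 1-D convolution across channels, with no fully connected layers. The fixed averaging kernel and tensor shapes below are assumptions for illustration only.

```python
import numpy as np

def channel_attention(x, k=3):
    """Lightweight channel attention sketch (an ECA-style stand-in for
    the 'practical channel attention' the abstract mentions)."""
    # x: (C, H, W) feature map
    c = x.mean(axis=(1, 2))                      # squeeze: (C,) channel stats
    kernel = np.ones(k) / k                      # fixed 1-D kernel (illustrative)
    mixed = np.convolve(c, kernel, mode="same")  # local cross-channel mixing
    w = 1.0 / (1.0 + np.exp(-mixed))             # sigmoid gate per channel
    return x * w[:, None, None]                  # excite: reweight channels

x = np.random.default_rng(1).random((16, 4, 4))
y = channel_attention(x)
```

Unlike a squeeze-and-excitation block, whose two fully connected layers cost O(C^2/r) parameters, the 1-D convolution here needs only k weights per layer, which is the parameter reduction the substitution targets.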
Affiliation(s)
- Umesh Kumar Lilhore
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
- Sarita Simaiya
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
- Yogesh Kumar Sharma
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfield, Vaddeswaram, Guntur, AP, India
- Kuldeep Singh Kaswan
- School of Computing Science and Engineering, Galgotias University, Greater Noida, Uttar Pradesh, India
- K B V Brahma Rao
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Greenfield, Vaddeswaram, Guntur, AP, India
- V V R Maheswara Rao
- Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), Bhimavaram, India
- Anupam Baliyan
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, 140413, India
- Roobaea Alroobaea
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, 21944, Taif, Saudi Arabia
5
Ali MU, Khalid M, Alshanbari H, Zafar A, Lee SW. Enhancing Skin Lesion Detection: A Multistage Multiclass Convolutional Neural Network-Based Framework. Bioengineering (Basel) 2023; 10:1430. [PMID: 38136020 PMCID: PMC10741172 DOI: 10.3390/bioengineering10121430] [Received: 11/14/2023] [Revised: 12/07/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023]
Abstract
The early identification and treatment of various dermatological conditions depend on the detection of skin lesions. Owing to advancements in computer-aided diagnosis and machine learning, learning-based skin lesion analysis methods have attracted much interest recently. Employing the concept of transfer learning, this research proposes a deep convolutional neural network (CNN)-based multistage and multiclass framework to categorize seven types of skin lesions. In the first stage, a CNN model was developed to classify skin lesion images into two classes, benign and malignant. In the second stage, the model was used with the transfer-learning concept to further categorize benign lesions into five subcategories (melanocytic nevus, actinic keratosis, benign keratosis, dermatofibroma, and vascular) and malignant lesions into two subcategories (melanoma and basal cell carcinoma). The frozen weights of the CNN, trained on correlated images, benefited the transfer learning using the same type of images for the subclassification of the benign and malignant classes. The proposed multistage and multiclass technique achieved a classification accuracy of up to 93.4% on the online ISIC2018 skin lesion dataset for benign and malignant class identification. Furthermore, a high accuracy of 96.2% was achieved for the subclassification of both classes. Sensitivity, specificity, precision, and F1-score metrics further validated the effectiveness of the proposed multistage and multiclass framework. Compared to existing CNN models described in the literature, the proposed approach took less time to train and had a higher classification rate.
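The two-stage transfer idea, train a model for the binary task, freeze the learned features, then train new heads for the finer subclasses, can be sketched with a toy numpy model. The random-projection "feature extractor", logistic-regression heads, and synthetic labels below are all assumptions standing in for the paper's CNN, not its architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, epochs=200, lr=0.5):
    """Tiny logistic-regression head (stand-in for a CNN classifier head)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))     # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)       # gradient step on log loss
    return w

# Stage 1: a "feature extractor" (toy random projection, assumed frozen
# after stage-1 training) plus a benign/malignant head.
W_feat = rng.normal(size=(10, 4))
X = rng.normal(size=(200, 10))
y_stage1 = (X[:, 0] > 0).astype(float)         # toy benign/malignant labels
F = np.tanh(X @ W_feat)                        # shared frozen features
w1 = train_logreg(F, y_stage1)

# Stage 2 (transfer): reuse the frozen features unchanged and train a
# new head for a subclass task; only the head's weights are learned.
y_stage2 = (X[:, 1] > 0).astype(float)         # toy subclass labels
w2 = train_logreg(F, y_stage2)
```

Reusing `F` in stage 2 mirrors the frozen-weights transfer in the abstract: the expensive representation is learned once, and each subsequent task trains only a small classification head.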
Affiliation(s)
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Majdi Khalid
- Department of Computer Science and Artificial Intelligence, College of Computers, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Hanan Alshanbari
- Department of Computer Science and Artificial Intelligence, College of Computers, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, Sungkyunkwan University School of Medicine, Suwon 16419, Republic of Korea