1
Behara K, Bhero E, Agee JT. Grid-Based Structural and Dimensional Skin Cancer Classification with Self-Featured Optimized Explainable Deep Convolutional Neural Networks. Int J Mol Sci 2024;25:1546. PMID: 38338828; PMCID: PMC10855492; DOI: 10.3390/ijms25031546.
Abstract
Skin cancer is a severe and potentially lethal disease, and early detection is critical for successful treatment. Traditional procedures for diagnosing skin cancer are expensive, time-intensive, and require the expertise of a medical practitioner. In recent years, many researchers have developed artificial intelligence (AI) tools, including shallow and deep machine learning-based approaches, to diagnose skin cancer. However, AI-based skin cancer diagnosis faces challenges in complexity, low reproducibility, and explainability. To address these problems, we propose a novel Grid-Based Structural and Dimensional Explainable Deep Convolutional Neural Network for accurate and interpretable skin cancer classification. This model employs adaptive thresholding for extracting the region of interest (ROI), using its dynamic capabilities to enhance the accuracy of identifying cancerous regions. The VGG-16 architecture extracts the hierarchical characteristics of skin lesion images, leveraging its recognized capabilities for deep feature extraction. Our proposed model leverages a grid structure to capture spatial relationships within lesions, while the dimensional features extract relevant information from various image channels. An Adaptive Intelligent Coney Optimization (AICO) algorithm is employed for self-feature-selected optimization and fine-tuning of the hyperparameters, which dynamically adapts the model architecture to optimize feature extraction and classification. The model was trained and tested using the ISIC dataset of 10,015 dermoscopic images and the MNIST dataset of 2357 images of malignant and benign oncological diseases.
The experimental results demonstrated that the model achieved accuracy and CSI values of 0.96 and 0.97 for TP 80 using the ISIC dataset, which is 17.70% and 16.49% more than lightweight CNN, 20.83% and 19.59% more than DenseNet, 18.75% and 17.53% more than CNN, 6.25% and 6.18% more than EfficientNet-B0, 5.21% and 5.15% more than ECNN, 2.08% and 2.06% more than COA-CAN, and 5.21% and 5.15% more than ARO-ECNN. Additionally, the AICO self-feature-selected ECNN model exhibited minimal FPR and FNR of 0.03 and 0.02, respectively. The model attained a loss of 0.09 for the ISIC dataset and 0.18 for the MNIST dataset, indicating that the model proposed in this research outperforms existing techniques. The proposed model improves accuracy, interpretability, and robustness for skin cancer classification, ultimately aiding clinicians in early diagnosis and treatment.
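The adaptive-thresholding ROI step described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the local-mean rule, block size, and offset below are assumptions, and the real pipeline feeds the extracted region into VGG-16 rather than returning a bounding box.

```python
import numpy as np

def adaptive_threshold(img, block=5, offset=10.0):
    """Local-mean adaptive threshold: a pixel is foreground when it is
    darker than the mean of its block x block neighbourhood minus an
    offset (lesions are typically darker than the surrounding skin)."""
    pad = block // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    # Integral image (with a leading zero row/column) gives O(1) window sums.
    ii = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    s = (ii[block:block + h, block:block + w]
         - ii[:h, block:block + w]
         - ii[block:block + h, :w]
         + ii[:h, :w])
    local_mean = s / (block * block)
    return img < local_mean - offset

def roi_bbox(mask):
    """Bounding box (ymin, ymax, xmin, xmax) of the foreground pixels,
    a crude stand-in for ROI extraction; None if nothing was detected."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return int(ys.min()), int(ys.max()), int(xs.min()), int(xs.max())
```

On a bright synthetic image with a dark 4x4 square, the mask recovers exactly that square's bounding box; on real dermoscopic images one would additionally keep only the largest connected component before cropping.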
Affiliation(s)
- Kavita Behara, Department of Electrical Engineering, Mangosuthu University of Technology, Durban 4031, South Africa
- Ernest Bhero, Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu-Natal, Durban 4041, South Africa
- John Terhile Agee, Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu-Natal, Durban 4041, South Africa
2
Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, Tuncer T, Tan RS, Palmer E, Azizan MMB, Kadri NA, Acharya UR. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Sci Rep 2022;12:17297. PMID: 36241674; PMCID: PMC9568538; DOI: 10.1038/s41598-022-21380-4.
Abstract
Pain intensity classification using facial images is a challenging problem in computer vision research. This work proposed a patch and transfer learning-based model to classify various pain intensities using facial images. The input facial images were segmented into dynamic-sized horizontal patches, or "shutter blinds". A lightweight deep network, DarkNet19, pre-trained on ImageNet1K, was used to generate deep features from the shutter blinds and from the undivided, resized, segmented input facial image. The most discriminative features were selected from these deep features using iterative neighborhood component analysis and were then fed to a standard shallow fine k-nearest neighbor classifier for classification using tenfold cross-validation. The proposed shutter blinds-based model was trained and tested on datasets derived from two public databases, the University of Northern British Columbia-McMaster Shoulder Pain Expression Archive Database and the Denver Intensity of Spontaneous Facial Action Database, both of which comprised four pain intensity classes that had been labeled by human experts using validated facial action coding system methodology. Our shutter blinds-based classification model attained more than 95% overall accuracy on both datasets. The excellent performance suggests that the automated pain intensity classification model can be deployed to assist doctors in the non-verbal detection of pain using facial images in various situations (e.g., non-communicative patients or during surgery). This system can facilitate timely detection and management of pain.
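The patch-generation step ("shutter blinds") can be sketched as follows. The abstract does not spell out the dynamic sizing rule, so the successive-halving scheme below is purely an assumed placeholder, and the downstream DarkNet19, neighborhood component analysis, and kNN stages are omitted.

```python
import numpy as np

def shutter_blinds(img, n_patches=4):
    """Split an image into horizontal strips ('shutter blinds').
    Dynamic sizing here is an assumption: each strip is half the height
    of the previous one, and the last strip absorbs the remainder.
    The paper's exact sizing scheme may differ."""
    h = img.shape[0]
    patches, top = [], 0
    size = h // 2
    for _ in range(n_patches - 1):
        patches.append(img[top:top + size])
        top += size
        size = max(size // 2, 1)  # halve, but never drop below one row
    patches.append(img[top:])  # last blind takes whatever is left
    return patches
```

Stacking the strips back together reproduces the original image, so each pixel contributes to exactly one patch-level feature vector before the per-patch deep features are concatenated.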
Affiliation(s)
- Prabal Datta Barua, School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Nursena Baygin, Department of Computer Engineering, College of Engineering, Kafkas University, Kars, Turkey
- Sengul Dogan, Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Mehmet Baygin, Department of Computer Engineering, College of Engineering, Ardahan University, Ardahan, Turkey
- N. Arunkumar, Rathinam College of Engineering, Coimbatore, India
- Hamido Fujita, Faculty of Information Technology, HUTECH University of Technology, Ho Chi Minh City, Viet Nam; Andalusian Research Institute in Data Science and Computational Intelligence, University of Granada, Granada, Spain; Regional Research Center, Iwate Prefectural University, Iwate, Japan
- Turker Tuncer, Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Ru-San Tan, Department of Cardiology, National Heart Centre Singapore, Singapore; Duke-NUS Medical School, Singapore
- Elizabeth Palmer, Centre of Clinical Genetics, Sydney Children's Hospitals Network, Randwick 2031, Australia; School of Women's and Children's Health, University of New South Wales, Randwick 2031, Australia
- Muhammad Mokhzaini Bin Azizan, Department of Electrical and Electronic Engineering, Faculty of Engineering and Built Environment, Universiti Sains Islam Malaysia (USIM), Nilai, Malaysia
- Nahrizul Adib Kadri, Department of Biomedical Engineering, Faculty of Engineering, University Malaya, 50603 Kuala Lumpur, Malaysia
- U. Rajendra Acharya, Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore 599489; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
3
Role of Four-Chamber Heart Ultrasound Images in Automatic Assessment of Fetal Heart: A Systematic Understanding. Informatics 2022. DOI: 10.3390/informatics9020034.
Abstract
The fetal echocardiogram is useful for monitoring and diagnosing cardiovascular diseases in the fetus in utero. Importantly, it can be used for assessing prenatal congenital heart disease, for which timely intervention can improve the unborn child's outcomes. In this regard, artificial intelligence (AI) can be used for the automatic analysis of fetal heart ultrasound images. This study reviews non-deep and deep learning approaches for assessing the fetal heart using standard four-chamber ultrasound images. The state-of-the-art techniques in the field are described and discussed. The compendium demonstrates the capability of automatic assessment of the fetal heart using AI technology. This work can serve as a resource for research in the field.