1
Pacal I, Ozdemir B, Zeynalov J, Gasimov H, Pacal N. A novel CNN-ViT-based deep learning model for early skin cancer diagnosis. Biomed Signal Process Control 2025; 104:107627. [DOI: 10.1016/j.bspc.2025.107627] [Indexed: 05/14/2025]
2
Sinamenye JH, Chatterjee A, Shrestha R. Potato plant disease detection: leveraging hybrid deep learning models. BMC Plant Biology 2025; 25:647. [PMID: 40380088] [PMCID: PMC12082912] [DOI: 10.1186/s12870-025-06679-4] [Received: 07/09/2024] [Accepted: 05/05/2025] [Indexed: 05/19/2025]
Abstract
Agriculture, a crucial sector for global economic development and sustainable food production, faces significant challenges in detecting and managing crop diseases. These diseases can greatly impact yield and productivity, making early and accurate detection vital, especially in staple crops like potatoes. Traditional manual methods, as well as some existing machine learning and deep learning techniques, often lack accuracy and generalizability due to factors such as variability in real-world conditions. This study proposes a novel approach to improve potato plant disease detection and identification using a hybrid deep-learning model, EfficientNetV2B3+ViT. This model combines the strengths of a convolutional neural network (EfficientNetV2B3) and a Vision Transformer (ViT). It was trained on a diverse potato leaf image dataset, the "Potato Leaf Disease Dataset", which reflects real-world agricultural conditions. The proposed model achieved an accuracy of 85.06%, representing an 11.43% improvement over the results of the previous study. These results highlight the effectiveness of the hybrid model in complex agricultural settings and its potential to improve potato plant disease detection and identification.
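The fusion step described in this abstract, concatenating a CNN feature vector with a ViT feature vector before a shared classification head, can be sketched as follows. This is an illustrative outline only: the feature dimensions, batch size, class count, and random weights are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for backbone outputs (hypothetical dimensions): a CNN such as
# EfficientNetV2B3 yields a pooled feature vector; a ViT yields its [CLS] token.
cnn_features = rng.normal(size=(4, 1536))   # batch of 4, CNN feature dim 1536
vit_features = rng.normal(size=(4, 768))    # ViT [CLS] embedding dim 768

# Hybrid fusion: concatenate the two views into one descriptor per image.
fused = np.concatenate([cnn_features, vit_features], axis=1)

# A linear head over the fused features (weights are random placeholders).
n_classes = 7
W = rng.normal(size=(fused.shape[1], n_classes)) * 0.01
logits = fused @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

print(fused.shape)        # (4, 2304)
print(probs.sum(axis=1))  # each row sums to 1
```

In a real pipeline the two backbones would be trained (or fine-tuned) jointly with the head; the sketch only shows the shape bookkeeping of the fusion.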
Affiliation(s)
- Ayan Chatterjee
- Department of Digital Technology, STIFTELSEN NILU, Kjeller, Norway
- Raju Shrestha
- Department of Computer Science, Oslo Metropolitan University (OsloMet), Oslo, Norway
3
Pei G, Qian X, Zhou B, Liu Z, Wu W. Research on agricultural disease recognition methods based on very large kernel convolutional network-RepLKNet. Sci Rep 2025; 15:16843. [PMID: 40374696] [PMCID: PMC12081735] [DOI: 10.1038/s41598-025-01553-7] [Received: 01/21/2025] [Accepted: 05/07/2025] [Indexed: 05/17/2025]
Abstract
Agricultural diseases pose significant challenges to plant production. With the rapid advancement of deep learning, the accuracy and efficiency of plant disease identification have substantially improved. However, conventional convolutional neural networks that rely on multi-layer small-kernel structures are limited in capturing long-range dependencies and global contextual information due to their constrained receptive fields. To overcome these limitations, this study proposes a plant disease recognition method based on RepLKNet, a convolutional architecture with large kernel designs that significantly expand the receptive field and enhance feature representation. Transfer learning is incorporated to further improve training efficiency and model performance. Experiments conducted on the Plant Diseases Training Dataset, comprising 95,865 images across 61 disease categories, demonstrate the effectiveness of the proposed method. Under five-fold cross-validation, the model achieved an overall accuracy (OA) of 96.03%, an average accuracy (AA) of 94.78%, and a Kappa coefficient of 95.86%. Compared with ResNet50 (OA: 95.62%) and GoogleNet (OA: 94.98%), the proposed model demonstrates competitive or superior performance. Ablation experiments reveal that replacing large kernels with 3×3 or 5×5 convolutions results in accuracy reductions of up to 1.1% in OA and 1.3% in AA, confirming the effectiveness of the large kernel design. These results demonstrate the robustness and superior capability of RepLKNet in plant disease recognition tasks.
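The large-kernel design at the core of RepLKNet relies on structural reparameterization: a small parallel kernel used during training can be folded into the large kernel at inference by zero-padding and addition, leaving a single convolution. A minimal single-channel numpy sketch of that merge (the kernel sizes and input are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical kernel sizes: a large 31x31 depthwise kernel with a
# parallel 5x5 branch, in the spirit of RepLKNet-style training.
K_large, K_small = 31, 5
w_large = rng.normal(size=(K_large, K_large))
w_small = rng.normal(size=(K_small, K_small))

def conv2d(x, w):
    """Naive 'same' 2-D cross-correlation for a single channel."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + k, j:j + k] * w).sum()
    return out

x = rng.normal(size=(16, 16))

# Training-time output: two parallel branches summed.
y_train = conv2d(x, w_large) + conv2d(x, w_small)

# Inference-time reparameterization: zero-pad the small kernel to 31x31
# and add it into the large kernel, so one convolution reproduces both.
pad = (K_large - K_small) // 2
w_merged = w_large + np.pad(w_small, pad)
y_merged = conv2d(x, w_merged)

print(np.allclose(y_train, y_merged))  # True: the merge is exact
```

Because convolution is linear in the kernel, the merged output matches the two-branch output exactly (up to floating-point rounding).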
Affiliation(s)
- Guoquan Pei
- College of Big Data, Yunnan Agricultural University, Kunming, 650201, China
- Xueying Qian
- College of Big Data, Yunnan Agricultural University, Kunming, 650201, China
- Bing Zhou
- College of Science, Yunnan Agricultural University, Kunming, 650201, China
- Zigao Liu
- Yunnan Traceability Technology Co. Ltd., Kunming, 650201, China
- Wendou Wu
- College of Big Data, Yunnan Agricultural University, Kunming, 650201, China.
4
Ince S, Kunduracioglu I, Algarni A, Bayram B, Pacal I. Deep learning for cerebral vascular occlusion segmentation: A novel ConvNeXtV2 and GRN-integrated U-Net framework for diffusion-weighted imaging. Neuroscience 2025; 574:42-53. [PMID: 40204150] [DOI: 10.1016/j.neuroscience.2025.04.010] [Received: 02/27/2025] [Revised: 03/26/2025] [Accepted: 04/05/2025] [Indexed: 04/11/2025]
Abstract
Cerebral vascular occlusion is a serious condition that can lead to stroke and permanent neurological damage due to insufficient oxygen and nutrients reaching brain tissue. Early diagnosis and accurate segmentation are critical for effective treatment planning. Due to its high soft tissue contrast, Magnetic Resonance Imaging (MRI) is commonly used for detecting such occlusions, as in ischemic stroke. However, challenges such as low contrast, noise, and heterogeneous lesion structures in MRI images complicate manual segmentation and often lead to misinterpretations. As a result, deep learning-based Computer-Aided Diagnosis (CAD) systems are essential for faster and more accurate diagnosis and treatment methods, although they can sometimes face challenges such as high computational costs and difficulties in segmenting small or irregular lesions. This study proposes a novel U-Net architecture enhanced with ConvNeXtV2 blocks and GRN-based Multi-Layer Perceptrons (MLP) to address these challenges in cerebral vascular occlusion segmentation. This is the first application of ConvNeXtV2 in this domain. The proposed model significantly improves segmentation accuracy, even in low-contrast regions, while maintaining high computational efficiency, which is crucial for real-world clinical applications. To reduce false positives and improve overall accuracy, small lesions (≤5 pixels) were removed in the preprocessing step with the support of expert clinicians. Experimental results on the ISLES 2022 dataset showed superior performance with an Intersection over Union (IoU) of 0.8015 and a Dice coefficient of 0.8894. Comparative analyses indicate that the proposed model achieves higher segmentation accuracy than existing U-Net variants and other methods, offering a promising solution for clinical use.
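The two reported segmentation metrics are internally consistent: for binary masks, Dice = 2·IoU/(1+IoU), and 2·0.8015/1.8015 ≈ 0.890, which matches the reported Dice of 0.8894. Both metrics on a pair of toy masks (the masks are invented for illustration):

```python
import numpy as np

def iou_dice(pred, target):
    """IoU and Dice coefficient for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice

# Two overlapping 4x4 squares: 16 px each, 9 px of overlap.
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1
target = np.zeros((8, 8), dtype=int); target[3:7, 3:7] = 1

iou, dice = iou_dice(pred, target)
print(round(iou, 4), round(dice, 4))  # 0.3913 0.5625
```

On the toy masks, Dice = 2·(9/23)/(1+9/23) = 0.5625, confirming the same analytic relation the paper's numbers satisfy.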
Affiliation(s)
- Suat Ince
- Department of Radiology, University of Health Sciences, Van Education and Research Hospital, 65000 Van, Turkey.
- Ismail Kunduracioglu
- Department of Computer Engineering, Faculty of Engineering, Igdir University, 76000 Igdir, Turkey.
- Ali Algarni
- Department of Informatics and Computer Systems, College of Computer Science, King Khalid University, Abha 61421, Saudi Arabia.
- Bilal Bayram
- Department of Neurology, University of Health Sciences, Van Education and Research Hospital, 65000 Van, Turkey.
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, 76000 Igdir, Turkey; Department of Electronics and Information Technologies, Faculty of Architecture and Engineering, Nakhchivan State University, AZ 7012 Nakhchivan, Azerbaijan.
5
Sharma J, Al-Huqail AA, Almogren A, Doshi H, Jayaprakash B, Bharathi B, Ur Rehman A, Hussen S. Deep learning based ensemble model for accurate tomato leaf disease classification by leveraging ResNet50 and MobileNetV2 architectures. Sci Rep 2025; 15:13904. [PMID: 40263518] [PMCID: PMC12015254] [DOI: 10.1038/s41598-025-98015-x] [Received: 01/25/2025] [Accepted: 04/08/2025] [Indexed: 04/24/2025]
Abstract
Global food security depends on tomato cultivation, but several fungal, bacterial, and viral diseases seriously reduce productivity and quality, causing major financial losses. Reducing these impacts depends on early, exact diagnosis of diseases. This work provides a deep learning-based ensemble model for tomato leaf disease classification combining MobileNetV2 and ResNet50. To improve feature extraction, the models were fine-tuned by replacing their output layers with GlobalAveragePooling2D, Batch Normalization, Dropout, and Dense layers. To exploit their complementary strengths, the feature maps from both models were combined. This study uses a publicly available dataset from Kaggle for tomato leaf disease classification, training on 11,000 annotated images spanning 10 categories: bacterial spot, early blight, late blight, leaf mold, septoria leaf spot, spider mites, target spot, yellow leaf curl virus, mosaic virus, and healthy leaves. Data preprocessing included image resizing and an 80-10-10 split, allocating 80% for training, 10% for testing, and 10% for validation to ensure a balanced evaluation. With a 99.91% test accuracy, the proposed model was quite remarkable. Furthermore, the model showed great precision (99.92%), recall (99.90%), and an F1-score of 99.91%, guaranteeing strong classification performance across all disease categories. With few misclassifications, the confusion matrix further verified almost flawless classification. These findings show how well deep learning can automate tomato disease diagnosis, providing a scalable and highly accurate solution for smart agriculture. By means of early intervention and precision agriculture techniques, the suggested strategy has the potential to improve crop health monitoring, reduce economic losses, and encourage sustainable farming practices.
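The 80-10-10 split described above can be sketched as follows; the code shuffles index arrays as stand-ins for the 11,000 images, since the actual file handling is not described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the 11,000-image dataset: just shuffled indices.
n = 11000
idx = rng.permutation(n)

# 80-10-10 split: 80% train, 10% test, 10% validation.
n_train, n_hold = int(0.8 * n), int(0.1 * n)
train = idx[:n_train]
test = idx[n_train:n_train + n_hold]
val = idx[n_train + n_hold:]

print(len(train), len(test), len(val))  # 8800 1100 1100
```

Shuffling before slicing ensures each partition is a random sample, and slicing one permutation guarantees the three partitions are disjoint and exhaustive.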
Affiliation(s)
- Jatin Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- Asma A Al-Huqail
- Department of Botany and Microbiology, College of Science, King Saud University, P.O. Box 2455, Riyadh, 11451, Saudi Arabia
- Ahmad Almogren
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11633, Saudi Arabia
- Hardik Doshi
- Marwadi University Research Center, Department of Computer Engineering, Faculty of Engineering & Technology Marwadi University, Rajkot, Gujarat, 360003, India
- B Jayaprakash
- Department of Computer Science & IT, School of Sciences, Jain (Deemed to be University), Bangalore, Karnataka, India
- B Bharathi
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India
- Ateeq Ur Rehman
- School of Computing, Gachon University, Seongnam-si, 13120, Republic of Korea.
- Seada Hussen
- Department of Electrical Power, Adama Science and Technology University, Adama, 1888, Ethiopia.
6
Goyal A, Lakhwani K. Integrating advanced deep learning techniques for enhanced detection and classification of citrus leaf and fruit diseases. Sci Rep 2025; 15:12659. [PMID: 40221550] [PMCID: PMC11993616] [DOI: 10.1038/s41598-025-97159-0] [Received: 01/27/2025] [Accepted: 04/02/2025] [Indexed: 04/14/2025]
Abstract
In this study, we evaluate the performance of four deep learning models, EfficientNetB0, ResNet50, DenseNet121, and InceptionV3, for the classification of citrus diseases from images. Extensive experiments were conducted on a dataset of 759 images distributed across 9 disease classes, including Black spot, Canker, Greening, Scab, Melanose, and healthy examples of fruits and leaves. Both InceptionV3 and DenseNet121 achieved a test accuracy of 99.12%, with a macro average F1-score of approximately 0.986 and a weighted average F1-score of 0.991, indicating exceptional performance in terms of precision and recall across the majority of the classes. ResNet50 and EfficientNetB0 attained test accuracies of 84.58% and 80.18%, respectively, reflecting moderate performance in comparison. These research results underscore the promise of modern convolutional neural networks for accurate and timely detection of citrus diseases, thereby providing effective tools for farmers and agricultural professionals to implement proactive disease management, reduce crop losses, and improve yield quality.
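The macro and weighted F1 averages reported above differ only in how per-class scores are combined: macro averages classes equally, while weighted scales each class by its support, which matters for an imbalanced 9-class dataset like this one. A toy illustration (the per-class scores and supports are invented for the example):

```python
import numpy as np

# Hypothetical per-class F1 scores and class supports for a 3-class problem.
f1_per_class = np.array([1.0, 0.9, 0.8])
support = np.array([50, 30, 20])  # number of true samples per class

# Macro F1: unweighted mean over classes.
macro_f1 = f1_per_class.mean()

# Weighted F1: mean weighted by class support.
weighted_f1 = (f1_per_class * support).sum() / support.sum()

print(round(macro_f1, 3), round(weighted_f1, 3))  # 0.9 0.93
```

When larger classes score higher, the weighted average exceeds the macro average, which is the same pattern as the paper's 0.986 macro vs. 0.991 weighted F1.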
Affiliation(s)
- Archna Goyal
- Department of Computer Science and Engineering, JECRC University, Jaipur, 303905, Rajasthan, India.
- Kamlesh Lakhwani
- Department of Computer Science and Engineering, JECRC University, Jaipur, 303905, Rajasthan, India
7
Wang Y, Wang Q, Su Y, Jing B, Feng M. Detection of kidney bean leaf spot disease based on a hybrid deep learning model. Sci Rep 2025; 15:11185. [PMID: 40169647] [PMCID: PMC11961604] [DOI: 10.1038/s41598-025-93742-7] [Received: 01/31/2025] [Accepted: 03/10/2025] [Indexed: 04/03/2025]
Abstract
Rapid diagnosis of kidney bean leaf spot disease is crucial for ensuring crop health and increasing yield. However, traditional machine learning methods face limitations in feature extraction, while deep learning approaches, despite their advantages, are computationally expensive and do not always yield optimal results. Moreover, reliable datasets for kidney bean leaf spot disease remain scarce. To address these challenges, this study constructs the first-ever kidney bean leaf spot disease (KBLD) dataset, filling a significant gap in the field. Based on this dataset, a novel hybrid deep learning model framework is proposed, which integrates deep learning models (EfficientNet-B7, MobileNetV3, ResNet50, and VGG16) for feature extraction with machine learning algorithms (Logistic Regression, Random Forest, AdaBoost, and Stochastic Gradient Boosting) for classification. By leveraging the Optuna tool for hyperparameter optimization, 16 combined models were evaluated. Experimental results show that the hybrid model combining EfficientNet-B7 and Stochastic Gradient Boosting achieves the highest detection accuracy of 96.26% on the KBLD dataset, with an F1-score of 0.97. The innovations of this study lie in the construction of a high-quality KBLD dataset and the development of a novel framework combining deep learning and machine learning, significantly improving the detection efficiency and accuracy of kidney bean leaf spot disease. This research provides a new approach for intelligent diagnosis and management of crop diseases in precision agriculture, contributing to increased agricultural productivity and ensuring food security.
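The 16 evaluated combinations arise from crossing the four deep feature extractors with the four classical classifiers. A minimal sketch of that grid (training, scoring, and the Optuna hyperparameter search are omitted; only the combinatorial structure is shown):

```python
from itertools import product

# The 4x4 grid of hybrid pipelines: each pairs one deep feature
# extractor with one classical classifier.
extractors = ["EfficientNet-B7", "MobileNetV3", "ResNet50", "VGG16"]
classifiers = ["Logistic Regression", "Random Forest", "AdaBoost",
               "Stochastic Gradient Boosting"]

combos = list(product(extractors, classifiers))
print(len(combos))   # 16
print(combos[0])     # ('EfficientNet-B7', 'Logistic Regression')
```

In the study's setup, each combination would be trained on features extracted by the CNN and scored on the KBLD dataset, with the best pairing reported as EfficientNet-B7 plus Stochastic Gradient Boosting.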
Affiliation(s)
- Yiwei Wang
- College of Agriculture, Shanxi Agricultural University, Jinzhong, China
- Qianyu Wang
- College of Agriculture, Shanxi Agricultural University, Jinzhong, China
- Yue Su
- College of Agriculture, Shanxi Agricultural University, Jinzhong, China
- Binghan Jing
- College of Agriculture, Shanxi Agricultural University, Jinzhong, China
- Meichen Feng
- College of Agriculture, Shanxi Agricultural University, Jinzhong, China.
8
Wang C, Xia Y, Xia L, Wang Q, Gu L. Dual discriminator GAN-based synthetic crop disease image generation for precise crop disease identification. Plant Methods 2025; 21:46. [PMID: 40159478] [PMCID: PMC11955132] [DOI: 10.1186/s13007-025-01361-0] [Received: 01/02/2025] [Accepted: 03/09/2025] [Indexed: 04/02/2025]
Abstract
Deep learning-based computer vision technology significantly improves the accuracy and efficiency of crop disease detection. However, the scarcity of crop disease images leads to insufficient training data, limiting the accuracy of disease recognition and the generalization ability of deep learning models. Therefore, increasing the number and diversity of high-quality disease images is crucial for enhancing disease monitoring performance. We design a frequency-domain and wavelet image augmentation network with a dual discriminator structure (FHWD). The first discriminator distinguishes between real and generated images, while the second, high-frequency discriminator distinguishes between the high-frequency components of both. High-frequency details play a crucial role in the sharpness, texture, and fine-grained structures of an image, which are essential for realistic image generation. During training, we combine the proposed wavelet loss and Fast Fourier Transform loss functions. These loss functions guide the model to focus on image details through multi-band constraints and frequency-domain transformation, improving the authenticity of lesions and textures and thereby enhancing the visual quality of the generated images. We compare the generation performance of different models on ten crop diseases from the PlantVillage dataset. The experimental results show that the images generated by FHWD contain more realistic leaf disease lesions, with higher image quality that better aligns with human visual perception. Additionally, in classification tasks involving nine types of tomato leaf diseases from the PlantVillage dataset, FHWD-enhanced data improve classification accuracy by an average of 7.25% for the VGG16, GoogleNet, and ResNet18 models. Our results show that FHWD is an effective image augmentation tool that addresses the scarcity of crop disease images and provides more diverse and enriched training data for disease recognition models.
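The high-frequency components compared by FHWD's second discriminator can be isolated in several ways; one common choice is an FFT-based high-pass filter. The sketch below illustrates that general idea with an L1 penalty between high-frequency components. It is not FHWD's actual loss (the paper pairs it with a wavelet loss), and the cutoff radius and random images are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def high_freq(img, radius=8):
    """Keep only frequencies outside a centered low-frequency disc."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# Random stand-ins for a real image and a generator output.
real_img = rng.normal(size=(64, 64))
fake_img = rng.normal(size=(64, 64))

# An FFT-style high-frequency penalty: L1 distance between the
# high-frequency components of real and generated images.
hf_loss = np.abs(high_freq(real_img) - high_freq(fake_img)).mean()
print(round(hf_loss, 4))
```

Masking out the low-frequency disc removes the DC term and smooth structure, so the penalty is driven by edges and fine texture, which is exactly the detail the second discriminator is meant to police.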
Affiliation(s)
- Chao Wang
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, China
- Key Laboratory of Agricultural Electronic Commerce of the Ministry of Agriculture, Hefei, China
- Yuting Xia
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, China
- Key Laboratory of Agricultural Electronic Commerce of the Ministry of Agriculture, Hefei, China
- Lunlong Xia
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, China
- Key Laboratory of Agricultural Electronic Commerce of the Ministry of Agriculture, Hefei, China
- Qingyong Wang
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, China
- Key Laboratory of Agricultural Electronic Commerce of the Ministry of Agriculture, Hefei, China
- Lichuan Gu
- School of Information and Artificial Intelligence, Anhui Agricultural University, Hefei, 230036, China.
- Anhui Provincial Engineering Research Center for Agricultural Information Perception and Intelligent Computing, Hefei, China.
- Key Laboratory of Agricultural Electronic Commerce of the Ministry of Agriculture, Hefei, China.
9
Faisal HM, Aqib M, Rehman SU, Mahmood K, Obregon SA, Iglesias RC, Ashraf I. Detection of cotton crops diseases using customized deep learning model. Sci Rep 2025; 15:10766. [PMID: 40155421] [PMCID: PMC11953249] [DOI: 10.1038/s41598-025-94636-4] [Received: 05/08/2024] [Accepted: 03/17/2025] [Indexed: 04/01/2025]
Abstract
The agricultural industry is experiencing revolutionary changes through the latest advances in artificial intelligence and deep learning-based technologies. These powerful tools are being used for a variety of tasks including crop yield estimation, crop maturity assessment, and disease detection. The cotton crop is an essential source of revenue for many countries, highlighting the need to protect it from deadly diseases that can drastically reduce yields. Early and accurate disease detection is crucial for preventing economic losses in the agricultural sector. Thanks to deep learning algorithms, researchers have developed innovative disease detection approaches that can help safeguard the cotton crop and promote economic growth. This study evaluates several state-of-the-art deep learning models for disease recognition, including VGG16, DenseNet, EfficientNet, InceptionV3, MobileNet, NasNet, and ResNet models. For this purpose, real cotton disease data were collected from fields and preprocessed using well-known techniques before being used as input to the deep learning models. Experimental analysis reveals that the ResNet152 model outperforms all other deep learning models, making it a practical and efficient approach for cotton disease recognition. By harnessing the power of deep learning and artificial intelligence, we can help protect the cotton crop and ensure a prosperous future for the agricultural sector.
Affiliation(s)
- Hafiz Muhammad Faisal
- University Institute of Information Technology (UIIT), PMAS-Arid Agriculture University Rawalpindi, Rawalpindi, 46300, Pakistan
- Muhammad Aqib
- University Institute of Information Technology (UIIT), PMAS-Arid Agriculture University Rawalpindi, Rawalpindi, 46300, Pakistan
- Saif Ur Rehman
- University Institute of Information Technology (UIIT), PMAS-Arid Agriculture University Rawalpindi, Rawalpindi, 46300, Pakistan.
- Khalid Mahmood
- Institute of Computational Intelligence, Faculty of Computing, Gomal University, D.I. Khan, 29220, Pakistan
- Silvia Aparicio Obregon
- Universidad Europea del Atlántico, Isabel Torres 21, Santander, 39011, Spain
- Universidad Internacional Iberoamericana, Campeche, 24560, Mexico
- Universidad Internacional Iberoamericana Arecibo, Puerto Rico, 00613, USA
- Rubén Calderón Iglesias
- Universidad Europea del Atlántico, Isabel Torres 21, Santander, 39011, Spain
- Universidade Internacional do Cuanza, Cuito, Bie, Angola
- Universidad de La Romana, La Romana, Dominican Republic
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea.
10
Ergün E. High precision banana variety identification using vision transformer based feature extraction and support vector machine. Sci Rep 2025; 15:10366. [PMID: 40133576] [PMCID: PMC11937298] [DOI: 10.1038/s41598-025-95466-0] [Received: 01/23/2025] [Accepted: 03/21/2025] [Indexed: 03/27/2025]
Abstract
Bananas, renowned for their delightful flavor, exceptional nutritional value, and digestibility, are among the most widely consumed fruits globally. The advent of advanced image processing, computer vision, and deep learning (DL) techniques has revolutionized agricultural diagnostics, offering innovative and automated solutions for detecting and classifying fruit varieties. Despite significant progress in DL, the accurate classification of banana varieties remains challenging, particularly due to the difficulty in identifying subtle features at early developmental stages. To address these challenges, this study presents a novel hybrid framework that integrates the Vision Transformer (ViT) model for global semantic feature representation with the robust classification capabilities of Support Vector Machines. The proposed framework was rigorously evaluated on two datasets: the four-class BananaImageBD and the six-class BananaSet. To mitigate data imbalance issues, a robust evaluation strategy was employed, resulting in a remarkable classification accuracy rate (CAR) of 99.86% ± 0.099 for BananaSet and 99.70% ± 0.17 for BananaImageBD, surpassing traditional methods by a margin of 1.77%. The ViT model, leveraging self-supervised and semi-supervised learning mechanisms, demonstrated exceptional promise in extracting nuanced features critical for agricultural applications. By combining ViT features with cutting-edge machine learning classifiers, the proposed system establishes a new benchmark in precision and reliability for the automated detection and classification of banana varieties. These findings underscore the potential of hybrid DL frameworks in advancing agricultural diagnostics and pave the way for future innovations in the domain.
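The second stage of the ViT+SVM pipeline can be sketched with a linear SVM trained by subgradient descent on the regularized hinge loss. The "ViT features" here are hypothetical Gaussian blobs standing in for real embeddings, and the dimensions and hyperparameters are arbitrary; a production system would use a tuned SVM library instead.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-ins for ViT feature vectors of two banana varieties:
# two separable Gaussian blobs in a 16-dim "embedding" space.
X = np.vstack([rng.normal(-1.0, 0.5, size=(50, 16)),
               rng.normal(+1.0, 0.5, size=(50, 16))])
y = np.array([-1] * 50 + [+1] * 50)

# Linear SVM: minimize (lam/2)*||w||^2 + mean(hinge) by subgradient descent.
w, b = np.zeros(16), 0.0
lam, lr = 1e-3, 0.1
for _ in range(300):
    margins = y * (X @ w + b)
    viol = margins < 1  # samples inside the margin or misclassified
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = ((X @ w + b > 0) == (y > 0)).mean()
print(acc)  # well-separated blobs, so training accuracy should be ~1.0
```

With multi-class banana varieties, the same idea extends via one-vs-rest: one such separating hyperplane per variety over the shared ViT feature space.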
Affiliation(s)
- Ebru Ergün
- Department of Electrical and Electronics Engineering, Faculty of Engineering and Architecture, Recep Tayyip Erdogan University, Rize, Turkey.
11
Bayram B, Kunduracioglu I, Ince S, Pacal I. A systematic review of deep learning in MRI-based cerebral vascular occlusion-based brain diseases. Neuroscience 2025; 568:76-94. [PMID: 39805420] [DOI: 10.1016/j.neuroscience.2025.01.020] [Received: 11/22/2024] [Revised: 01/09/2025] [Accepted: 01/10/2025] [Indexed: 01/16/2025]
Abstract
Neurological disorders, including cerebral vascular occlusions and strokes, present a major global health challenge due to their high mortality rates and long-term disabilities. Early diagnosis, particularly within the first hours, is crucial for preventing irreversible damage and improving patient outcomes. Although neuroimaging techniques like magnetic resonance imaging (MRI) have advanced significantly, traditional methods often fail to fully capture the complexity of brain lesions. Deep learning has recently emerged as a powerful tool in medical imaging, offering high accuracy in detecting and segmenting brain anomalies. This review examines 61 MRI-based studies published between 2020 and 2024, focusing on the role of deep learning in diagnosing cerebral vascular occlusion-related conditions. It evaluates the successes and limitations of these studies, including the adequacy and diversity of datasets, and addresses challenges such as data privacy and algorithm explainability. Comparisons between convolutional neural network (CNN)-based and Vision Transformer (ViT)-based approaches reveal distinct advantages and limitations. The findings emphasize the importance of ethically secure frameworks, the inclusion of diverse datasets, and improved model interpretability. Advanced architectures like U-Net variants and transformer-based models are highlighted as promising tools to enhance reliability in clinical applications. By automating complex neuroimaging tasks and improving diagnostic accuracy, deep learning facilitates personalized treatment strategies. This review provides a roadmap for integrating technical advancements into clinical practice, underscoring the transformative potential of deep learning in managing neurological disorders and improving healthcare outcomes globally.
Affiliation(s)
- Bilal Bayram
- Department of Neurology, University of Health Sciences, Van Education and Research Hospital, 65000, Van, Turkey.
- Ismail Kunduracioglu
- Department of Computer Engineering, Faculty of Engineering, Igdir University, 76000, Igdir, Turkey.
- Suat Ince
- Department of Radiology, University of Health Sciences, Van Education and Research Hospital, 65000, Van, Turkey.
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, 76000, Igdir, Turkey.
12
Gai Y, Liu S, Zhang Z, Wei J, Wang H, Liu L, Bai Q, Qin Q, Zhao C, Zhang S, Xiang N, Zhang X. Integrative approaches to soybean resilience, productivity, and utility: A review of genomics, computational modeling, and economic viability. Plants (Basel) 2025; 14:671. [PMID: 40094561] [PMCID: PMC11901646] [DOI: 10.3390/plants14050671] [Received: 12/21/2024] [Revised: 02/05/2025] [Accepted: 02/07/2025] [Indexed: 03/19/2025]
Abstract
Soybean is a vital crop globally and a key source of food, feed, and biofuel. With advancements in high-throughput technologies, soybeans have become a key target for genetic improvement. This comprehensive review explores advances in multi-omics, artificial intelligence, and economic sustainability to enhance soybean resilience and productivity. Advances in genomics, including marker-assisted selection (MAS), genomic selection (GS), genome-wide association studies (GWAS), QTL mapping, genotyping-by-sequencing (GBS), CRISPR-Cas9, metagenomics, and metabolomics, have accelerated the development of stress-resilient soybean varieties. Artificial intelligence (AI) and machine learning approaches are improving the discovery of genetic traits associated with nutritional quality, stress tolerance, and adaptation of soybeans. Additionally, AI-driven technologies like IoT-based disease detection and deep learning are revolutionizing soybean monitoring, early disease identification, yield prediction, disease prevention, and precision farming. The economic viability and environmental sustainability of soybean-derived biofuels are also critically evaluated, focusing on trade-offs and policy implications. Finally, the potential impact of climate change on soybean growth and productivity is explored through predictive modeling and adaptive strategies. Thus, this study highlights the transformative potential of multidisciplinary approaches in advancing soybean resilience and global utility.
Affiliation(s)
- Yuhong Gai
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
- Shuhao Liu
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Zhidan Zhang
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Jian Wei
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Hongtao Wang
- Key Laboratory of Germplasm Resources Evaluation and Application of Changbai Mountain, Tonghua Normal University, Tonghua 134099, China
| | - Lu Liu
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Qianyue Bai
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Qiushi Qin
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
- Jilin Changfa Modern Agricultural Technology Group Co., Ltd., Changchun 130118, China
| | - Chungang Zhao
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Shuheng Zhang
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Nan Xiang
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| | - Xiao Zhang
- College of Resources and Environment, Key Laboratory of Northern Salt-Alkali Tolerant Soybean Breeding, Ministry of Agriculture and Rural Affairs, Jilin Agricultural University, Changchun 130118, China; (Y.G.); (S.L.); (L.L.); (Q.B.); (Q.Q.); (C.Z.); (S.Z.); (N.X.); (X.Z.)
| |
Collapse
|
13
|
Ozdemir B, Pacal I. A robust deep learning framework for multiclass skin cancer classification. Sci Rep 2025; 15:4938. [PMID: 39930026 PMCID: PMC11811178 DOI: 10.1038/s41598-025-89230-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2024] [Accepted: 02/04/2025] [Indexed: 02/13/2025] Open
Abstract
Skin cancer represents a significant global health concern, where early and precise diagnosis plays a pivotal role in improving treatment efficacy and patient survival rates. Nonetheless, the inherent visual similarities between benign and malignant lesions pose substantial challenges to accurate classification. To overcome these obstacles, this study proposes an innovative hybrid deep learning model that combines ConvNeXtV2 blocks and separable self-attention mechanisms, tailored to enhance feature extraction and optimize classification performance. The inclusion of ConvNeXtV2 blocks in the initial two stages is driven by their ability to effectively capture fine-grained local features and subtle patterns, which are critical for distinguishing between visually similar lesion types. Meanwhile, the adoption of separable self-attention in the later stages allows the model to selectively prioritize diagnostically relevant regions while minimizing computational complexity, addressing the inefficiencies often associated with traditional self-attention mechanisms. The model was comprehensively trained and validated on the ISIC 2019 dataset, which includes eight distinct skin lesion categories. Advanced methodologies such as data augmentation and transfer learning were employed to further enhance model robustness and reliability. The proposed architecture achieved exceptional performance metrics, with 93.48% accuracy, 93.24% precision, 90.70% recall, and a 91.82% F1-score, outperforming more than ten Convolutional Neural Network (CNN)-based and more than ten Vision Transformer (ViT)-based models tested under comparable conditions. Despite its robust performance, the model maintains a compact design with only 21.92 million parameters, making it highly efficient and suitable for deployment. The proposed model demonstrates exceptional accuracy and generalizability across diverse skin lesion classes, establishing a reliable framework for early and accurate skin cancer diagnosis in clinical practice.
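The abstract above does not include code; as an illustrative sketch, the core idea of separable self-attention (replacing the quadratic-cost token-by-token attention matrix with a single learned context vector, in the spirit of MobileViTv2-style designs) can be written in a few lines of NumPy. The shapes and weight names here are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def softmax(z, axis):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def separable_self_attention(x, w_i, w_k, w_v):
    """Linear-complexity attention over x of shape (n_tokens, d).

    A single projection w_i yields one score per token, so the cost grows
    as O(n) in the number of tokens rather than O(n^2) for a full
    attention matrix.
    """
    scores = softmax(x @ w_i, axis=0)            # (n, 1) token weights
    context = (scores * (x @ w_k)).sum(axis=0)   # (d,) global context vector
    return (x @ w_v) * context                   # context gates the values

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))                    # 16 tokens, width 32
out = separable_self_attention(
    x,
    rng.normal(size=(32, 1)),
    rng.normal(size=(32, 32)),
    rng.normal(size=(32, 32)),
)
```

The output keeps the token-grid shape, so the block can drop into the later stages of a hybrid CNN-transformer backbone in place of standard multi-head self-attention.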
Affiliation(s)
- Burhanettin Ozdemir
- Department of Operations and Project Management, College of Business, Alfaisal University, Riyadh, 11533, Saudi Arabia
- Ishak Pacal
- Department of Computer Engineering, Faculty of Engineering, Igdir University, Igdir, 76000, Turkey
- Department of Electronics and Information Technologies, Faculty of Architecture and Engineering, Nakhchivan State University, AZ 7012, Nakhchivan, Azerbaijan
|
14
|
Pacal I, Işık G. Utilizing convolutional neural networks and vision transformers for precise corn leaf disease identification. Neural Comput Appl 2025; 37:2479-2496. [DOI: 10.1007/s00521-024-10769-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2024] [Accepted: 11/05/2024] [Indexed: 05/14/2025]
|
15
|
Kaur H, Sharma R, Kaur J. Comparison of deep transfer learning models for classification of cervical cancer from pap smear images. Sci Rep 2025; 15:3945. [PMID: 39890842 PMCID: PMC11785805 DOI: 10.1038/s41598-024-74531-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2024] [Accepted: 09/26/2024] [Indexed: 02/03/2025] Open
Abstract
Cervical cancer is one of the most commonly diagnosed cancers worldwide, and it is particularly prevalent among women living in developing countries. Traditional classification algorithms often require segmentation and feature extraction techniques to detect cervical cancer, while convolutional neural network (CNN) models require large datasets to reduce overfitting and poor generalization. Given the limited datasets available, transfer learning was applied directly to pap smear images to perform the classification task. A comprehensive comparison of 16 pre-trained models (VGG16, VGG19, ResNet50, ResNet50V2, ResNet101, ResNet101V2, ResNet152, ResNet152V2, DenseNet121, DenseNet169, DenseNet201, MobileNet, XceptionNet, InceptionV3, and InceptionResNetV2) was carried out for cervical cancer classification on the Herlev and Sipakmed datasets. A comparison of the results revealed that ResNet50 achieved 95% accuracy for both 2-class and 7-class classification on the Herlev dataset. On the Sipakmed dataset, VGG16 obtained an accuracy of 99.95% for 2-class and 5-class classification, while DenseNet121 achieved an accuracy of 97.65% for 3-class classification. Our findings indicate that deep transfer learning (DTL) models are suitable for automating cervical cancer screening, providing more accurate and efficient results than manual screening.
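The transfer-learning recipe described above — freeze a pre-trained feature extractor and train only a classification head on a small dataset — can be sketched without any deep-learning framework. Here a fixed random ReLU projection stands in for the frozen backbone (a hypothetical stand-in; a real pipeline would use ResNet50 or VGG16 features) and scikit-learn's digits set stands in for pap smear images:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)          # small stand-in image dataset

# "Frozen backbone": a fixed random projection + ReLU, never updated.
W = rng.normal(size=(64, 256)) / 8.0
feats = np.maximum(X @ W, 0.0)

# Only the classification head is trained on the extracted features.
X_tr, X_te, y_tr, y_te = train_test_split(feats, y, random_state=0)
head = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
acc = head.score(X_te, y_te)
```

Even with an untrained backbone the head alone reaches high accuracy on this toy task, which is why a well-pre-trained backbone plus a small head works with limited medical data.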
Affiliation(s)
- Harmanpreet Kaur
- Department of Computer Science & Engineering, Punjabi University, Patiala, India
- Reecha Sharma
- Department of Electronics and Communication Engineering, Punjabi University, Patiala, India
- Jagroop Kaur
- Department of Computer Science & Engineering, Punjabi University, Patiala, India
|
16
|
Pacal I, Alaftekin M, Zengul FD. Enhancing Skin Cancer Diagnosis Using Swin Transformer with Hybrid Shifted Window-Based Multi-head Self-attention and SwiGLU-Based MLP. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:3174-3192. [PMID: 38839675 PMCID: PMC11612041 DOI: 10.1007/s10278-024-01140-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 05/07/2024] [Accepted: 05/08/2024] [Indexed: 06/07/2024]
Abstract
Skin cancer is one of the most frequently occurring cancers worldwide, and early detection is crucial for effective treatment. Dermatologists often face challenges such as heavy data demands, potential human errors, and strict time limits, which can negatively affect diagnostic outcomes. Deep learning-based diagnostic systems offer quick, accurate testing and enhanced research capabilities, providing significant support to dermatologists. In this study, we enhanced the Swin Transformer architecture by implementing the hybrid shifted window-based multi-head self-attention (HSW-MSA) in place of the conventional shifted window-based multi-head self-attention (SW-MSA). This adjustment enables the model to more efficiently process areas of skin cancer overlap, capture finer details, and manage long-range dependencies, while maintaining memory usage and computational efficiency during training. Additionally, the study replaces the standard multi-layer perceptron (MLP) in the Swin Transformer with a SwiGLU-based MLP, an upgraded version of the gated linear unit (GLU) module, to achieve higher accuracy, faster training speeds, and better parameter efficiency. The modified Swin-Base model was evaluated using the publicly accessible ISIC 2019 skin dataset with eight classes and was compared against popular convolutional neural networks (CNNs) and cutting-edge vision transformer (ViT) models. In an exhaustive assessment on the unseen test dataset, the proposed Swin-Base model demonstrated exceptional performance, achieving an accuracy of 89.36%, a recall of 85.13%, a precision of 88.22%, and an F1-score of 86.65%, surpassing all previously reported research and deep learning models documented in the literature.
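The SwiGLU replacement for the standard MLP mentioned above is a small, self-contained computation: a SiLU-activated "gate" branch multiplies a linear "up" branch before the down-projection. A minimal NumPy sketch (weight names and dimensions are illustrative assumptions, not the paper's code):

```python
import numpy as np

def silu(z):
    # SiLU / Swish activation: z * sigmoid(z)
    return z / (1.0 + np.exp(-z))

def swiglu_mlp(x, w_gate, w_up, w_down):
    """SwiGLU feed-forward block for x of shape (n_tokens, d).

    The gate branch silu(x @ w_gate) elementwise-multiplies the linear
    branch (x @ w_up), and the product is projected back to width d.
    """
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(1)
d, h = 32, 64                                 # model width, hidden width
x = rng.normal(size=(10, d))
y = swiglu_mlp(x,
               rng.normal(size=(d, h)),       # gate projection
               rng.normal(size=(d, h)),       # up projection
               rng.normal(size=(h, d)))       # down projection
```

Note the block uses three weight matrices instead of the standard MLP's two, which is why SwiGLU variants usually shrink the hidden width to keep the parameter count comparable.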
Affiliation(s)
- Ishak Pacal
- Department of Computer Engineering, Igdir University, 76000, Igdir, Turkey
- Melek Alaftekin
- Department of Computer Engineering, Igdir University, 76000, Igdir, Turkey
- Ferhat Devrim Zengul
- Department of Health Services Administration, The University of Alabama at Birmingham, Birmingham, AL, USA
- Center for Integrated System, School of Engineering, The University of Alabama at Birmingham, Birmingham, AL, USA
- Department of Biomedical Informatics and Data Science, School of Medicine, The University of Alabama, Birmingham, USA
|
17
|
Maman A, Pacal I, Bati F. Can deep learning effectively diagnose cardiac amyloidosis with 99mTc-PYP scintigraphy? J Radioanal Nucl Chem 2024. [DOI: 10.1007/s10967-024-09879-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2024] [Accepted: 11/07/2024] [Indexed: 05/14/2025]
|
18
|
Pacal I, Celik O, Bayram B, Cunha A. Enhancing EfficientNetv2 with global and efficient channel attention mechanisms for accurate MRI-Based brain tumor classification. CLUSTER COMPUTING 2024; 27:11187-11212. [DOI: 10.1007/s10586-024-04532-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2024] [Revised: 04/08/2024] [Accepted: 04/22/2024] [Indexed: 05/14/2025]
Abstract
The early and accurate diagnosis of brain tumors is critical for effective treatment planning, with Magnetic Resonance Imaging (MRI) serving as a key tool in the non-invasive examination of such conditions. Despite the advancements in Computer-Aided Diagnosis (CADx) systems powered by deep learning, the challenge of accurately classifying brain tumors from MRI scans persists due to the high variability of tumor appearances and the subtlety of early-stage manifestations. This work introduces a novel adaptation of the EfficientNetv2 architecture, enhanced with Global Attention Mechanism (GAM) and Efficient Channel Attention (ECA), aimed at overcoming these hurdles. This enhancement not only amplifies the model’s ability to focus on salient features within complex MRI images but also significantly improves the classification accuracy of brain tumors. Our approach distinguishes itself by meticulously integrating attention mechanisms that systematically enhance feature extraction, thereby achieving superior performance in detecting a broad spectrum of brain tumors. Demonstrated through extensive experiments on a large public dataset, our model achieves an exceptionally high test accuracy of 99.76%, setting a new benchmark in MRI-based brain tumor classification. Moreover, the incorporation of Grad-CAM visualization techniques sheds light on the model’s decision-making process, offering transparent and interpretable insights that are invaluable for clinical assessment. By addressing the limitations inherent in previous models, this study not only advances the field of medical imaging analysis but also highlights the pivotal role of attention mechanisms in enhancing the interpretability and accuracy of deep learning models for brain tumor diagnosis. This research sets the stage for advanced CADx systems, enhancing patient care and treatment outcomes.
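The Efficient Channel Attention (ECA) module named above is compact enough to sketch directly: squeeze each channel with global average pooling, run a small 1-D convolution across the channel axis for local cross-channel interaction, and gate the input with a sigmoid. This NumPy sketch assumes a single feature map of shape (channels, height, width); kernel weights are illustrative:

```python
import numpy as np

def eca_attention(x, kernel):
    """Efficient Channel Attention sketch. x: (C, H, W); kernel: (k,).

    Each channel is squeezed to one descriptor, a 1-D convolution mixes
    neighbouring channels, and the sigmoid result reweights the input.
    """
    c = x.mean(axis=(1, 2))                       # (C,) squeezed descriptors
    k = len(kernel)
    padded = np.pad(c, k // 2, mode="edge")
    conv = np.array([padded[i:i + k] @ kernel for i in range(len(c))])
    gate = 1.0 / (1.0 + np.exp(-conv))            # sigmoid gate in (0, 1)
    return x * gate[:, None, None]                # channel-wise reweighting

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 5, 5))                    # 8 channels, 5x5 feature map
out = eca_attention(x, np.array([0.25, 0.5, 0.25]))
```

Because the only learned parameters are the k kernel weights, ECA adds almost no parameters compared with squeeze-and-excitation blocks, which is what makes it attractive to bolt onto an EfficientNetv2 backbone.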
|
19
|
Agar M, Aydin S, Cakmak M, Koc M, Togacar M. Detection of Thymoma Disease Using mRMR Feature Selection and Transformer Models. Diagnostics (Basel) 2024; 14:2169. [PMID: 39410573 PMCID: PMC11476294 DOI: 10.3390/diagnostics14192169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2024] [Revised: 09/03/2024] [Accepted: 09/27/2024] [Indexed: 10/20/2024] Open
Abstract
BACKGROUND Thymoma is a tumor that originates in the thymus gland, a part of the human body located behind the breastbone. It is a malignant disease that is rare in children but more common in adults and usually does not spread outside the thymus. The exact cause of thymoma is not known, but it is thought to be more common in people infected with the Epstein-Barr virus (EBV) at an early age. Various surgical methods are used in clinical settings to treat thymoma, and expert opinion is very important in the diagnosis of the disease. Recently, next-generation technologies have become increasingly important in disease detection, and today's early detection systems increasingly rely on transformer models that remain open to technological advances. METHODS What makes this study different is the use of transformer models instead of traditional deep learning models. The data used in this study were obtained from patients undergoing treatment at Fırat University, Department of Thoracic Surgery. The dataset consisted of two classes: thymoma disease images and non-thymoma disease images. The proposed approach consists of preprocessing, model training, feature extraction, feature set fusion between models, efficient feature selection, and classification. In the preprocessing step, unnecessary regions of the images were cropped, and the region of interest (ROI) technique was applied. Four types of transformer models (Deit3, Maxvit, Swin, and ViT) were used for model training. After training, the feature sets obtained from the best three models were merged between the models (Deit3 and Swin, Deit3 and ViT, Swin and ViT, and Deit3 and Swin and ViT). The combined feature set of the model pair (Deit3 and ViT) that gave the best performance with fewer features was analyzed using the mRMR feature selection method. The SVM method was used in the classification process.
RESULTS With the mRMR feature selection method, 100% overall accuracy was achieved with feature sets containing fewer features. The cross-validation technique was used to verify the overall accuracy of the proposed approach, which achieved 99.22% overall accuracy under this technique. CONCLUSIONS These findings emphasize the added value of the proposed approach in the detection of thymoma.
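The paper's exact mRMR implementation is not given here; a common greedy approximation scores each candidate feature by relevance (mutual information with the label) minus redundancy (mean absolute correlation with features already chosen). A hedged scikit-learn/NumPy sketch, with synthetic data standing in for the fused transformer features:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def greedy_mrmr(X, y, n_select):
    """Greedy mRMR approximation: at each step pick the feature maximizing
    relevance(feature, label) - mean |corr(feature, already-chosen)|."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    chosen = [int(np.argmax(relevance))]          # most relevant feature first
    while len(chosen) < n_select:
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        score = [relevance[j] - corr[j, chosen].mean() for j in rest]
        chosen.append(rest[int(np.argmax(score))])
    return chosen

# Synthetic check: feature 0 fully determines the label, the rest are noise.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 6))
y = (X[:, 0] > 0).astype(int)
sel = greedy_mrmr(X, y, 3)
```

In the pipeline described above, the selected columns `X[:, sel]` would then be passed to an SVM classifier (e.g. scikit-learn's `SVC`) for the final decision.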
Affiliation(s)
- Mehmet Agar
- Department of Thoracic Surgery, Faculty of Medicine, Firat University, 23119 Elazig, Turkey
- Siyami Aydin
- Department of Thoracic Surgery, Faculty of Medicine, Firat University, 23119 Elazig, Turkey
- Muharrem Cakmak
- Department of Thoracic Surgery, Faculty of Medicine, Firat University, 23119 Elazig, Turkey
- Mustafa Koc
- Department of Radiology, Faculty of Medicine, Firat University, 23119 Elazig, Turkey
- Mesut Togacar
- Department of Management Information Systems, Faculty of Economics and Administrative Sciences, Firat University, 23119 Elazig, Turkey
|
20
|
Alotaibi M, Alshardan A, Maashi M, Asiri MM, Alotaibi SR, Yafoz A, Alsini R, Khadidos AO. Exploiting histopathological imaging for early detection of lung and colon cancer via ensemble deep learning model. Sci Rep 2024; 14:20434. [PMID: 39227664 PMCID: PMC11372073 DOI: 10.1038/s41598-024-71302-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2024] [Accepted: 08/27/2024] [Indexed: 09/05/2024] Open
Abstract
Cancer causes a vast number of deaths due to its heterogeneity, aggressiveness, and significant propensity for metastasis. Colon and lung cancer are among the predominant cancer categories affecting both males and females worldwide. A precise and timely analysis of these cancers can increase the survival rate and inform appropriate treatment. This work provides an efficient and effective method for the speedy and accurate recognition of tumours in the colon and lung areas as an alternative to existing cancer recognition methods, since earlier diagnosis drastically reduces the chance of death. Machine learning (ML) and deep learning (DL) approaches can accelerate this cancer diagnosis, enabling researchers to study a large number of patients in a limited period and at a low cost. This research presents the Histopathological Imaging for the Early Detection of Lung and Colon Cancer via Ensemble DL (HIELCC-EDL) model. The HIELCC-EDL technique utilizes histopathological images to identify lung and colon cancer (LCC). To achieve this, the HIELCC-EDL technique uses the Wiener filtering (WF) method for noise elimination. In addition, the HIELCC-EDL model uses the channel attention Residual Network (CA-ResNet50) model for learning complex feature patterns. Moreover, the hyperparameter selection of the CA-ResNet50 model is performed using the tuna swarm optimization (TSO) technique. Finally, the detection of LCC is achieved using an ensemble of three classifiers: extreme learning machine (ELM), competitive neural networks (CNNs), and long short-term memory (LSTM). To illustrate the promising performance of the HIELCC-EDL model, a complete set of experiments was performed on a benchmark dataset. The experimental validation of the HIELCC-EDL model portrayed a superior accuracy value of 99.60% over recent approaches.
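The final ensemble step above combines three heterogeneous classifiers. As a hedged stand-in (the paper's ELM, competitive neural network, and LSTM are not available in scikit-learn), the same majority-vote pattern can be shown with `VotingClassifier` over three different base learners on a small stand-in dataset:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Three heterogeneous base learners; hard voting returns the class label
# predicted by the majority of them, mimicking the ensemble step above.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=2000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(random_state=0)),
    ],
    voting="hard",
)

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
acc = ensemble.fit(X_tr, y_tr).score(X_te, y_te)
```

Hard voting only needs class labels from each member, so it accommodates models with very different output types; soft voting (averaging predicted probabilities) would require every member to expose calibrated probabilities.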
Affiliation(s)
- Moneerah Alotaibi
- Department of Computer Science, College of Science and Humanities Dawadmi, Shaqra University, Shaqra, Saudi Arabia
- Amal Alshardan
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Mashael Maashi
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, P.O. Box 103786, 11543, Riyadh, Saudi Arabia
- Mashael M Asiri
- Department of Computer Science, Applied College at Mahayil, King Khalid University, Abha, Saudi Arabia
- Sultan Refa Alotaibi
- Department of Computer Science, College of Science and Humanities Dawadmi, Shaqra University, Shaqra, Saudi Arabia
- Ayman Yafoz
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Raed Alsini
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O Khadidos
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
|