1. Nahiduzzaman M, Abdulrazak LF, Kibria HB, Khandakar A, Ayari MA, Ahamed MF, Ahsan M, Haider J, Moni MA, Kowalski M. A hybrid explainable model based on advanced machine learning and deep learning models for classifying brain tumors using MRI images. Sci Rep 2025; 15:1649. [PMID: 39794374] [PMCID: PMC11724088] [DOI: 10.1038/s41598-025-85874-7]
Abstract
Brain tumors present a significant global health challenge, and their early detection and accurate classification are crucial for effective treatment strategies. This study presents a novel approach combining a lightweight parallel depthwise separable convolutional neural network (PDSCNN) and a hybrid ridge regression extreme learning machine (RRELM) for accurately classifying four types of brain tumors (glioma, meningioma, no tumor, and pituitary) based on MRI images. The proposed approach enhances the visibility and clarity of tumor features in MRI images by employing contrast-limited adaptive histogram equalization (CLAHE). A lightweight PDSCNN is then employed to extract relevant tumor-specific patterns while minimizing computational complexity. A hybrid RRELM model is proposed, enhancing the traditional ELM for improved classification performance. The proposed framework is compared with various state-of-the-art models in terms of classification accuracy, model parameters, and layer sizes. The proposed framework achieved remarkable average precision, recall, and accuracy values of 99.35%, 99.30%, and 99.22%, respectively, through five-fold cross-validation. The PDSCNN-RRELM outperformed the extreme learning machine model with pseudoinverse (PELM). The introduction of ridge regression into the ELM framework led to significant improvements in classification performance, model parameters, and layer sizes compared with those of the state-of-the-art models. Additionally, the interpretability of the framework was demonstrated using Shapley Additive Explanations (SHAP), providing insights into the decision-making process and increasing confidence in real-world diagnosis.
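As context for the RRELM idea described in this abstract, the following is a minimal NumPy sketch of a ridge-regression extreme learning machine acting on pre-extracted CNN feature vectors; the hidden-layer size, regularization strength, and four-class setup are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a ridge-regression ELM classifier over pre-extracted
# CNN feature vectors. Layer size and regularization strength are assumptions.
import numpy as np

class RidgeELM:
    def __init__(self, n_hidden=1000, alpha=1.0, seed=0):
        self.n_hidden = n_hidden
        self.alpha = alpha          # ridge penalty on the output weights
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random input weights and biases stay fixed after fitting.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                     # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Ridge-regularized least squares instead of the plain pseudoinverse.
        A = H.T @ H + self.alpha * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage with random "features" standing in for PDSCNN outputs.
X = np.random.rand(200, 128)
y = np.random.randint(0, 4, size=200)   # glioma / meningioma / no tumor / pituitary
scores = RidgeELM().fit(X, y).predict(X)
print(scores.argmax(axis=1)[:10])
```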
Affiliation(s)
- Md Nahiduzzaman
- Department of Electrical and Computer Engineering, Rajshahi University of Engineering and Technology, Rajshahi, 6204, Bangladesh
- Lway Faisal Abdulrazak
- Department of Space Technology Engineering, Electrical Engineering Technical College, Middle Technical University, Baghdad, Iraq
- Department of Computer Science, Cihan University Sulaimaniya, Sulaimaniya, 46001, Kurdistan Region, Iraq
- Hafsa Binte Kibria
- Department of Electrical and Computer Engineering, Rajshahi University of Engineering and Technology, Rajshahi, 6204, Bangladesh
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha, 2713, Qatar
- Md Faysal Ahamed
- Department of Electrical and Computer Engineering, Rajshahi University of Engineering and Technology, Rajshahi, 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, York, YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
- Mohammad Ali Moni
- Artificial Intelligence and Digital Health, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioral Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, Warsaw, 00-908, Poland
2. Hadhoud Y, Mekhaznia T, Bennour A, Amroune M, Kurdi NA, Aborujilah AH, Al-Sarem M. From Binary to Multi-Class Classification: A Two-Step Hybrid CNN-ViT Model for Chest Disease Classification Based on X-Ray Images. Diagnostics (Basel) 2024; 14:2754. [PMID: 39682662] [DOI: 10.3390/diagnostics14232754]
Abstract
BACKGROUND/OBJECTIVES Chest disease identification for Tuberculosis and Pneumonia presents diagnostic challenges due to overlapping radiographic features and the limited availability of expert radiologists, especially in developing countries. The present study aims to address these challenges by developing a Computer-Aided Diagnosis (CAD) system to provide consistent and objective analyses of chest X-ray images, thereby reducing potential human error. By leveraging the complementary strengths of convolutional neural networks (CNNs) and vision transformers (ViTs), we propose a hybrid model for the accurate detection of Tuberculosis and for distinguishing between Tuberculosis and Pneumonia. METHODS We designed a two-step hybrid model that integrates the ResNet-50 CNN with the ViT-b16 architecture. It uses transfer learning on datasets from the Guangzhou Women's and Children's Medical Center for Pneumonia cases and from universities in Qatar and Dhaka (Bangladesh) for Tuberculosis cases. CNNs capture hierarchical structures in images, while ViTs, with their self-attention mechanisms, excel at identifying relationships between features. Combining these approaches enhances the model's performance on binary and multi-class classification tasks. RESULTS Our hybrid CNN-ViT model achieved a binary classification accuracy of 98.97% for Tuberculosis detection. For multi-class classification, distinguishing between Tuberculosis, viral Pneumonia, and bacterial Pneumonia, the model achieved an accuracy of 96.18%. These results underscore the model's potential to improve diagnostic accuracy and reliability for chest disease classification based on X-ray images. CONCLUSIONS The proposed hybrid CNN-ViT model demonstrates substantial potential in advancing the accuracy and robustness of CAD systems for chest disease diagnosis. By integrating CNN and ViT architectures, our approach enhances diagnostic precision, which may help to alleviate the burden on healthcare systems in resource-limited settings and improve patient outcomes in chest disease diagnosis.
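For readers unfamiliar with CNN-ViT fusion, the sketch below shows one plausible way to combine a ResNet-50 branch with a ViT-B/16 branch by concatenating their feature vectors; the fusion layout, the untrained weights, and the three-class head are assumptions for illustration, not the paper's exact two-step design.

```python
# Hedged sketch of a CNN + ViT feature-fusion classifier using torchvision.
import torch
import torch.nn as nn
from torchvision import models

class CNNViTFusion(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.cnn = models.resnet50(weights=None)
        self.cnn.fc = nn.Identity()          # expose 2048-d CNN features
        self.vit = models.vit_b_16(weights=None)
        self.vit.heads = nn.Identity()       # expose 768-d ViT features
        self.classifier = nn.Linear(2048 + 768, num_classes)

    def forward(self, x):
        # Both branches see the same 224x224 RGB image.
        f = torch.cat([self.cnn(x), self.vit(x)], dim=1)
        return self.classifier(f)

model = CNNViTFusion()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 3])
```

In practice, both backbones would typically be loaded with pretrained weights and fine-tuned on the chest X-ray data before the fused classifier is trained.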
Affiliation(s)
- Yousra Hadhoud
- LAMIS Laboratory, Larbi Tebessi University, Tebessa 12002, Algeria
- Tahar Mekhaznia
- LAMIS Laboratory, Larbi Tebessi University, Tebessa 12002, Algeria
- Akram Bennour
- LAMIS Laboratory, Larbi Tebessi University, Tebessa 12002, Algeria
- Mohamed Amroune
- LAMIS Laboratory, Larbi Tebessi University, Tebessa 12002, Algeria
- Neesrin Ali Kurdi
- College of Computer Science and Engineering, Taibah University, Medina 41477, Saudi Arabia
- Abdulaziz Hadi Aborujilah
- Department of Management Information Systems, College of Commerce & Business Administration, Dhofar University, Salalaha 211, Oman
- Mohammed Al-Sarem
- Department of Information Technology, Aylol University College, Yarim 547, Yemen
3. Bani Baker Q, Hammad M, Al-Smadi M, Al-Jarrah H, Al-Hamouri R, Al-Zboon SA. Enhanced COVID-19 Detection from X-ray Images with Convolutional Neural Network and Transfer Learning. J Imaging 2024; 10:250. [PMID: 39452413] [PMCID: PMC11508642] [DOI: 10.3390/jimaging10100250]
Abstract
The global spread of Coronavirus (COVID-19) has prompted imperative research into scalable and effective detection methods to curb its outbreak. The early diagnosis of COVID-19 patients has emerged as a pivotal strategy in mitigating the spread of the disease. Automated COVID-19 detection using Chest X-ray (CXR) imaging has significant potential for facilitating large-scale screening and epidemic control efforts. This paper introduces a novel approach that employs state-of-the-art Convolutional Neural Network (CNN) models for accurate COVID-19 detection. The employed datasets each comprised 15,000 X-ray images. We addressed both binary (Normal vs. Abnormal) and multi-class (Normal, COVID-19, Pneumonia) classification tasks. Comprehensive evaluations were performed by utilizing six distinct CNN-based models (Xception, Inception-V3, ResNet50, VGG19, DenseNet201, and InceptionResNet-V2) for both tasks. The Xception model demonstrated exceptional performance, achieving 98.13% accuracy, 98.14% precision, 97.65% recall, and a 97.89% F1-score in binary classification, while in multi-class classification it yielded 87.73% accuracy, 90.20% precision, 87.73% recall, and an 87.49% F1-score. Moreover, the other models, such as ResNet50, demonstrated competitive performance compared with many recent works.
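A typical transfer-learning setup of the kind evaluated here can be sketched with Keras as follows; the Xception input size, frozen backbone, and three-class head are assumptions chosen for illustration rather than the authors' training recipe.

```python
# Minimal Xception transfer-learning sketch: frozen ImageNet backbone plus a
# small classification head for Normal / COVID-19 / Pneumonia.
import tensorflow as tf

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False                       # freeze the pretrained backbone

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(3, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown
```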
Affiliation(s)
- Qanita Bani Baker
- Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Mahmoud Hammad
- Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Mohammed Al-Smadi
- Digital Learning and Online Education Office (DLOE), Qatar University, Doha 2713, Qatar
- Heba Al-Jarrah
- Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Rahaf Al-Hamouri
- Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Sa’ad A. Al-Zboon
- Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
4. Zhao L, Zhang Z. A improved pooling method for convolutional neural networks. Sci Rep 2024; 14:1589. [PMID: 38238357] [PMCID: PMC10796389] [DOI: 10.1038/s41598-024-51258-6]
Abstract
The pooling layer in convolutional neural networks plays a crucial role in reducing spatial dimensions and improving computational efficiency. However, standard pooling operations such as max pooling or average pooling are not suitable for all applications and data types. Therefore, developing custom pooling layers that can adaptively learn and extract relevant features from specific datasets is of great significance. In this paper, we propose a novel approach to design and implement customizable pooling layers to enhance feature extraction capabilities in CNNs. The proposed T-Max-Avg pooling layer incorporates a threshold parameter T and selects the K highest-valued pixels in each pooling window, allowing it to control whether the output features are based on the maximum values or on weighted averages. By learning the optimal pooling strategy during training, the custom pooling layer can effectively capture and represent discriminative information in the input data, thereby improving classification performance. Experimental results show that the proposed T-Max-Avg pooling layer achieves good performance on three different datasets. When compared to the LeNet-5 model with average pooling, max pooling, and Avg-TopK methods, the T-Max-Avg pooling method achieves the highest accuracy on the CIFAR-10, CIFAR-100, and MNIST datasets.
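The sketch below implements one possible reading of the T-Max-Avg idea, in which each pooling window outputs its maximum when that maximum exceeds the threshold T and otherwise the mean of its K largest values; this interpretation, together with the kernel size and the default T and K, is an assumption and not the authors' reference code.

```python
# Hedged PyTorch sketch of a T-Max-Avg style pooling layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TMaxAvgPool2d(nn.Module):
    def __init__(self, kernel_size=2, stride=2, k=2, threshold=0.5):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.k, self.threshold = k, threshold

    def forward(self, x):
        n, c, h, w = x.shape
        # Unfold into pooling windows: (N, C*ks*ks, L) -> (N, C, ks*ks, L)
        cols = F.unfold(x, self.kernel_size, stride=self.stride)
        cols = cols.view(n, c, self.kernel_size ** 2, -1)
        topk = cols.topk(self.k, dim=2).values
        max_val = topk[:, :, 0, :]            # window maximum
        avg_val = topk.mean(dim=2)            # mean of the K largest values
        out = torch.where(max_val > self.threshold, max_val, avg_val)
        out_h = (h - self.kernel_size) // self.stride + 1
        out_w = (w - self.kernel_size) // self.stride + 1
        return out.view(n, c, out_h, out_w)

pool = TMaxAvgPool2d()
print(pool(torch.rand(1, 3, 8, 8)).shape)     # torch.Size([1, 3, 4, 4])
```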
Affiliation(s)
- Lei Zhao
- School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, China
- Zhonglin Zhang
- School of Electronics and Information Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, China
5. Nahiduzzaman M, Goni MOF, Hassan R, Islam MR, Syfullah MK, Shahriar SM, Anower MS, Ahsan M, Haider J, Kowalski M. Parallel CNN-ELM: A multiclass classification of chest X-ray images to identify seventeen lung diseases including COVID-19. Expert Syst Appl 2023; 229:120528. [PMID: 37274610] [PMCID: PMC10223636] [DOI: 10.1016/j.eswa.2023.120528]
Abstract
Numerous epidemic lung diseases such as COVID-19, tuberculosis (TB), and pneumonia have spread over the world, killing millions of people. Medical specialists have experienced challenges in correctly identifying these diseases due to their subtle differences in Chest X-ray images (CXR). To assist the medical experts, this study proposed a computer-aided lung illness identification method based on CXR images. For the first time, 17 different forms of lung disorders were considered, and the study was divided into six trials containing two, two, three, four, fourteen, and seventeen forms of lung disorders, respectively. The proposed framework combined the robust feature extraction capabilities of a lightweight parallel convolutional neural network (CNN) with the classification abilities of the extreme learning machine algorithm, named CNN-ELM. An optimistic accuracy of 90.92% and an area under the curve (AUC) of 96.93% were achieved when 17 classes were classified side by side. It also accurately identified COVID-19 and TB with 99.37% and 99.98% accuracy, respectively, in 0.996 microseconds for a single image. Additionally, the results demonstrated that the framework could outperform the existing state-of-the-art (SOTA) models. A secondary conclusion drawn from this study was that the prospective framework retained its effectiveness over a range of real-world conditions, including balanced or unbalanced and large or small datasets, large multiclass or simple binary classification, and high- or low-resolution images. A prototype Android app was also developed to establish the potential of the framework in real-life implementation.
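To illustrate what a lightweight parallel CNN feature extractor might look like before the ELM stage, here is a hedged Keras sketch with two convolutional branches of different kernel sizes; the branch widths, input resolution, and branch count are assumptions, not the published architecture.

```python
# Two-branch ("parallel") CNN feature extractor whose flattened output could
# feed an ELM classifier. All layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def branch(x, filters, kernel):
    x = layers.Conv2D(filters, kernel, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(filters * 2, kernel, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    return layers.GlobalAveragePooling2D()(x)

inputs = tf.keras.Input(shape=(128, 128, 1))      # grayscale CXR
f = layers.Concatenate()([branch(inputs, 16, 3),  # fine-detail branch
                          branch(inputs, 16, 5)]) # wider-context branch
feature_extractor = tf.keras.Model(inputs, f)
feature_extractor.summary()
# features = feature_extractor.predict(images)    # -> inputs for the ELM stage
```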
Affiliation(s)
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Omaer Faruq Goni
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Rakibul Hassan
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Khalid Syfullah
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Saleh Mohammed Shahriar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Shamim Anower
- Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, 00-908 Warsaw, Poland
6. Nahiduzzaman M, Chowdhury MEH, Salam A, Nahid E, Ahmed F, Al-Emadi N, Ayari MA, Khandakar A, Haider J. Explainable deep learning model for automatic mulberry leaf disease classification. Front Plant Sci 2023; 14:1175515. [PMID: 37794930] [PMCID: PMC10546311] [DOI: 10.3389/fpls.2023.1175515]
Abstract
Mulberry leaves feed Bombyx mori silkworms to generate silk thread. Diseases that affect mulberry leaves have reduced crop and silk yields in sericulture, which produces 90% of the world's raw silk. Manual leaf disease identification is tedious and error-prone. Computer vision can categorize leaf diseases early and overcome the challenges of manual identification. No mulberry leaf deep learning (DL) models have been reported. Therefore, in this study, two types of leaf diseases, leaf rust and leaf spot, together with disease-free leaves, were collected from two regions of Bangladesh. Sericulture experts annotated the leaf images. The images were pre-processed, and 6,000 synthetic images were generated using typical image augmentation methods from the original 764 training images. An additional 218 and 109 images were employed for testing and validation, respectively. In addition, a unique lightweight parallel depth-wise separable CNN model, PDS-CNN, was developed by applying depth-wise separable convolutional layers to reduce parameters, layers, and size while boosting classification performance. Finally, the explainability of PDS-CNN is provided through SHapley Additive exPlanations (SHAP), evaluated by a sericulture specialist. The proposed PDS-CNN outperforms well-known deep transfer learning models, achieving an optimistic accuracy of 95.05 ± 2.86% for three-class classification and 96.06 ± 3.01% for binary classification with only 0.53 million parameters, 8 layers, and a size of 6.3 megabytes. Furthermore, when compared with other well-known transfer models, the proposed model identified mulberry leaf diseases with higher accuracy, fewer parameters, fewer layers, and a smaller overall size. The visually expressive SHAP explanation images validate the model's findings, aligning with the predictions made by the sericulture specialist. Based on these findings, it is possible to conclude that the explainable AI (XAI)-based PDS-CNN can provide sericulture specialists with an effective tool for accurately categorizing mulberry leaves.
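A depthwise separable convolution block of the kind a PDS-CNN stacks can be sketched in Keras as follows; the filter counts, input size, and three-class head are illustrative assumptions rather than the published 8-layer design.

```python
# Depthwise separable convolution block: far fewer parameters than a standard
# Conv2D of the same width, which is the point of the PDS-CNN idea above.
import tensorflow as tf
from tensorflow.keras import layers

def ds_block(x, filters):
    # SeparableConv2D fuses a depthwise 3x3 with a pointwise 1x1 projection.
    x = layers.SeparableConv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D()(x)

inputs = tf.keras.Input(shape=(120, 120, 3))
x = inputs
for f in (16, 32, 64):
    x = ds_block(x, f)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)   # rust / spot / disease-free
tf.keras.Model(inputs, outputs).summary()
```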
Affiliation(s)
- Md. Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha, Qatar
- Abdus Salam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Emama Nahid
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi, Bangladesh
- Faruque Ahmed
- Bangladesh Sericulture Research and Training Institute, Rajshahi, Bangladesh
- Nasser Al-Emadi
- Department of Electrical Engineering, Qatar University, Doha, Qatar
- Mohamed Arselene Ayari
- Department of Civil and Environmental Engineering, Qatar University, Doha, Qatar
- Technology Innovation and Engineering Education Unit, Qatar University, Doha, Qatar
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha, Qatar
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Manchester, United Kingdom
7. Xie L, Ge T, Xiao B, Han X, Zhang Q, Xu Z, He D, Tian W. Identification of Adolescent Menarche Status Using Biplanar X-ray Images: A Deep Learning-Based Method. Bioengineering (Basel) 2023; 10:769. [PMID: 37508796] [PMCID: PMC10375958] [DOI: 10.3390/bioengineering10070769]
Abstract
The purpose of this study is to develop an automated method for identifying the menarche status of adolescents based on EOS radiographs. We designed a deep-learning-based algorithm that contains a region of interest detection network and a classification network. The algorithm was trained and tested on a retrospective dataset of 738 adolescent EOS cases using a five-fold cross-validation strategy and was subsequently tested on a clinical validation set of 259 adolescent EOS cases. On the clinical validation set, our algorithm achieved accuracy of 0.942, macro precision of 0.933, macro recall of 0.938, and a macro F1-score of 0.935. The algorithm showed almost perfect performance in distinguishing between males and females, with the main classification errors found in females aged 12 to 14 years. Specifically for females, the algorithm had accuracy of 0.910, sensitivity of 0.943, and specificity of 0.855 in estimating menarche status, with an area under the curve of 0.959. The kappa value of the algorithm, in comparison to the actual situation, was 0.806, indicating strong agreement between the algorithm and the real-world scenario. This method can efficiently analyze EOS radiographs and identify the menarche status of adolescents. It is expected to become a routine clinical tool and provide references for doctors' decisions under specific clinical conditions.
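The evaluation protocol reported here (stratified five-fold cross-validation with accuracy, macro-averaged precision/recall/F1, and the kappa statistic) can be reproduced in outline with scikit-learn; the placeholder features, labels, and stand-in classifier below are assumptions for illustration only, not the paper's detection-plus-classification model.

```python
# Stratified five-fold loop with the macro metrics and kappa described above.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (precision_recall_fscore_support, accuracy_score,
                             cohen_kappa_score)
from sklearn.linear_model import LogisticRegression

X = np.random.rand(300, 32)                  # placeholder image features
y = np.random.randint(0, 3, size=300)        # placeholder status classes

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr, te) in enumerate(skf.split(X, y)):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    pred = clf.predict(X[te])
    p, r, f1, _ = precision_recall_fscore_support(
        y[te], pred, average="macro", zero_division=0)
    print(f"fold {fold}: acc={accuracy_score(y[te], pred):.3f} "
          f"macroP={p:.3f} macroR={r:.3f} macroF1={f1:.3f} "
          f"kappa={cohen_kappa_score(y[te], pred):.3f}")
```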
Affiliation(s)
- Linzhen Xie
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Tenghui Ge
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Bin Xiao
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Xiaoguang Han
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Qi Zhang
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Zhongning Xu
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Da He
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
- Wei Tian
- Department of Spine Surgery, Peking University Fourth School of Clinical Medicine, Beijing 100035, China
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing 100035, China
- Research Unit of Intelligent Orthopedics, Chinese Academy of Medical Sciences, Beijing 100035, China
8. Nahiduzzaman M, Faruq Goni MO, Robiul Islam M, Sayeed A, Shamim Anower M, Ahsan M, Haider J, Kowalski M. Detection of various lung diseases including COVID-19 using extreme learning machine algorithm based on the features extracted from a lightweight CNN architecture. Biocybern Biomed Eng 2023; 43:S0208-5216(23)00037-2. [PMID: 38620111] [PMCID: PMC10292668] [DOI: 10.1016/j.bbe.2023.06.003]
Abstract
Around the world, several lung diseases such as pneumonia, cardiomegaly, and tuberculosis (TB) contribute to severe illness, hospitalization, or even death, particularly for elderly and medically vulnerable patients. In the last few decades, several new types of lung-related diseases have taken the lives of millions of people, and COVID-19 has taken almost 6.27 million lives. To fight against lung diseases, timely and correct diagnosis with appropriate treatment is crucial in the current COVID-19 pandemic. In this study, an intelligent recognition system for seven lung diseases has been proposed based on machine learning (ML) techniques to aid medical experts. Chest X-ray (CXR) images of lung diseases were collected from several publicly available databases. A lightweight convolutional neural network (CNN) has been used to extract characteristic features from the raw pixel values of the CXR images. The best feature subset has been identified using the Pearson Correlation Coefficient (PCC). Finally, the extreme learning machine (ELM) has been used to perform the classification task, enabling faster learning and reduced computational complexity. The proposed CNN-PCC-ELM model achieved an accuracy of 96.22% with an Area Under the Curve (AUC) of 99.48% for eight-class classification. The outcomes from the proposed model demonstrated better performance than the existing state-of-the-art (SOTA) models for COVID-19, pneumonia, and tuberculosis detection in both binary and multiclass classifications. For the eight-class classification, the proposed model achieved precision, recall, F1-score, and ROC values of 100%, 99%, 100%, and 99.99%, respectively, for COVID-19 detection, demonstrating its robustness. Therefore, the proposed model has surpassed the existing pioneering models in accurately differentiating COVID-19 from other lung diseases and can assist medical physicians in treating patients effectively.
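One simple way to realize Pearson-correlation-based feature selection over CNN features is sketched below; treating the class index as a numeric target and keeping a fixed number of top-ranked features are simplifying assumptions, not the paper's exact procedure.

```python
# Rank CNN features by |Pearson correlation| with the label and keep the top ones.
import numpy as np

def pcc_select(features, labels, n_keep=256):
    # Correlation of every feature column with the (numeric) label vector.
    # Using the raw class index as a numeric target is a simplification.
    centered = features - features.mean(axis=0)
    lab = labels - labels.mean()
    denom = centered.std(axis=0) * lab.std() + 1e-12
    pcc = (centered * lab[:, None]).mean(axis=0) / denom
    top = np.argsort(-np.abs(pcc))[:n_keep]
    return features[:, top], top

feats = np.random.rand(500, 1024)            # placeholder CNN features
labels = np.random.randint(0, 8, size=500)   # eight lung-disease classes
selected, idx = pcc_select(feats, labels)
print(selected.shape)                        # (500, 256)
```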
Affiliation(s)
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Omaer Faruq Goni
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Abu Sayeed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Shamim Anower
- Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, Warsaw, Poland
9. Sultana A, Nahiduzzaman M, Bakchy SC, Shahriar SM, Peyal HI, Chowdhury MEH, Khandakar A, Arselene Ayari M, Ahsan M, Haider J. A Real Time Method for Distinguishing COVID-19 Utilizing 2D-CNN and Transfer Learning. Sensors (Basel) 2023; 23:4458. [PMID: 37177662] [PMCID: PMC10181786] [DOI: 10.3390/s23094458]
Abstract
Rapid identification of COVID-19 can assist in making decisions for effective treatment and epidemic prevention. The PCR-based test is expert-dependent, time-consuming, and has limited sensitivity. By inspecting Chest X-ray (CXR) images, COVID-19, pneumonia, and other lung infections can be detected in real time. The current state-of-the-art literature suggests that deep learning (DL) is highly advantageous in automatic disease classification utilizing CXR images. The goal of this study is to develop DL models for identifying COVID-19 and other lung disorders more efficiently. For this study, a dataset of 18,564 CXR images with seven disease categories was created from multiple publicly available sources. Four DL architectures, including the proposed CNN model and the pretrained VGG-16, VGG-19, and Inception-v3 models, were applied to identify healthy and six lung diseases (fibrosis, lung opacity, viral pneumonia, bacterial pneumonia, COVID-19, and tuberculosis). Accuracy, precision, recall, F1-score, area under the curve (AUC), and testing time were used to evaluate the performance of these four models. The results demonstrated that the proposed CNN model outperformed all other DL models employed for the seven-class classification, with an accuracy of 93.15% and average values for precision, recall, F1-score, and AUC of 0.9343, 0.9443, 0.9386, and 0.9939, respectively. The CNN model performed equally well when other multiclass classifications including normal and COVID-19 as the common classes were considered, yielding accuracy values of 98%, 97.49%, 97.81%, 96%, and 96.75% for two, three, four, five, and six classes, respectively. The proposed model can also identify COVID-19 with shorter training and testing times compared to other transfer learning models.
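A lightweight custom 2D-CNN of the kind compared against the pretrained models, together with a rough per-image inference-time measurement, might look like the following Keras sketch; the layer sizes, input resolution, and seven-class head are assumptions for illustration.

```python
# Small 2D-CNN with a crude per-image inference-time estimate.
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),   # healthy + six lung diseases
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

batch = np.random.rand(32, 128, 128, 1).astype("float32")
start = time.perf_counter()
model.predict(batch, verbose=0)
print(f"{(time.perf_counter() - start) / len(batch) * 1e3:.2f} ms per image")
```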
Affiliation(s)
- Abida Sultana
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Sagor Chandro Bakchy
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Saleh Mohammed Shahriar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Hasibul Islam Peyal
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester Street, Manchester M1 5GD, UK
10. Qiu S, Ma J, Ma Z. IRCM-Caps: An X-ray image detection method for COVID-19. Clin Respir J 2023; 17:364-373. [PMID: 36922395] [DOI: 10.1111/crj.13599]
Abstract
OBJECTIVE COVID-19 is ravaging the world, but traditional reverse transcription-polymerase chain reaction (RT-PCR) tests are time-consuming, have a high false-negative rate, and suffer from a shortage of medical equipment. Therefore, lung imaging screening methods have been proposed to diagnose COVID-19 because of their fast test speed. Currently, the commonly used convolutional neural network (CNN) models require large datasets, and the accuracy of the basic capsule network for multi-class classification is limited. For this reason, this paper proposes a novel model based on CNN and CapsNet. METHODS The proposed model integrates CNN and CapsNet. An attention mechanism module and a multi-branch lightweight module are applied to enhance performance. The contrast-limited adaptive histogram equalization (CLAHE) algorithm is used to preprocess the images to enhance contrast. The preprocessed images are input into the network for training, and ReLU is used as the activation function while the parameters are adjusted to reach the optimum. RESULT The test dataset includes 1200 X-ray images (400 COVID-19, 400 viral pneumonia, and 400 normal), and we replace the CNN branch with VGG16, InceptionV3, Xception, Inception-ResNet-v2, ResNet50, DenseNet121, and MobileNetV2 and integrate each with CapsNet. Compared with CapsNet, this network improves accuracy, area under the curve (AUC), precision, recall, and F1 score by 6.96%, 7.83%, 9.37%, 10.47%, and 10.38%, respectively. In the binary classification experiment, compared with CapsNet, the accuracy, AUC, precision, recall rate, and F1 score were increased by 5.33%, 5.34%, 2.88%, 8.00%, and 5.56%, respectively. CONCLUSION The proposed model embeds the advantages of the traditional convolutional neural network and the capsule network and achieves a good classification effect on a small COVID-19 X-ray image dataset.
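CLAHE preprocessing as described in the methods can be sketched with OpenCV as follows; the clip limit, tile grid, target size, and file name are illustrative assumptions.

```python
# Minimal CLAHE preprocessing sketch for a grayscale chest X-ray using OpenCV.
import cv2

def clahe_preprocess(path, clip_limit=2.0, tile_grid=(8, 8), size=(224, 224)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # chest X-rays are grayscale
    img = cv2.resize(img, size)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(img)                        # contrast-enhanced image

# enhanced = clahe_preprocess("example_cxr.png")   # hypothetical file name
```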
Affiliation(s)
- Shuo Qiu
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Jinlin Ma
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Key Laboratory of Intelligent Information Processing of Image and Graphics, State Ethnic Affairs Commission, Yinchuan, China
- Ziping Ma
- School of Mathematics and Information Science, North Minzu University, Yinchuan, China