1. Tran AT, Zeevi T, Payabvash S. Strategies to Improve the Robustness and Generalizability of Deep Learning Segmentation and Classification in Neuroimaging. BioMedInformatics 2025; 5:20. [PMID: 40271381] [PMCID: PMC12014193] [DOI: 10.3390/biomedinformatics5020020]
Abstract
Artificial Intelligence (AI) and deep learning models have revolutionized diagnosis, prognostication, and treatment planning by extracting complex patterns from medical images, enabling more accurate, personalized, and timely clinical decisions. Despite their promise, challenges such as image heterogeneity across different centers, variability in acquisition protocols and scanners, and sensitivity to artifacts hinder the reliability and clinical integration of deep learning models. Addressing these issues is critical for ensuring accurate and practical AI-powered neuroimaging applications. We reviewed and summarized the strategies for improving the robustness and generalizability of deep learning models for the segmentation and classification of neuroimages. This review follows a structured protocol, comprehensively searching Google Scholar, PubMed, and Scopus for studies on neuroimaging, task-specific applications, and model attributes. Peer-reviewed, English-language studies on brain imaging were included. The extracted data were analyzed to evaluate the implementation and effectiveness of these techniques. The study identifies key strategies to enhance deep learning in neuroimaging, including regularization, data augmentation, transfer learning, and uncertainty estimation. These approaches address major challenges such as data variability and domain shifts, improving model robustness and ensuring consistent performance across diverse clinical settings. The technical strategies summarized in this review can enhance the robustness and generalizability of deep learning models for segmentation and classification to improve their reliability for real-world clinical practice.
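To make the reviewed strategies concrete, the following is a minimal, hypothetical tf.keras sketch (not code from the review) combining three of them: data augmentation, dropout regularization, and Monte-Carlo-dropout uncertainty estimation. The input size, layer widths, and placeholder image are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Data augmentation block, applied to training batches.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
])

# Small classifier with dropout as a regularizer.
model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

x = np.random.rand(1, 128, 128, 1).astype("float32")   # placeholder image
x_aug = augment(x, training=True)                       # augmentation as applied during training

# Monte-Carlo dropout: keep dropout active at inference (training=True) and use
# the spread of repeated predictions as a simple uncertainty estimate.
probs = np.stack([model(x, training=True).numpy() for _ in range(20)])
print("mean class probabilities:", probs.mean(axis=0))
print("predictive standard deviation:", probs.std(axis=0))
```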
Affiliation(s)
- Anh T. Tran: Department of Radiology, Columbia University Irving Medical Center, NewYork-Presbyterian Hospital, Columbia University, New York, NY 10032, USA
- Tal Zeevi: Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Seyedmehdi Payabvash: Department of Radiology, Columbia University Irving Medical Center, NewYork-Presbyterian Hospital, Columbia University, New York, NY 10032, USA
2. Rastogi D, Johri P, Donelli M, Kumar L, Bindewari S, Raghav A, Khatri SK. Brain Tumor Detection and Prediction in MRI Images Utilizing a Fine-Tuned Transfer Learning Model Integrated Within Deep Learning Frameworks. Life (Basel) 2025; 15:327. [PMID: 40141673] [PMCID: PMC11944010] [DOI: 10.3390/life15030327]
Abstract
Brain tumor diagnosis is a complex task due to the intricate anatomy of the brain and the heterogeneity of tumors. While magnetic resonance imaging (MRI) is commonly used for brain imaging, accurately detecting brain tumors remains challenging. This study aims to enhance brain tumor classification using fine-tuned deep transfer learning architectures. Deep learning methods facilitate the analysis of high-dimensional MRI data, automating the feature extraction process crucial for precise diagnoses. In this research, several transfer learning models, including InceptionResNetV2, VGG19, Xception, and MobileNetV2, were employed to improve the accuracy of tumor detection. The dataset, sourced from Kaggle, contains tumor and non-tumor images. To mitigate class imbalance, image augmentation techniques were applied. The models were pre-trained on extensive datasets and fine-tuned to recognize specific features in MRI brain images, allowing for improved classification of tumor versus non-tumor images. The experimental results show that the Xception model outperformed other architectures, achieving an accuracy of 96.11%. This result underscores its capability in high-precision brain tumor detection. The study concludes that fine-tuned deep transfer learning architectures, particularly Xception, significantly improve the accuracy and efficiency of brain tumor diagnosis. These findings demonstrate the potential of using advanced AI models to support clinical decision making, leading to more reliable diagnoses and improved patient outcomes.
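As a rough illustration of the fine-tuning workflow described above (not the authors' code), the sketch below freezes a pre-trained Xception backbone, trains a new binary head for tumor versus non-tumor slices, and then unfreezes the backbone at a lower learning rate. The directory path, image size, and hyperparameters are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                           # stage 1: freeze ImageNet features

model = models.Sequential([
    layers.Input(shape=(299, 299, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # Xception expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),       # tumor vs. non-tumor
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: mri_dataset/train/{tumor,no_tumor}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri_dataset/train", image_size=(299, 299), batch_size=32, label_mode="binary")
model.fit(train_ds, epochs=5)

# Stage 2: fine-tune the unfrozen backbone at a lower learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=3)
```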
Affiliation(s)
- Deependra Rastogi: School of Computer Science and Engineering, IILM University, Greater Noida 201306, India
- Prashant Johri: School of Computing Science and Engineering, Galgotias University, Greater Noida 203201, India
- Massimo Donelli: Department of Civil, Environmental and Mechanical Engineering, University of Trento, 38100 Trento, Italy; Radiomics Laboratory, Department of Economy and Management, University of Trento, 38100 Trento, Italy
- Lalit Kumar: School of Computer Science and Engineering, IILM University, Greater Noida 201306, India
- Shantanu Bindewari: School of Computer Science and Engineering, IILM University, Greater Noida 201306, India
- Abhinav Raghav: School of Computer Science and Engineering, IILM University, Greater Noida 201306, India
3. Na S, Jeong H, Kim I, Hong SM, Shim J, Yoon IH, Cho KH. Distribution coefficient prediction using multimodal machine learning based on soil adsorption factors, XRF, and XRD spectrum data. J Hazard Mater 2024; 478:135285. [PMID: 39121738] [DOI: 10.1016/j.jhazmat.2024.135285]
Abstract
The distribution coefficient (Kd) plays a crucial role in predicting the migration behavior of radionuclides in the soil environment. However, Kd depends on complex geological and environmental factors, and existing models often do not reflect unique soil properties. We propose a multimodal technique to predict Kd values for radionuclide adsorption in soils surrounding nuclear facilities in the Republic of Korea. We integrated and trained three sub-networks reflecting different data domains: soil adsorption factors for physicochemical conditions, X-ray fluorescence (XRF) data, and X-ray diffraction (XRD) spectra for inherent soil properties. Our multimodal model achieved high performance, with a coefficient of determination (R2) of 0.84 and a root mean squared error (RMSE) of 0.89 for natural log-transformed Kd. This is the first study to develop a multimodal model that simultaneously incorporates inherent soil properties and adsorption factors to predict Kd. We investigated influential peaks in the XRD spectra and revealed that pH and calcium oxide (CaO) were significant variables among the soil adsorption factors and XRF data, respectively. These results promote the use of a multimodal model to predict Kd values by integrating data from different domains, providing a cost-effective and novel approach to elucidate the mechanisms of radionuclide adsorption in soil.
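The fusion idea can be sketched with the Keras functional API: one dense sub-network per tabular modality, a 1D convolutional sub-network for the XRD spectrum, and a concatenation feeding a regression head for ln(Kd). This is an illustrative sketch rather than the authors' architecture, and the input dimensions are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

ads_in = layers.Input(shape=(8,), name="adsorption_factors")   # e.g. pH, CEC (assumed size)
xrf_in = layers.Input(shape=(20,), name="xrf_composition")     # oxide fractions (assumed size)
xrd_in = layers.Input(shape=(2000, 1), name="xrd_spectrum")    # 2-theta scan (assumed length)

a = layers.Dense(32, activation="relu")(ads_in)                # sub-network 1: adsorption factors
f = layers.Dense(32, activation="relu")(xrf_in)                # sub-network 2: XRF data
d = layers.Conv1D(16, 9, activation="relu")(xrd_in)            # sub-network 3: XRD peaks
d = layers.GlobalMaxPooling1D()(d)

merged = layers.Concatenate()([a, f, d])                       # multimodal fusion
h = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, name="ln_Kd")(h)                         # natural-log Kd regression

model = Model([ads_in, xrf_in, xrd_in], out)
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
model.summary()
# model.fit([ads_array, xrf_array, xrd_array], ln_kd_values, ...)  # data assumed
```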
Affiliation(s)
- Seongyeon Na: Department of Civil, Urban, Earth and Environmental Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Heewon Jeong: Future and Fusion Lab of Architectural, Civil and Environmental Engineering, Korea University, Seoul 02841, Republic of Korea
- Ilgook Kim: Decommissioning Technology Research Division, Korea Atomic Energy Research Institute, Daejeon 34057, Republic of Korea
- Seok Min Hong: Department of Civil, Urban, Earth and Environmental Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- Jaegyu Shim: Department of Civil, Urban, Earth and Environmental Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 44919, Republic of Korea
- In-Ho Yoon: Decommissioning Technology Research Division, Korea Atomic Energy Research Institute, Daejeon 34057, Republic of Korea
- Kyung Hwa Cho: School of Civil, Environmental, and Architectural Engineering, Korea University, Seoul 02841, Republic of Korea
4. Mahesh TR, Gupta M, Anupama TA, Vinoth Kumar V, Geman O, Dhilip Kumar V. An XAI-enhanced EfficientNetB0 framework for precision brain tumor detection in MRI imaging. J Neurosci Methods 2024; 410:110227. [PMID: 39038716] [DOI: 10.1016/j.jneumeth.2024.110227]
Abstract
BACKGROUND Accurately diagnosing brain tumors from MRI scans is crucial for effective treatment planning. While traditional methods heavily rely on radiologist expertise, the integration of AI, particularly Convolutional Neural Networks (CNNs), has shown promise in improving accuracy. However, the lack of transparency in AI decision-making processes presents a challenge for clinical adoption. METHODS Recent advancements in deep learning, particularly the utilization of CNNs, have facilitated the development of models for medical image analysis. In this study, we employed the EfficientNetB0 architecture and integrated explainable AI techniques to enhance both accuracy and interpretability. Grad-CAM visualization was utilized to highlight significant areas in MRI scans influencing classification decisions. RESULTS Our model achieved a classification accuracy of 98.72 % across four categories of brain tumors (Glioma, Meningioma, No Tumor, Pituitary), with precision and recall exceeding 97 % for all categories. The incorporation of explainable AI techniques was validated through visual inspection of Grad-CAM heatmaps, which aligned well with established diagnostic markers in MRI scans. CONCLUSION The AI-enhanced EfficientNetB0 framework with explainable AI techniques significantly improves brain tumor classification accuracy to 98.72 %, offering clear visual insights into the decision-making process. This method enhances diagnostic reliability and trust, demonstrating substantial potential for clinical adoption in medical diagnostics.
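The Grad-CAM step can be reproduced with the standard recipe below, shown here on an ImageNet-weighted EfficientNetB0 and a random placeholder image as stand-ins for the trained tumor classifier and an MRI slice; it is an illustrative sketch, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.EfficientNetB0(weights="imagenet")
last_conv = model.get_layer("top_conv")                       # last convolutional layer
grad_model = tf.keras.Model(model.input, [last_conv.output, model.output])

img = (np.random.rand(1, 224, 224, 3) * 255).astype("float32")  # placeholder image
with tf.GradientTape() as tape:
    conv_out, preds = grad_model(img)
    class_channel = preds[:, tf.argmax(preds[0])]             # score of the top class

grads = tape.gradient(class_channel, conv_out)                # d(score)/d(feature map)
weights = tf.reduce_mean(grads, axis=(0, 1, 2))               # channel importance weights
cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
cam = (cam / (tf.reduce_max(cam) + 1e-8)).numpy()             # normalized heatmap
print("Grad-CAM heatmap shape:", cam.shape)                   # upsample and overlay on the image
```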
Affiliation(s)
- Mahesh T R: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore 562112, India
- Muskan Gupta: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore 562112, India
- Anupama T A: Department of Computer Science and Engineering, Siddaganga Institute of Technology, Tumakuru 572103, India
- Vinoth Kumar V: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- Oana Geman: Stefan cel Mare University of Suceava, Suceava, Romania
- Dhilip Kumar V: Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
5. Suganyadevi S, Seethalakshmi V. FACNN: fuzzy-based adaptive convolution neural network for classifying COVID-19 in noisy CXR images. Med Biol Eng Comput 2024; 62:2893-2909. [PMID: 38710960] [DOI: 10.1007/s11517-024-03107-x]
Abstract
COVID-19 detection using chest X-rays (CXR) has evolved as a significant method for early diagnosis of the pandemic disease. Clinical trials and methods utilize X-ray images with computer and intelligent algorithms to improve detection and classification precision. This article thus proposes a fuzzy-based adaptive convolution neural network (FACNN) model to improve the detection precision by confining the false rates. The feature extraction process between the successive regions is validated using a fuzzy process that classifies labeled and unknown pixels. The membership functions are derived based on high precision features for detection and false rate suppression process. The convolution neural network process is responsible for increasing detection precision through recurrent training based on feature availability. This availability analysis is verified using fuzzy derivatives under local variances. Based on variance-reduced features, the appropriate regions with labeled and unknown features are used for normal or infected classification. Thus, the proposed FACNN improves accuracy, precision, and feature extraction by 14.36%, 8.74%, and 12.35%, respectively. This model reduces the false rate and extraction time by 10.35% and 10.66%, respectively.
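The published abstract does not give the FACNN equations, so the sketch below only gestures at the idea: a Gaussian fuzzy membership function separates confidently labeled pixels from unknown ones, and an ordinary CNN performs the normal-versus-infected classification. All thresholds, shapes, and layer sizes are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def membership(x, mean, sigma):
    """Gaussian fuzzy membership of pixel intensities to a class prototype."""
    return np.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

cxr = np.random.rand(256, 256).astype("float32")        # placeholder chest X-ray
mu = membership(cxr, mean=cxr.mean(), sigma=cxr.std())
labeled_mask = mu > 0.5                                  # high-membership ("labeled") pixels
print("fraction of confidently labeled pixels:", labeled_mask.mean())

cnn = models.Sequential([                                # downstream classifier
    layers.Input(shape=(256, 256, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),               # normal vs. infected
])
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```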
Affiliation(s)
- Suganyadevi S: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore 641407, India
- Seethalakshmi V: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore 641407, India
6. Abdusalomov A, Rakhimov M, Karimberdiyev J, Belalova G, Cho YI. Enhancing Automated Brain Tumor Detection Accuracy Using Artificial Intelligence Approaches for Healthcare Environments. Bioengineering (Basel) 2024; 11:627. [PMID: 38927863] [PMCID: PMC11201188] [DOI: 10.3390/bioengineering11060627]
Abstract
Medical imaging and deep learning models are essential to the early identification and diagnosis of brain cancers, facilitating timely intervention and improving patient outcomes. This research paper investigates the integration of YOLOv5, a state-of-the-art object detection framework, with non-local neural networks (NLNNs) to improve the robustness and accuracy of brain tumor detection. This study begins by curating a comprehensive dataset comprising brain MRI scans from various sources. To facilitate effective fusion, the YOLOv5, NLNN, K-means+, and spatial pyramid pooling fast+ (SPPF+) modules are integrated within a unified framework. The brain tumor dataset is used to refine the YOLOv5 model through the application of transfer learning techniques, adapting it specifically to the task of tumor detection. The results indicate that combining YOLOv5 with these modules enhances detection capability compared with YOLOv5 alone, achieving recall rates of 86% and 83%, respectively. Moreover, the research explores the interpretability aspect of the combined model. By visualizing the attention maps generated by the NLNN module, the regions of interest associated with tumor presence are highlighted, aiding in the understanding and validation of the decision-making procedure of the methodology. Additionally, the impact of hyperparameters, such as NLNN kernel size, fusion strategy, and training data augmentation, is investigated to optimize the performance of the combined model.
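The non-local component can be illustrated with a minimal embedded-Gaussian non-local block in tf.keras, in which every spatial position of a feature map attends to every other position. This is a generic sketch of the NLNN idea, not the module the authors fused into YOLOv5, and the feature-map size is an assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def non_local_block(x, inter_channels=32):
    """Embedded-Gaussian non-local block with a residual connection."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    theta = layers.Conv2D(inter_channels, 1)(x)            # queries
    phi = layers.Conv2D(inter_channels, 1)(x)              # keys
    g = layers.Conv2D(inter_channels, 1)(x)                # values
    theta = layers.Reshape((h * w, inter_channels))(theta)
    phi = layers.Reshape((h * w, inter_channels))(phi)
    g = layers.Reshape((h * w, inter_channels))(g)
    y = layers.Attention()([theta, g, phi])                # softmax(Q K^T) V over all positions
    y = layers.Reshape((h, w, inter_channels))(y)
    y = layers.Conv2D(c, 1)(y)                             # project back to input channels
    return layers.Add()([x, y])                            # residual connection

inp = layers.Input(shape=(40, 40, 64))                     # a detector-style feature map (assumed)
model = tf.keras.Model(inp, non_local_block(inp))
model.summary()
```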
Affiliation(s)
- Akmalbek Abdusalomov: Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
- Mekhriddin Rakhimov: Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Jakhongir Karimberdiyev: Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
- Guzal Belalova: Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
- Young Im Cho: Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea; Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
7. Albalawi E, Thakur A, Dorai DR, Bhatia Khan S, Mahesh TR, Almusharraf A, Aurangzeb K, Anwar MS. Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach. Front Comput Neurosci 2024; 18:1418546. [PMID: 38933391] [PMCID: PMC11199693] [DOI: 10.3389/fncom.2024.1418546]
Abstract
Background The necessity of prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error. Objective This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans. Methods The dataset used in the study comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no tumor, and pituitary classes, with a CNN-based multi-task classification model employed for tumor detection, classification, and location identification. Our methodology focused on multi-task classification using a single CNN model for various brain MRI classification tasks, including tumor detection, classification based on grade and type, and tumor location identification. Results The proposed CNN model incorporates advanced feature extraction capabilities and deep learning optimization techniques, culminating in a groundbreaking paradigm shift in automated brain MRI analysis. With an exceptional tumor classification accuracy of 99%, our method surpasses current methodologies, demonstrating the remarkable potential of deep learning in medical applications. Conclusion This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.
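A multi-task CNN of the kind described, with one shared convolutional trunk and separate output heads, can be sketched as below. The head definitions (four-class type, binary presence, and a four-value location regression) and all sizes are assumptions for illustration, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(224, 224, 1))
x = layers.Conv2D(32, 3, activation="relu")(inp)           # shared trunk
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

tumor_type = layers.Dense(4, activation="softmax", name="type")(x)     # glioma/meningioma/none/pituitary
presence = layers.Dense(1, activation="sigmoid", name="presence")(x)   # tumor vs. no tumor
location = layers.Dense(4, name="location")(x)                         # assumed bounding-box regression

model = Model(inp, [tumor_type, presence, location])
model.compile(
    optimizer="adam",
    loss={"type": "sparse_categorical_crossentropy",
          "presence": "binary_crossentropy",
          "location": "mse"},
    loss_weights={"type": 1.0, "presence": 0.5, "location": 0.5},
)
model.summary()
```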
Affiliation(s)
- Eid Albalawi: Department of Computer Science, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- D. Ramya Dorai: Department of Information Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan: School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- T. R. Mahesh: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Ahlam Almusharraf: Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
8. Gokapay DK, Mohanty SN. Enhanced MRI-based brain tumor segmentation and feature extraction using Berkeley wavelet transform and ETCCNN. Digit Health 2024; 10:20552076241305282. [PMID: 39698507] [PMCID: PMC11653464] [DOI: 10.1177/20552076241305282]
Abstract
Objective Brain tumors are abnormal growths of brain cells that are typically diagnosed via magnetic resonance imaging (MRI), which helps to discriminate between malignant and benign tumors. Using MRI image analysis, tumor sites are identified and classified into four distinct categories: meningioma, glioma, no tumor, and pituitary. If a brain tumor is not detected in its early stages, it can progress to a severe stage or cause death. Therefore, to address these issues, the proposed approach uses an efficient deep-learning-based classifier for brain tumor detection. Methods This article describes the classification and detection of brain tumors by an efficient two-channel convolutional neural network. The input image is initially rotated during the augmentation stage. Morphological operations, thresholding, and region filling are then used in the pre-processing stage. The output is then segmented using the Berkeley Wavelet Transform. A two-channel convolutional neural network is used to extract features from the segmented objects. Finally, the most effective deep neural network is employed to classify brain tumors from the extracted features. The classifier utilizes the Enhanced Serval Optimization Algorithm to determine the optimal gain parameters. MATLAB serves as the implementation platform for the suggested model. Results Several performance metrics are calculated to assess the proposed brain tumor detection method, such as accuracy, F-measure, kappa, precision, sensitivity, and specificity. The proposed model achieves a 98.8% detection accuracy for brain tumors. Conclusion The evaluation shows that the suggested strategy produces the best results.
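Assuming "two-channel" refers to two parallel convolutional branches whose features are concatenated (one plausible reading of the abstract), a minimal feature extractor might look like the sketch below; the wavelet segmentation and Serval-optimized classifier are not reproduced, and all sizes are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = layers.Input(shape=(128, 128, 1))                    # segmented tumor region (assumed size)

def branch(x, kernel):
    """One convolutional channel of the two-channel extractor."""
    x = layers.Conv2D(16, kernel, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, kernel, activation="relu", padding="same")(x)
    return layers.GlobalAveragePooling2D()(x)

features = layers.Concatenate()([branch(inp, 3), branch(inp, 5)])  # fuse the two channels
out = layers.Dense(4, activation="softmax")(features)              # four tumor categories
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```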
Affiliation(s)
- Dilip Kumar Gokapay: School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
- Sachi Nandan Mohanty: School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
9. Valbuena Rubio S, García-Ordás MT, García-Olalla Olivera O, Alaiz-Moretón H, González-Alonso MI, Benítez-Andrades JA. Survival and grade of the glioma prediction using transfer learning. PeerJ Comput Sci 2023; 9:e1723. [PMID: 38192446] [PMCID: PMC10773899] [DOI: 10.7717/peerj-cs.1723]
Abstract
Glioblastoma is a highly malignant brain tumor with a life expectancy of only 3-6 months without treatment. Detecting and predicting its survival and grade accurately are crucial. This study introduces a novel approach using transfer learning techniques. Various pre-trained networks, including EfficientNet, ResNet, VGG16, and Inception, were tested through exhaustive optimization to identify the most suitable architecture. Transfer learning was applied to fine-tune these models on a glioblastoma image dataset, aiming to achieve two objectives: survival and tumor grade prediction. The experimental results show 65% accuracy in survival prediction, classifying patients into short, medium, or long survival categories. Additionally, the prediction of tumor grade achieved an accuracy of 97%, accurately differentiating low-grade gliomas (LGG) and high-grade gliomas (HGG). The success of the approach is attributed to the effectiveness of transfer learning, surpassing the current state-of-the-art methods. In conclusion, this study presents a promising method for predicting the survival and grade of glioblastoma. Transfer learning demonstrates its potential in enhancing prediction models, particularly in scenarios without large datasets. These findings hold promise for improving diagnostic and treatment approaches for glioblastoma patients.
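The architecture search described above, comparing several ImageNet-pretrained backbones on the same task, can be mimicked with a simple loop; the backbone list, input size, and class counts below are assumptions, and the training data are only indicated in a comment.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, applications

backbones = {
    "EfficientNetB0": applications.EfficientNetB0,
    "ResNet50": applications.ResNet50,
    "VGG16": applications.VGG16,
    "InceptionV3": applications.InceptionV3,
}

def build(backbone_fn, n_classes):
    """Frozen ImageNet backbone with a new classification head."""
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(224, 224, 3))
    base.trainable = False
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),
    ])

# Survival: short / medium / long (3 classes); grade: LGG vs. HGG (2 classes).
for name, fn in backbones.items():
    survival_model = build(fn, n_classes=3)
    survival_model.compile(optimizer="adam",
                           loss="sparse_categorical_crossentropy",
                           metrics=["accuracy"])
    # survival_model.fit(train_ds, validation_data=val_ds, epochs=5)  # data assumed
    print("built survival model on", name)
```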
Affiliation(s)
- María Teresa García-Ordás: SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
- Héctor Alaiz-Moretón: SECOMUCI Research Group, Escuela de Ingenierías Industrial e Informática, Universidad de León, León, Spain
10. Abdellatef E, Emara HM, Shoaib MR, Ibrahim FE, Elwekeil M, El-Shafai W, Taha TE, El-Fishawy AS, El-Rabaie ESM, Eldokany IM, Abd El-Samie FE. Automated diagnosis of EEG abnormalities with different classification techniques. Med Biol Eng Comput 2023; 61:3363-3385. [PMID: 37672143] [DOI: 10.1007/s11517-023-02843-w]
Abstract
Automatic seizure detection and prediction using clinical Electroencephalograms (EEGs) are challenging tasks due to factors such as low Signal-to-Noise Ratios (SNRs), high variance in epileptic seizures among patients, and limited clinical data. To overcome these challenges, this paper presents two approaches for EEG signal classification. The first approach depends on Machine Learning (ML) tools. The used features are different types of entropy, higher-order statistics, and sub-band energies in the Hilbert Marginal Spectrum (HMS) domain. The classification is performed using Support Vector Machine (SVM), Logistic Regression (LR), and K-Nearest Neighbor (KNN) classifiers. Both seizure detection and prediction scenarios are considered. The second approach depends on spectrograms of EEG signal segments and a Convolutional Neural Network (CNN)-based residual learning model. We use 10,000 spectrogram images for each class. In this approach, it is possible to perform both seizure detection and prediction in addition to a 3-state classification scenario. Both approaches are evaluated on the Children's Hospital Boston and the Massachusetts Institute of Technology (CHB-MIT) dataset, which contains 24 EEG recordings from 6 males and 18 females. The results obtained for the HMS-based model showed an accuracy of 100%. The CNN-based model achieved accuracies of 97.66%, 95.59%, and 94.51% for Seizure (S) versus Pre-Seizure (PS), Non-Seizure (NS) versus S, and NS versus S versus PS classes, respectively. These results demonstrate that the proposed approaches can be effectively used for seizure detection and prediction, and they outperform the state-of-the-art techniques for automatic seizure detection and prediction.
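The classical-ML branch can be imitated with generic spectral features in place of the paper's HMS-domain features (an approximation, not the authors' feature set): band energies and simple statistics per EEG segment are fed to SVM, LR, and KNN classifiers on synthetic placeholder data.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs = 256                                              # sampling rate (assumed)
segments = rng.standard_normal((200, fs * 4))         # placeholder 4-second EEG segments
labels = rng.integers(0, 2, size=200)                 # seizure vs. non-seizure (synthetic)

def features(seg):
    """Average power per frequency bin plus two simple amplitude statistics."""
    f, t, sxx = spectrogram(seg, fs=fs)
    return np.concatenate([sxx.mean(axis=1), [seg.std(), np.abs(seg).max()]])

X = np.array([features(s) for s in segments])
for clf in (SVC(), LogisticRegression(max_iter=1000), KNeighborsClassifier()):
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(type(clf).__name__, "cross-validated accuracy:", round(acc, 3))
```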
Affiliation(s)
- Essam Abdellatef: Department of Electronics and Communications, Delta Higher Institute for Engineering and Technology (DHIET), 35511, Mansoura, Egypt
- Heba M Emara: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Mohamed R Shoaib: School of Computer Science and Engineering, Nanyang Technological University, Singapore, Singapore
- Fatma E Ibrahim: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Mohamed Elwekeil: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Walid El-Shafai: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt; Security Engineering Laboratory, Department of Computer Science, College of Engineering, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Taha E Taha: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Adel S El-Fishawy: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Ibrahim M Eldokany: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt
- Fathi E Abd El-Samie: Faculty of Electronic Engineering, Menoufia University, 32952, Menouf, Egypt; Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
11. Emara HM, Shoaib MR, El-Shafai W, Elwekeil M, Hemdan EED, Fouda MM, Taha TE, El-Fishawy AS, El-Rabaie ESM, Abd El-Samie FE. Simultaneous Super-Resolution and Classification of Lung Disease Scans. Diagnostics (Basel) 2023; 13:1319. [PMID: 37046537] [PMCID: PMC10093568] [DOI: 10.3390/diagnostics13071319]
Abstract
Acute lower respiratory infection is a leading cause of death in developing countries. Hence, progress has been made in early detection and treatment. There is still a need for improved diagnostic and therapeutic strategies, particularly in resource-limited settings. Chest X-ray and computed tomography (CT) have the potential to serve as effective screening tools for lower respiratory infections, but the use of artificial intelligence (AI) in these areas is limited. To address this gap, we present a computer-aided diagnostic system for chest X-ray and CT images of several common pulmonary diseases, including COVID-19, viral pneumonia, bacterial pneumonia, tuberculosis, lung opacity, and various types of carcinoma. The proposed system depends on super-resolution (SR) techniques to enhance image details. Deep learning (DL) techniques are used for both SR reconstruction and classification, with the InceptionResNetV2 model used as a feature extractor in conjunction with a multi-class support vector machine (MCSVM) classifier. In this paper, we compare the proposed model performance to those of other classification models, such as ResNet101 and InceptionV3, and evaluate the effectiveness of using both softmax and MCSVM classifiers. The proposed system was tested on three publicly available datasets of CT and X-ray images, and it achieved a classification accuracy of 98.028% using a combination of SR and InceptionResNetV2. Overall, our system has the potential to serve as a valuable screening tool for lower respiratory disorders and to assist clinicians in interpreting chest X-ray and CT images. In resource-limited settings, it can also provide valuable diagnostic support.
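The feature-extractor-plus-SVM stage can be sketched as follows, with the super-resolution step omitted and random placeholder images standing in for the CT/X-ray datasets; it is an illustrative pipeline in the spirit of the paper, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

extractor = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))

images = (np.random.rand(12, 299, 299, 3) * 255).astype("float32")   # placeholder scans
labels = np.random.randint(0, 6, size=12)                            # six assumed disease classes

prep = tf.keras.applications.inception_resnet_v2.preprocess_input
feats = extractor.predict(prep(images.copy()), verbose=0)            # (12, 1536) feature vectors

svm = SVC(kernel="rbf", decision_function_shape="ovr")               # multi-class SVM (one-vs-rest)
svm.fit(feats, labels)
print("training accuracy:", svm.score(feats, labels))
```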
Affiliation(s)
- Heba M. Emara: Department of Electronics and Communications Engineering, High Institute of Electronic Engineering, Ministry of Higher Education, Bilbis-Sharqiya 44621, Egypt
- Mohamed R. Shoaib: School of Computer Science and Engineering (SCSE), Nanyang Technological University (NTU), Singapore 639798, Singapore
- Walid El-Shafai: Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh 11586, Saudi Arabia; Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Mohamed Elwekeil: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Ezz El-Din Hemdan: Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Mostafa M. Fouda: Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Taha E. Taha: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Adel S. El-Fishawy: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- El-Sayed M. El-Rabaie: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Fathi E. Abd El-Samie: Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt; Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia