1
Usha Sudhakaran D, Thanka Swami Kanaka Bai S. Brain tumor detection using hybrid transfer learning and patch antenna-enhanced microwave imaging. Technol Health Care 2025. PMID: 40208040. DOI: 10.1177/09287329251325740.
Abstract
Background: Brain tumors pose a significant healthcare challenge, necessitating early detection and precise monitoring to ensure effective treatment. Objectives: The study proposes an innovative technique that integrates hybrid transfer learning with improved microwave imaging, combining the feature-extraction abilities of pre-trained deep learning models with the high-resolution imaging capabilities of a patch antenna. Methods: The approach comprises two phases. In the first, a patch antenna and a head phantom model are developed and subjected to SAR analysis to extract pertinent features from the transmitted signals. In the second, an AI-based detection model built on MobileNet V2 is implemented: images acquired by the patch antenna system are fed into MobileNet V2, which extracts high-level features using depth-wise separable convolutions and inverted residual blocks, and a fully connected layer classifies brain tumors from these extracted features. Results: Simulation results indicate that the model performs exceptionally well, with an accuracy of 98.44%, precision of 98.03%, recall of 99.00%, F1-score of 98.52%, and specificity of 97.82%. Conclusion: This method offers a promising solution for non-invasive, real-time detection of brain tumors, exploiting the electromagnetic properties of brain tissue and the capabilities of AI to address limitations of current diagnostic methods such as MRI and CT.
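To see why the depth-wise separable convolutions used by MobileNet V2 are lightweight, their parameter count can be compared with a standard convolution; this is a generic sketch of the operation, not code from the paper, and the layer sizes are illustrative:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# Example layer: 3x3 kernel, 32 -> 64 channels (made-up sizes)
standard = conv_params(3, 32, 64)                   # 18432 weights
separable = depthwise_separable_params(3, 32, 64)   # 288 + 2048 = 2336 weights
print(standard, separable, round(standard / separable, 1))
```

The roughly 8x reduction per layer is what makes the backbone cheap enough for the real-time use case the abstract targets.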
Affiliation(s)
- Deebu Usha Sudhakaran
- Department of Electronics and Communication Engineering, Noorul Islam Centre for Higher Education, Kanyakumari, Tamil Nadu, India
2
Biradar S, Virupakshappa. AG-MSTLN-EL: A Multi-source Transfer Learning Approach to Brain Tumor Detection. J Imaging Inform Med 2025;38:245-261. PMID: 39060764. PMCID: PMC11810865. DOI: 10.1007/s10278-024-01199-3.
Abstract
The analysis of medical images (MI) is an important part of advanced medicine, as it helps detect and diagnose various diseases early. Classifying brain tumors from magnetic resonance imaging (MRI) poses a challenge demanding accurate models for effective diagnosis and treatment planning. This paper introduces AG-MSTLN-EL, an attention-aided multi-source transfer learning ensemble model leveraging multi-source transfer learning (Visual Geometry Group, ResNet, and GoogLeNet), attention mechanisms, and ensemble learning to achieve robust and accurate brain tumor classification. Multi-source transfer learning allows knowledge extraction from diverse domains, enhancing generalization. The attention mechanism focuses on specific MRI regions, increasing interpretability and classification performance. Ensemble learning combines k-nearest neighbor, Softmax, and support vector machine classifiers, improving both accuracy and reliability. Evaluated on a dataset of 3064 brain tumor MRI images, AG-MSTLN-EL outperforms state-of-the-art models on all classification measures. The model's innovative combination of transfer learning, attention mechanism, and ensemble learning provides a reliable solution for brain tumor classification. Its superior performance and high interpretability make AG-MSTLN-EL a valuable tool for clinicians and researchers in medical image analysis.
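The ensemble head described above (k-nearest neighbor, Softmax, and support vector machine classifiers) can be sketched as a simple majority vote; the class labels and the tie-breaking rule below are hypothetical, since the abstract does not specify how the three votes are combined:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine class labels from several classifiers by majority vote.
    Ties go to the first-listed classifier (an assumed rule -- the
    paper does not state its tie-breaking)."""
    votes = Counter(predictions)
    best = max(votes.values())
    for label in predictions:          # first listed wins a tie
        if votes[label] == best:
            return label

# Hypothetical per-image outputs from kNN, Softmax, and SVM heads
samples = [
    ("glioma", "glioma", "meningioma"),
    ("pituitary", "meningioma", "meningioma"),
    ("glioma", "meningioma", "pituitary"),   # three-way tie
]
print([majority_vote(s) for s in samples])
```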
Affiliation(s)
- Shivaprasad Biradar
- Department of Computer Science & Engineering, Sharnbasva University, Kalaburagi, Karnataka, India
- Virupakshappa
- Department of Computer Science & Engineering, Sharnbasva University, Kalaburagi, Karnataka, India
3
Gasmi K, Ben Aoun N, Alsalem K, Ltaifa IB, Alrashdi I, Ammar LB, Mrabet M, Shehab A. Enhanced brain tumor diagnosis using combined deep learning models and weight selection technique. Front Neuroinform 2024;18:1444650. PMID: 39659489. PMCID: PMC11628532. DOI: 10.3389/fninf.2024.1444650.
Abstract
Brain tumor classification is a critical task in medical imaging, as accurate diagnosis directly influences treatment planning and patient outcomes. Traditional methods often fall short in achieving the required precision due to the complex and heterogeneous nature of brain tumors. In this study, we propose an innovative approach to brain tumor multi-classification by leveraging an ensemble learning method that combines advanced deep learning models with an optimal weighting strategy. Our methodology integrates Vision Transformers (ViT) and EfficientNet-V2 models, both renowned for their powerful feature extraction capabilities in medical imaging. This model enhances the feature extraction step by capturing both global and local features, thanks to the combination of different deep learning models with the ViT model. These models are then combined using a weighted ensemble approach, where each model's prediction is assigned a weight. To optimize these weights, we employ a genetic algorithm, which iteratively selects the best weight combinations to maximize classification accuracy. We trained and validated our ensemble model using a well-curated dataset comprising labeled brain MRI images. The model's performance was benchmarked against standalone ViT and EfficientNet-V2 models, as well as other traditional classifiers. The ensemble approach achieved a notable improvement in classification accuracy, precision, recall, and F1-score compared to individual models. Specifically, our model attained an accuracy rate of 95%, significantly outperforming existing methods. This study underscores the potential of combining advanced deep learning models with a genetic algorithm-optimized weighting strategy to tackle complex medical classification tasks. The enhanced diagnostic precision offered by our ensemble model can lead to better-informed clinical decisions, ultimately improving patient outcomes. 
Furthermore, our approach can be generalized to other medical imaging classification problems, paving the way for broader applications of AI in healthcare. This advancement in brain tumor classification contributes valuable insights to the field of medical AI, supporting the ongoing efforts to integrate advanced computational tools in clinical practice.
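The weighted-ensemble idea above can be sketched in a few lines; an exhaustive grid search over weight pairs stands in for the paper's genetic algorithm, and the model probabilities and labels are made up for illustration:

```python
def weighted_accuracy(weights, probs_a, probs_b, labels):
    """Accuracy of a weighted average of two models' class probabilities."""
    wa, wb = weights
    correct = 0
    for pa, pb, y in zip(probs_a, probs_b, labels):
        fused = [wa * a + wb * b for a, b in zip(pa, pb)]
        correct += fused.index(max(fused)) == y
    return correct / len(labels)

# Toy two-class probabilities from a "ViT" and an "EfficientNet-V2" (invented)
probs_vit = [[0.6, 0.4], [0.3, 0.7], [0.8, 0.2]]
probs_eff = [[0.4, 0.6], [0.6, 0.4], [0.9, 0.1]]
labels    = [0, 1, 0]

# Exhaustive search over weight pairs summing to 1 replaces the GA here
grid = [(w / 10, 1 - w / 10) for w in range(11)]
best = max(grid, key=lambda w: weighted_accuracy(w, probs_vit, probs_eff, labels))
print(best, weighted_accuracy(best, probs_vit, probs_eff, labels))
```

The genetic algorithm in the paper searches this same weight space, but scales to many models and finer-grained weights where a grid becomes infeasible.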
Affiliation(s)
- Karim Gasmi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka, Saudi Arabia
- Najib Ben Aoun
- College of Computing and Information, Al-Baha University, Alaqiq, Saudi Arabia
- REGIM-Lab: Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Sfax, Tunisia
- Khalaf Alsalem
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka, Saudi Arabia
- Ibtihel Ben Ltaifa
- STIH: Sens Texte Informatique Histoire, Sorbonne University, Paris, France
- Ibrahim Alrashdi
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka, Saudi Arabia
- Manel Mrabet
- Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Abdulaziz Shehab
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka, Saudi Arabia
4
Ullah MS, Khan MA, Almujally NA, Alhaisoni M, Akram T, Shabaz M. BrainNet: a fusion assisted novel optimal framework of residual blocks and stacked autoencoders for multimodal brain tumor classification. Sci Rep 2024;14:5895. PMID: 38467755. PMCID: PMC10928185. DOI: 10.1038/s41598-024-56657-3.
Abstract
A significant issue in computer-aided diagnosis (CAD) for medical applications is brain tumor classification. Radiologists could reliably detect tumors using machine learning algorithms without extensive surgery. However, a few important challenges arise, such as (i) selecting the most suitable deep learning architecture for classification and (ii) the need for a field expert to assess the output of deep learning models. These difficulties motivated us to propose an efficient and accurate system based on deep learning and evolutionary optimization for the classification of four brain modalities (T1 tumor, T1ce tumor, T2 tumor, and FLAIR tumor) on a large-scale MRI database. A CNN architecture is modified based on domain knowledge and connected with an evolutionary optimization algorithm to select hyperparameters. In parallel, a stacked encoder-decoder network is designed with ten convolutional layers. The features of both models are extracted and optimized using an improved version of Grey Wolf optimization with updated criteria from the Jaya algorithm; the improved version speeds up the learning process and improves accuracy. Finally, the selected features are fused using a novel parallel pooling approach and classified using machine learning and neural networks. Two datasets, BraTS2020 and BraTS2021, were employed for the experiments, yielding an improved average accuracy of 98% and a maximum single-classifier accuracy of 99%. Comparison with several classifiers, techniques, and neural networks shows that the proposed method achieves improved performance.
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, PO Box 84428, Riyadh 11671, Saudi Arabia
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Tallha Akram
- Department of ECE, COMSATS University Islamabad, Wah Campus, Rawalpindi, Pakistan
- Mohammad Shabaz
- Model Institute of Engineering and Technology, Jammu, J&K, India
5
Liu X, Liu J. Aided Diagnosis Model Based on Deep Learning for Glioblastoma, Solitary Brain Metastases, and Primary Central Nervous System Lymphoma with Multi-Modal MRI. Biology (Basel) 2024;13:99. PMID: 38392317. PMCID: PMC10887006. DOI: 10.3390/biology13020099.
Abstract
(1) Background: Diagnosis of glioblastoma (GBM), solitary brain metastases (SBM), and primary central nervous system lymphoma (PCNSL) plays a decisive role in the development of personalized treatment plans, so constructing a deep learning classification network to diagnose GBM, SBM, and PCNSL from multi-modal MRI is important and necessary. (2) Subjects: GBM, SBM, and PCNSL were confirmed by histopathology in 1225 subjects (average age 53 years, 671 males) examined with multi-modal MRI: 3.0 T T2 fluid-attenuated inversion recovery (T2-Flair) and contrast-enhanced T1-weighted imaging (CE-T1WI). (3) Methods: This paper introduces MFFC-Net, a classification model based on the fusion of multi-modal MRI, for the classification of GBM, SBM, and PCNSL. The network architecture consists of parallel encoders using DenseBlocks to extract features from the different MRI modalities. An L1-norm feature fusion module is then applied to enhance the interrelationships among tumor tissues, followed by a spatial-channel self-attention weighting operation. Finally, the classification results are obtained using a fully connected (FC) layer and Softmax. (4) Results: The accuracy (ACC) of MFFC-Net based on feature fusion was 0.920, better than the radiomics model (ACC of 0.829), with no significant difference from the expert radiologist (0.920 vs. 0.924, p = 0.774). (5) Conclusions: Our MFFC-Net model could distinguish GBM, SBM, and PCNSL preoperatively from multi-modal MRI, with higher performance than the radiomics model and performance comparable to radiologists.
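The final FC-plus-Softmax step above maps class scores to probabilities; a minimal sketch follows, with logits invented for illustration (the three classes stand in for GBM, SBM, and PCNSL):

```python
import math

def softmax(logits):
    """Numerically stable softmax over class scores."""
    m = max(logits)                         # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical FC-layer outputs for the three classes (GBM, SBM, PCNSL)
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
print([round(p, 3) for p in probs], probs.index(max(probs)))
```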
Affiliation(s)
- Xiao Liu
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
- Jie Liu
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100044, China
6
Pal S, Singh RP, Kumar A. Analysis of Hybrid Feature Optimization Techniques Based on the Classification Accuracy of Brain Tumor Regions Using Machine Learning and Further Evaluation Based on the Institute Test Data. J Med Phys 2024;49:22-32. PMID: 38828069. PMCID: PMC11141750. DOI: 10.4103/jmp.jmp_77_23.
Abstract
Aim: The goal of this study was to extract optimal brain tumor features from magnetic resonance imaging (MRI) images and classify them into three tumor-region groups: peritumoral edema, enhancing core, and necrotic tumor core, using machine learning classification models. Materials and Methods: The dataset was obtained from the multimodal brain tumor segmentation challenge. A total of 599 brain MRI studies were employed, all in Neuroimaging Informatics Technology Initiative (NIfTI) format. The dataset was divided into training, validation, and testing subsets; the testing subset is referred to as the online test dataset (OTD). The dataset includes four types of MRI series, which were combined and processed for intensity normalization using contrast-limited adaptive histogram equalization. Radiomics features were extracted with the Python library pyRadiomics. Particle swarm optimization (PSO) with varying inertia weights was used for feature optimization; three strategies for varying the inertia weight were compared: a linearly decreasing inertia weight (W1), a nonlinear coefficient decreasing inertia weight (W2), and a logarithmic inertia weight (W3). The selected features were further optimized using principal component analysis (PCA) to further reduce dimensionality, remove noise, and improve the performance and efficiency of subsequent algorithms. Support vector machine (SVM), light gradient boosting (LGB), and extreme gradient boosting (XGB) classification algorithms were used to classify images into the different tumor regions using the optimized features. The proposed method was also tested on institute test data (ITD) comprising 30 patient images.
Results: For the OTD, the classification accuracy was 0.989 for SVM, 0.992 for the LGB model (LGBM), and 0.994 for the XGB model (XGBM) using the varying inertia-weight PSO optimization method, and 0.996 for SVM, 0.998 for the LGBM, and 0.994 for the XGBM using PSO combined with PCA, a hybrid optimization technique. For the ITD, the classification accuracy was 0.994 for SVM, 0.993 for the LGBM, and 0.997 for the XGBM using the hybrid optimization technique. Conclusion: The results suggest that the proposed method can classify brain tumor regions into the three groups: peritumoral edema, enhancing core, and necrotic tumor core. This was done by extracting features of the tumor, such as shape, grey level, and gray-level co-occurrence matrix features, and then choosing the best features using hybrid optimal feature-selection techniques, with little human expertise required and in far less time than manual analysis would take.
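The three inertia-weight strategies (W1-W3) can be sketched with common forms from the PSO literature; the exact formulas and bounds used by the paper are not given in the abstract, so the coefficients below are assumptions:

```python
import math

W_MAX, W_MIN = 0.9, 0.4   # typical PSO inertia bounds (assumed, not from the paper)

def w_linear(t, t_max):
    """W1: linearly decreasing inertia weight."""
    return W_MAX - (W_MAX - W_MIN) * t / t_max

def w_nonlinear(t, t_max, power=2):
    """W2: nonlinear (here quadratic) decreasing inertia weight."""
    return W_MIN + (W_MAX - W_MIN) * (1 - t / t_max) ** power

def w_logarithmic(t, t_max):
    """W3: logarithmically decreasing inertia weight."""
    return W_MAX - (W_MAX - W_MIN) * math.log(1 + t) / math.log(1 + t_max)

T = 100
for f in (w_linear, w_nonlinear, w_logarithmic):
    print(f.__name__, round(f(0, T), 3), round(f(T, T), 3))
```

All three schedules start at W_MAX and end at W_MIN; they differ in how quickly exploration (large inertia) gives way to exploitation (small inertia).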
Affiliation(s)
- Soniya Pal
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Batra Hospital and Medical Research Center, New Delhi, India
- Raj Pal Singh
- Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Anuj Kumar
- Department of Radiotherapy, S. N. Medical College, Agra, Uttar Pradesh, India
7
Ashwini B, Kaur M, Singh D, Roy S, Amoon M. Efficient Skip Connections-Based Residual Network (ESRNet) for Brain Tumor Classification. Diagnostics (Basel) 2023;13:3234. PMID: 37892055. PMCID: PMC10606037. DOI: 10.3390/diagnostics13203234.
Abstract
Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations like restricted depth, vanishing-gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip-connections-based residual network (ESRNet), leveraging the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing-gradient problem. The ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping: through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. The proposed ESRNet therefore showcases exceptional performance and efficiency in BTC, holding the potential to improve clinical diagnosis and treatment planning.
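The identity-mapping skip connection at the heart of ESRNet reduces to y = x + F(x); a minimal sketch with a toy transform follows (not the paper's convolutional blocks):

```python
def relu(v):
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """y = x + F(x): the skip connection adds the input back onto the
    transformed output, so the identity path carries gradients even
    when F's gradients are tiny."""
    fx = transform(x)
    return [a + b for a, b in zip(x, fx)]

# Toy stand-in for a conv layer: elementwise scaling followed by ReLU
toy_transform = lambda v: relu([0.5 * x for x in v])

x = [1.0, -2.0, 3.0]
y = residual_block(x, toy_transform)
print(y)
```

Note that if the transform outputs all zeros, the block returns its input unchanged, which is exactly the identity mapping the abstract describes.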
Affiliation(s)
- Ashwini B.
- Department of ISE, NMAM Institute of Technology, Nitte (Deemed to be University), Nitte 574110, India
- Manjit Kaur
- School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Research and Development Cell, Lovely Professional University, Phagwara 144411, India
- Satyabrata Roy
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur 303007, India
- Mohammed Amoon
- Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
8
Zafar A, Tanveer J, Ali MU, Lee SW. BU-DLNet: Breast Ultrasonography-Based Cancer Detection Using Deep-Learning Network Selection and Feature Optimization. Bioengineering (Basel) 2023;10:825. PMID: 37508852. PMCID: PMC10376009. DOI: 10.3390/bioengineering10070825.
Abstract
Early detection of breast lesions and distinguishing between malignant and benign lesions are critical for breast cancer (BC) prognosis. Breast ultrasonography (BU) is an important radiological imaging modality for the diagnosis of BC. This study proposes a BU image-based framework for the diagnosis of BC in women. Various pre-trained networks are used to extract the deep features of the BU images. Ten wrapper-based optimization algorithms, including the marine predator algorithm, generalized normal distribution optimization, slime mold algorithm, equilibrium optimizer (EO), manta-ray foraging optimization, atom search optimization, Harris hawks optimization, Henry gas solubility optimization, path finder algorithm, and poor and rich optimization, were employed to compute the optimal subset of deep features using a support vector machine classifier. Furthermore, a network selection algorithm was employed to determine the best pre-trained network. An online BU dataset was used to test the proposed framework. After comprehensive testing and analysis, it was found that the EO algorithm produced the highest classification rate for each pre-trained model. It produced the highest classification accuracy of 96.79%, and it was trained using only a deep feature vector with a size of 562 in the ResNet-50 model. Similarly, the Inception-ResNet-v2 had the second highest classification accuracy of 96.15% using the EO algorithm. Moreover, the results of the proposed framework are compared with those in the literature.
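Wrapper-based feature selection, where the classifier's own score steers the search, can be sketched with a greedy forward pass; the scoring function below is a made-up stand-in for the SVM evaluation, and greedy selection replaces the metaheuristic wrappers (equilibrium optimizer, etc.) named in the abstract:

```python
def greedy_wrapper_select(features, score_fn, k):
    """Greedy forward selection: repeatedly add the feature whose
    inclusion most improves the wrapper's score."""
    selected = []
    while len(selected) < k:
        best_f, best_s = None, float("-inf")
        for f in features:
            if f in selected:
                continue
            s = score_fn(selected + [f])
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
    return selected

# Hypothetical per-feature usefulness with a redundancy penalty
value = {"a": 0.9, "b": 0.8, "c": 0.3, "d": 0.1}
redundant = {frozenset(("a", "b"))}  # a and b carry the same signal

def score(subset):
    s = sum(value[f] for f in subset)
    for pair in redundant:
        if pair <= set(subset):
            s -= 0.7                 # penalize selecting both
    return s

print(greedy_wrapper_select(list(value), score, 2))
```

Because of the redundancy penalty, the wrapper picks "a" then "c" rather than the two individually strongest features, which is the behavior that distinguishes wrapper methods from simple filter ranking.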
Affiliation(s)
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Jawad Tanveer
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
9
Khan MA, Khan A, Alhaisoni M, Alqahtani A, Alsubai S, Alharbi M, Malik NA, Damaševičius R. Multimodal brain tumor detection and classification using deep saliency map and improved dragonfly optimization algorithm. Int J Imaging Syst Technol 2023;33:572-587. DOI: 10.1002/ima.22831.
Abstract
In the last decade, there has been a significant increase in medical cases involving brain tumors. Brain tumor is the tenth most common type of tumor, affecting millions of people; however, if it is detected early, the cure rate can increase. Computer vision researchers are working to develop sophisticated techniques for detecting and classifying brain tumors, with MRI scans primarily used for tumor analysis. In this paper, we propose an automated system for brain tumor detection and classification using a saliency map and deep learning feature optimization. The proposed framework is implemented in stages. In the initial phase, a fusion-based contrast enhancement technique is proposed. In the following phase, a saliency-map-based tumor segmentation technique is proposed, which is then mapped onto the original images using active contours. A pre-trained CNN model, EfficientNetB0, is then fine-tuned and trained in two ways: on enhanced images and on tumor localization images. Deep transfer learning is used to train both models, and features are extracted from the average pooling layer. The deep learning features are then fused using an improved fusion approach known as Entropy Serial Fusion. The best features are chosen in the final step using an improved dragonfly optimization algorithm and classified using an extreme learning machine (ELM). Experiments on three publicly available datasets achieved improved accuracies of 95.14%, 94.89%, and 95.94%, respectively. Comparison with several neural networks shows the improvement of the proposed framework.
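Serial fusion of the two models' feature vectors amounts to concatenation; the entropy criterion behind the paper's "Entropy Serial Fusion" is not specified in the abstract, so this sketch shows only plain concatenation alongside a Shannon-entropy helper of the kind such a criterion could use:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def serial_fuse(f1, f2):
    """Serial fusion: concatenate the two deep-feature vectors."""
    return list(f1) + list(f2)

fa = [0.2, 0.8]          # features from the enhanced-image model (made up)
fb = [0.5, 0.5, 0.0]     # features from the localization model (made up)
fused = serial_fuse(fa, fb)
print(len(fused), entropy([0.5, 0.5]))
```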
Affiliation(s)
- Awais Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Meshal Alharbi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Nazir Ahmed Malik
- Cyber Reconnaissance and Combat Lab, Bahria University Islamabad, Islamabad, Pakistan
10
Automatic Intelligent System Using Medical of Things for Multiple Sclerosis Detection. Comput Intell Neurosci 2023;2023:4776770. PMID: 36864930. PMCID: PMC9974276. DOI: 10.1155/2023/4776770.
Abstract
Malfunctions in the immune system cause multiple sclerosis (MS), which initiates mild to severe nerve damage. MS disturbs signal communication between the brain and other body parts, and early diagnosis helps reduce the severity of the disease. Magnetic resonance imaging (MRI)-supported MS detection is a standard clinical procedure in which a bio-image recorded with a chosen modality is used to assess disease severity. The proposed research implements a convolutional neural network (CNN)-supported scheme to detect MS lesions in chosen brain MRI slices. The stages of this framework are (i) image collection and resizing, (ii) deep feature mining, (iii) hand-crafted feature mining, (iv) feature optimization with the firefly algorithm, and (v) serial feature integration and classification. In this work, five-fold cross-validation is executed, and the final result is considered for assessment. Brain MRI slices with and without the skull section are examined separately, and the attained results are presented. The experimental outcome of this study confirms that VGG16 with a random forest (RF) classifier offered a classification accuracy of >98% on MRI with the skull, and VGG16 with K-nearest neighbors (KNN) provided an accuracy of >98% without the skull.
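The five-fold cross-validation protocol used for assessment can be sketched by partitioning sample indices into folds; this is a generic illustration, not the paper's code:

```python
def k_fold_indices(n, k):
    """Partition sample indices 0..n-1 into k disjoint test folds;
    each fold's training set is everything outside that fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        held_out = set(test)
        train = [j for j in range(n) if j not in held_out]
        splits.append((train, test))
    return splits

# 10 samples, 5 folds: each split trains on 8 samples and tests on 2
splits = k_fold_indices(10, 5)
for train, test in splits:
    assert len(train) == 8 and len(test) == 2
print(splits[0][1])   # first test fold
```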
11
Hossain A, Islam MT, Abdul Rahim SK, Rahman MA, Rahman T, Arshad H, Khandakar A, Ayari MA, Chowdhury MEH. A Lightweight Deep Learning Based Microwave Brain Image Network Model for Brain Tumor Classification Using Reconstructed Microwave Brain (RMB) Images. Biosensors (Basel) 2023;13:238. PMID: 36832004. PMCID: PMC9954219. DOI: 10.3390/bios13020238.
Abstract
Computerized brain tumor classification from reconstructed microwave brain (RMB) images is important for the examination and observation of the development of brain disease. In this paper, an eight-layered lightweight classifier model called microwave brain image network (MBINet), using a self-organized operational neural network (Self-ONN), is proposed to classify RMB images into six classes. Initially, an experimental antenna sensor-based microwave brain imaging (SMBI) system was implemented, and RMB images were collected to create an image dataset. It consists of a total of 1320 images: 300 for the non-tumor class, 215 each for the single malignant and single benign tumor classes, 200 each for the double benign and double malignant tumor classes, and 190 for the combined single benign and single malignant tumor class. Image resizing and normalization techniques were then used for preprocessing, and augmentation techniques were applied to the dataset to produce 13,200 training images per fold for 5-fold cross-validation. The MBINet model was trained and achieved accuracy, precision, recall, F1-score, and specificity of 96.97%, 96.93%, 96.85%, 96.83%, and 97.95%, respectively, for six-class classification on the original RMB images. MBINet was compared with four Self-ONNs, two vanilla CNNs, and pre-trained ResNet50, ResNet101, and DenseNet201 models, and showed better classification outcomes (almost 98%). Therefore, the MBINet model can reliably classify tumor(s) from RMB images in the SMBI system.
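The reported fold sizes are internally consistent: with 1320 images under 5-fold cross-validation, each training split holds 1056 original images, so reaching 13,200 augmented training images implies a 12.5x augmentation factor. A quick arithmetic check:

```python
# Dataset figures reported in the abstract
total_images = 1320
folds = 5

train_per_fold = total_images * (folds - 1) // folds  # 4/5 of the data per fold
augmented_per_fold = 13200                            # reported augmented size
augment_factor = augmented_per_fold / train_per_fold  # implied multiplier

print(train_per_fold, augment_factor)
```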
Collapse
Affiliation(s)
- Amran Hossain
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Department of Computer Science and Engineering, Dhaka University of Engineering and Technology, Gazipur, Gazipur 1707, Bangladesh
| | - Mohammad Tariqul Islam
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
| | | | - Md Atiqur Rahman
- Centre for Advanced Electronic and Communication Engineering, Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
| | - Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Haslina Arshad
- Institute of IR4.0, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia
| | - Amit Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Mohamed Arslane Ayari
- Department of Civil and Architectural Engineering, Qatar University, Doha 2713, Qatar
| | | |
Collapse
12
Nazir K, Mustafa Madni T, Iqbal Janjua U, Javed U, Attique Khan M, Tariq U, Cha JH. 3D Kronecker Convolutional Feature Pyramid for Brain Tumor Semantic Segmentation in MR Imaging. Computers, Materials & Continua 2023; 76:2861-2877. [DOI: 10.32604/cmc.2023.039181] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 01/13/2023] [Accepted: 04/10/2023] [Indexed: 08/25/2024]
13
Cao K, Deng T, Zhang C, Lu L, Li L. A CNN-transformer fusion network for COVID-19 CXR image classification. PLoS One 2022; 17:e0276758. [PMID: 36301907 PMCID: PMC9612494 DOI: 10.1371/journal.pone.0276758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/30/2022] [Accepted: 10/12/2022] [Indexed: 11/04/2022]
Abstract
The global health crisis caused by the rapid spread of coronavirus disease (COVID-19) has severely affected healthcare, the economy, and many other aspects of society. The highly infectious and insidious nature of the novel coronavirus greatly increases the difficulty of outbreak prevention and control, and early, rapid detection of COVID-19 is an effective way to reduce its spread. However, detecting COVID-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-rays. The framework comprises two parts: data processing and image classification. The data processing stage eliminates differences between data from different medical institutions so that they share the same storage format. The image classification stage uses a multi-branch network with a custom convolution module and a transformer module, comprising feature extraction, feature focus, and feature classification sub-networks. The feature extraction sub-networks extract shallow image features, with information exchanged between the convolution and transformer modules. Both local and global features are extracted by the convolution and transformer modules of the feature-focus sub-networks and are classified by the feature classification sub-networks. The proposed network can decide whether a patient has pneumonia and differentiate between COVID-19 and bacterial pneumonia. Implemented on the collected benchmark datasets, the network achieved accuracy, precision, recall, and F1-score of 97.09%, 97.16%, 96.93%, and 97.04%, respectively, and outperformed other researchers' proposed methods in accuracy, precision, and F1-score, demonstrating its suitability for COVID-19 detection. With further improvements, we hope this network will provide doctors with an effective tool for diagnosing COVID-19.
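The two-branch idea in this abstract, a convolution module for local features and a transformer (self-attention) module for global ones, fused before classification, can be illustrated with a minimal NumPy sketch. The array sizes, single-head attention, and fusion by concatenation are assumptions for illustration, not the authors' architecture:

```python
import numpy as np

def conv_branch(x, kernel):
    # Local features: valid 2-D cross-correlation with a small kernel.
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def attention_branch(tokens):
    # Global features: single-head scaled dot-product self-attention.
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ tokens

rng = np.random.default_rng(1)
x = rng.random((8, 8))                          # stand-in for a CXR feature map
local = conv_branch(x, rng.random((3, 3)))      # (6, 6) local feature map
tokens = x.reshape(4, 16)                       # 4 patch tokens of dimension 16
global_feat = attention_branch(tokens)          # (4, 16) globally mixed tokens
fused = np.concatenate([local.ravel(), global_feat.ravel()])
print(fused.shape)                              # fused vector for a classifier head
```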
Affiliation(s)
- Kai Cao
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Tao Deng
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Key Laboratory of Streaming Data Computing Technologies and Application, Northwest Minzu University, Lanzhou, Gansu, China
- Chuanlin Zhang
- School of Mathematics and Computer Science, Northwest Minzu University, Lanzhou, Gansu, China
- Limeng Lu
- Key Laboratory of China’s Ethnic Languages and Information Technology of Ministry of Education, Northwest Minzu University, Lanzhou, Gansu, China
- Lin Li
- School of Computing, University of Leeds, Leeds, United Kingdom
14
Mohamed EA, Gaber T, Karam O, Rashed EA. A Novel CNN pooling layer for breast cancer segmentation and classification from thermograms. PLoS One 2022; 17:e0276523. [PMID: 36269756 PMCID: PMC9586394 DOI: 10.1371/journal.pone.0276523] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 08/19/2022] [Accepted: 10/10/2022] [Indexed: 11/06/2022]
Abstract
Breast cancer is the second most frequent cancer worldwide after lung cancer, the fifth leading cause of cancer death overall, and a major cause of cancer death among women. In recent years, convolutional neural networks (CNNs) have been successfully applied to the diagnosis of breast cancer using different imaging modalities. Pooling is a core data processing step in CNNs that reduces the dimensionality of feature maps without losing major patterns, yet the effect of the pooling layer has not been studied extensively in the literature. In this paper, we propose a novel design for the pooling layer called the vector pooling block (VPB) for CNNs. The proposed VPB consists of two data pathways that extract features along horizontal and vertical orientations. Unlike the traditional pooling layer, which gathers features with a fixed square kernel, the VPB's long, narrow pooling kernels enable CNNs to collect both global and local features. Based on the VPB, we also propose a pooling module called AVG-MAX VPB, which collects informative features using two pooling techniques: maximum and average pooling. The VPB and AVG-MAX VPB are plugged into backbone CNNs such as U-Net, AlexNet, ResNet18, and GoogLeNet to demonstrate their advantages in segmentation and classification tasks for breast cancer diagnosis from thermograms. The proposed pooling layers were evaluated on a benchmark thermogram database (DMR-IR) against U-Net as a baseline. The U-Net baseline results were: global accuracy = 96.6%, mean accuracy = 96.5%, mean IoU = 92.07%, and mean BF score = 78.34%. The VPB-based results were: global accuracy = 98.3%, mean accuracy = 97.9%, mean IoU = 95.87%, and mean BF score = 88.68%, while the AVG-MAX VPB-based results were: global accuracy = 99.2%, mean accuracy = 98.97%, mean IoU = 98.03%, and mean BF score = 94.29%. Other network architectures also showed clear improvements when using the VPB and AVG-MAX VPB.
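One plausible reading of the pooling ideas described here, two pathways with long, narrow max-pooling kernels along horizontal and vertical orientations, plus an AVG-MAX variant combining average and max pooling, can be sketched in NumPy. The kernel lengths and the fusion rule (simple averaging of the two pathways) are assumptions, not the authors' exact design:

```python
import numpy as np

def vector_pool(fmap, k=4):
    # Two pathways with long, narrow kernels: (1 x k) along rows, (k x 1) along
    # columns; each is then reduced to a common (h//k, w//k) grid and averaged.
    h, w = fmap.shape
    horiz = fmap.reshape(h, w // k, k).max(axis=2)    # 1 x k kernels
    vert = fmap.reshape(h // k, k, w).max(axis=1)     # k x 1 kernels
    horiz = horiz.reshape(h // k, k, w // k).max(axis=1)
    vert = vert.reshape(h // k, w // k, k).max(axis=2)
    return (horiz + vert) / 2

def avg_max_pool(fmap, k=2):
    # AVG-MAX fusion: mean of average pooling and max pooling over k x k windows.
    h, w = fmap.shape
    blocks = fmap.reshape(h // k, k, w // k, k)
    return (blocks.mean(axis=(1, 3)) + blocks.max(axis=(1, 3))) / 2

x = np.arange(64, dtype=float).reshape(8, 8)   # toy 8 x 8 feature map
pooled_v = vector_pool(x)                      # (2, 2) output
pooled_am = avg_max_pool(x)                    # (4, 4) output
```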
Affiliation(s)
- Esraa A. Mohamed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Tarek Gaber
- Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt
- School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Omar Karam
- Faculty of Informatics and Computer Science, British University in Egypt (BUE), Cairo, Egypt
- Essam A. Rashed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Graduate School of Information Science, University of Hyogo, Kobe, Japan
15
Aamir S, Rahim A, Aamir Z, Abbasi SF, Khan MS, Alhaisoni M, Khan MA, Khan K, Ahmad J. Predicting Breast Cancer Leveraging Supervised Machine Learning Techniques. Computational and Mathematical Methods in Medicine 2022; 2022:5869529. [PMID: 36017156 PMCID: PMC9398810 DOI: 10.1155/2022/5869529] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 06/15/2022] [Accepted: 07/28/2022] [Indexed: 02/08/2023]
Abstract
Breast cancer is one of the leading causes of death among women worldwide. The complex nature (microcalcifications and masses) of breast cancer cells makes it quite difficult for radiologists to diagnose properly. Various computer-aided diagnosis (CAD) systems have therefore been developed and are used to aid radiologists in diagnosing cancer cells. However, given the intrinsic risks of delayed and/or incorrect diagnosis, it is indispensable to keep improving these diagnostic systems, and machine learning has recently been playing a significant role in the early and precise detection of breast cancer. This paper presents a new machine learning-based framework that uses Random Forest, Gradient Boosting, Support Vector Machine, Artificial Neural Network, and Multilayer Perceptron approaches to efficiently predict breast cancer from patient data. For this purpose, the Wisconsin Diagnostic Breast Cancer (WDBC) dataset was classified using a hybrid Multilayer Perceptron (MLP) model with a 5-fold cross-validation framework as a working prototype. To improve classification, a connection-based feature selection technique that also eliminates recursive features was applied. The proposed framework was validated on two further datasets, the Wisconsin Prognostic Breast Cancer (WPBC) and Wisconsin Original Breast Cancer (WOBC) datasets, and achieved an improved accuracy of 99.12% owing to efficient data preprocessing and feature selection applied to the input data.
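As a hedged sketch of the kind of pipeline described here, the snippet below classifies the WDBC data (bundled with scikit-learn) using feature selection, an MLP, and 5-fold cross-validation. `SelectKBest` stands in for the paper's connection-based feature selection, and the MLP size is an arbitrary choice:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# WDBC: 569 samples, 30 features, binary labels (malignant / benign).
X, y = load_breast_cancer(return_X_y=True)

# Scale, keep the 15 most informative features, then fit a small MLP.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.4f}")
```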
Affiliation(s)
- Sanam Aamir
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad 44000, Pakistan
- Aqsa Rahim
- Faculty of Science and Technology, University of Tromsø, Tromso, Norway
- Zain Aamir
- Department of Data Science, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
- Saadullah Farooq Abbasi
- Department of Electrical Engineering, National University of Technology, Islamabad 44000, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Muhammad Attique Khan
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Khyber Khan
- Department of Computer Science, Khurasan University, Jalalabad, Afghanistan
- Jawad Ahmad
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK