1. Chandrasekaran S, Aarathi S, Alqhatani A, Khan SB, Quasim MT, Basheer S. Improving healthcare sustainability using advanced brain simulations using a multi-modal deep learning strategy with VGG19 and bidirectional LSTM. Front Med (Lausanne) 2025; 12:1574428. PMID: 40276738; PMCID: PMC12020513; DOI: 10.3389/fmed.2025.1574428.
Abstract
Background: Brain tumor categorization on MRI is a challenging but crucial task in medical imaging, requiring high resilience and accuracy for effective diagnostic applications. This study describes a multimodal scheme that combines deep learning with ensemble learning to address these issues. Methods: The system integrates three components: spatial feature extraction using a pre-trained VGG19 network, sequential dependency learning using a bidirectional LSTM, and efficient classification through a LightGBM classifier. Results: The combination of these methods leverages the complementary strengths of convolutional and recurrent neural networks, enabling the model to achieve state-of-the-art performance. The results confirm the efficacy of this multimodal approach, which achieves an overall accuracy of 97%, an F1-score of 0.97, and a ROC AUC of 0.997. Conclusion: By harnessing spatial and sequential features together, the model improves classification rates and handles high-dimensional data more effectively than traditional single-modal methods. The scalable methodology has the potential to substantially improve brain tumor diagnosis and treatment planning in medical imaging studies.
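As a rough, hypothetical illustration of the kind of pipeline this abstract describes (not the authors' code), the sketch below chains a frozen pre-trained VGG19 feature extractor, a bidirectional LSTM that reads the 7x7 feature grid as a 49-step sequence, and a LightGBM classifier; the layer sizes, the reshape step, and the placeholder arrays are assumptions.

```python
# Hypothetical sketch: VGG19 spatial features -> BiLSTM sequence encoder -> LightGBM classifier.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19
import lightgbm as lgb

def build_feature_extractor(input_shape=(224, 224, 3)):
    # Frozen VGG19 backbone; its 7x7x512 output is read as a 49-step sequence of 512-d vectors.
    base = VGG19(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False
    x = layers.Reshape((49, 512))(base.output)        # spatial grid -> sequence
    x = layers.Bidirectional(layers.LSTM(128))(x)     # sequential dependencies (assumed width)
    return Model(base.input, x, name="vgg19_bilstm")

extractor = build_feature_extractor()

# Placeholder arrays standing in for MRI slices resized to 224x224x3 and integer tumor labels.
X_train = np.random.rand(8, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, 4, size=8)
X_val = np.random.rand(4, 224, 224, 3).astype("float32")
y_val = np.random.randint(0, 4, size=4)

F_train = extractor.predict(X_train, verbose=0)
F_val = extractor.predict(X_val, verbose=0)

clf = lgb.LGBMClassifier(n_estimators=200)            # final classification stage
clf.fit(F_train, y_train)
print("validation accuracy:", clf.score(F_val, y_val))
```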
Affiliation(s)
- Saravanan Chandrasekaran
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- S. Aarathi
- Department of Computer Science and Engineering (Data Science), Dayananda Sagar College of Engineering, Bangalore, India
- Abdulmajeed Alqhatani
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran, Saudi Arabia
- Surbhi Bhatia Khan
- School of Science, Engineering, and Environment, University of Salford, Salford, United Kingdom
- University Centre for Research and Development, Chandigarh University, Mohali, India
- Centre for Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, India
- Mohammad Tabrez Quasim
- Department of Computer Science and Artificial Intelligence, College of Computing and Information Technology, University of Bisha, Bisha, Saudi Arabia
- Shakila Basheer
- Department of Information Systems, College of Computer and Information Science, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
2. Durairaj V, Uthirapathy P. Interactive Multi-scale Fusion: Advancing Brain Tumor Detection Through Trans-IMSM Model. Journal of Imaging Informatics in Medicine 2025; 38:757-774. PMID: 39147889; PMCID: PMC11950544; DOI: 10.1007/s10278-024-01222-7.
Abstract
Multi-modal medical image (MI) fusion generates composite images that combine complementary features from images acquired under different conditions, helping physicians diagnose disease accurately. Hence, this research proposes a novel multi-modal MI fusion model, the guided filter-based interactive multi-scale and multi-modal transformer (Trans-IMSM) fusion approach, to produce high-quality computed tomography-magnetic resonance imaging (CT-MRI) fused images for brain tumor detection. The CT and MRI brain scan dataset is used to gather the input CT and MRI images. First, data preprocessing is carried out to improve image quality and generalization ability for further analysis. The preprocessed CT and MRI images are then decomposed into detail and base components using the guided filter-based MI decomposition approach, which involves two phases: acquiring the image guidance and decomposing the images using the guided filter. A Canny operator is employed to obtain image guidance comprising robust edges for the CT and MRI images, and the guided filter is applied to decompose the guidance and preprocessed images. The detail components are then fused by applying the Trans-IMSM model, while a weighting approach is used for the base components. The fused detail and base components are subsequently processed through a gated fusion and reconstruction network, and the final fused images for brain tumor detection are generated. Extensive tests are carried out to evaluate the Trans-IMSM method's efficacy. The evaluation results demonstrate its robustness and effectiveness, achieving an accuracy of 98.64% and an SSIM of 0.94.
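The guided-filter decomposition step can be sketched with OpenCV's contrib module; this is an illustrative approximation only, and the Canny-based guidance blend, filter radius, epsilon, and file names are assumed rather than taken from the paper.

```python
# Illustrative sketch: guided-filter base/detail decomposition with a Canny edge guide
# (parameters and file names are assumptions; requires opencv-contrib-python).
import cv2
import numpy as np

def decompose(img_gray, radius=8, eps=1e-2):
    img = img_gray.astype(np.float32) / 255.0
    # Edge guidance: Canny edges blended with the image so strong structures steer the filter.
    edges = cv2.Canny(img_gray, 50, 150).astype(np.float32) / 255.0
    guide = 0.5 * img + 0.5 * edges
    base = cv2.ximgproc.guidedFilter(guide, img, radius, eps)  # base layer
    detail = img - base                                        # detail layer is the residual
    return base, detail

ct = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
mri = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)
ct_base, ct_detail = decompose(ct)
mri_base, mri_detail = decompose(mri)
# Base layers could be merged with a simple weighting; fusing the detail layers
# would be the transformer (Trans-IMSM) part, which is not reproduced here.
fused_base = 0.5 * ct_base + 0.5 * mri_base
```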
Affiliation(s)
- Vasanthi Durairaj
- Department of ECE, IFET College of Engineering, Villupuram, Tamil Nadu, India
- Palani Uthirapathy
- Department of ECE, IFET College of Engineering, Villupuram, Tamil Nadu, India
3. G V, Rani VV, Ponnada S, S J. A hybrid EfficientNet-DbneAlexnet for brain tumor detection using MRI images. Comput Biol Chem 2025; 115:108279. PMID: 39631224; DOI: 10.1016/j.compbiolchem.2024.108279.
Abstract
The rapid growth of abnormal cells in the brain poses a serious risk to human health, as it can result in death. Because these tumors vary widely in shape, size, and position, identifying Brain Tumors (BTs) is challenging. Magnetic Resonance Imaging (MRI) is the modality most widely used to identify malignant tumors. This paper develops a new approach, named EfficientNet-Deep batch normalized eLUAlexnet (EfficientNet-DbneAlexnet), for detecting BTs. First, the input MRI image is passed to image enhancement, where it is enhanced by a Piecewise Linear Transformation (PLT). Skull stripping is then carried out using Fuzzy Local Information C Means (FLICM). Following this, the tumor area in the image is segmented with the help of a Projective Adversarial Network (PAN). The segmented image is then passed to the feature extraction module, where textural and statistical features are extracted. Finally, BT detection is accomplished using the developed EfficientNet-DbneAlexnet, created by combining EfficientNet and Deep batch normalized eLUAlexnet (DbneAlexnet). The results demonstrate that EfficientNet-DbneAlexnet obtained a sensitivity of 90.36%, an accuracy of 92.77%, and a specificity of 91.82%.
Affiliation(s)
- Vasavi G
- Department of CSE (Cyber Security), School of Engineering, Malla Reddy University, Hyderabad, India
- Vaddadi Vasudha Rani
- Department of Information Technology, GMR Institute of Technology, Rajam, Andhra Pradesh, India
- Sreenu Ponnada
- Department of Computer Science and Engineering, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh 534202, India
- Jyothi S
- Department of Computer Science, Sri Padmavathi Mahila Visvavidyalayam, Tirupati, India
4. Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2025; 43:730-741. PMID: 37979152; DOI: 10.1080/07391102.2023.2283164.
Abstract
There has been an abrupt increase in brain tumor (BT)-related medical cases during the past ten years. BTs are the tenth most common type of tumor and affect millions of people; the cure rate can rise, however, if they are found early. MRI is a crucial tool when evaluating BT diagnosis and treatment options, but segmenting tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to numerous automatic segmentation and classification approaches, yet most need improvement because they are limited to 2D images. This article therefore proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. Tumor segmentation uses a spatial and channel attention-based three-dimensional u-shaped network (SC3DUNet) to segment tumor lesions from the preprocessed data. Feature extraction is then performed with a dilated convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using a diagonal linear uniform and tangent flight included butterfly optimization algorithm. Finally, the proposed system applies an optimal hyperparameters-based deep neural network to classify the tumor classes. Experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than existing state-of-the-art mechanisms.
Affiliation(s)
- Parvathy Jyothi
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- S Dhanasekaran
- Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
5. Akgüller Ö, Balcı MA, Cioca G. Information Geometry and Manifold Learning: A Novel Framework for Analyzing Alzheimer's Disease MRI Data. Diagnostics (Basel) 2025; 15:153. PMID: 39857036; PMCID: PMC11763731; DOI: 10.3390/diagnostics15020153.
Abstract
Background: Alzheimer's disease is a progressive neurological condition marked by a decline in cognitive abilities. Early diagnosis is crucial but challenging due to overlapping symptoms among impairment stages, necessitating non-invasive, reliable diagnostic tools. Methods: We applied information geometry and manifold learning to analyze grayscale MRI scans classified into No Impairment, Very Mild, Mild, and Moderate Impairment. Preprocessed images were reduced via Principal Component Analysis (retaining 95% variance) and converted into statistical manifolds using estimated mean vectors and covariance matrices. Geodesic distances, computed with the Fisher Information metric, quantified class differences. Graph Neural Networks, including Graph Convolutional Networks (GCN), Graph Attention Networks (GAT), and GraphSAGE, were utilized to categorize impairment levels using graph-based representations of the MRI data. Results: Significant differences in covariance structures were observed, with increased variability and stronger feature correlations at higher impairment levels. Geodesic distances between No Impairment and Mild Impairment (58.68, p<0.001) and between Mild and Moderate Impairment (58.28, p<0.001) are statistically significant. GCN and GraphSAGE achieve perfect classification accuracy (precision, recall, F1-Score: 1.0), correctly identifying all instances across classes. GAT attains an overall accuracy of 59.61%, with variable performance across classes. Conclusions: Integrating information geometry, manifold learning, and GNNs effectively differentiates AD impairment stages from MRI data. The strong performance of GCN and GraphSAGE indicates their potential to assist clinicians in the early identification and tracking of Alzheimer's disease progression.
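A minimal sketch of the dimensionality-reduction and statistical-manifold step (PCA retaining 95% variance, then a per-class mean vector and covariance matrix) is given below; the Fisher-metric geodesic computation and the GNN stage are omitted, and the arrays and label coding are placeholders.

```python
# Minimal sketch of the PCA + statistical-manifold step described above (GNN stage omitted).
import numpy as np
from sklearn.decomposition import PCA

# Placeholders: flattened grayscale MRI scans (n_samples x n_pixels) and impairment labels.
rng = np.random.default_rng(0)
X = rng.random((200, 4096))
y = rng.integers(0, 4, size=200)   # 0=None, 1=Very Mild, 2=Mild, 3=Moderate (assumed coding)

pca = PCA(n_components=0.95)       # keep enough components to retain 95% of the variance
Z = pca.fit_transform(X)

# Each impairment class is summarized as a Gaussian on the reduced space:
# a mean vector and a covariance matrix, i.e. a point on a statistical manifold.
manifold_points = {}
for c in np.unique(y):
    Zc = Z[y == c]
    manifold_points[c] = (Zc.mean(axis=0), np.cov(Zc, rowvar=False))

for c, (mu, cov) in manifold_points.items():
    print(f"class {c}: mean dim {mu.shape[0]}, covariance shape {cov.shape}")
```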
Affiliation(s)
- Ömer Akgüller
- Department of Mathematics, Faculty of Science, Mugla Sitki Kocman University, Muğla 48000, Turkey
- Mehmet Ali Balcı
- Department of Mathematics, Faculty of Science, Mugla Sitki Kocman University, Muğla 48000, Turkey
- Gabriela Cioca
- Preclinical Department, Faculty of Medicine, Lucian Blaga University of Sibiu, 550024 Sibiu, Romania
6. Disci R, Gurcan F, Soylu A. Advanced Brain Tumor Classification in MR Images Using Transfer Learning and Pre-Trained Deep CNN Models. Cancers (Basel) 2025; 17:121. PMID: 39796749; PMCID: PMC11719945; DOI: 10.3390/cancers17010121.
Abstract
BACKGROUND/OBJECTIVES Brain tumor classification is a crucial task in medical diagnostics, as early and accurate detection can significantly improve patient outcomes. This study investigates the effectiveness of pre-trained deep learning models in classifying brain MRI images into four categories: Glioma, Meningioma, Pituitary, and No Tumor, aiming to enhance the diagnostic process through automation. METHODS A publicly available Brain Tumor MRI dataset containing 7023 images was used in this research. The study employs state-of-the-art pre-trained models, including Xception, MobileNetV2, InceptionV3, ResNet50, VGG16, and DenseNet121, which are fine-tuned using transfer learning, in combination with advanced preprocessing and data augmentation techniques. Transfer learning was applied to fine-tune the models and optimize classification accuracy while minimizing computational requirements, ensuring efficiency in real-world applications. RESULTS Among the tested models, Xception emerged as the top performer, achieving a weighted accuracy of 98.73% and a weighted F1 score of 95.29%, demonstrating exceptional generalization capabilities. These models proved particularly effective in addressing class imbalances and delivering consistent performance across various evaluation metrics, thus demonstrating their suitability for clinical adoption. However, challenges persist in improving recall for the Glioma and Meningioma categories, and the black-box nature of deep learning models requires further attention to enhance interpretability and trust in medical settings. CONCLUSIONS The findings underscore the transformative potential of deep learning in medical imaging, offering a pathway toward more reliable, scalable, and efficient diagnostic tools. Future research will focus on expanding dataset diversity, improving model explainability, and validating model performance in real-world clinical settings to support the widespread adoption of AI-driven systems in healthcare and ensure their integration into clinical workflows.
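A hedged sketch of a transfer-learning setup of this kind, using the Xception backbone, follows; the classification head, augmentation, number of unfrozen layers, and learning rates are assumptions, not the authors' configuration.

```python
# Sketch of transfer learning with a pre-trained Xception backbone for 4-class brain MRI.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception

NUM_CLASSES = 4  # glioma, meningioma, pituitary, no tumor

base = Xception(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                       # stage 1: train only the new head

inputs = tf.keras.Input(shape=(299, 299, 3))
x = layers.RandomFlip("horizontal")(inputs)  # light augmentation (assumed)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown

# Stage 2: unfreeze the top of the backbone and fine-tune with a small learning rate.
base.trainable = True
for layer in base.layers[:-30]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```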
Affiliation(s)
- Rukiye Disci
- Department of Management Information Systems, Faculty of Economics and Administrative Sciences, Karadeniz Technical University, 61080 Trabzon, Turkey
- Fatih Gurcan
- Department of Management Information Systems, Faculty of Economics and Administrative Sciences, Karadeniz Technical University, 61080 Trabzon, Turkey
- Ahmet Soylu
- Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
7. Prasad R, Kumar Saxena A, Laha S. Prediction of Brain Cancer Occurrence and Risk Assessment of Brain Hemorrhage Using Hybrid Deep Learning Technique. Cancer Invest 2025; 43:80-102. PMID: 39629783; DOI: 10.1080/07357907.2024.2431829.
Abstract
The prediction of brain cancer occurrence and the risk assessment of brain hemorrhage using a hybrid deep learning (DL) technique is a critical area of research in medical imaging analysis. One prominent challenge in this field is the accurate identification and classification of brain tumors and hemorrhages, which can significantly impact patient prognosis and treatment planning. The objectives of the study address the prediction of brain cancer occurrence and the assessment of risk levels associated with brain cancers due to brain hemorrhage, using a diverse dataset of brain MRI and CT scan images. An Unsymmetrical Trimmed Median Filter with OPTICS clustering is used for noise removal while preserving edges and details, and the Chan-Vese segmentation process provides refined segmentation. Brain cancer detection is performed with a Multi-Head Self-Attention Dilated Convolution Neural Network (MH-SA-DCNN) combined with an EfficientNet model, which trains the algorithm to predict cancerous regions in brain images. A Graph-Based Deep Neural Network Model (G-DNN) is further implemented to capture spatial relationships and risk factors from brain images, and a Cox regression model estimates cancer risk over time. The model's parameters and features are fine-tuned and optimized using the Osprey optimization algorithm (OPA).
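Of the components listed, the Cox regression element is easy to sketch with the lifelines library; the covariate names below are invented placeholders standing in for image-derived risk factors, and the rest of the pipeline is not reproduced.

```python
# Sketch of the Cox-regression element only, using the lifelines library.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "tumor_volume": rng.random(100),                 # hypothetical image-derived covariates
    "hemorrhage_score": rng.random(100),
    "age": rng.integers(30, 85, size=100),
    "time_months": rng.integers(1, 60, size=100),    # follow-up time
    "event": rng.integers(0, 2, size=100),           # 1 = cancer occurrence observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()                                  # hazard ratios per covariate
risk = cph.predict_partial_hazard(df.head())         # relative risk scores for new cases
print(risk)
```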
Affiliation(s)
- Rajeshwar Prasad
- Scholar, Department of Computer Science and Information Technology, Guru Ghasidas Vishwavidyalaya (C.U.), Koni, Bilaspur, (C.G.), Chhattisgarh, India
- Amit Kumar Saxena
- Department of Computer Science and Information Technology, Guru Ghasidas Vishwavidyalaya (C.U.), Koni, Bilaspur, (C.G.), Chhattisgarh, India
- Suman Laha
- Scholar, Department of Computer and System Sciences, Visva-Bharati University, Santiniketan, W.B., India; Assistant Professor, Department of Computational Sciences, Brainware University, Barasat, Kolkata, W.B., India
8. Abas Mohamed Y, Ee Khoo B, Shahrimie Mohd Asaari M, Ezane Aziz M, Rahiman Ghazali F. Decoding the black box: Explainable AI (XAI) for cancer diagnosis, prognosis, and treatment planning-A state-of-the art systematic review. Int J Med Inform 2025; 193:105689. PMID: 39522406; DOI: 10.1016/j.ijmedinf.2024.105689.
Abstract
OBJECTIVE Explainable Artificial Intelligence (XAI) is increasingly recognized as a crucial tool in cancer care, with significant potential to enhance diagnosis, prognosis, and treatment planning. However, the holistic integration of XAI across all stages of cancer care remains underexplored. This review addresses this gap by systematically evaluating the role of XAI in these critical areas, identifying key challenges and emerging trends. MATERIALS AND METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across Scopus and Web of Science, focusing on publications from January 2020 to May 2024. After rigorous screening and quality assessment, 69 studies were selected for in-depth analysis. RESULTS The review identified critical gaps in the application of XAI within cancer care, notably the exclusion of clinicians in 83% of studies, which raises concerns about real-world applicability and may lead to explanations that are technically sound but clinically irrelevant. Additionally, 87% of studies lacked rigorous evaluation of XAI explanations, compromising their reliability in clinical practice. The dominance of post-hoc visual methods like SHAP, LIME and Grad-CAM reflects a trend toward explanations that may be inherently flawed due to specific input perturbations and simplifying assumptions. The lack of formal evaluation metrics and standardization constrains broader XAI adoption in clinical settings, creating a disconnect between AI development and clinical integration. Moreover, translating XAI insights into actionable clinical decisions remains challenging due to the absence of clear guidelines for integrating these tools into clinical workflows. CONCLUSION This review highlights the need for greater clinician involvement, standardized XAI evaluation metrics, clinician-centric interfaces, context-aware XAI systems, and frameworks for integrating XAI into clinical workflows for informed clinical decision-making and improved outcomes in cancer care.
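Among the post-hoc visual methods the review discusses (SHAP, LIME, Grad-CAM), Grad-CAM can be sketched directly with TensorFlow; the model handle and convolutional layer name below are placeholders.

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier (model and layer name are placeholders),
# illustrating the kind of post-hoc saliency method discussed above.
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=None):
    # Model that outputs both the chosen conv feature map and the predictions.
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)               # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pooled gradients
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam)                                # keep only positive influence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalized heatmap

# Usage (assumed names): heatmap = grad_cam(cnn, mri_slice, "block5_conv3")
```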
Affiliation(s)
- Yusuf Abas Mohamed
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Bee Ee Khoo
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Mohd Shahrimie Mohd Asaari
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Mohd Ezane Aziz
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
- Fattah Rahiman Ghazali
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
9. Berghout T. The Neural Frontier of Future Medical Imaging: A Review of Deep Learning for Brain Tumor Detection. J Imaging 2024; 11:2. PMID: 39852315; PMCID: PMC11766058; DOI: 10.3390/jimaging11010002.
Abstract
Brain tumor detection is crucial in medical research due to high mortality rates and treatment challenges. Early and accurate diagnosis is vital for improving patient outcomes; however, traditional methods, such as manual Magnetic Resonance Imaging (MRI) analysis, are often time-consuming and error-prone. The rise of deep learning has led to advanced models for automated brain tumor feature extraction, segmentation, and classification. Despite these advancements, comprehensive reviews synthesizing recent findings remain scarce. By analyzing over 100 research papers from the past half-decade (2019-2024), this review fills that gap, exploring the latest methods and paradigms, summarizing key concepts, challenges, and datasets, and offering insights into future directions for brain tumor detection using deep learning. The review also incorporates an analysis of previous reviews and targets three main aspects: feature extraction, segmentation, and classification. The results reveal that research primarily focuses on Convolutional Neural Networks (CNNs) and their variants, with a strong emphasis on transfer learning using pre-trained models. Other methods, such as Generative Adversarial Networks (GANs) and Autoencoders, are used for feature extraction, while Recurrent Neural Networks (RNNs) are employed for time-sequence modeling. Some models integrate with Internet of Things (IoT) frameworks or federated learning for real-time diagnostics and privacy, often paired with optimization algorithms. However, the adoption of eXplainable AI (XAI) remains limited, despite its importance in building trust in medical diagnostics. Finally, this review outlines future opportunities, focusing on image quality, underexplored deep learning techniques, expanding datasets, and exploring deeper learning representations and model behavior, such as recurrent expansion, to advance medical imaging diagnostics.
Affiliation(s)
- Tarek Berghout
- Laboratory of Automation and Manufacturing Engineering, Department of Industrial Engineering, Batna 2 University, Batna 05000, Algeria
10. Abualnaja SY, Morris JS, Rashid H, Cook WH, Helmy AE. Machine learning for predicting post-operative outcomes in meningiomas: a systematic review and meta-analysis. Acta Neurochir (Wien) 2024; 166:505. PMID: 39688716; PMCID: PMC11652405; DOI: 10.1007/s00701-024-06344-z.
Abstract
PURPOSE Meningiomas are the most common primary brain tumour and account for over one-third of cases. Traditionally, estimations of morbidity and mortality following surgical resection have depended on subjective assessments of various factors, including tumour volume, location, WHO grade, extent of resection (Simpson grade) and pre-existing co-morbidities, an approach fraught with subjective variability. This systematic review and meta-analysis seeks to evaluate the efficacy with which machine learning (ML) algorithms predict post-operative outcomes in meningioma patients. METHODS A literature search was conducted in December 2023 by two independent reviewers through PubMed, DARE, Cochrane Library and SCOPUS electronic databases. Random-effects meta-analysis was conducted. RESULTS Systematic searches yielded 32 studies, comprising 142,459 patients and 139,043 meningiomas. Random-effects meta-analysis sought to generate restricted maximum-likelihood estimates for the accuracy of alternate ML algorithms in predicting several postoperative outcomes. ML models incorporating both clinical and radiomic data significantly outperformed models utilizing either data type alone as well as traditional methods. Pooled estimates for the AUCs achieved by different ML algorithms ranged from 0.74-0.81 in the prediction of overall survival and progression-/recurrence-free survival, with ensemble classifiers demonstrating particular promise for future clinical application. Additionally, current ML models may exhibit a bias in predictive accuracy towards female patients, presumably due to the higher prevalence of meningiomas in females. CONCLUSION This review underscores the potential of ML to improve the accuracy of prognoses for meningioma patients and provides insight into which model classes offer the greatest potential for predicting survival outcomes. However, future research will have to directly compare standardized ML methodologies to traditional approaches in large-scale, prospective studies, before their clinical utility can be confidently validated.
Affiliation(s)
- William H Cook
- Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Adel E Helmy
- Division of Neurosurgery, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
11. Ragab M, Katib I, Sharaf SA, Alterazi HA, Subahi A, Alattas SG, Binyamin SS, Alyami J. Automated brain tumor recognition using equilibrium optimizer with deep learning approach on MRI images. Sci Rep 2024; 14:29448. PMID: 39604452; PMCID: PMC11603070; DOI: 10.1038/s41598-024-80888-z.
Abstract
Brain tumours (BTs) affect human health owing to their location. Artificial intelligence (AI) is intended to assist in diagnosing and treating complex diseases by combining technologies such as deep learning (DL), big data analytics, and machine learning (ML). AI can identify and categorize tumours by analyzing brain imaging such as Magnetic Resonance Imaging (MRI). The medical sector has been rapidly reshaped by evolving technology, and AI is an essential element of this transformation. An AI model can determine a tumour's class, size, aggressiveness, and location, which assists doctors in making more exact diagnoses and treatment plans and helps patients better understand their health. AI is also used to track patients' progress through treatment, and AI-based analytics is used to predict potential tumour recurrence and assess treatment response. This study presents a Brain Tumor Recognition using an Equilibrium Optimizer with a Deep Learning Approach (BTR-EODLA) technique for MRI images. The BTR-EODLA technique is intended to recognize whether or not a BT is present. In the BTR-EODLA technique, median filtering (MF) is deployed to eliminate noise in the input MRI. In addition, the squeeze-excitation ResNet (SE-ResNet50) model is applied to derive feature vectors, and its parameters are fine-tuned by the EO model. The BTR-EODLA technique utilizes a stacked autoencoder (SAE) model for BT detection. A series of experiments was performed to verify the improved performance of the BTR-EODLA technique; the experimental validation showed a superior accuracy of 98.78% over existing models.
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Sanaa A Sharaf
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Hassan A Alterazi
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Alanoud Subahi
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Rabigh, 25732, Saudi Arabia
- Sana G Alattas
- Biological Sciences Department, College of Science, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Sami Saeed Binyamin
- Computer and Information Technology Department, The Applied College, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Jaber Alyami
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- King Fahd Medical Research Center, Smart Medical Imaging Research Group, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
12. Adamu MJ, Kawuwa HB, Qiang L, Nyatega CO, Younis A, Fahad M, Dauya SS. Efficient and Accurate Brain Tumor Classification Using Hybrid MobileNetV2-Support Vector Machine for Magnetic Resonance Imaging Diagnostics in Neoplasms. Brain Sci 2024; 14:1178. PMID: 39766377; PMCID: PMC11674380; DOI: 10.3390/brainsci14121178.
Abstract
BACKGROUND/OBJECTIVES Magnetic Resonance Imaging (MRI) plays a vital role in brain tumor diagnosis by providing clear visualization of soft tissues without the use of ionizing radiation. Given the increasing incidence of brain tumors, there is an urgent need for reliable diagnostic tools, as misdiagnoses can lead to harmful treatment decisions and poor outcomes. While machine learning has significantly advanced medical diagnostics, achieving both high accuracy and computational efficiency remains a critical challenge. METHODS This study proposes a hybrid model that integrates MobileNetV2 for feature extraction with a Support Vector Machine (SVM) classifier for the classification of brain tumors. The model was trained and validated using the Kaggle MRI brain tumor dataset, which includes 7023 images categorized into four types: glioma, meningioma, pituitary tumor, and no tumor. MobileNetV2's efficient architecture was leveraged for feature extraction, and SVM was used to enhance classification accuracy. RESULTS The proposed hybrid model showed excellent results, achieving Area Under the Curve (AUC) scores of 0.99 for glioma, 0.97 for meningioma, and 1.0 for both pituitary tumors and the no tumor class. These findings highlight that the MobileNetV2-SVM hybrid not only improves classification accuracy but also reduces computational overhead, making it suitable for broader clinical use. CONCLUSIONS The MobileNetV2-SVM hybrid model demonstrates substantial potential for enhancing brain tumor diagnostics by offering a balance of precision and computational efficiency. Its ability to maintain high accuracy while operating efficiently could lead to better outcomes in medical practice, particularly in resource limited settings.
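A minimal sketch of a MobileNetV2-feature-extraction plus SVM pipeline of the kind described follows; the image size, pooling choice, SVM kernel, and placeholder arrays are assumptions rather than the paper's exact settings.

```python
# Sketch: frozen MobileNetV2 as a feature extractor, SVM as the classifier.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Frozen MobileNetV2 with global average pooling gives a 1280-d feature vector per image.
backbone = MobileNetV2(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=(224, 224, 3))

def extract(images):
    return backbone.predict(preprocess_input(images.astype("float32")), verbose=0)

# Placeholder arrays standing in for the four-class MRI dataset (glioma, meningioma,
# pituitary tumor, no tumor).
X_train = np.random.randint(0, 255, size=(16, 224, 224, 3))
y_train = np.random.randint(0, 4, size=16)
X_test = np.random.randint(0, 255, size=(8, 224, 224, 3))
y_test = np.random.randint(0, 4, size=8)

svm = SVC(kernel="rbf", C=1.0)                     # classification stage
svm.fit(extract(X_train), y_train)
print("accuracy:", accuracy_score(y_test, svm.predict(extract(X_test))))
```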
Affiliation(s)
- Mohammed Jajere Adamu
- Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Department of Computer Science, Yobe State University, Damaturu 600213, Nigeria
- Center for Distance and Online Education, Lovely Professional University, Phagwara 144411, India
- Halima Bello Kawuwa
- Department of Biomedical Engineering, School of Precision Instruments and Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China
- Li Qiang
- Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Charles Okanda Nyatega
- Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Department of Electronics and Telecommunication Engineering, Mbeya University of Science and Technology, Mbeya P.O. Box 131, Tanzania
- Ayesha Younis
- Department of Electronic Science and Technology, School of Microelectronics, Tianjin University, Tianjin 300072, China
- Muhammad Fahad
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Salisu Samaila Dauya
- Department of Computer Science, Yobe State University, Damaturu 600213, Nigeria
13. M S, Bv B, D P, S AK, Mathivanan SK, Shah MA. Efficient brain tumor grade classification using ensemble deep learning models. BMC Med Imaging 2024; 24:297. PMID: 39487431; PMCID: PMC11529038; DOI: 10.1186/s12880-024-01476-1.
Abstract
Detecting brain tumors early is critical for effective treatment and life-saving efforts. MRI scans are fundamental to diagnosis because they provide detailed structural views of the brain, which is vital for identifying abnormalities. The alternative, an invasive biopsy, is painful and uncomfortable, whereas MRI involves no surgical intervention; this puts patients more at ease and speeds up the diagnostic procedure, allowing physicians to formulate and enact treatment plans more quickly. Locating a brain tumor manually is very difficult because MRI scans produce large numbers of three-dimensional images, so computerized diagnostic tools based on machine learning techniques and algorithms offer a strong possibility of highlighting areas of interest earlier. The aim of the present study was to develop a deep learning model for brain tumor grade classification (BTGC) that enhances the accuracy of diagnosing patients with different grades of brain tumors from MRI. A MobileNetV2 model was used to extract features from the images, which further increases the efficiency and generalizability of the model. Six standard Kaggle brain tumor MRI datasets were used to train and validate the developed brain tumor detection and classification model. The work consists of two key components: (i) brain tumor detection and (ii) classification of the tumor. Tumor classification is conducted both into three classes (meningioma, pituitary, and glioma) and into two classes (malignant, benign). The model detects brain tumors with 99.85% accuracy, distinguishes benign from malignant tumors with 99.87% accuracy, and classifies meningioma, pituitary, and glioma tumors with 99.38% accuracy. These results indicate that the described technique is useful for the detection and classification of brain tumors.
Affiliation(s)
- Sankar M
- Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
- Baiju Bv
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Preethi D
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- Ananda Kumar S
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Mohd Asif Shah
- Department of Economics, Kardan University, Parwane Du, Kabul, 1001, Afghanistan
- Centre for Research Impact & Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, 140401, Punjab, India
- Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
14. Rodríguez Mallma MJ, Zuloaga-Rotta L, Borja-Rosales R, Rodríguez Mallma JR, Vilca-Aguilar M, Salas-Ojeda M, Mauricio D. Explainable Machine Learning Models for Brain Diseases: Insights from a Systematic Review. Neurol Int 2024; 16:1285-1307. PMID: 39585057; PMCID: PMC11587041; DOI: 10.3390/neurolint16060098.
Abstract
In recent years, Artificial Intelligence (AI) methods, specifically Machine Learning (ML) models, have been providing outstanding results in different areas of knowledge, with the health area being one of its most impactful fields of application. However, to be applied reliably, these models must provide users with clear, simple, and transparent explanations about the medical decision-making process. This systematic review aims to investigate the use and application of explainability in ML models used in brain disease studies. A systematic search was conducted in three major bibliographic databases, Web of Science, Scopus, and PubMed, from January 2014 to December 2023. A total of 133 relevant studies were identified and analyzed out of a total of 682 found in the initial search, in which the explainability of ML models in the medical context was studied, identifying 11 ML models and 12 explainability techniques applied in the study of 20 brain diseases.
Affiliation(s)
- Mirko Jerber Rodríguez Mallma
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- Luis Zuloaga-Rotta
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- Rubén Borja-Rosales
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- Josef Renato Rodríguez Mallma
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
- María Salas-Ojeda
- Facultad de Artes y Humanidades, Universidad San Ignacio de Loyola, Lima 15024, Peru
- David Mauricio
- Facultad de Ingeniería de Sistemas e Informática, Universidad Nacional Mayor de San Marcos, Lima 15081, Peru
15. Saarela M, Podgorelec V. Recent Applications of Explainable AI (XAI): A Systematic Literature Review. Applied Sciences 2024; 14:8884. DOI: 10.3390/app14198884.
Abstract
This systematic literature review employs the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology to investigate recent applications of explainable AI (XAI) over the past three years. From an initial pool of 664 articles identified through the Web of Science database, 512 peer-reviewed journal articles met the inclusion criteria—namely, being recent, high-quality XAI application articles published in English—and were analyzed in detail. Both qualitative and quantitative statistical techniques were used to analyze the identified articles: qualitatively by summarizing the characteristics of the included studies based on predefined codes, and quantitatively through statistical analysis of the data. These articles were categorized according to their application domains, techniques, and evaluation methods. Health-related applications were particularly prevalent, with a strong focus on cancer diagnosis, COVID-19 management, and medical imaging. Other significant areas of application included environmental and agricultural management, industrial optimization, cybersecurity, finance, transportation, and entertainment. Additionally, emerging applications in law, education, and social care highlight XAI’s expanding impact. The review reveals a predominant use of local explanation methods, particularly SHAP and LIME, with SHAP being favored for its stability and mathematical guarantees. However, a critical gap in the evaluation of XAI results is identified, as most studies rely on anecdotal evidence or expert opinion rather than robust quantitative metrics. This underscores the urgent need for standardized evaluation frameworks to ensure the reliability and effectiveness of XAI applications. Future research should focus on developing comprehensive evaluation standards and improving the interpretability and stability of explanations. These advancements are essential for addressing the diverse demands of various application domains while ensuring trust and transparency in AI systems.
Affiliation(s)
- Mirka Saarela
- Faculty of Information Technology, University of Jyväskylä, P.O. Box 35, FI-40014 Jyväskylä, Finland
- Vili Podgorelec
- Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia
16. Sun J, Chen K, He Z, Ren S, He X, Liu X, Peng C. Medical image analysis using improved SAM-Med2D: segmentation and classification perspectives. BMC Med Imaging 2024; 24:241. PMID: 39285324; PMCID: PMC11403950; DOI: 10.1186/s12880-024-01401-6.
Abstract
The recently emerged SAM-Med2D represents a state-of-the-art advancement in medical image segmentation. By fine-tuning the large visual model Segment Anything Model (SAM) on extensive medical datasets, it has achieved impressive results in cross-modal medical image segmentation. However, its reliance on interactive prompts may restrict its applicability under specific conditions. To address this limitation, we introduce SAM-AutoMed, which achieves automatic segmentation of medical images by replacing the original prompt encoder with an improved MobileNet v3 backbone. Its performance on multiple datasets surpasses both SAM and SAM-Med2D. Current enhancements to the large visual model SAM lack applications in the field of medical image classification. Therefore, we introduce SAM-MedCls, which combines the encoder of SAM-Med2D with our designed attention modules to construct an end-to-end medical image classification model. It performs well on datasets of various modalities, even achieving state-of-the-art results, indicating its potential to become a universal model for medical image classification.
Affiliation(s)
- Jiakang Sun
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Ke Chen
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Zhiyi He
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Siyuan Ren
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Xinyang He
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Xu Liu
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
- Cheng Peng
- Chengdu Institute of Computer Application, Chinese Academy of Sciences, Chengdu, 610213, Sichuan, China
- School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing, 101499, China
17. Alshomrani F. A Unified Pipeline for Simultaneous Brain Tumor Classification and Segmentation Using Fine-Tuned CNN and Residual UNet Architecture. Life (Basel) 2024; 14:1143. PMID: 39337926; PMCID: PMC11433524; DOI: 10.3390/life14091143.
Abstract
In this paper, I present a comprehensive pipeline integrating a Fine-Tuned Convolutional Neural Network (FT-CNN) and a Residual-UNet (RUNet) architecture for the automated analysis of MRI brain scans. The proposed system addresses the dual challenges of brain tumor classification and segmentation, which are crucial tasks in medical image analysis for precise diagnosis and treatment planning. Initially, the pipeline preprocesses the FigShare brain MRI image dataset, comprising 3064 images, by normalizing and resizing them to achieve uniformity and compatibility with the model. The FT-CNN model then classifies the preprocessed images into distinct tumor types: glioma, meningioma, and pituitary tumor. Following classification, the RUNet model performs pixel-level segmentation to delineate tumor regions within the MRI scans. The FT-CNN leverages the VGG19 architecture, pre-trained on large datasets and fine-tuned for specific tumor classification tasks. Features extracted from MRI images are used to train the FT-CNN, demonstrating robust performance in discriminating between tumor types. Subsequently, the RUNet model, inspired by the U-Net design and enhanced with residual blocks, effectively segments tumors by combining high-resolution spatial information from the encoding path with context-rich features from the bottleneck. My experimental results indicate that the integrated pipeline achieves high accuracy in both classification (96%) and segmentation tasks (98%), showcasing its potential for clinical applications in brain tumor diagnosis. For the classification task, the metrics involved are loss, accuracy, confusion matrix, and classification report, while for the segmentation task, the metrics used are loss, accuracy, Dice coefficient, intersection over union, and Jaccard distance. To further validate the generalizability and robustness of the integrated pipeline, I evaluated the model on two additional datasets. The first dataset consists of 7023 images for classification tasks, expanding to a four-class dataset. The second dataset contains approximately 3929 images for both classification and segmentation tasks, including a binary classification scenario. The model demonstrated robust performance, achieving 95% accuracy on the four-class task and high accuracy (96%) in the binary classification and segmentation tasks, with a Dice coefficient of 95%.
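A rough sketch of a residual U-Net building block in the spirit of the RUNet component is shown below; the depth, filter counts, and normalization choices are assumptions, and the classification branch is omitted.

```python
# Sketch of a tiny residual U-Net for binary tumor masks (all sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)   # match channel count
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    return layers.ReLU()(layers.Add()([shortcut, y]))          # residual connection

def tiny_residual_unet(input_shape=(256, 256, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    e1 = residual_block(inputs, 32)                            # encoder
    p1 = layers.MaxPooling2D()(e1)
    b = residual_block(p1, 64)                                 # bottleneck
    u1 = layers.UpSampling2D()(b)                              # decoder
    u1 = layers.Concatenate()([u1, e1])                        # U-Net skip connection
    d1 = residual_block(u1, 32)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(d1)    # binary tumor mask
    return tf.keras.Model(inputs, outputs, name="tiny_residual_unet")

model = tiny_residual_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```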
Affiliation(s)
- Faisal Alshomrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Science, Taibah University, Medinah 42353, Saudi Arabia
18. Pande SD, Ahammad SH, Madhav BTP, Ramya KR, Smirani LK, Hossain MA, Rashed ANZ. Assessment of brain tumor detection techniques and recommendation of neural network. BIOMED ENG-BIOMED TE 2024; 69:395-406. PMID: 38285486; DOI: 10.1515/bmt-2022-0336.
Abstract
OBJECTIVES Brain tumor classification is among the most complex and challenging tasks in the computing domain. The latest advances in brain tumor detection systems (BTDS) are presented, as they can inspire new researchers to deliver new architectures for effective and efficient tumor detection. Here, data from the multi-modal brain tumor segmentation task are employed, which have been registered and skull stripped, and histogram matching is conducted with the ferrous volume of high contrast. METHODS This research further configures a capsule network (CapsNet) for brain tumor classification. Results of the latest deep neural network (NN) architectures for tumor detection are compared and presented. The VGG16 and CapsNet architectures yield the highest F1-score and precision values, followed by VGG19. Overall, ResNet152, MobileNet, and MobileNetV2 yield the lowest F1-scores. RESULTS The VGG16 and CapsNet architectures produced outstanding results. However, VGG16 and VGG19 are deeper architectures, resulting in slower computation. The research then recommends the most suitable NN for effective brain tumor detection. CONCLUSIONS Finally, the work concludes with future directions and potential new architectures for tumor detection.
Affiliation(s)
- Shaik Hasane Ahammad
- Department of ECE, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh, India
- Kalangi Ruth Ramya
- Department of Computer Engineering, Indira College of Engineering and Management, Pune, MH, India
- Lassaad K Smirani
- Deanship of Information Technology, Umm Al-Qura University, Makkah, Saudi Arabia
- Md Amzad Hossain
- Department of Electrical and Electronic Engineering, Jashore University of Science and Technology, Jashore, Bangladesh
- Ahmed Nabih Zaki Rashed
- Electronics and Electrical Communications Engineering Department, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Department of VLSI Microelectronics, Institute of Electronics and Communication Engineering, Saveetha School of Engineering, SIMATS, Chennai, Tamilnadu, India
19. Ali Amin S, Alqudah MKS, Ateeq Almutairi S, Almajed R, Rustom Al Nasar M, Ali Alkhazaleh H. Optimal extreme learning machine for diagnosing brain tumor based on modified sailfish optimizer. Heliyon 2024; 10:e34050. PMID: 39816348; PMCID: PMC11733978; DOI: 10.1016/j.heliyon.2024.e34050.
Abstract
This study proposes a hierarchical automated methodology for detecting brain tumors in Magnetic Resonance Imaging (MRI), focusing on preprocessing images to improve quality and eliminate artifacts or noise. A modified Extreme Learning Machine, integrated with the Modified Sailfish optimizer to enhance its performance, is then used to diagnose brain tumors. The Modified Sailfish optimizer is a metaheuristic algorithm known for efficiently navigating optimization landscapes and enhancing convergence speed. Experiments were conducted using the "Whole Brain Atlas (WBA)" database, which contains annotated MRI images. The results showed superior efficiency in accurately detecting brain tumors from MRI images, demonstrating the potential of the method to enhance accuracy and efficiency. The proposed method utilizes a hierarchical methodology, preprocessing techniques, and optimization of the Extreme Learning Machine with the Modified Sailfish optimizer to improve accuracy and decrease the time needed for brain tumor diagnosis. The proposed method outperformed other methods in terms of accuracy, recall, specificity, precision, and F1 score in medical imaging diagnosis. It achieved the highest accuracy at 93.95%, with End/End and CNN attaining high values of 89.24% and 93.17%, respectively. The method also achieved a perfect score of 100% in recall, 91.38% in specificity, and 75.64% in F1 score. However, it is crucial to consider factors such as computational complexity, dataset characteristics, and generalizability when evaluating the effectiveness of the method in medical imaging diagnosis. This approach has the potential to make substantial contributions to medical imaging and to aid healthcare professionals in making prompt and precise treatment decisions for brain tumors.
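To make the classifier component concrete, a bare-bones Extreme Learning Machine (random hidden weights, closed-form output weights via the pseudoinverse) is sketched below; the sailfish-based tuning is not reproduced, and the feature arrays are placeholders.

```python
# Bare-bones Extreme Learning Machine: fixed random hidden layer, analytic output weights.
import numpy as np

class SimpleELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # fixed random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                             # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ y_onehot                     # output weights in closed form
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Placeholder feature vectors standing in for preprocessed MRI descriptors.
rng = np.random.default_rng(1)
X = rng.random((120, 64))
y = rng.integers(0, 2, size=120)                 # tumor / no tumor
Y = np.eye(2)[y]                                 # one-hot targets
elm = SimpleELM(n_hidden=100).fit(X, Y)
print("training accuracy:", np.mean(elm.predict(X) == y))
```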
Affiliation(s)
- Saad Ali Amin
- College of Engineering and IT, University of Dubai, Academic City, 14143, Dubai, United Arab Emirates
- Saleh Ateeq Almutairi
- Applied College, Computer Science and Information Department, Taibah University, Medinah, Saudi Arabia
- Rasha Almajed
- College of Computer Information Technology (CCIT), Department of Information Technology Management, American University in the Emirates (AUE), Academic City, 14143, Dubai, United Arab Emirates
- Mohammad Rustom Al Nasar
- College of Computer Information Technology (CCIT), Department of Information Technology Management, American University in the Emirates (AUE), Academic City, 14143, Dubai, United Arab Emirates
- Hamzah Ali Alkhazaleh
- College of Engineering and IT, University of Dubai, Academic City, 14143, Dubai, United Arab Emirates
20. Sarah P, Krishnapriya S, Saladi S, Karuna Y, Bavirisetti DP. A novel approach to brain tumor detection using K-Means++, SGLDM, ResNet50, and synthetic data augmentation. Front Physiol 2024; 15:1342572. PMID: 39077759; PMCID: PMC11284281; DOI: 10.3389/fphys.2024.1342572.
Abstract
Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors accurately. To further improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Result: In terms of accuracy, sensitivity, and specificity, the evaluation on the Br35H::BrainTumorDetection2020 dataset showed superior performance of the suggested method compared to existing state-of-the-art approaches. This indicates its effectiveness in achieving higher precision in identifying and classifying brain tumors from MRI data, showcasing advancements in diagnostic reliability and efficacy. Discussion: The superior performance of the suggested method indicates its robustness in accurately classifying brain tumors from MRI images, achieving higher accuracy, sensitivity, and specificity compared to existing methods. The method's enhanced sensitivity ensures a greater detection rate of true positive cases, while its improved specificity reduces false positives, thereby optimizing clinical decision-making and patient care in neuro-oncology.
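The K-Means++ segmentation step can be sketched with scikit-learn as below; treating the brightest intensity cluster as the candidate tumor region and the cluster count are assumptions, and the SGLDM and ResNet50 stages are omitted.

```python
# Sketch of K-Means++ intensity clustering to isolate a candidate tumor region.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(slice_2d, n_clusters=4):
    pixels = slice_2d.reshape(-1, 1).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(pixels)
    # Assume the highest-intensity cluster corresponds to the enhancing tumor region.
    tumor_cluster = int(np.argmax(km.cluster_centers_.ravel()))
    return (labels.reshape(slice_2d.shape) == tumor_cluster).astype(np.uint8)

mri_slice = np.random.rand(128, 128)          # placeholder for a real MRI slice
mask = kmeans_segment(mri_slice)
print("segmented tumor pixels:", int(mask.sum()))
# The masked region would then feed SGLDM texture extraction and the ResNet50 classifier.
```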
Collapse
Affiliation(s)
- Ponuku Sarah
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
| | - Srigiri Krishnapriya
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
| | - Saritha Saladi
- School of Electronics Engineering, VIT-AP University, Amaravati, India
| | - Yepuganti Karuna
- School of Electronics Engineering, VIT-AP University, Amaravati, India
| | - Durga Prasad Bavirisetti
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
21
|
Zhou L, Jiang Y, Li W, Hu J, Zheng S. Shape-Scale Co-Awareness Network for 3D Brain Tumor Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2495-2508. [PMID: 38386578 DOI: 10.1109/tmi.2024.3368531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/24/2024]
Abstract
The accurate segmentation of brain tumor is significant in clinical practice. Convolutional Neural Network (CNN)-based methods have made great progress in brain tumor segmentation due to powerful local modeling ability. However, brain tumors are frequently pattern-agnostic, i.e. variable in shape, size and location, which can not be effectively matched by traditional CNN-based methods with local and regular receptive fields. To address the above issues, we propose a shape-scale co-awareness network (S2CA-Net) for brain tumor segmentation, which can efficiently learn shape-aware and scale-aware features simultaneously to enhance pattern-agnostic representations. Primarily, three key components are proposed to accomplish the co-awareness of shape and scale. The Local-Global Scale Mixer (LGSM) decouples the extraction of local and global context by adopting the CNN-Former parallel structure, which contributes to obtaining finer hierarchical features. The Multi-level Context Aggregator (MCA) enriches the scale diversity of input patches by modeling global features across multiple receptive fields. The Multi-Scale Attentive Deformable Convolution (MS-ADC) learns the target deformation based on the multiscale inputs, which motivates the network to enforce feature constraints both in terms of scale and shape for optimal feature matching. Overall, LGSM and MCA focus on enhancing the scale-awareness of the network to cope with the size and location variations, while MS-ADC focuses on capturing deformation information for optimal shape matching. Finally, their effective integration prompts the network to perceive variations in shape and scale simultaneously, which can robustly tackle the variations in patterns of brain tumors. The experimental results on BraTS 2019, BraTS 2020, MSD BTS Task and BraTS2023-MEN show that S2CA-Net has superior overall performance in accuracy and efficiency compared to other state-of-the-art methods. Code: https://github.com/jiangyu945/S2CA-Net.
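As a generic illustration of the CNN-Former parallel idea behind modules such as the LGSM, the following PyTorch sketch runs a local convolutional branch and a global self-attention branch in parallel and sums their outputs. It is not the authors' S2CA-Net code and omits the 3D, multi-scale, and deformable-convolution details.

```python
# Generic "local conv + global attention" parallel block in PyTorch.
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, channels, heads=4):
        super().__init__()
        self.local = nn.Sequential(                       # local branch: 3x3 conv
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # (B, H*W, C) for attention
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return local + glob                               # fuse local and global context

out = LocalGlobalBlock(32)(torch.randn(2, 32, 16, 16))
print(out.shape)  # torch.Size([2, 32, 16, 16])
```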
Collapse
|
22
|
Kiran L, Zeb A, Rehman QNU, Rahman T, Shehzad Khan M, Ahmad S, Irfan M, Naeem M, Huda S, Mahmoud H. An enhanced pattern detection and segmentation of brain tumors in MRI images using deep learning technique. Front Comput Neurosci 2024; 18:1418280. [PMID: 38988988 PMCID: PMC11233794 DOI: 10.3389/fncom.2024.1418280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Accepted: 05/27/2024] [Indexed: 07/12/2024] Open
Abstract
Neuroscience is a swiftly progressing discipline that aims to unravel the intricate workings of the human brain and mind. Brain tumors, ranging from non-cancerous to malignant forms, pose a significant diagnostic challenge due to the presence of more than 100 distinct types. Effective treatment hinges on the precise early detection and segmentation of these tumors. To address this, we introduce a deep-learning approach employing a binary convolutional neural network (BCNN). This method is employed to segment the 10 most prevalent brain tumor types and is a significant improvement over current models restricted to segmenting only four types. Our methodology begins with acquiring MRI images, followed by a detailed preprocessing stage in which images undergo binary conversion using an adaptive thresholding method and morphological operations. This prepares the data for the next step, segmentation. The segmentation identifies the tumor type, classifies it according to its grade (Grade I to Grade IV), and differentiates it from healthy brain tissue. We also curated a unique dataset comprising 6,600 brain MRI images specifically for this study. The overall performance achieved by our proposed model is 99.36%. The effectiveness of our model is underscored by its remarkable performance metrics, achieving 99.40% accuracy, 99.32% precision, 99.45% recall, and a 99.28% F-Measure in segmentation tasks.
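A minimal sketch of the binary-conversion preprocessing described above, assuming scikit-image and SciPy are available: local adaptive thresholding followed by morphological opening and closing. Block size, structuring element, and the random image are illustrative assumptions, not the paper's settings.

```python
# Adaptive (local) thresholding plus morphological cleanup of an MRI slice.
import numpy as np
from skimage.filters import threshold_local
from scipy.ndimage import binary_opening, binary_closing

def binarize_mri(slice_2d, block_size=35):
    local_thresh = threshold_local(slice_2d, block_size, offset=0.0)
    binary = slice_2d > local_thresh                              # adaptive thresholding
    binary = binary_opening(binary, structure=np.ones((3, 3)))    # remove specks
    binary = binary_closing(binary, structure=np.ones((3, 3)))    # fill small holes
    return binary

mask = binarize_mri(np.random.rand(256, 256))
print(mask.dtype, mask.mean())
```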
Collapse
Affiliation(s)
- Lubna Kiran
- Qurtuba University of Science and Information Technology, Peshawar, Pakistan
| | - Asim Zeb
- Abbottabad University of Science and Technology, Abbottabad, Pakistan
| | | | - Taj Rahman
- Qurtuba University of Science and Information Technology, Peshawar, Pakistan
| | | | - Shafiq Ahmad
- Department of Industrial Engineering, College of Engineering, King Saud University, Riyadh, Saudi Arabia
| | - Muhammad Irfan
- Department of Computer Science, Kohat University of Science and Technology, Kohat, Pakistan
| | - Muhammad Naeem
- Abbottabad University of Science and Technology, Abbottabad, Pakistan
| | - Shamsul Huda
- School of Information Technology, Deakin University, Burwood, VIC, Australia
| | - Haitham Mahmoud
- Department of Industrial Engineering, College of Engineering, King Saud University, Riyadh, Saudi Arabia
| |
Collapse
|
23
|
Abdusalomov A, Rakhimov M, Karimberdiyev J, Belalova G, Cho YI. Enhancing Automated Brain Tumor Detection Accuracy Using Artificial Intelligence Approaches for Healthcare Environments. Bioengineering (Basel) 2024; 11:627. [PMID: 38927863 PMCID: PMC11201188 DOI: 10.3390/bioengineering11060627] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Revised: 06/09/2024] [Accepted: 06/17/2024] [Indexed: 06/28/2024] Open
Abstract
Medical imaging and deep learning models are essential to the early identification and diagnosis of brain cancers, facilitating timely intervention and improving patient outcomes. This research paper investigates the integration of YOLOv5, a state-of-the-art object detection framework, with non-local neural networks (NLNNs) to improve the robustness and accuracy of brain tumor detection. The study begins by curating a comprehensive dataset comprising brain MRI scans from various sources. To facilitate effective fusion, the YOLOv5 and NLNNs, K-means+, and spatial pyramid pooling fast+ (SPPF+) modules are integrated within a unified framework. The brain tumor dataset is used to refine the YOLOv5 model through the application of transfer learning techniques, adapting it specifically to the task of tumor detection. The results indicate that combining YOLOv5 with the other modules enhances detection capabilities compared with using YOLOv5 alone, yielding recall rates of 86% and 83%, respectively. Moreover, the research explores the interpretability aspect of the combined model. By visualizing the attention maps generated by the NLNNs module, the regions of interest associated with tumor presence are highlighted, aiding in the understanding and validation of the decision-making procedure of the methodology. Additionally, the impact of hyperparameters, such as NLNNs kernel size, fusion strategy, and training data augmentation, is investigated to optimize the performance of the combined model.
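The following PyTorch sketch shows a minimal embedded-Gaussian non-local block, the kind of NLNN module that can be fused with a detector such as YOLOv5; channel sizes are illustrative and this is not the authors' implementation.

```python
# Minimal 2D non-local block (embedded Gaussian) in PyTorch.
import torch
import torch.nn as nn

class NonLocalBlock2D(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, 1)
        self.phi = nn.Conv2d(channels, reduced, 1)
        self.g = nn.Conv2d(channels, reduced, 1)
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)    # (B, HW, C')
        k = self.phi(x).flatten(2)                      # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)             # pairwise affinities over positions
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection

print(NonLocalBlock2D(64)(torch.randn(1, 64, 20, 20)).shape)
```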
Collapse
Affiliation(s)
- Akmalbek Abdusalomov
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea;
| | - Mekhriddin Rakhimov
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan; (M.R.); (J.K.)
| | - Jakhongir Karimberdiyev
- Department of Artificial Intelligence, Tashkent University of Information Technologies Named after Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan; (M.R.); (J.K.)
| | - Guzal Belalova
- Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan;
| | - Young Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea;
- Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan;
| |
Collapse
|
24
|
Silva Santana L, Borges Camargo Diniz J, Mothé Glioche Gasparri L, Buccaran Canto A, Batista Dos Reis S, Santana Neville Ribeiro I, Gadelha Figueiredo E, Paulo Mota Telles J. Application of Machine Learning for Classification of Brain Tumors: A Systematic Review and Meta-Analysis. World Neurosurg 2024; 186:204-218.e2. [PMID: 38580093 DOI: 10.1016/j.wneu.2024.03.152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Revised: 03/25/2024] [Accepted: 03/26/2024] [Indexed: 04/07/2024]
Abstract
BACKGROUND Classifying brain tumors accurately is crucial for treatment and prognosis. Machine learning (ML) shows great promise in improving tumor classification accuracy. This study evaluates ML algorithms for differentiating various brain tumor types. METHODS A systematic review and meta-analysis were conducted, searching PubMed, Embase, and Web of Science up to March 14, 2023. Studies that only investigated image segmentation accuracy or brain tumor detection instead of classification were excluded. We extracted binary diagnostic accuracy data, constructing contingency tables to derive sensitivity and specificity. RESULTS Fifty-one studies were included. The pooled areas under the curve for glioblastoma versus lymphoma and for low-grade versus high-grade gliomas were 0.99 (95% confidence interval [CI]: 0.98-1.00) and 0.89, respectively. The pooled sensitivity and specificity for benign versus malignant tumors were 0.90 (95% CI: 0.85-0.93) and 0.93 (95% CI: 0.90-0.95), respectively. The pooled sensitivity and specificity for low-grade versus high-grade gliomas were 0.99 (95% CI: 0.97-1.00) and 0.94 (95% CI: 0.79-0.99), respectively. Primary versus metastatic tumor identification yielded a sensitivity of 0.89 (95% CI: 0.83-0.93) and a specificity of 0.87 (95% CI: 0.82-0.91), respectively. The differentiation of gliomas from pituitary tumors yielded the highest results among primary brain tumor classifications: sensitivity of 0.99 (95% CI: 0.99-1.00) and specificity of 0.99 (95% CI: 0.98-1.00). CONCLUSIONS ML demonstrated excellent performance in classifying brain tumor images, with near-maximum areas under the curve, sensitivity, and specificity.
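As a worked example of the per-study quantities pooled in this meta-analysis, the snippet below derives sensitivity and specificity from a 2x2 contingency table; the counts are invented for illustration.

```python
# Sensitivity and specificity from a 2x2 contingency table (hypothetical counts).
def sens_spec(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)    # true positive rate
    specificity = tn / (tn + fp)    # true negative rate
    return sensitivity, specificity

# Hypothetical study: 90 tumors correctly called, 10 missed,
# 5 false alarms, 95 correct negatives.
print(sens_spec(tp=90, fn=10, fp=5, tn=95))   # (0.9, 0.95)
```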
Collapse
Affiliation(s)
| | | | | | | | | | - Iuri Santana Neville Ribeiro
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
| | - Eberval Gadelha Figueiredo
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
| | - João Paulo Mota Telles
- Department of Neurology, Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil.
| |
Collapse
|
25
|
Wang W, Huang H, Peng X, Wang Z, Zeng Y. Utilizing support vector machines to foster sustainable development and innovation in the clean energy sector via green finance. JOURNAL OF ENVIRONMENTAL MANAGEMENT 2024; 360:121225. [PMID: 38796867 DOI: 10.1016/j.jenvman.2024.121225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2023] [Revised: 04/26/2024] [Accepted: 05/21/2024] [Indexed: 05/29/2024]
Abstract
As the global demand for clean energy continues to grow, the sustainable development of clean energy projects has become an important topic of research. In order to optimize the performance and sustainability of clean energy projects, this work explores the environmental and economic benefits of the clean energy industry. Through the use of Support Vector Machine (SVM) multi-factor models and a bi-level multi-objective approach, this work conducts comprehensive assessment and optimization. With Wind Power Base A as a case study, the work describes the material consumption of wind turbines, transportation energy consumption and carbon dioxide (CO2) emissions, and infrastructure material consumption through descriptive statistics. Moreover, this work analyzes the characteristics of different wind turbine models in depth. On one hand, the SVM multi-factor model is used to predict and assess the profitability of Wind Power Base A. On the other hand, a bi-level multi-objective approach is applied to optimize the number of units, internal rate of return within the project, and annual average equivalent utilization hours of Wind Power Base A. The research results indicate that in March, the WilderHill New Energy Global Innovation Index (NEX) was 0.91053, while the predicted value of the SVM multi-factor model was 0.98596. The predicted value is slightly higher than the actual value, demonstrating the model's good grasp of future returns. The cumulative rate of return of Wind Power Base A is 18.83%, with an annualized return of 9.47%, exceeding the market performance by 1.68%. Under the optimization of the bi-level multi-objective approach, the number of units at Wind Power Base A decreases from the original 7004 to 5860, with total purchase and transportation costs remaining basically unchanged. The internal rate of return of the project increases from 8% to 9.3%, and the annual equivalent utilization hours increase to 2044 h, comprehensively improving the investment return and utilization efficiency of the wind power base. Energy consumption and CO2 emissions are significantly reduced, with energy consumption decreasing by 0.68 × 10⁹ kgce and CO2 emissions decreasing by 1.29 × 10⁹ kg. The optimization effects are mainly concentrated in the production and installation stages, with emission reductions achieved through the recycling and disposal of materials consumed in the early stages. In terms of investment benefits, environmental benefits are enhanced, with a 13.93% reduction in CO2 emissions. Moreover, there is improved energy efficiency, with the energy input-output ratio increasing from 7.73 to 9.31. This indicates that the Wind Power Base A project has significant environmental and energy efficiency advantages in the clean energy industry. This work innovatively provides a comprehensive assessment and optimization scheme for clean energy projects and predicts the profitability of Wind Power Base A using SVM multi-factor models. Besides, this work optimizes key parameters of the project using a bi-level multi-objective approach, thus comprehensively improving the investment return and utilization efficiency of the wind power base. This work provides innovative methods and strong data support for the development of the clean energy industry, which is of great significance for promoting sustainable development under the backdrop of green finance.
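A hedged sketch of the kind of SVM multi-factor return-prediction model described above, using scikit-learn; the factor names, data, and hyperparameters are invented for illustration and do not reproduce the paper's model.

```python
# SVM multi-factor regression sketch with synthetic factor exposures.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))        # e.g., 120 months of 4 factor exposures
y = X @ np.array([0.4, -0.2, 0.1, 0.3]) + rng.normal(scale=0.05, size=120)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:100], y[:100])                       # train on the first 100 months
print("predicted next-period value:", model.predict(X[100:101])[0])
```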
Collapse
Affiliation(s)
- Weijia Wang
- School of Information Technology, Deakin University, Geelong, VIC, 3216, Australia.
| | - Huimin Huang
- School of Public Administration, Guangzhou University, Guangzhou, 510006, China.
| | - Xiaoyan Peng
- School of Government, Sun Yat-sen University, Guangzhou, 510275, China.
| | - Zeyu Wang
- School of Public Administration, Guangzhou University, Guangzhou, 510006, China.
| | - Yanzhao Zeng
- School of Economics and Statistics, Guangzhou University, Guangzhou, 510006, China.
| |
Collapse
|
26
|
Appiah R, Pulletikurthi V, Esquivel-Puentes HA, Cabrera C, Hasan NI, Dharmarathne S, Gomez LJ, Castillo L. Brain tumor detection using proper orthogonal decomposition integrated with deep learning networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 250:108167. [PMID: 38669717 DOI: 10.1016/j.cmpb.2024.108167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2023] [Revised: 03/11/2024] [Accepted: 04/06/2024] [Indexed: 04/28/2024]
Abstract
BACKGROUND AND OBJECTIVE The central organ of the human nervous system is the brain, which receives and sends stimuli to the various parts of the body to engage in daily activities. Uncontrolled growth of brain cells can result in tumors that affect the normal functions of healthy brain cells. An automatic, reliable technique for detecting tumors is imperative to assist medical practitioners in the timely diagnosis of patients. Although machine learning models are in use, training data are often scarce, and low-order models integrated with machine learning offer a tool for reliable detection under these conditions. METHODS In this study, we compare a low-order model, proper orthogonal decomposition (POD), coupled with a convolutional neural network (CNN) on 2D images from magnetic resonance imaging (MRI) scans to identify brain tumors effectively. The explainability of the coupled POD-CNN prediction output, as well as that of state-of-the-art pre-trained transfer learning models such as MobileNetV2, Inception-v3, ResNet101, and VGG-19, was explored. RESULTS The results showed that the CNN predicted tumors with an accuracy of 99.21%, whereas POD-CNN reached an accuracy of 95.88% in about one-third of the computational time. Explainable AI with SHAP showed that MobileNetV2 better identifies the tumor boundaries. CONCLUSIONS Integration of POD with a CNN is carried out here for the first time to detect brain tumors with minimal MRI scan data. This study supports low-order modeling approaches in machine learning for improving the accuracy and performance of tumor detection.
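To illustrate the low-order modeling idea, here is a minimal POD sketch via the SVD: a stack of 2D images is flattened into a snapshot matrix and projected onto the leading modes, which could then feed a classifier. The mode count and random data are assumptions for illustration.

```python
# Proper orthogonal decomposition (POD) via the SVD of a snapshot matrix.
import numpy as np

def pod_project(images, n_modes=20):
    """images: (n_samples, H, W) -> low-order coefficients (n_samples, n_modes)."""
    snapshots = images.reshape(images.shape[0], -1)       # one row per image
    mean = snapshots.mean(axis=0)
    U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = Vt[:n_modes]                                   # dominant spatial modes
    coeffs = (snapshots - mean) @ modes.T                  # reduced representation
    return coeffs, modes, mean

coeffs, modes, mean = pod_project(np.random.rand(50, 64, 64))
print(coeffs.shape, modes.shape)    # (50, 20) (20, 4096)
```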
Collapse
Affiliation(s)
- Rita Appiah
- School of Nuclear Engineering, Purdue University, West Lafayette, IN 47906, USA.
| | | | | | - Cristiano Cabrera
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47906, USA
| | - Nahian I Hasan
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA
| | - Suranga Dharmarathne
- R.B. Annis School of Engineering, University of Indianapolis, Indianapolis, IN 46227, USA
| | - Luis J Gomez
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47906, USA
| | - Luciano Castillo
- School of Mechanical Engineering, Purdue University, West Lafayette, IN 47906, USA
| |
Collapse
|
27
|
Usha MP, Kannan G, Ramamoorthy M. Multimodal Brain Tumor Classification Using Convolutional Tumnet Architecture. Behav Neurol 2024; 2024:4678554. [PMID: 38882177 PMCID: PMC11178426 DOI: 10.1155/2024/4678554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 12/22/2023] [Accepted: 01/10/2024] [Indexed: 06/18/2024] Open
Abstract
Brain malignancy is among the most common and aggressive tumors, and patients with grade IV disease have a short life expectancy. As a result, the medical plan may be a crucial step toward improving the well-being of a patient. Both diagnosis and therapy are part of the medical plan. Brain tumors are commonly imaged with magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). In this paper, multimodal fused imaging with classification and segmentation of brain tumors is proposed using a deep learning method. The MRI and CT brain tumor images of the same slices (308 slices of meningioma and sarcoma) are combined using three different types of pixel-level fusion methods. The presence/absence of a tumor is classified using the proposed Tumnet technique, and the tumor area is found accordingly. Tumnet is also applied to single-modal MRI/CT images (561 slices) for classification. The proposed Tumnet was modeled with 5 convolutional layers, 3 pooling layers with ReLU activation function, and 3 fully connected layers. For the average fusion method of MRI-CT images, the first-order statistical fusion metrics are SSIM (tissue) of 83%, SSIM (bone) of 84%, accuracy of 90%, sensitivity of 96%, and specificity of 95%, and the second-order statistical fusion metrics are a standard deviation of the fused images of 79% and an entropy of 0.99. The entropy value confirms the presence of additional features in the fused image. The proposed Tumnet yields a sensitivity of 96%, an accuracy of 98%, a specificity of 99%, normalized values of the mean of 0.75, a standard deviation of 0.4, a variance of 0.16, and an entropy of 0.90.
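A minimal sketch of pixel-level average fusion of co-registered MRI and CT slices, together with the entropy metric reported above; the images here are random stand-ins and the paper's other fusion methods are not shown.

```python
# Pixel-level average fusion of two co-registered slices plus Shannon entropy.
import numpy as np

def average_fusion(mri, ct):
    return 0.5 * mri + 0.5 * ct                   # simple pixel-level average

def image_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())         # Shannon entropy in bits

mri, ct = np.random.rand(256, 256), np.random.rand(256, 256)
fused = average_fusion(mri, ct)
print("fused-image entropy:", round(image_entropy(fused), 3))
```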
Collapse
Affiliation(s)
- M Padma Usha
- Department of Electronics and Communication Engineering B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India
| | - G Kannan
- Department of Electronics and Communication Engineering B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India
| | - M Ramamoorthy
- Department of Artificial Intelligence and Machine Learning Saveetha School of Engineering SIMATS, Chennai, 600124, India
| |
Collapse
|
28
|
Al-Otaibi S, Rehman A, Raza A, Alyami J, Saba T. CVG-Net: novel transfer learning based deep features for diagnosis of brain tumors using MRI scans. PeerJ Comput Sci 2024; 10:e2008. [PMID: 38855235 PMCID: PMC11157570 DOI: 10.7717/peerj-cs.2008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 04/01/2024] [Indexed: 06/11/2024]
Abstract
Brain tumors present a significant medical challenge, demanding accurate and timely diagnosis for effective treatment planning. These tumors disrupt normal brain functions in various ways, giving rise to a broad spectrum of physical, cognitive, and emotional challenges. The daily increase in mortality rates attributed to brain tumors underscores the urgency of this issue. In recent years, advanced medical imaging techniques, particularly magnetic resonance imaging (MRI), have emerged as indispensable tools for diagnosing brain tumors. Brain MRI scans provide high-resolution, non-invasive visualization of brain structures, facilitating the precise detection of abnormalities such as tumors. This study aims to propose an effective neural network approach for the timely diagnosis of brain tumors. Our experiments utilized a multi-class MRI image dataset comprising 21,672 images related to glioma tumors, meningioma tumors, and pituitary tumors. We introduced a novel neural network-based feature engineering approach, combining a 2D convolutional neural network (2DCNN) and VGG16. The resulting 2DCNN-VGG16 network (CVG-Net) extracted spatial features from MRI images using the 2DCNN and VGG16 without human intervention. The newly created hybrid feature set is then input into machine learning models to diagnose brain tumors. We balanced the multi-class MRI image feature data using the Synthetic Minority Over-sampling Technique (SMOTE). Extensive experiments demonstrate that, using the proposed CVG-Net features, the k-nearest neighbors classifier outperformed state-of-the-art studies with a k-fold accuracy of 0.96. We also applied hyperparameter tuning to enhance performance for multi-class brain tumor diagnosis. Our novel proposed approach has the potential to revolutionize early brain tumor diagnosis, providing medical professionals with a cost-effective and timely diagnostic mechanism.
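A hedged sketch of the downstream stage described above: balance hybrid deep features with SMOTE and classify with k-nearest neighbors. The feature matrix here is random; in the paper it would come from the 2DCNN and VGG16 extractors, and class sizes are invented.

```python
# SMOTE balancing + k-NN classification on a stand-in hybrid feature set.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))                       # stand-in hybrid features
y = np.array([0] * 400 + [1] * 150 + [2] * 50)        # imbalanced tumor classes

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X_bal, y_bal, cv=5)
print("k-fold accuracy:", scores.mean().round(3))
```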
Collapse
Affiliation(s)
- Shaha Al-Otaibi
- Department of Information Systems, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| | - Ali Raza
- Institute of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan, Pakistan
| | - Jaber Alyami
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
| | - Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
| |
Collapse
|
29
|
Qiu L, Zhai J. A hybrid CNN-SVM model for enhanced autism diagnosis. PLoS One 2024; 19:e0302236. [PMID: 38743688 PMCID: PMC11093301 DOI: 10.1371/journal.pone.0302236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 03/29/2024] [Indexed: 05/16/2024] Open
Abstract
Autism is a representative pervasive developmental disorder. It affects an individual's behavior and performance and may co-occur with other mental illnesses. Consequently, an effective diagnostic approach proves to be invaluable in both therapeutic interventions and the timely provision of medical support. Most current research relies primarily on neuroimaging techniques for auxiliary diagnosis and does not take into account the distinctive social impairments of autism. In order to address this deficiency, this paper introduces a novel convolutional neural network-support vector machine model that integrates resting-state functional magnetic resonance imaging data with social responsiveness scale metrics for the diagnostic assessment of autism. We selected 821 subjects with social responsiveness scale measurements from the publicly available Autism Brain Imaging Data Exchange dataset, including 379 subjects with autism spectrum disorder and 442 typical controls. After preprocessing the fMRI data, we compute static and dynamic functional connectivity for each subject. Subsequently, convolutional neural networks and attention mechanisms are used to extract their respective features. The extracted features, combined with the social responsiveness scale features, are then employed as novel inputs for the support vector machine to categorize autistic patients and typical controls. The proposed model identifies salient features within the static and dynamic functional connectivity, offering a possible biological foundation for clinical diagnosis. By incorporating the behavioral assessments, the model achieves a remarkable classification accuracy of 94.30%, providing more reliable support for auxiliary diagnosis.
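To make the feature construction concrete, here is a hedged sketch of static functional connectivity (a Pearson correlation matrix across regions), vectorized and combined with a behavioral score before SVM classification. All data are synthetic and the CNN/attention feature extractors are not reproduced.

```python
# Static functional connectivity + behavioral score -> SVM, on synthetic data.
import numpy as np
from sklearn.svm import SVC

def fc_features(timeseries):
    """timeseries: (n_timepoints, n_regions) -> upper-triangle correlations."""
    corr = np.corrcoef(timeseries.T)
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(0)
subjects = [rng.normal(size=(150, 30)) for _ in range(80)]      # 80 subjects
srs = rng.normal(size=(80, 1))                                   # SRS-like score
X = np.hstack([np.vstack([fc_features(ts) for ts in subjects]), srs])
y = rng.integers(0, 2, size=80)                                  # ASD vs control

clf = SVC(kernel="rbf").fit(X[:60], y[:60])
print("held-out accuracy:", (clf.predict(X[60:]) == y[60:]).mean())
```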
Collapse
Affiliation(s)
- Linjie Qiu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, China
| | - Jian Zhai
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang, China
| |
Collapse
|
30
|
Akter A, Nosheen N, Ahmed S, Hossain M, Yousuf MA, Almoyad MAA, Hasan KF, Moni MA. Robust clinical applicable CNN and U-Net based algorithm for MRI classification and segmentation for brain tumor. EXPERT SYSTEMS WITH APPLICATIONS 2024; 238:122347. [DOI: 10.1016/j.eswa.2023.122347] [Citation(s) in RCA: 29] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/24/2025]
|
31
|
Behera TK, Khan MA, Bakshi S. Brain MR Image Classification Using Superpixel-Based Deep Transfer Learning. IEEE J Biomed Health Inform 2024; 28:1218-1227. [PMID: 36269915 DOI: 10.1109/jbhi.2022.3216270] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2024]
Abstract
Nowadays, brain MR (Magnetic Resonance) images are widely used by clinicians to examine the brain's anatomy and to look into various pathological conditions such as cerebrovascular incidents and neuro-degenerative diseases. Generally, these diseases can be identified from the MR images as "normal" and "abnormal" brains in a two-class classification problem or as disease-specific classes in a multi-class problem. This article presents an ensemble transfer learning-inspired deep architecture that uses the simple linear iterative clustering (SLIC)-based superpixel algorithm along with a convolutional neural network (CNN) to classify the MR images as normal or abnormal. The superpixel algorithm segments the input MR images into clusters of regions defined by similarity measures in a perceptual feature space. These superpixel images are beneficial because they provide a compact and meaningful representation for computationally demanding applications. The superpixel images are then fed to the deep CNN to classify the images. Three brain MR image datasets, NITR-DHH, DS-75, and DS-160, are used to conduct the experimentation. Through the use of deep transfer learning, the model achieves accuracies of 88.15% (NITR-DHH), 98.15% (DS-160), and 98.33% (DS-75), even with small-scale medical image datasets. The experimentally obtained results demonstrate that the proposed method is promising and efficient for clinical applications in diagnosing different brain diseases via MR images.
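A short sketch of the SLIC superpixel preprocessing described above, using scikit-image; the segment count, compactness, and random RGB image are illustrative assumptions.

```python
# SLIC superpixel segmentation and per-superpixel mean-color image.
import numpy as np
from skimage.segmentation import slic
from skimage.color import label2rgb

image = np.random.rand(128, 128, 3)                         # stand-in RGB MR slice
segments = slic(image, n_segments=200, compactness=10, start_label=1)
superpixel_image = label2rgb(segments, image, kind="avg")   # mean color per cluster
print("number of superpixels:", segments.max())
```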
Collapse
|
32
|
Cekic E, Pinar E, Pinar M, Dagcinar A. Deep Learning-Assisted Segmentation and Classification of Brain Tumor Types on Magnetic Resonance and Surgical Microscope Images. World Neurosurg 2024; 182:e196-e204. [PMID: 38030068 DOI: 10.1016/j.wneu.2023.11.073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Revised: 11/15/2023] [Accepted: 11/16/2023] [Indexed: 12/01/2023]
Abstract
OBJECTIVE The primary aim of this research was to harness the capabilities of deep learning to enhance neurosurgical procedures, focusing on accurate tumor boundary delineation and classification. Through advanced diagnostic tools, we aimed to offer surgeons a more insightful perspective during surgeries, improving surgical outcomes and patient care. METHODS The study deployed the Mask R-CNN (Mask Region-based Convolutional Neural Network) architecture, leveraging its sophisticated features to process and analyze data from surgical microscope videos and preoperative magnetic resonance images. ResNet101 and ResNet50 backbone networks are used with Mask R-CNN, and experimental results are reported for both. We subsequently tested its performance across various metrics, such as accuracy, precision, recall, dice coefficient (DICE), and Jaccard index. Deep learning models were trained on magnetic resonance imaging and surgical microscope images, and the classification results obtained for each patient were combined using a weighted average. RESULTS The algorithm exhibited remarkable capabilities in distinguishing among meningiomas, metastases, and high-grade glial tumors. Specifically, for the Mask R-CNN ResNet101 architecture, precision, recall, DICE, and Jaccard index values were recorded as 96%, 93%, 91%, and 84%, respectively. For the Mask R-CNN ResNet50 architecture, these values stood at 94%, 89%, 89%, and 82%. Additionally, the model achieved an impressive DICE score range of 94%-95% and an accuracy of 98% in pathology estimation. CONCLUSIONS As illustrated in our study, the confluence of deep learning with neurosurgical procedures marks a transformative phase in medical science. The results are promising but underscore the significance of diverse datasets for training and refining these deep learning models.
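A generic configuration sketch of a torchvision Mask R-CNN with a ResNet50-FPN backbone for the three tumor classes discussed above plus background; this is not the authors' training code, and the class count and dummy input are assumptions.

```python
# Mask R-CNN (ResNet50-FPN backbone) configured for 3 tumor classes + background.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=4)            # untrained detection heads
model.eval()
with torch.no_grad():
    dummy = [torch.rand(3, 512, 512)]                   # one RGB image tensor
    outputs = model(dummy)                              # boxes, labels, scores, masks
print(sorted(outputs[0].keys()))
```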
Collapse
Affiliation(s)
- Efecan Cekic
- Department of Neurosurgery, Polatli Duatepe State Hospital, Ankara, Turkey.
| | - Ertugrul Pinar
- Department of Neurosurgery, Private Pendik Yuzyil Hospital, İstanbul, Turkey
| | - Merve Pinar
- Department of Computer Engineering, Marmara University, İstanbul, Turkey
| | - Adnan Dagcinar
- Department of Neurosurgery, Marmara University, İstanbul, Turkey
| |
Collapse
|
33
|
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790 PMCID: PMC10814384 DOI: 10.3390/cancers16020300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Collapse
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain;
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
| | - Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; (G.U.); (M.J.-S.)
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| | - Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; (G.U.); (M.J.-S.)
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| | - Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain;
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| |
Collapse
|
34
|
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153 DOI: 10.1615/critrevoncog.2023050852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/01/2024]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response in the setting of often confusing imaging findings provides a unique niche for CNNs to help distinguish the treatment response to gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to separate this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNN in the field of neuro-oncology and challenges in model interpretability, data availability, and computation efficiency.
Collapse
Affiliation(s)
| | - Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, Fl 33136, USA
| | - Eric Albert Mellon
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, Fl 33136, USA
| |
Collapse
|
35
|
Rodríguez Mallma MJ, Vilca-Aguilar M, Zuloaga-Rotta L, Borja-Rosales R, Salas-Ojeda M, Mauricio D. Machine Learning Approach for Analyzing 3-Year Outcomes of Patients with Brain Arteriovenous Malformation (AVM) after Stereotactic Radiosurgery (SRS). Diagnostics (Basel) 2023; 14:22. [PMID: 38201331 PMCID: PMC10871108 DOI: 10.3390/diagnostics14010022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 12/14/2023] [Accepted: 12/17/2023] [Indexed: 01/12/2024] Open
Abstract
A cerebral arteriovenous malformation (AVM) is a tangle of abnormal blood vessels that irregularly connects arteries and veins. Stereotactic radiosurgery (SRS) has been shown to be an effective treatment for AVM patients, but the factors associated with AVM obliteration remain a matter of debate. In this study, we aimed to develop a model that predicts whether patients with AVM will be cured 36 months after SRS intervention and to identify the most important predictors of the probability of being cured. A machine learning (ML) approach was applied using decision tree (DT) and logistic regression (LR) techniques on historical data (sociodemographic, clinical, treatment, angioarchitecture, and radiosurgery procedure) of 202 patients with AVM who underwent SRS at the Instituto de Radiocirugía del Perú (IRP) between 2005 and 2018. The LR model obtained the best results for predicting AVM cure with an accuracy of 0.92, sensitivity of 0.93, specificity of 0.89, and an area under the curve (AUC) of 0.98, which shows that ML models are suitable for predicting the prognosis of medical conditions such as AVM and can be a support tool for medical decision-making. In addition, several factors were identified that best explain whether patients with AVM will be cured at 36 months: the location of the AVM, the occupation of the patient, and the presence of hemorrhage.
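A minimal sketch of the logistic-regression pipeline described above, evaluated with accuracy and AUC; the features here are synthetic stand-ins for the encoded clinical, angioarchitecture, and radiosurgery variables.

```python
# Logistic regression for binary cure prediction on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(202, 10))                       # e.g., encoded predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=202) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```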
Collapse
Affiliation(s)
| | - Marcos Vilca-Aguilar
- Instituto de Radiocirugía del Perú, Clínica San Pablo, Lima 15023, Peru
- Servicio de Neurocirugía, Hospital María Auxiliadora, Lima 15828, Peru
| | - Luis Zuloaga-Rotta
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
| | - Rubén Borja-Rosales
- Facultad de Ingeniería Industrial y de Sistemas, Universidad Nacional de Ingeniería, Lima 15333, Peru
| | | | - David Mauricio
- Universidad Nacional Mayor de San Marcos, Lima 15081, Peru
| |
Collapse
|
36
|
Ali S, Akhlaq F, Imran AS, Kastrati Z, Daudpota SM, Moosa M. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Comput Biol Med 2023; 166:107555. [PMID: 37806061 DOI: 10.1016/j.compbiomed.2023.107555] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2023] [Revised: 08/13/2023] [Accepted: 09/28/2023] [Indexed: 10/10/2023]
Abstract
In domains such as medical and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how these models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by providing feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018-2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
Collapse
Affiliation(s)
- Subhan Ali
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
| | - Filza Akhlaq
- Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan.
| | - Ali Shariq Imran
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
| | - Zenun Kastrati
- Department of Informatics, Linnaeus University, Växjö, 351 95, Sweden.
| | | | - Muhammad Moosa
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
| |
Collapse
|
37
|
B. A, Kaur M, Singh D, Roy S, Amoon M. Efficient Skip Connections-Based Residual Network (ESRNet) for Brain Tumor Classification. Diagnostics (Basel) 2023; 13:3234. [PMID: 37892055 PMCID: PMC10606037 DOI: 10.3390/diagnostics13203234] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 10/10/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023] Open
Abstract
Brain tumors pose a complex and urgent challenge in medical diagnostics, requiring precise and timely classification due to their diverse characteristics and potentially life-threatening consequences. While existing deep learning (DL)-based brain tumor classification (BTC) models have shown significant progress, they encounter limitations such as restricted depth, vanishing gradient issues, and difficulties in capturing intricate features. To address these challenges, this paper proposes an efficient skip connections-based residual network (ESRNet), which leverages the residual network (ResNet) with skip connections. ESRNet ensures smooth gradient flow during training, mitigating the vanishing gradient problem. Additionally, the ESRNet architecture includes multiple stages with increasing numbers of residual blocks for improved feature learning and pattern recognition. ESRNet utilizes residual blocks from the ResNet architecture, featuring skip connections that enable identity mapping. Through direct addition of the input tensor to the convolutional layer output within each block, skip connections preserve the gradient flow. This mechanism prevents vanishing gradients, ensuring effective information propagation across network layers during training. Furthermore, ESRNet integrates efficient downsampling techniques and stabilizing batch normalization layers, which collectively contribute to its robust and reliable performance. Extensive experimental results reveal that ESRNet significantly outperforms other approaches in terms of accuracy, sensitivity, specificity, F-score, and Kappa statistics, with median values of 99.62%, 99.68%, 99.89%, 99.47%, and 99.42%, respectively. Moreover, the achieved minimum performance metrics, including accuracy (99.34%), sensitivity (99.47%), specificity (99.79%), F-score (99.04%), and Kappa statistics (99.21%), underscore the exceptional effectiveness of ESRNet for BTC. Therefore, the proposed ESRNet showcases exceptional performance and efficiency in BTC, holding the potential to revolutionize clinical diagnosis and treatment planning.
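The following PyTorch sketch shows the identity-mapping mechanism the abstract describes: a residual block that adds the input tensor directly to the convolutional output. It is a generic ResNet-style block for illustration, not the ESRNet code.

```python
# Minimal residual block with a skip connection in PyTorch.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Skip connection: add the input tensor to the convolutional output,
        # preserving gradient flow through the identity path.
        return self.relu(self.body(x) + x)

print(ResidualBlock(64)(torch.randn(2, 64, 32, 32)).shape)
```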
Collapse
Affiliation(s)
- Ashwini B.
- Department of ISE, NMAM Institute of Technology, Nitte (Deemed to be University), Nitte 574110, India;
| | - Manjit Kaur
- School of Computer Science and Artificial Intelligence, SR University, Warangal 506371, India
| | - Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA;
- Research and Development Cell, Lovely Professional University, Phagwara 144411, India
| | - Satyabrata Roy
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur 303007, India;
| | - Mohammed Amoon
- Department of Computer Science, Community College, King Saud University, P.O. Box 28095, Riyadh 11437, Saudi Arabia
| |
Collapse
|
38
|
Raghavendra U, Gudigar A, Paul A, Goutham TS, Inamdar MA, Hegde A, Devi A, Ooi CP, Deo RC, Barua PD, Molinari F, Ciaccio EJ, Acharya UR. Brain tumor detection and screening using artificial intelligence techniques: Current trends and future perspectives. Comput Biol Med 2023; 163:107063. [PMID: 37329621 DOI: 10.1016/j.compbiomed.2023.107063] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Revised: 05/16/2023] [Accepted: 05/19/2023] [Indexed: 06/19/2023]
Abstract
A brain tumor is an abnormal mass of tissue located inside the skull. In addition to putting pressure on the healthy parts of the brain, it can lead to significant health problems. Depending on the region of the brain tumor, it can cause a wide range of health issues. As malignant brain tumors grow rapidly, the mortality rate of individuals with this cancer can increase substantially with each passing week. Hence it is vital to detect these tumors early so that preventive measures can be taken at the initial stages. Computer-aided diagnostic (CAD) systems, in coordination with artificial intelligence (AI) techniques, have a vital role in the early detection of this disorder. In this review, we studied 124 research articles published from 2000 to 2022. Here, the challenges faced by CAD systems based on different modalities are highlighted along with the current requirements of this domain and future prospects in this area of research.
Collapse
Affiliation(s)
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India.
| | - Aritra Paul
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - T S Goutham
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Mahesh Anil Inamdar
- Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Ajay Hegde
- Consultant Neurosurgeon Manipal Hospitals, Sarjapur Road, Bangalore, India
| | - Aruna Devi
- School of Education and Tertiary Access, University of the Sunshine Coast, Caboolture Campus, Australia
| | - Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore, 599494, Singapore
| | - Ravinesh C Deo
- School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia
| | - Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW, 2010, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD, 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, 2007, Australia
| | - Filippo Molinari
- Department of Electronics and Telecommunications, Politecnico di Torino, 10129, Torino, Italy
| | - Edward J Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY, 10032, USA
| | - U Rajendra Acharya
- School of Mathematics, Physics, and Computing, University of Southern Queensland, Springfield, QLD, 4300, Australia; International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, 860-8555, Japan
| |
Collapse
|
39
|
Mahum R, Sharaf M, Hassan H, Liang L, Huang B. A Robust Brain Tumor Detector Using BiLSTM and Mayfly Optimization and Multi-Level Thresholding. Biomedicines 2023; 11:1715. [PMID: 37371810 DOI: 10.3390/biomedicines11061715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 06/09/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
A brain tumor refers to an abnormal growth of cells in the brain that can be either benign or malignant. Oncologists typically use various methods such as blood or visual tests to detect brain tumors, but these approaches can be time-consuming, require additional human effort, and may not be effective in detecting small tumors. This work proposes an effective approach to brain tumor detection that combines segmentation and feature fusion. Segmentation is performed using the mayfly optimization algorithm with a multilevel Kapur's thresholding technique to locate brain tumors in MRI scans. Key features are extracted from the tumors using Histogram of Oriented Gradients (HOG) and ResNet-V2, and a bidirectional long short-term memory (BiLSTM) network is used to classify tumors into three categories: pituitary, glioma, and meningioma. The suggested methodology is trained and tested on two datasets, Figshare and Harvard, achieving high accuracy, precision, recall, F1 score, and area under the curve (AUC). The results of a comparative analysis with existing DL and ML methods demonstrate that the proposed approach offers superior outcomes. This approach has the potential to improve brain tumor detection, particularly for small tumors, but further validation and testing are needed before clinical use.
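As an illustration of the thresholding criterion behind this segmentation stage, the sketch below finds a single-level Kapur's entropy threshold by exhaustive search; the paper uses a multilevel version driven by the mayfly optimizer, which is not reproduced here.

```python
# Single-level Kapur's entropy thresholding by brute-force search.
import numpy as np

def kapur_threshold(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()   # total class entropy
        if h > best_h:
            best_t, best_h = t, h
    return best_t

img = np.random.randint(0, 256, size=(128, 128))
t = kapur_threshold(img)
print("Kapur threshold:", t, "foreground fraction:", (img >= t).mean())
```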
Collapse
Affiliation(s)
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
| | - Mohamed Sharaf
- Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
| | - Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| | - Lixin Liang
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| | - Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| |
Collapse
|
40
|
Olayah F, Senan EM, Ahmed IA, Awaji B. Blood Slide Image Analysis to Classify WBC Types for Prediction Haematology Based on a Hybrid Model of CNN and Handcrafted Features. Diagnostics (Basel) 2023; 13:diagnostics13111899. [PMID: 37296753 DOI: 10.3390/diagnostics13111899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 05/24/2023] [Accepted: 05/26/2023] [Indexed: 06/12/2023] Open
Abstract
White blood cells (WBCs) are one of the main components of blood produced by the bone marrow. WBCs are part of the immune system that protects the body from infectious diseases, and an increase or decrease in the count of any WBC type can indicate a particular disease. Thus, recognizing WBC types is essential for assessing a patient's health and identifying disease. Analyzing blood samples to determine WBC counts and types requires experienced doctors. Artificial intelligence techniques were applied to analyze blood samples and classify WBC types, helping doctors distinguish between infectious diseases associated with increased or decreased WBC counts. This study developed strategies for analyzing blood slide images to classify WBC types. The first strategy classifies WBC types with the SVM-CNN technique. The second strategy classifies WBC types with an SVM based on hybrid CNN features, in the VGG19-ResNet101-SVM, ResNet101-MobileNet-SVM, and VGG19-ResNet101-MobileNet-SVM configurations. The third strategy classifies WBC types with a feed-forward neural network (FFNN) based on a hybrid model of CNN and handcrafted features. With MobileNet and handcrafted features, the FFNN achieved an AUC of 99.43%, accuracy of 99.80%, precision of 99.75%, specificity of 99.75%, and sensitivity of 99.68%.
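A hedged sketch of the third strategy's idea: concatenate CNN-derived features with handcrafted features and train a feed-forward classifier. Both feature blocks here are random stand-ins, and scikit-learn's MLPClassifier stands in for the FFNN.

```python
# Hybrid feature concatenation + feed-forward network on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
cnn_feats = rng.normal(size=(500, 256))          # e.g., MobileNet embeddings
handcrafted = rng.normal(size=(500, 30))         # e.g., color/texture descriptors
X = np.hstack([cnn_feats, handcrafted])          # hybrid feature vector
y = rng.integers(0, 5, size=500)                 # five WBC types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ffnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
ffnn.fit(X_tr, y_tr)
print("test accuracy:", ffnn.score(X_te, y_te))
```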
Collapse
Affiliation(s)
- Fekry Olayah
- Department of Information System, Faculty Computer Science and information System, Najran University, Najran 66462, Saudi Arabia
| | - Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
| | | | - Bakri Awaji
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
| |
Collapse
|
41
|
Mostafa AM, Zakariah M, Aldakheel EA. Brain Tumor Segmentation Using Deep Learning on MRI Images. Diagnostics (Basel) 2023; 13:diagnostics13091562. [PMID: 37174953 PMCID: PMC10177460 DOI: 10.3390/diagnostics13091562] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 04/18/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023] Open
Abstract
Brain tumor (BT) diagnosis is a lengthy process that requires great skill and expertise from radiologists. As the number of patients has grown, so has the amount of data to be processed, making earlier techniques both costly and ineffective. Many researchers have examined a range of reliable and fast techniques for identifying and categorizing BTs. Recently, deep learning (DL) methods have gained popularity for building computer algorithms that can quickly and reliably diagnose or segment BTs; to identify BTs in medical images, DL permits the use of a pre-trained convolutional neural network (CNN) model. The magnetic resonance imaging (MRI) images of BTs used in this work come from the brain tumor segmentation (BraTS) dataset, which was created as a benchmark for developing and evaluating BT segmentation and diagnosis algorithms and contains 335 annotated MRI images. A deep CNN was used to segment BTs in the BraTS dataset. The model was trained with a categorical cross-entropy loss function and the Adam optimizer. Finally, the model successfully identified and segmented BTs in the dataset, attaining a validation accuracy of 98%.
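A minimal Keras sketch of the training setup named above (categorical cross-entropy loss with the Adam optimizer) is shown below; the tiny encoder-decoder architecture is purely illustrative and is not the network used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2  # e.g. background vs. tumor; illustrative choice

def tiny_seg_net(input_shape=(128, 128, 1)):
    """A deliberately small encoder-decoder used only to show the training setup."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D()(x)                                   # encoder
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.UpSampling2D()(x)                                   # decoder
    out = layers.Conv2D(NUM_CLASSES, 1, activation="softmax")(x)   # per-pixel classes
    return models.Model(inp, out)

model = tiny_seg_net()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```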
Collapse
Affiliation(s)
- Almetwally M Mostafa
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
| | - Mohammed Zakariah
- Department of Computer Science, College of Computer and Information Science, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
| | - Eman Abdullah Aldakheel
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| |
Collapse
|
42
|
Hussain S, Haider S, Maqsood S, Damaševičius R, Maskeliūnas R, Khan M. ETISTP: An Enhanced Model for Brain Tumor Identification and Survival Time Prediction. Diagnostics (Basel) 2023; 13:diagnostics13081456. [PMID: 37189556 DOI: 10.3390/diagnostics13081456] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 03/30/2023] [Accepted: 04/14/2023] [Indexed: 05/17/2023] Open
Abstract
Technology-assisted diagnosis is increasingly important in healthcare systems. Brain tumors are a leading cause of death worldwide, and treatment plans rely heavily on accurate survival predictions. Gliomas, a type of brain tumor, have particularly high mortality rates and can be further classified as low- or high-grade, making survival prediction challenging. Existing literature provides several survival prediction models that use different parameters, such as patient age, gross total resection status, tumor size, or tumor grade. However, accuracy is often lacking in these models. The use of tumor volume instead of size may improve the accuracy of survival prediction. In response to this need, we propose a novel model, the enhanced brain tumor identification and survival time prediction (ETISTP), which computes tumor volume, classifies it into low- or high-grade glioma, and predicts survival time with greater accuracy. The ETISTP model integrates four parameters: patient age, survival days, gross total resection (GTR) status, and tumor volume. Notably, ETISTP is the first model to employ tumor volume for prediction. Furthermore, our model minimizes the computation time by allowing for parallel execution of tumor volume computation and classification. The simulation results demonstrate that ETISTP outperforms prominent survival prediction models.
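Tumor volume can be obtained from a segmentation mask by counting foreground voxels and scaling by the voxel size. The NumPy sketch below is a generic illustration under an assumed voxel spacing, not the ETISTP model's exact computation.

```python
import numpy as np

def tumor_volume_ml(mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary 3-D segmentation mask in millilitres.

    voxel_spacing_mm is the (z, y, x) spacing, normally read from the scan header.
    """
    voxel_mm3 = float(np.prod(voxel_spacing_mm))          # volume of one voxel in mm^3
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # 1 mL = 1000 mm^3

# Illustrative: a 10x10x10 voxel cube at 1 mm isotropic spacing is 1 mL
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:30, 20:30, 20:30] = 1
print(tumor_volume_ml(mask))  # 1.0
```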
Collapse
Affiliation(s)
- Shah Hussain
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
| | - Shahab Haider
- Department of Computer Science, City University of Science and Information Technology, Peshawar 25000, Pakistan
| | - Sarmad Maqsood
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
| | - Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
| | - Rytis Maskeliūnas
- Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
| | - Muzammil Khan
- Department of Computer & Software Technology, University of Swat, Swat 19200, Pakistan
| |
Collapse
|
43
|
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:diagnostics13061153. [PMID: 36980463 PMCID: PMC10046932 DOI: 10.3390/diagnostics13061153] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/14/2023] [Accepted: 03/14/2023] [Indexed: 03/22/2023] Open
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. To precisely categorize brain tumors, researchers have developed a variety of segmentation algorithms; segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The proposed approach consists of several phases: pre-processing the MRI images, segmenting the images, extracting features, and classifying the images. During pre-processing of an MRI scan, an adaptive filter is utilized to eliminate background noise. For feature extraction, the local-binary grey level co-occurrence matrix (LBGLCM) is used, and for image segmentation, enhanced fuzzy c-means clustering (EFCMC) is used. After extracting the scan features, a deep learning model classifies MRI images into two groups: glioma and normal. The classification is performed by a convolutional recurrent neural network (CRNN). MRI scans from the REMBRANDT dataset, consisting of 620 testing and 2480 training images, were used for the research. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use, and outperformed them. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
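One plausible reading of the LBGLCM descriptor is a grey level co-occurrence matrix computed over an LBP-coded image; the scikit-image sketch below (assuming scikit-image >= 0.19 for the graycomatrix/graycoprops names) follows that reading and should be taken as an interpretation rather than the authors' definition.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbglcm_features(gray, P=8, R=1):
    """Texture features from a GLCM computed over an LBP-coded image."""
    lbp = local_binary_pattern(gray, P, R, method="uniform").astype(np.uint8)
    glcm = graycomatrix(lbp, distances=[1], angles=[0, np.pi / 2],
                        levels=P + 2, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Illustrative use on a random grayscale slice
rng = np.random.default_rng(0)
slice_2d = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
print(lbglcm_features(slice_2d).shape)   # (8,) -> 4 properties x 2 angles
```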
Collapse
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
| | | | - Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
| | - Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
| | - Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia
- Correspondence:
| |
Collapse
|
44
|
Hybrid Multilevel Thresholding Image Segmentation Approach for Brain MRI. Diagnostics (Basel) 2023; 13:diagnostics13050925. [PMID: 36900074 PMCID: PMC10000536 DOI: 10.3390/diagnostics13050925] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/17/2023] [Accepted: 02/21/2023] [Indexed: 03/06/2023] Open
Abstract
A brain tumor is an abnormal growth of tissue inside the skull that can interfere with the normal functioning of the neurological system and the body, and it is responsible for many deaths every year. Magnetic Resonance Imaging (MRI) techniques are widely used for the detection of brain cancers. Segmentation of brain MRI is a foundational process with numerous clinical applications in neurology, including quantitative analysis, operational planning, and functional imaging. The segmentation process classifies the pixel values of the image into different groups based on the intensity levels of the pixels and a selected threshold value, so the quality of medical image segmentation depends heavily on the method that selects the threshold values. Traditional multilevel thresholding methods are computationally expensive because they exhaustively search for the best threshold values to maximize segmentation accuracy. Metaheuristic optimization algorithms are widely used for solving such problems, but they suffer from local optima stagnation and slow convergence. In this work, these problems of the original Bald Eagle Search (BES) algorithm are resolved in the proposed Dynamic Opposite Bald Eagle Search (DOBES) algorithm by employing Dynamic Opposition Learning (DOL) in both the initialization and exploitation phases. Using the DOBES algorithm, a hybrid multilevel thresholding image segmentation approach has been developed for MRI image segmentation. The hybrid approach is divided into two phases: in the first phase, the proposed DOBES optimization algorithm performs the multilevel thresholding; after the thresholds have been selected, morphological operations are applied in the second phase to remove unwanted regions from the segmented image. The performance of the proposed DOBES-based multilevel thresholding algorithm relative to BES has been verified using five benchmark images, on which it attains higher Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) values than the BES algorithm. Additionally, the proposed hybrid multilevel thresholding segmentation approach has been compared with existing segmentation algorithms to validate its significance. The results show that the proposed algorithm performs better for tumor segmentation in MRI images, as the SSIM value attained using the proposed hybrid segmentation approach is closer to 1 when compared against ground-truth images.
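The two quality measures reported above, PSNR and SSIM, can be computed with scikit-image as in the short sketch below; the random test images merely stand in for a ground-truth image and a segmented output.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def segmentation_quality(ground_truth, segmented):
    """PSNR and SSIM between a ground-truth image and a thresholded/segmented image."""
    psnr = peak_signal_noise_ratio(ground_truth, segmented, data_range=255)
    ssim = structural_similarity(ground_truth, segmented, data_range=255)
    return psnr, ssim

# Illustrative comparison of a reference image with a slightly perturbed copy
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
noisy = np.clip(gt.astype(int) + rng.integers(-10, 10, gt.shape), 0, 255).astype(np.uint8)
print(segmentation_quality(gt, noisy))
```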
Collapse
|
45
|
Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection from MRI Images. Diagnostics (Basel) 2023; 13:diagnostics13040651. [PMID: 36832138 PMCID: PMC9955350 DOI: 10.3390/diagnostics13040651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 02/04/2023] [Accepted: 02/07/2023] [Indexed: 02/12/2023] Open
Abstract
Brain tumors have been the subject of research for many years. Brain tumors are typically classified into two main groups, benign and malignant, and the most common malignant brain tumor type is glioma. Different imaging technologies can be used in the diagnosis of glioma; among them, MRI is the most preferred due to its high-resolution image data. However, detecting gliomas from a huge set of MRI data can be challenging for practitioners. To address this, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for glioma detection. However, how efficiently a given CNN architecture performs under different conditions, including the development environment and other programming aspects, has not been studied so far. The purpose of this research work is therefore to investigate the impact of two major programming environments (namely, MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) can be highly useful for implementing CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining a high accuracy on the dataset. The authors believe that the results of this study provide useful information to the research community for the appropriate implementation of DL approaches for brain tumor detection.
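For orientation only, a deliberately small 3D U-Net-style network can be written in a few lines of Keras; the sketch below shows a single down/up level with one skip connection and is not the full 3D U-Net or V-Net evaluated in the study.

```python
from tensorflow.keras import layers, models

def tiny_unet3d(input_shape=(64, 64, 64, 1), num_classes=2):
    """One-level 3-D U-Net-style network: a single down/up path with a skip connection."""
    inp = layers.Input(shape=input_shape)
    c1 = layers.Conv3D(8, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling3D()(c1)                                  # encoder
    c2 = layers.Conv3D(16, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling3D()(c2)                                  # decoder
    m1 = layers.concatenate([u1, c1])                               # skip connection
    c3 = layers.Conv3D(8, 3, padding="same", activation="relu")(m1)
    out = layers.Conv3D(num_classes, 1, activation="softmax")(c3)   # per-voxel classes
    return models.Model(inp, out)

tiny_unet3d().summary()
```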
Collapse
|
46
|
Astolfi RS, da Silva DS, Guedes IS, Nascimento CS, Damaševičius R, Jagatheesaperumal SK, de Albuquerque VHC, Leite JAD. Computer-Aided Ankle Ligament Injury Diagnosis from Magnetic Resonance Images Using Machine Learning Techniques. SENSORS (BASEL, SWITZERLAND) 2023; 23:1565. [PMID: 36772604 PMCID: PMC9919370 DOI: 10.3390/s23031565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 01/16/2023] [Accepted: 01/28/2023] [Indexed: 06/18/2023]
Abstract
Ankle injuries involving the Anterior Talofibular Ligament (ATFL) are the most common type of ankle injury. Finding new ways to analyze these injuries with novel technologies is therefore critical for assisting medical diagnosis and reducing the subjectivity of the process. The purpose of this study is to compare the ability of specialists to diagnose lateral tibial tuberosity advancement (LTTA) injury with computer vision analysis of magnetic resonance imaging (MRI). The experiments were carried out on a database obtained from the Vue PACS-Carestream software, which contained 132 images of ATFL-injured and normal (healthy) ankles. Because there were only a few images, image augmentation techniques were used to increase the number of images in the database. Various feature extraction algorithms (GLCM, LBP, and Hu invariant moments) and classifiers such as Multi-Layer Perceptron (MLP), Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Random Forest (RF) were then applied. Based on the results of this analysis, for cases that lack clear morphologies, the method delivers a hit rate of 85.03%, an increase of 22% over the human expert-based analysis.
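A minimal scikit-image/scikit-learn sketch of the handcrafted-feature pipeline described above (an LBP histogram concatenated with Hu invariant moments, classified by a Random Forest) is given below; the random arrays stand in for labelled MRI slices, and the feature choices are a reduced subset of those in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.ensemble import RandomForestClassifier

def handcrafted_features(gray, P=8, R=1):
    """Concatenate an LBP histogram with the seven Hu invariant moments."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    hu = moments_hu(moments_normalized(moments_central(gray.astype(float))))
    return np.concatenate([hist, hu])

# Placeholder data standing in for labelled slices (ATFL injury vs. healthy).
rng = np.random.default_rng(0)
X = np.stack([handcrafted_features(rng.integers(0, 256, (64, 64)).astype(np.uint8))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))
```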
Collapse
Affiliation(s)
- Rodrigo S. Astolfi
- Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
| | - Daniel S. da Silva
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
| | - Ingrid S. Guedes
- Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
| | - Caio S. Nascimento
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
| | - Robertas Damaševičius
- Department of Software Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
| | - Senthil K. Jagatheesaperumal
- Department of Electronics and Communication Engineering, Mepco Schlenk Engineering College, Sivakasi 626005, TN, India
| | | | - José Alberto D. Leite
- Graduate Program in Surgery, Federal University of Ceará, Fortaleza 60455-970, CE, Brazil
| |
Collapse
|
47
|
Kurdi SZ, Ali MH, Jaber MM, Saba T, Rehman A, Damaševičius R. Brain Tumor Classification Using Meta-Heuristic Optimized Convolutional Neural Networks. J Pers Med 2023; 13:jpm13020181. [PMID: 36836415 PMCID: PMC9965936 DOI: 10.3390/jpm13020181] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Revised: 01/08/2023] [Accepted: 01/17/2023] [Indexed: 01/22/2023] Open
Abstract
The field of medical image processing plays a significant role in brain tumor classification, and the survival rate of patients can be increased by diagnosing the tumor at an early stage. Several automatic systems have been developed to perform the tumor recognition process. However, existing systems are not efficient enough at identifying the exact tumor region and hidden edge details with minimal computational complexity. The Harris Hawks optimized convolution network (HHOCNN) is used in this work to resolve these issues. The brain magnetic resonance (MR) images are pre-processed, and noisy pixels are eliminated to minimize the false tumor recognition rate. Then, a candidate region process is applied to identify the tumor region. The candidate region method investigates the boundary regions with the help of the line-segment concept, which reduces the loss of hidden edge details. Various features are extracted from the segmented region, which is classified by applying a convolutional neural network (CNN); the CNN computes the exact region of the tumor with fault tolerance. The proposed HHOCNN system was implemented in MATLAB, and performance was evaluated using pixel accuracy, error rate, accuracy, specificity, and sensitivity metrics. The nature-inspired Harris Hawks optimization algorithm minimizes the misclassification error rate and improves the overall tumor recognition accuracy to 98% on the Kaggle dataset.
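Two of the evaluation metrics listed above, sensitivity and specificity, can be derived from a binary confusion matrix as in the following scikit-learn sketch; the toy labels are purely illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for a binary tumor/no-tumor decision."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return sensitivity, specificity

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print(sensitivity_specificity(y_true, y_pred))  # (0.75, 0.75)
```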
Collapse
Affiliation(s)
- Sarah Zuhair Kurdi
- Medical College, Kufa University, Al.Najaf Teaching Hospital M.B.ch.B/F.I.C.M Neurosurgery, Baghdad 54001, Iraq
| | - Mohammed Hasan Ali
- Computer Techniques Engineering Department, Faculty of Information Technology, Imam Ja’afar Al-Sadiq University, Baghdad 10021, Iraq
- College of Computer Science and Mathematics, University of Kufa, Najaf 540011, Iraq
| | - Mustafa Musa Jaber
- Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad 00964, Iraq
- Department of Medical Instruments Engineering Techniques, Al-Turath University College, Baghdad 10021, Iraq
| | - Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
| | - Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
| | - Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Correspondence:
| |
Collapse
|
48
|
Attique Khan M, R. Mostafa R, Zhang YD, Baili J, Alhaisoni M, Tariq U, Ali Khan J, Jin Kim Y, Cha J. Deep-Net: Fine-Tuned Deep Neural Network Multi-Features Fusion for Brain Tumor Recognition. COMPUTERS, MATERIALS & CONTINUA 2023; 76:3029-3047. [DOI: 10.32604/cmc.2023.038838] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Accepted: 05/23/2023] [Indexed: 08/25/2024]
|
49
|
Yadav AS, Kumar S, Karetla GR, Cotrina-Aliaga JC, Arias-Gonzáles JL, Kumar V, Srivastava S, Gupta R, Ibrahim S, Paul R, Naik N, Singla B, Tatkar NS. A Feature Extraction Using Probabilistic Neural Network and BTFSC-Net Model with Deep Learning for Brain Tumor Classification. J Imaging 2022; 9:jimaging9010010. [PMID: 36662108 PMCID: PMC9865827 DOI: 10.3390/jimaging9010010] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 12/21/2022] [Accepted: 12/28/2022] [Indexed: 01/03/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combines medical image fusion, segmentation, feature extraction, and classification procedures. MATERIALS AND METHODS To reduce noise in the medical images, the hybrid probabilistic wiener filter (HPWF) is first applied as a preprocessing step. Then, to combine robust edge analysis (REA) properties in magnetic resonance imaging (MRI) and computed tomography (CT) medical images, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed; REA is used to detect the slopes and borders of the brain images. To separate the diseased region from the color image, adaptive fuzzy c-means integrated k-means (HFCMIK) clustering is then applied. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level co-occurrence matrix (GLCM) are also used. Finally, to distinguish between benign and malignant tumors, a deep learning probabilistic neural network (DLPNN) is deployed. RESULTS According to the findings, the proposed BTFSC-Net model performed better than more traditional preprocessing, fusion, segmentation, and classification techniques, reaching 99.21% segmentation accuracy and 99.46% classification accuracy. CONCLUSIONS Earlier approaches have not performed as well as the presented method for image fusion, segmentation, feature extraction, and brain tumor classification. These results illustrate that the designed approach performs more effectively, with better quantitative accuracy as well as visual performance.
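As a simplified stand-in for the fusion step, the sketch below performs classic discrete-wavelet-transform fusion of two registered slices with PyWavelets, averaging the approximation bands and keeping the larger-magnitude detail coefficients; the paper's fusion is a DLCNN-based network, so this is only a generic illustration of wavelet-domain fusion.

```python
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="db1"):
    """Fuse two registered grayscale slices (e.g. MRI and CT) in the wavelet domain."""
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a.astype(float), wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b.astype(float), wavelet)
    # Max-absolute rule for detail bands, averaging for the approximation band.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    fused = ((cA_a + cA_b) / 2.0,
             (pick(cH_a, cH_b), pick(cV_a, cV_b), pick(cD_a, cD_b)))
    return pywt.idwt2(fused, wavelet)

# Illustrative use with random arrays standing in for registered MRI and CT slices
rng = np.random.default_rng(0)
mri = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
ct = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
print(dwt_fuse(mri, ct).shape)  # (128, 128)
```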
Collapse
Affiliation(s)
- Arun Singh Yadav
- Department of Computer Science, University of Lucknow, Lucknow 226007, Uttar Pradesh, India
| | - Surendra Kumar
- Department of Computer Application, Marwadi University, Rajkot 360003, Gujrat, India
| | - Girija Rani Karetla
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Penrith, NSW 2751, Australia
| | | | - José Luis Arias-Gonzáles
- Department of Business, Pontificia Universidad Católica del Perú, Av. Universitaria 1801, San Miguel 15088, Peru
| | - Vinod Kumar
- Department of Computer Applications, ABES Engineering College, Ghaziabad 201009, Uttar Pradesh, India
| | - Satyajee Srivastava
- Department of Computer Science and Engineering, University of Engineering and Technology Roorkee, Roorkee 247667, Uttarakhand, India
| | - Reena Gupta
- Department of Pharmacognosy, Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India
| | - Sufyan Ibrahim
- Neuro-Informatics Laboratory, Department of Neurological Surgery, Mayo Clinic, Rochester, MN 55905, USA
| | - Rahul Paul
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02115, USA
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
| | - Nithesh Naik
- iTRUE (International Training and Research in Uro-Oncology and Endourology) Group, Manipal 576104, Karnataka, India
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Curiouz TechLab Private Limited, BIRAC-BioNEST, Manipal Government of Karnataka Bioincubator, Manipal 576104, Karnataka, India
- Correspondence: ; Tel.: +91-83-1087-4339
| | - Babita Singla
- Chitkara Business School, Chitkara University, Chandigarh 140401, Punjab, India
| | - Nisha S. Tatkar
- Department of Postgraduate Diploma in Management, Institute of PGDM, Mumbai Education Trust, Mumbai 400050, Maharashtra, India
| |
Collapse
|
50
|
Pattanaik BB, Anitha K, Rathore S, Biswas P, Sethy PK, Behera SK. Brain tumor magnetic resonance images classification based machine learning paradigms. Contemp Oncol (Pozn) 2022; 26:268-274. [PMID: 36816391 PMCID: PMC9933351 DOI: 10.5114/wo.2023.124612] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 12/26/2022] [Indexed: 02/04/2023] Open
Abstract
INTRODUCTION Cancer of the nervous system is one of the most common types of cancer in the world and is mostly due to the presence of a tumour in the brain. The symptoms and severity of a brain tumour depend on its location. A tumour within the brain may develop from the nerves, the dura (meningioma), the pituitary gland (pituitary adenoma), or the brain tissue itself (glioma). MATERIAL AND METHODS In this study we propose a feature engineering approach for classifying magnetic resonance imaging (MRI) scans into the 3 most common kinds of brain tumour, i.e. glioma, meningioma, and pituitary, plus the no-tumour class. Five machine learning classifiers were used, i.e. support vector machine, K-nearest neighbour (KNN), Naive Bayes, Decision Tree, and Ensemble classifiers, with their variants. RESULTS Handcrafted features such as histogram of oriented gradients, local binary pattern, and grey-level co-occurrence matrix features are extracted from the MRI scans, and a feature fusion technique is adopted to enrich the feature vector. The fine KNN classifier outperforms the other classifiers in recognizing the 4 kinds of MRI (glioma, meningioma, pituitary, and no tumour), achieving 91.1% accuracy and 0.95 area under the curve (AUC). CONCLUSIONS The proposed method, i.e. fine KNN, achieved 91.1% accuracy and 0.96 AUC. Furthermore, this model can be integrated into low-end devices, unlike deep learning, which requires a complex system.
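A compact sketch of the feature-fusion-plus-fine-KNN idea is shown below, assuming scikit-image and scikit-learn: HOG, LBP-histogram, and GLCM features are concatenated and fed to a one-neighbour KNN (the usual reading of MATLAB's "Fine KNN" preset); the random arrays stand in for labelled MRI slices.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern, graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def fused_features(gray):
    """Concatenate HOG, an LBP histogram, and GLCM statistics into one vector."""
    f_hog = hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray, 8, 1, method="uniform")
    f_lbp, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, normed=True)
    f_glcm = np.array([graycoprops(glcm, p)[0, 0]
                       for p in ("contrast", "homogeneity", "energy", "correlation")])
    return np.concatenate([f_hog, f_lbp, f_glcm])

# Placeholder data standing in for the four MRI classes.
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.integers(0, 256, (64, 64)).astype(np.uint8))
              for _ in range(40)])
y = rng.integers(0, 4, size=40)

fine_knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)   # "fine" KNN: single neighbour
print(fine_knn.score(X, y))
```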
Collapse
Affiliation(s)
| | - Komma Anitha
- ECE Department, PVP Siddhartha Institute of Technology, Vijayawada, India
| | - Shanti Rathore
- Department of ET and T, Dr. C. V. Raman University Bilaspur Chhattisgarh, India
| | - Preesat Biswas
- Department of ET and T, GEC Jagdalpur, Chhattisgarh, India
| | | | | |
Collapse
|