1
Bouhafra S, El Bahi H. Deep Learning Approaches for Brain Tumor Detection and Classification Using MRI Images (2020 to 2024): A Systematic Review. J Imaging Inform Med 2025; 38:1403-1433. [PMID: 39349785] [DOI: 10.1007/s10278-024-01283-8]
Abstract
A brain tumor is a disease caused by uncontrolled cell proliferation in the brain, leading to serious health issues such as memory loss and motor impairment. Early diagnosis of brain tumors therefore plays a crucial role in extending patient survival. Given the heavy workload of radiologists and the aim of reducing the likelihood of false diagnoses, advancing technologies, including computer-aided diagnosis and artificial intelligence, play an important role in assisting radiologists. In recent years, a number of deep learning-based methods have been applied to brain tumor detection and classification using MRI images and have achieved promising results. The main objective of this paper is to present a detailed review of previous research in this field. In addition, this work summarizes the existing limitations and significant highlights. The study systematically reviews 60 research articles published between 2020 and January 2024, extensively covering methods such as transfer learning, autoencoders, transformers, and attention mechanisms. The key findings formulated in this paper provide an analytic comparison and future directions. The review aims to provide a comprehensive understanding of automatic techniques that may be useful for professionals and academic communities working on brain tumor classification and detection.
Affiliation(s)
- Sara Bouhafra
- Faculty of Sciences and Techniques, Department of Computer Science, L2IS Laboratory, Cadi Ayyad University, Marrakesh, Morocco.
- Hassan El Bahi
- Faculty of Sciences and Techniques, Department of Computer Science, L2IS Laboratory, Cadi Ayyad University, Marrakesh, Morocco

2
Awasthi V, Tiwari M, Yadav A, Thakur G, Panda MM, Kumar H, Tripathi S. Optimizing brain tumor detection in MRI scans through InceptionResNetV2 and deep stacked Autoencoders with SwiGLU activation and sparsity regularization. MethodsX 2025; 14:103255. [PMID: 40144141] [PMCID: PMC11938147] [DOI: 10.1016/j.mex.2025.103255]
Abstract
This study presents an automated framework for brain tumor classification aimed at accurately distinguishing tumor types in MRI images. The proposed model integrates InceptionResNetV2 for feature extraction with Deep Stacked Autoencoders (DSAEs) for classification, enhanced by sparsity regularization and the SwiGLU activation function. InceptionResNetV2, pre-trained on ImageNet, was fine-tuned to extract multi-scale features, while the DSAE structure compressed these features to highlight critical attributes essential for classification. The approach achieved high performance, reaching an overall accuracy of 99.53%, precision of 98.27%, recall of 99.21%, specificity of 98.73%, and an F1-score of 98.74%. These results demonstrate the model's efficacy in accurately categorizing glioma, meningioma, pituitary tumors, and non-tumor cases, with minimal misclassifications. Despite its success, limitations include the model's dependency on pre-trained weights and significant computational resources. Future studies should address these limitations by enhancing interpretability, exploring domain-specific transfer learning, and validating on diverse datasets to strengthen the model's utility in real-world settings. Overall, the InceptionResNetV2 integrated with DSAEs, sparsity regularization, and SwiGLU offers a promising solution for reliable and efficient brain tumor diagnosis in clinical environments.
- Leveraging a pre-trained InceptionResNetV2 model to capture multi-scale features from MRI data.
- Utilizing Deep Stacked Autoencoders with sparsity regularization to emphasize critical attributes for precise classification.
- Incorporating the SwiGLU activation function to capture complex, non-linear patterns within the data.
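As a rough illustration of the pipeline this abstract describes, the sketch below pairs an ImageNet-pretrained InceptionResNetV2 feature extractor with a stacked, sparsity-regularized encoder head and a SwiGLU activation in Keras. The layer widths, the L1 penalty, and the use of a plain encoder stack in place of fully pre-trained deep stacked autoencoders are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch (assumptions: 299x299 inputs, 4 classes, illustrative layer widths).
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

class SwiGLU(layers.Layer):
    """SwiGLU activation: swish(xW) * (xV), built from two dense projections."""
    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.gate = layers.Dense(units)
        self.value = layers.Dense(units)

    def call(self, x):
        return tf.nn.silu(self.gate(x)) * self.value(x)

def build_model(num_classes=4):
    # Pre-trained backbone for multi-scale feature extraction (ImageNet weights).
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet", pooling="avg",
        input_shape=(299, 299, 3))
    # Stacked, progressively narrower encoder layers; L1 activity regularization
    # stands in for the paper's sparsity penalty.
    x = backbone.output
    for units in (1024, 512, 256):
        x = layers.Dense(units, activity_regularizer=regularizers.l1(1e-5))(x)
        x = SwiGLU(units)(x)
        x = layers.Dropout(0.3)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(backbone.input, out)

model = build_model()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```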
Affiliation(s)
- Vishal Awasthi
- Department of Electronics and Communication Engineering, Chhatrapati Shahu Ji Maharaj University, Kanpur, India
- Mamta Tiwari
- Department of Computer Application, Chhatrapati Shahu Ji Maharaj University, Kanpur, India
- Amit Yadav
- Department of Computer Application, PSIT College of Higher Education, Kanpur, India
- Mamata Mayee Panda
- Department of Computer Application, Chhatrapati Shahu Ji Maharaj University, Kanpur, India
- Hemant Kumar
- Department of Information Technology, Chhatrapati Shahu Ji Maharaj University, Kanpur, India
- Shivneet Tripathi
- Department of Computer Application, Chhatrapati Shahu Ji Maharaj University, Kanpur, India

3
Nisa ZU, Bhatti SM, Jaffar A, Mazhar T, Shahzad T, Ghadi YY, Almogren A, Hamam H. Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection. Comput Biol Med 2025; 193:110375. [PMID: 40424764] [DOI: 10.1016/j.compbiomed.2025.110375]
Abstract
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics like accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, ensuring more reliable tumor classification. We evaluate CNN, ResNet50, XceptionNet, and a Proposed Model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the Proposed Model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to bridge the shortage of medical professionals by integrating reliable diagnostic tools in healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
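The paper's central point, that two models with similar accuracy can differ sharply in prediction certainty as measured by loss, can be illustrated with a small framework-agnostic sketch; the toy probabilities below are invented purely for demonstration and are not data from the study.

```python
# Minimal sketch: accuracy alone vs. accuracy plus mean cross-entropy loss as a
# certainty proxy (lower loss -> more confident, more reliable predictions).
import numpy as np

def certainty_report(probs: np.ndarray, labels: np.ndarray, eps: float = 1e-12):
    preds = probs.argmax(axis=1)
    accuracy = (preds == labels).mean()
    # Per-sample cross-entropy of the true class, averaged over the set.
    true_probs = probs[np.arange(len(labels)), labels]
    loss = -np.log(np.clip(true_probs, eps, 1.0)).mean()
    return {"accuracy": float(accuracy), "mean_loss": float(loss)}

# Example: a confident model and a hesitant one with identical accuracy.
labels = np.array([0, 1, 1, 0])
confident = np.array([[0.95, 0.05], [0.05, 0.95], [0.10, 0.90], [0.90, 0.10]])
hesitant = np.array([[0.55, 0.45], [0.45, 0.55], [0.48, 0.52], [0.52, 0.48]])
print(certainty_report(confident, labels))  # same accuracy, much lower loss
print(certainty_report(hesitant, labels))
```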
Affiliation(s)
- Zaib Un Nisa
- Department of Computer Science and Information Technology, The Superior University, Lahore, 54600, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, 54600, Pakistan.
- Sohail Masood Bhatti
- Department of Computer Science and Information Technology, The Superior University, Lahore, 54600, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, 54600, Pakistan.
- Arfan Jaffar
- Department of Computer Science and Information Technology, The Superior University, Lahore, 54600, Pakistan; Intelligent Data Visual Computing Research (IDVCR), Lahore, 54600, Pakistan.
- Tehseen Mazhar
- School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan; Department of Computer Science, School Education Department, Government of Punjab, Layyah, 31200, Pakistan.
- Tariq Shahzad
- Department of Computer Engineering, COMSATS University Islamabad, Sahiwal Campus, Sahiwal, 57000, Pakistan.
- Yazeed Yasin Ghadi
- Department of Computer Science and Software Engineering, Al Ain University, Abu Dhabi, 12555, United Arab Emirates.
- Ahmad Almogren
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, 11633, Saudi Arabia.
- Habib Hamam
- School of Electrical Engineering, University of Johannesburg, Johannesburg, 2006, South Africa; International Institute of Technology and Management (IITG), Av. Grandes Ecoles, Libreville, BP 1989, Gabon; College of Computer Science and Eng. (Invited Prof.), University of Ha'il, Ha'il, 55476, Saudi Arabia; Faculty of Engineering, Uni de Moncton, Moncton, NB, E1A3E9, Canada.

4
Ghose P, Jamil HM. BrainView: A Cloud-based Deep Learning System for Brain Image Segmentation, Tumor Detection and Visualization. Biomed J 2025:100871. [PMID: 40409506] [DOI: 10.1016/j.bj.2025.100871]
Abstract
A brain tumor is an abnormal growth in the brain that disrupts its functionality and poses a significant threat to human life by damaging neurons. Early detection and classification of brain tumors are crucial to prevent complications and maintain good health. Recent advancements in deep learning techniques have shown immense potential in image classification and segmentation for tumor identification and classification. In this study, we present a platform, BrainView, for detection and segmentation of brain tumors from Magnetic Resonance Images (MRI) using deep learning. We utilized the pre-trained EfficientNetB7 model to design our proposed DeepBrainNet classification model, which analyzes brain MRI images to classify tumor type. We also proposed an EfficientNetB7-based image segmentation model, called EffB7-UNet, for tumor localization. Experimental results show significantly high classification (99.96%) and segmentation (92.734%) accuracies for our proposed models. Finally, we discuss the contours of a cloud application for BrainView using Flask and Flutter to help researchers and clinicians use our machine learning models online for research purposes.
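A minimal sketch of the kind of cloud inference endpoint the abstract outlines, using Flask to serve a Keras EfficientNetB7-based classifier. The model file name, class list, and 600x600 input size are assumptions for illustration, not details taken from BrainView.

```python
# Minimal sketch of a cloud-style prediction endpoint (hypothetical model file
# "deepbrainnet.h5" with four output classes; class names are placeholders).
import io
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
CLASSES = ["glioma", "meningioma", "no_tumor", "pituitary"]
model = tf.keras.models.load_model("deepbrainnet.h5")  # hypothetical file name

@app.route("/predict", methods=["POST"])
def predict():
    # Read the uploaded MRI slice, resize, and preprocess for EfficientNet.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = np.array(img.resize((600, 600)), dtype=np.float32)[None, ...]
    x = tf.keras.applications.efficientnet.preprocess_input(x)
    probs = model.predict(x)[0]
    return jsonify({"prediction": CLASSES[int(probs.argmax())],
                    "probabilities": {c: float(p) for c, p in zip(CLASSES, probs)}})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```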
Affiliation(s)
- Partho Ghose
- Department of Biological and Agricultural Engineering, Texas A&M University, College Station, Texas, USA.
- Hasan M Jamil
- Department of Computer Science, University of Idaho, Moscow, Idaho, USA.

5
Alom MR, Farid FA, Rahaman MA, Rahman A, Debnath T, Miah ASM, Mansor S. An explainable AI-driven deep neural network for accurate breast cancer detection from histopathological and ultrasound images. Sci Rep 2025; 15:17531. [PMID: 40394112] [PMCID: PMC12092800] [DOI: 10.1038/s41598-025-97718-5]
Abstract
Breast cancer represents a significant global health challenge, which makes it essential to detect breast cancer early and accurately to improve patient prognosis and reduce mortality rates. However, traditional diagnostic processes relying on manual analysis of medical images are inherently complex and subject to variability between observers, highlighting the urgent need for robust automated breast cancer detection systems. While deep learning has demonstrated potential, many current models struggle with limited accuracy and a lack of interpretability. This research introduces the Deep Neural Breast Cancer Detection (DNBCD) model, an explainable AI-based framework that utilizes deep learning methods for classifying breast cancer using histopathological and ultrasound images. The proposed model employs DenseNet121 as a foundation, integrating customized Convolutional Neural Network (CNN) layers, including GlobalAveragePooling2D, Dense, and Dropout layers, along with transfer learning to achieve both high accuracy and interpretability for breast cancer diagnosis. The proposed DNBCD model integrates several preprocessing techniques, including image normalization and resizing, and augmentation techniques to enhance the model's robustness and address class imbalance using class weights. It employs Grad-CAM (Gradient-weighted Class Activation Mapping) to offer visual justifications for its predictions, increasing trust and transparency among healthcare providers. The model was assessed using two benchmark datasets, Breakhis-400x (B-400x) and the Breast Ultrasound Images Dataset (BUSI), containing 1820 and 1578 images, respectively. We systematically divided the datasets into training (70%), testing (20%), and validation (10%) sets, ensuring efficient model training and evaluation, and obtained accuracies of 93.97% on the B-400x dataset (benign and malignant classes) and 89.87% on the BUSI dataset (benign, malignant, and normal classes) for breast cancer detection. Experimental results demonstrate that the proposed DNBCD model significantly outperforms existing state-of-the-art approaches, with potential uses in clinical environments. We also made all materials publicly accessible to the research community at https://github.com/romzanalom/XAI-Based-Deep-Neural-Breast-Cancer-Detection.
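Since the abstract highlights Grad-CAM explanations on a DenseNet121 backbone, the following hedged sketch shows the standard Grad-CAM computation in Keras; the layer name "conv5_block16_concat" refers to the last convolutional block of the stock keras.applications DenseNet121 and may differ in the customized DNBCD model, whose authoritative implementation is the authors' repository.

```python
# Minimal Grad-CAM sketch for a Keras DenseNet121-based classifier (assumption:
# the model is a functional Keras model that still contains the named layer).
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name="conv5_block16_concat"):
    """Return a heatmap highlighting image regions that drive the top prediction."""
    grad_model = tf.keras.Model(
        model.input, [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_idx = int(tf.argmax(preds[0]))
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)            # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))      # global-average-pool the grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)[0]
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()  # upscale to the input size before overlaying on the image
```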
Affiliation(s)
- Md Romzan Alom
- Department of Computer Science and Engineering, Green University of Bangladesh (GUB), Purbachal American City, Kanchon, Dhaka, 1460, Bangladesh
- Fahmid Al Farid
- Faculty of Artificial Intelligence and Engineering, Multimedia University, 63100, Cyberjaya, Malaysia
- Muhammad Aminur Rahaman
- Department of Computer Science and Engineering, Green University of Bangladesh (GUB), Purbachal American City, Kanchon, Dhaka, 1460, Bangladesh.
- Anichur Rahman
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, 1350, Bangladesh.
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh.
- Tanoy Debnath
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Abu Saleh Musa Miah
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology (BAUST), Nilphamari, Bangladesh
- Sarina Mansor
- Faculty of Artificial Intelligence and Engineering, Multimedia University, 63100, Cyberjaya, Malaysia.

6
Soundarya B, Poongodi C. A novel hybrid feature fusion approach using handcrafted features with transfer learning model for enhanced skin cancer classification. Comput Biol Med 2025; 190:110104. [PMID: 40168807] [DOI: 10.1016/j.compbiomed.2025.110104]
Abstract
Skin cancer is a deadly disease and has among the fastest-rising incidence rates globally. It arises from aberrant skin cells, which are often caused by prolonged exposure to ultraviolet rays from sunlight or artificial tanning devices. Dermatologists rely on visual inspection to identify suspicious lesions. Prompt and accurate diagnosis is pivotal for effective treatment and enhancing the chances of recovery. Recently, machine and deep learning algorithms have been utilised for skin cancer prediction and early detection. This study presents a novel hybrid feature extraction approach that fuses handcrafted features with a deep learning model for dermoscopic image analysis. Skin lesion images from sources like ISIC were pre-processed. Features were extracted using the Grey-Level Co-Occurrence Matrix (GLCM), the Redundant Discrete Wavelet Transform (RDWT), and various pre-trained models. After evaluating all the combinations, the proposed feature fusion model performed better than all other models. The proposed feature fusion model combines GLCM, RDWT, and DenseNet121 features, which were evaluated with various classifiers; an impressive accuracy of 93.46% was obtained with the XGBoost classifier and 94.25% with the ensemble classifier. This study underscores the efficacy of integrating diverse feature extraction techniques to increase the reliability and effectiveness of skin cancer diagnosis.
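The hybrid feature-fusion idea, concatenating GLCM texture features, redundant (stationary) wavelet statistics, and DenseNet121 embeddings before a boosted-tree classifier, can be sketched as follows; the specific GLCM properties, wavelet, and XGBoost settings are illustrative assumptions rather than the paper's tuned choices.

```python
# Minimal feature-fusion sketch (assumes 224x224 RGB lesion images scaled 0-255).
import numpy as np
import pywt
import tensorflow as tf
from skimage.feature import graycomatrix, graycoprops
from xgboost import XGBClassifier

cnn = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                        pooling="avg", input_shape=(224, 224, 3))

def glcm_features(gray_u8):
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

def rdwt_features(gray):
    # Stationary (redundant) wavelet transform; summary statistics per subband.
    cA, (cH, cV, cD) = pywt.swt2(gray.astype(np.float32), wavelet="db2", level=1)[0]
    return np.array([f(b) for b in (cA, cH, cV, cD) for f in (np.mean, np.std)])

def deep_features(rgb):
    x = tf.keras.applications.densenet.preprocess_input(rgb[None].astype(np.float32))
    return cnn.predict(x, verbose=0).ravel()

def fused_features(rgb_224):
    gray = np.uint8(rgb_224.mean(axis=-1))
    return np.concatenate([glcm_features(gray), rdwt_features(gray),
                           deep_features(rgb_224)])

# Classification on the fused vectors (X: stacked feature rows, y: lesion labels).
# clf = XGBClassifier(n_estimators=300, learning_rate=0.05).fit(X_train, y_train)
```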
Affiliation(s)
- B Soundarya
- Bannari Amman Institute of Technology Sathyamangalam, India.
- C Poongodi
- Bannari Amman Institute of Technology Sathyamangalam, India.

7
Chen W, Liu J, Tan X, Zhang J, Du G, Fu Q, Jiang H. EnSLDe: an enhanced short-range and long-range dependent system for brain tumor classification. Front Oncol 2025; 15:1512739. [PMID: 40291907] [PMCID: PMC12021619] [DOI: 10.3389/fonc.2025.1512739]
Abstract
Introduction: Brain tumors pose significant harm to the functionality of the human nervous system. Many models can classify brain tumor types; however, the available methods pay little attention to long-range information, which limits further improvements in accuracy.
Methods: To solve this problem, an enhanced short-range and long-range dependent system for brain tumor classification, named EnSLDe, is proposed. The EnSLDe model consists of three main modules: the Feature Extraction Module (FExM), the Feature Enhancement Module (FEnM), and the Classification Module. First, the FExM extracts features, and a multi-scale parallel subnetwork is constructed to fuse shallow and deep features. Then, the extracted features are enhanced by the FEnM, which captures important dependencies across a larger sequence range while retaining critical information at a local scale. Finally, the fused and enhanced features are input to the Classification Module for brain tumor classification. The combination of these modules enables the efficient extraction of both local and global contextual information.
Results: The model was validated on two public datasets including glioma, meningioma, and pituitary tumor, and good experimental results were obtained, demonstrating the potential of EnSLDe for brain tumor classification.
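The abstract does not give layer-level details of the FExM and FEnM, so the sketch below shows only one generic way to combine a short-range convolutional stage with a long-range self-attention stage over the same feature map; it should be read as an illustration of the local-plus-global idea, not as the EnSLDe architecture.

```python
# Illustrative sketch only (channel count, head count, and wiring are assumptions).
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.local = nn.Sequential(                      # short-range features
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                                # x: (B, C, H, W)
        x = self.local(x)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)               # (B, H*W, C) token sequence
        attn_out, _ = self.attn(seq, seq, seq)           # long-range dependencies
        seq = self.norm(seq + attn_out)                  # residual enhancement
        return seq.transpose(1, 2).reshape(b, c, h, w)

# Example: enhance a 64-channel feature map before a classification head.
feats = torch.randn(2, 64, 28, 28)
print(LocalGlobalBlock()(feats).shape)                   # torch.Size([2, 64, 28, 28])
```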
Affiliation(s)
- Wenna Chen
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Junqiang Liu
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Xinghua Tan
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Jincan Zhang
- College of Information Engineering, Henan University of Science and Technology, Luoyang, China
- Ganqin Du
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Qizhi Fu
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China
- Hongwei Jiang
- The First Affiliated Hospital, and College of Clinical Medicine of Henan University of Science and Technology, Luoyang, China

8
Ahmed F, Sharma A, Shatabda S, Dehzangi I. DeepPhoPred: Accurate Deep Learning Model to Predict Microbial Phosphorylation. Proteins 2025; 93:465-481. [PMID: 39239684] [DOI: 10.1002/prot.26734]
Abstract
Phosphorylation is a substantial posttranslational modification of proteins that refers to adding a phosphate group to the amino acid side chain after the translation process in the ribosome. It is vital for coordinating cellular functions, such as regulating metabolism, proliferation, apoptosis, subcellular trafficking, and other crucial physiological processes. Phosphorylation prediction in a microbial organism can assist in understanding pathogenesis and host-pathogen interaction, drug and antibody design, and antimicrobial agent development. Experimental methods for predicting phosphorylation sites are costly, slow, and tedious. Hence, low-cost and high-speed computational approaches are highly desirable. This paper presents a new deep learning tool called DeepPhoPred for predicting microbial phospho-serine (pS), phospho-threonine (pT), and phospho-tyrosine (pY) sites. DeepPhoPred incorporates a two-headed convolutional neural network architecture with squeeze-and-excitation blocks followed by fully connected layers that jointly learn significant features from the peptide's structural and evolutionary information to predict phosphorylation sites. Our empirical results demonstrate that DeepPhoPred significantly outperforms the existing microbial phosphorylation site predictors with its highly efficient deep learning architecture. DeepPhoPred as a standalone predictor, all its source code, and the employed datasets are publicly available at https://github.com/faisalahm3d/DeepPhoPred.
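A hedged sketch of a two-headed convolutional predictor with squeeze-and-excitation blocks, mirroring the architecture family the abstract describes; the peptide window length, channel widths, and input encodings are placeholders, and the authoritative implementation is the authors' GitHub repository.

```python
# Illustrative two-headed SE-CNN (window length 21 and feature widths are assumed).
import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    """Squeeze-and-excitation: re-weight channels with a learned gating vector."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, L)
        w = self.fc(x.mean(dim=2))               # squeeze over sequence length
        return x * w.unsqueeze(-1)               # excite (channel re-weighting)

def head(in_ch):
    return nn.Sequential(nn.Conv1d(in_ch, 64, kernel_size=3, padding=1), nn.ReLU(),
                         SEBlock1d(64), nn.AdaptiveAvgPool1d(1), nn.Flatten())

class TwoHeadedPredictor(nn.Module):
    def __init__(self, struct_ch=8, evo_ch=20):
        super().__init__()
        self.struct_head, self.evo_head = head(struct_ch), head(evo_ch)
        self.classifier = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                        nn.Linear(64, 1))  # phosphosite logit

    def forward(self, structural, evolutionary):
        z = torch.cat([self.struct_head(structural),
                       self.evo_head(evolutionary)], dim=1)
        return self.classifier(z)

model = TwoHeadedPredictor()
logit = model(torch.randn(4, 8, 21), torch.randn(4, 20, 21))  # (batch, channels, window)
```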
Affiliation(s)
- Faisal Ahmed
- Digital Health Unit, NVISION Systems and Technologies SL, Barcelona, Spain
- Department of Computer Engineering and Mathematics, Universitat Rovira i Virgili, Tarragona, Spain
- Alok Sharma
- Laboratory of Medical Science Mathematics, Department of Biological Sciences, Graduate School of Science, The University of Tokyo, Tokyo, Japan
- Institute for Integrated and Intelligent Systems, Griffith University, Brisbane, Queensland, Australia
- College of Informatics, Korea University, Seoul, South Korea
- Laboratory for Medical Science Mathematics, RIKEN Center for Integrative Medical Sciences, Japan
- Swakkhar Shatabda
- Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh
- Iman Dehzangi
- Department of Computer Science, Rutgers University, Camden, New Jersey, USA
- Center for Computational and Integrative Biology (CCIB), Rutgers University, Camden, New Jersey, USA

9
Ragab M, Katib I, Sharaf SA, Alterazi HA, Subahi A, Alattas SG, Binyamin SS, Alyami J. Automated brain tumor recognition using equilibrium optimizer with deep learning approach on MRI images. Sci Rep 2024; 14:29448. [PMID: 39604452] [PMCID: PMC11603070] [DOI: 10.1038/s41598-024-80888-z]
Abstract
Brain tumours (BT) affect human health owing to their location. Artificial intelligence (AI) is intended to assist in diagnosing and treating complex diseases by combining technologies like deep learning (DL), big data analytics, and machine learning (ML). AI can identify and categorize tumours by analyzing brain imaging modalities such as Magnetic Resonance Imaging (MRI). The medical sector has been rapidly transformed by evolving technology, and AI is an essential element of this transformation. AI models can determine a tumour's class, size, aggressiveness, and location. This assists medical doctors in making more precise diagnoses and treatment plans and helps patients better understand their health. AI is also used to track patients' progress through treatment, and AI-based analytics is used to predict potential tumour recurrence and assess treatment response. This study presents a Brain Tumor Recognition using an Equilibrium Optimizer with a Deep Learning Approach (BTR-EODLA) technique for MRI images. The BTR-EODLA technique intends to recognize whether or not a BT is present. In the BTR-EODLA technique, median filtering (MF) is deployed to eliminate the noise in the input MRI. In addition, the squeeze-excitation ResNet (SE-ResNet50) model is applied to derive feature vectors, and its parameters are fine-tuned using the equilibrium optimizer (EO). The BTR-EODLA technique utilizes the stacked autoencoder (SAE) model for BT detection. A series of experiments was performed to verify the improved performance of the BTR-EODLA technique, and the experimental validation demonstrated a superior accuracy of 98.78% over existing models.
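The preprocessing and feature-extraction stages of the BTR-EODLA pipeline can be sketched as below, using scipy median filtering and the timm "seresnet50" model as a stand-in for SE-ResNet50; the equilibrium-optimizer parameter search and the stacked-autoencoder detector are omitted, so this is only a partial, assumed reconstruction.

```python
# Partial sketch: median-filter denoising followed by SE-ResNet50 feature extraction.
import numpy as np
import timm
import torch
from scipy.ndimage import median_filter

backbone = timm.create_model("seresnet50", pretrained=True, num_classes=0).eval()

def extract_features(mri_slice: np.ndarray) -> np.ndarray:
    """Denoise a single MRI slice, then embed it as a feature vector for the
    downstream stacked-autoencoder detector (not shown here)."""
    denoised = median_filter(mri_slice.astype(np.float32), size=3)
    x = torch.from_numpy(denoised)[None, None].repeat(1, 3, 1, 1)       # 1x3xHxW
    x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
    x = (x - x.mean()) / (x.std() + 1e-8)       # simple intensity normalization
    with torch.no_grad():
        return backbone(x).squeeze(0).numpy()
```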
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia.
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Sanaa A Sharaf
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Hassan A Alterazi
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Alanoud Subahi
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Rabigh, 25732, Saudi Arabia
- Sana G Alattas
- Biological Sciences Department, College of Science, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Sami Saeed Binyamin
- Computer and Information Technology Department, The Applied College, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Jaber Alyami
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- King Fahd Medical Research Center, Smart Medical Imaging Research Group, King Abdulaziz University, Jeddah, 21589, Saudi Arabia

10
Rustom F, Moroze E, Parva P, Ogmen H, Yazdanbakhsh A. Deep learning and transfer learning for brain tumor detection and classification. Biol Methods Protoc 2024; 9:bpae080. [PMID: 39659666] [PMCID: PMC11631523] [DOI: 10.1093/biomethods/bpae080]
Abstract
Convolutional neural networks (CNNs) are powerful tools that can be trained on image classification tasks and share many structural and functional similarities with biological visual systems and mechanisms of learning. In addition to serving as a model of biological systems, CNNs possess the convenient feature of transfer learning where a network trained on one task may be repurposed for training on another, potentially unrelated, task. In this retrospective study of public domain MRI data, we investigate the ability of neural network models to be trained on brain cancer imaging data while introducing a unique camouflage animal detection transfer learning step as a means of enhancing the networks' tumor detection ability. Training on glioma and normal brain MRI data, post-contrast T1-weighted and T2-weighted, we demonstrate the potential success of this training strategy for improving neural network classification accuracy. Qualitative metrics such as feature space and DeepDreamImage analysis of the internal states of trained models were also employed, which showed improved generalization ability by the models following camouflage animal transfer learning. Image saliency maps further this investigation by allowing us to visualize the most important image regions from a network's perspective while learning. Such methods demonstrate that the networks not only 'look' at the tumor itself when deciding, but also at the impact on the surrounding tissue in terms of compressions and midline shifts. These results suggest an approach to brain tumor MRIs that is comparable to that of trained radiologists while also exhibiting a high sensitivity to subtle structural changes resulting from the presence of a tumor.
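The two-stage transfer-learning strategy, fine-tuning first on an intermediate camouflage-animal task and then on brain MRI, could be wired up roughly as follows; the ResNet50 backbone, class counts, and dataset names are stand-ins, since the study's exact networks and training schedules are not reproduced here.

```python
# Hedged sketch of sequential (two-stage) transfer learning with a shared backbone.
import tensorflow as tf

def build(base, n_classes):
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    m = tf.keras.Model(base.input, out)
    m.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return m

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3))

# Stage 1: intermediate fine-tuning on a camouflage-animal detection task.
camo_model = build(base, n_classes=2)
# camo_model.fit(camouflage_dataset, epochs=10)      # placeholder dataset

# Stage 2: reuse the adapted backbone for tumor vs. normal MRI classification.
tumor_model = build(base, n_classes=2)
# tumor_model.fit(brain_mri_dataset, epochs=10)      # placeholder dataset
```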
Affiliation(s)
- Faris Rustom
- Computational Neuroscience and Vision Lab, Neuroscience Program, Boston University, Boston, MA, 02215, USA
- Ezekiel Moroze
- Computational Neuroscience and Vision Lab, Neuroscience Program, Boston University, Boston, MA, 02215, USA
- Pedram Parva
- Department of Radiology, VA Boston Healthcare System, Boston, MA, 02132, USA
- Boston University Chobanian & Avedisian School of Medicine, Boston, MA, 02118, USA
- Harvard Medical School, Boston, MA, 02115, USA
- Haluk Ogmen
- Department of Electrical & Computer Engineering, Laboratory of Perceptual & Cognitive Dynamics, University of Denver, Denver, CO, 80208, United States
- Arash Yazdanbakhsh
- Department of Psychological and Brain Sciences, Computational Neuroscience and Vision Lab, Center for Systems Neuroscience, and Program for Neuroscience, Boston University, Boston, MA, 02215, United States

11
Rajesh R U, Sangeetha D. Therapeutic potentials and targeting strategies of quercetin on cancer cells: Challenges and future prospects. Phytomedicine 2024; 133:155902. [PMID: 39059266] [DOI: 10.1016/j.phymed.2024.155902]
Abstract
BACKGROUND: Every cell in the human body is vital because it maintains equilibrium and carries out a variety of tasks, including growth and development. These activities are carried out by a set of instructions carried by many different genes and organized into DNA. It is well recognized that some lifestyle choices, such as using tobacco or alcohol, UV exposure, or having multiple sexual partners, can increase one's risk of developing cancer. The advantages of natural products for many health issues are well known, and researchers are attempting to isolate flavonoid-containing substances from plants. Various parts of plants contain phenolic compounds called flavonoids. Quercetin, which belongs to the class of compounds known as flavones with a chromone skeletal structure, has anti-cancer activity.
PURPOSE: The study aimed to investigate the therapeutic action of the flavonoid quercetin on various cancer cells.
METHODS: The phrases quercetin, anti-cancer, nanoparticles, and cell line were used to search the data using online resources such as PubMed and Google Scholar. Several critical previous studies have been included.
RESULTS: To exert its anticancer effects, quercetin inhibits various dysregulated signaling pathways, causing cancer cells to undergo apoptosis. Numerous signaling pathways are impacted by quercetin, such as the Hedgehog system, Akt, the NF-κB pathway, downregulation of mutant p53, JAK/STAT, G1 phase arrest, Wnt/β-Catenin, and MAPK. Quercetin has downsides, such as hydrophobicity, the first-pass effect, and instability in the gastrointestinal tract, because of which it is not well established in the pharmaceutical industry. A future solution to these drawbacks is the use of bio-nanomaterials such as chitosan, PLGA, liposomes, and silk fibroin as carriers, which can enhance the target specificity of quercetin. The first section of this review covers the specifics of flavonoids and quercetin; the second section covers the anti-cancer activity of quercetin; and the third section explains quercetin's drawbacks and its conjugation with nanoparticles for drug delivery to overcome them.
CONCLUSIONS: Overall, this review presents details about quercetin, a plant derivative with a promising molecular mechanism of action. Such compounds inhibit cancer by various mechanisms with little or no side effects. It is anticipated that plant-based materials will become increasingly relevant in the treatment of cancer.
Affiliation(s)
- Udaya Rajesh R
- Department of Chemistry, School of Advanced Science, Vellore Institute of Technology, Vellore, 632014 Tamil Nadu, India
- Dhanaraj Sangeetha
- Department of Chemistry, School of Advanced Science, Vellore Institute of Technology, Vellore, 632014 Tamil Nadu, India.

12
Ullah Z, Jamjoom M, Thirumalaisamy M, Alajmani SH, Saleem F, Sheikh-Akbari A, Khan UA. A Deep Learning Based Intelligent Decision Support System for Automatic Detection of Brain Tumor. Biomed Eng Comput Biol 2024; 15:11795972241277322. [PMID: 39238891] [PMCID: PMC11375672] [DOI: 10.1177/11795972241277322]
Abstract
A brain tumor (BT) is a dreadful disease and one of the foremost causes of death in human beings. BT develops mainly in 2 stages, varies by volume, form, and structure, and can be treated with special clinical procedures such as chemotherapy, radiotherapy, and surgical intervention. With revolutionary advancements in radiomics and research in medical imaging in the past few years, computer-aided diagnostic (CAD) systems, especially deep learning, have played a key role in the automatic detection and diagnosis of various diseases and have provided accurate decision support systems for medical clinicians. Thus, the convolutional neural network (CNN) is a commonly utilized methodology for detecting various diseases from medical images because it is capable of extracting distinct features from the image under investigation. In this study, a deep learning approach is utilized to extract distinct features from brain images in order to detect BT. Hence, a CNN trained from scratch and transfer learning models (VGG-16, VGG-19, and LeNet-5) are developed and tested on brain images to build an intelligent decision support system for detecting BT. Since deep learning models require large volumes of data, data augmentation is used to synthetically enlarge the existing dataset so that the best-fitting detection models can be obtained. Hyperparameter tuning was conducted to set the optimum parameters for training the models. The achieved results show that the VGG models outperformed the others with an accuracy rate of 99.24%, average precision of 99%, average recall of 99%, average specificity of 99%, and average F1-score of 99%. Compared with other state-of-the-art models in the literature, the proposed models show better performance in terms of accuracy, sensitivity, specificity, and F1-score. Moreover, comparative analysis shows that the proposed models are reliable and can be used for detecting BT as well as helping medical practitioners diagnose BT.
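A minimal sketch of a VGG-16 transfer-learning detector with on-the-fly augmentation, in the spirit of the models compared in this study; the augmentation ranges, dense-layer size, and learning rate are illustrative rather than the tuned hyperparameters reported by the authors.

```python
# Hedged sketch: frozen VGG-16 base, augmentation layers, binary tumor/no-tumor head.
import tensorflow as tf
from tensorflow.keras import layers, models

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_shape=(224, 224, 3))
vgg.trainable = False                        # freeze the convolutional base initially

model = models.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    augment,                                 # synthetic variation during training
    vgg,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # tumor vs. no tumor
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
```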
Affiliation(s)
- Zahid Ullah
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Mona Jamjoom
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Samah H Alajmani
- Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Farrukh Saleem
- School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Akbar Sheikh-Akbari
- School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Usman Ali Khan
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia

13
Leung JH, Karmakar R, Mukundan A, Lin WS, Anwar F, Wang HC. Technological Frontiers in Brain Cancer: A Systematic Review and Meta-Analysis of Hyperspectral Imaging in Computer-Aided Diagnosis Systems. Diagnostics (Basel) 2024; 14:1888. [PMID: 39272675] [PMCID: PMC11394276] [DOI: 10.3390/diagnostics14171888]
Abstract
Brain cancer is a substantial contributor to cancer mortality and presents difficulties for the timely identification of the disease. The precision of diagnoses is significantly dependent on the proficiency of radiologists and neurologists. Although there is potential for early detection with computer-aided diagnosis (CAD) algorithms, the majority of current research is hindered by modest sample sizes. This meta-analysis aims to comprehensively assess the diagnostic test accuracy (DTA) of computer-aided diagnosis (CAD) models specifically designed for the detection of brain cancer utilizing hyperspectral imaging (HSI) technology. We employ the QUADAS-2 criteria to select seven papers and classify the proposed methodologies according to the artificial intelligence method, cancer type, and publication year. In order to evaluate heterogeneity and diagnostic performance, we utilize Deeks' funnel plot, the forest plot, and accuracy charts. The results of our research suggest that there is no notable variation among the investigations. The CAD techniques that have been examined exhibit a notable level of precision in the automated detection of brain cancer. However, the absence of external validation hinders their potential implementation in real-time clinical settings. This highlights the necessity for additional studies to validate the CAD models for wider clinical applicability.
Affiliation(s)
- Joseph-Hang Leung
- Department of Radiology, Ditmanson Medical Foundation Chia-yi Christian Hospital, Chia Yi 60002, Taiwan;
- Riya Karmakar
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan; (R.K.); (A.M.)
- Arvind Mukundan
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan; (R.K.); (A.M.)
- Wen-Shou Lin
- Neurology Division, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st. Rd., Lingya District, Kaohsiung City 80284, Taiwan
- Fathima Anwar
- Faculty of Allied Health Sciences, The University of Lahore, 1-Km Defense Road, Lahore 54590, Punjab, Pakistan;
- Hsiang-Chen Wang
- Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi 62102, Taiwan; (R.K.); (A.M.)
- Department of Medical Research, Dalin Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, No. 2, Minsheng Road, Dalin, Chia Yi 62247, Taiwan
- Department of Technology Development, Hitspectra Intelligent Technology Co., Ltd., 8F.11-1, No. 25, Chenggong 2nd Rd., Qianzhen Dist., Kaohsiung City 80661, Taiwan

14
Chen A, Lin D, Gao Q. Enhancing brain tumor detection in MRI images using YOLO-NeuroBoost model. Front Neurol 2024; 15:1445882. [PMID: 39239397] [PMCID: PMC11374633] [DOI: 10.3389/fneur.2024.1445882]
Abstract
Brain tumors are diseases characterized by abnormal cell growth within or around brain tissues, including various types such as benign and malignant tumors. However, there is currently a lack of early detection and precise localization of brain tumors in MRI images, posing challenges to diagnosis and treatment. In this context, achieving accurate target detection of brain tumors in MRI images becomes particularly important, as it can improve the timeliness of diagnosis and the effectiveness of treatment. To address this challenge, we propose a novel approach, the YOLO-NeuroBoost model. This model combines the improved YOLOv8 algorithm with several innovative techniques, including KernelWarehouse dynamic convolution, the Convolutional Block Attention Module (CBAM), and the Inner-GIoU loss function. Our experimental results demonstrate that our method achieves mAP scores of 99.48 and 97.71 on the Br35H dataset and the open-source Roboflow dataset, respectively, indicating the high accuracy and efficiency of this method in detecting brain tumors in MRI images. This research holds significant importance for improving early diagnosis and treatment of brain tumors and provides new possibilities for the development of the medical image analysis field.
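Of the components named in the abstract, CBAM has a standard published formulation, sketched below as a standalone PyTorch module; how it is wired into the authors' modified YOLOv8, and the KernelWarehouse and Inner-GIoU pieces, are not reproduced here.

```python
# Standard CBAM block: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                 nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: convolution over channel-wise avg and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(1, 256, 20, 20)           # e.g., a detector neck feature map
print(CBAM(256)(feat).shape)                 # torch.Size([1, 256, 20, 20])
```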
Affiliation(s)
- Aruna Chen
- College of Mathematics Science, Inner Mongolia Normal University, Hohhot, China
- Center for Applied Mathematical Science, Inner Mongolia, Hohhot, China
- Laboratory of Infinite-Dimensional Hamiltonian System and Its Algorithm Application, Ministry of Education (IMNU), Hohhot, China
- Da Lin
- School of Mathematical Sciences, Inner Mongolia University, Hohhot, China
- Qiqi Gao
- College of Mathematics Science, Inner Mongolia Normal University, Hohhot, China

15
Zhong LW, Chen KS, Yang HB, Liu SD, Zong ZT, Zhang XQ. Exploring machine learning applications in Meningioma Research (2004-2023). Heliyon 2024; 10:e32596. [PMID: 38975185] [PMCID: PMC11225743] [DOI: 10.1016/j.heliyon.2024.e32596]
Abstract
Objective: This study aims to examine the trends in machine learning application to meningiomas between 2004 and 2023.
Methods: Publication data were extracted from the Science Citation Index Expanded (SCI-E) within the Web of Science Core Collection (WOSCC). Using CiteSpace 6.2.R6, a comprehensive analysis of publications, authors, cited authors, countries, institutions, cited journals, references, and keywords was conducted on December 1, 2023.
Results: The analysis included a total of 342 articles. Prior to 2007, no publications existed in this field, and the number remained modest until 2017. A significant increase occurred in publications from 2018 onwards. The majority of the top 10 authors hailed from Germany and China, with the USA also exerting substantial international influence, particularly in academic institutions. Journals from the IEEE series contributed significantly to the publications. "Deep learning," "brain tumor," and "classification" emerged as the primary keywords of focus among researchers. The developmental pattern in this field primarily involved a combination of interdisciplinary integration and the refinement of major disciplinary branches.
Conclusion: Machine learning has demonstrated significant value in predicting early meningiomas and tailoring treatment plans. Key research focuses involve optimizing detection indicators and selecting superior machine learning algorithms. Future efforts should aim to develop high-performance algorithms to drive further innovation in this field.
Affiliation(s)
- Li-wei Zhong
- Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Kun-shan Chen
- The Second Affiliated Hospital of Jiujiang University, Jiujiang, Jiangxi, China
- Hua-biao Yang
- Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Shi-dan Liu
- Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Zhi-tao Zong
- Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China
- Xue-qin Zhang
- Jiujiang Traditional Chinese Medicine Hospital, Jiujiang, Jiangxi, China

16
Albalawi E, Thakur A, Dorai DR, Bhatia Khan S, Mahesh TR, Almusharraf A, Aurangzeb K, Anwar MS. Enhancing brain tumor classification in MRI scans with a multi-layer customized convolutional neural network approach. Front Comput Neurosci 2024; 18:1418546. [PMID: 38933391] [PMCID: PMC11199693] [DOI: 10.3389/fncom.2024.1418546]
Abstract
Background: The necessity of prompt and accurate brain tumor diagnosis is unquestionable for optimizing treatment strategies and patient prognoses. Traditional reliance on Magnetic Resonance Imaging (MRI) analysis, contingent upon expert interpretation, grapples with challenges such as time-intensive processes and susceptibility to human error.
Objective: This research presents a novel Convolutional Neural Network (CNN) architecture designed to enhance the accuracy and efficiency of brain tumor detection in MRI scans.
Methods: The dataset used in the study comprises 7,023 brain MRI images from figshare, SARTAJ, and Br35H, categorized into glioma, meningioma, no tumor, and pituitary classes, with a CNN-based multi-task classification model employed for tumor detection, classification, and location identification. Our methodology focused on multi-task classification using a single CNN model for various brain MRI classification tasks, including tumor detection, classification based on grade and type, and tumor location identification.
Results: The proposed CNN model incorporates advanced feature extraction capabilities and deep learning optimization techniques, culminating in a groundbreaking paradigm shift in automated brain MRI analysis. With an exceptional tumor classification accuracy of 99%, our method surpasses current methodologies, demonstrating the remarkable potential of deep learning in medical applications.
Conclusion: This study represents a significant advancement in the early detection and treatment planning of brain tumors, offering a more efficient and accurate alternative to traditional MRI analysis methods.
Affiliation(s)
- Eid Albalawi
- Department of Computer Science, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- D. Ramya Dorai
- Department of Information Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, United Kingdom
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- T. R. Mahesh
- Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Ahlam Almusharraf
- Department of Management, College of Business Administration, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Khursheed Aurangzeb
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia

17
Chauhan S, Cheruku R, Reddy Edla D, Kampa L, Nayak SR, Giri J, Mallik S, Aluvala S, Boddu V, Qin H. BT-CNN: a balanced binary tree architecture for classification of brain tumour using MRI imaging. Front Physiol 2024; 15:1349111. [PMID: 38665597] [PMCID: PMC11043606] [DOI: 10.3389/fphys.2024.1349111]
Abstract
Deep learning is a very important technique in clinical diagnosis and therapy in the present world. The Convolutional Neural Network (CNN) is a recent development in deep learning that is used in computer vision. Our medical investigation focuses on the identification of brain tumours. To improve brain tumour classification performance, a Balanced binary Tree CNN (BT-CNN), which is framed in a binary tree-like structure, is proposed. It has two distinct modules: the convolution group and the depthwise separable convolution group. Using the convolution group achieves lower time and higher memory consumption, while the opposite is true for the depthwise separable convolution group. This balanced binary tree-inspired CNN balances both groups to achieve maximum performance in terms of time and space. The proposed model, along with state-of-the-art models such as CNN-KNN and the models proposed by Musallam et al., Saikat et al., and Amin et al., is evaluated on public datasets. Before the data are fed into the models, the images are pre-processed using CLAHE, denoising, cropping, and scaling. The pre-processed dataset is partitioned into training and testing sets using 5-fold cross-validation. The proposed model reported an average training accuracy of 99.61% and achieved 96.06% test accuracy, whereas the other models achieved 68.86%, 85.8%, 86.88%, and 90.41%, respectively. Further, the proposed model obtained the lowest standard deviation of training and test accuracies across all folds, indicating consistent performance across the dataset.
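The abstract contrasts a standard convolution group with a depthwise separable convolution group; the sketch below shows both blocks and their parameter counts, while the balanced binary-tree wiring of BT-CNN itself is not reproduced and the channel sizes are illustrative.

```python
# Sketch of the two building blocks contrasted in the abstract.
import torch
import torch.nn as nn

def conv_group(in_ch, out_ch):
    """Standard convolution group: more parameters, denser computation."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

def depthwise_separable_group(in_ch, out_ch):
    """Depthwise + pointwise convolution: far fewer parameters and multiplies."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, 1),                          # pointwise
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

x = torch.randn(1, 32, 64, 64)
print(sum(p.numel() for p in conv_group(32, 64).parameters()))                # ~18.6k
print(sum(p.numel() for p in depthwise_separable_group(32, 64).parameters())) # ~2.6k
```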
Affiliation(s)
- Sohamkumar Chauhan
- Department of CSE, National Institute of Technology Goa, Ponda, Goa, India
- Ramalingaswamy Cheruku
- Department of CSE, National Institute of Technology Warangal, Hanumkonda, Telangana, India
- Damodar Reddy Edla
- Department of CSE, National Institute of Technology Goa, Ponda, Goa, India
- Lavanya Kampa
- Department of CSE, University College of Sciences, Acharya Nagarjuna University, Guntur, Andra Pradesh, India
- Soumya Ranjan Nayak
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- Jayant Giri
- Department of Mechanical Engineering, Yeshwantrao Chavan College of Engineering, Nagpur, India
- Saurav Mallik
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, United States
- Srinivas Aluvala
- Department of Computer Science and Artificial Intelligence, SR University, Warangal, Telangana, India
- Vijayasree Boddu
- Department of ECE, National Institute of Technology Warangal, Hanumkonda, Telangana, India
- Hong Qin
- Department of Computer Science and Engineering, University of Tennessee at Chattanooga, Chattanooga, TN, United States

18
Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024; 14:4584. [PMID: 38403597] [PMCID: PMC10894864] [DOI: 10.1038/s41598-024-54864-6]
Abstract
Gliomas are primary brain tumors arising from glial cells. The classification and grading of these cancers are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by emphasizing characteristics and heterogeneity in its predictions. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using the extreme gradient boosting classifier. The high-dimensional characteristics and nonlinear interactions present in histopathology images are well handled by this classifier. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using The Cancer Genome Atlas (TCGA) dataset. During the experiments, our model outperforms other standard methods on the same dataset. Our results indicate that the proposed hybrid model substantially impacts tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With 97.2% accuracy, 97.8% precision, 98.6% sensitivity, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying the four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance relative to the literature in classifying the four glioma categories.
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt.

19
Rethemiotaki I. Brain tumour detection from magnetic resonance imaging using convolutional neural networks. Contemp Oncol (Pozn) 2024; 27:230-241. [PMID: 38405206] [PMCID: PMC10883197] [DOI: 10.5114/wo.2023.135320]
Abstract
Introduction: The aim of this work is to detect and classify brain tumours using computational intelligence techniques on magnetic resonance imaging (MRI) images.
Material and methods: A dataset of 3264 MRI brain images consisting of 4 categories (unspecified glioma, meningioma, pituitary, and healthy brain) was used in this study. Twelve convolutional neural networks (GoogleNet, MobileNetV2, Xception, DenseNet-BC, ResNet 50, SqueezeNet, ShuffleNet, VGG-16, AlexNet, ENet, EfficientNetB0, and MobileNetV2 with meta pseudo-labels) were used to classify gliomas, meningiomas, pituitary tumours, and healthy brains to find the most appropriate model. The experiments included image preprocessing and hyperparameter tuning. The performance of each neural network was evaluated based on accuracy, precision, recall, and F-measure for each type of brain tumour.
Results: The experimental results show that the MobileNetV2 convolutional neural network (CNN) model was able to diagnose brain tumours with 99% accuracy, 98% recall, and a 99% F1 score. On the other hand, the validation data analysis shows that the GoogleNet CNN model has the highest accuracy (97%) among the CNNs and seems to be the best choice for brain tumour classification.
Conclusions: The results of this work highlight the importance of artificial intelligence and machine learning for brain tumour prediction. Furthermore, this study achieved the highest accuracy in brain tumour classification to date, and it is also the only study to compare the performance of so many neural networks simultaneously.
Collapse
Affiliation(s)
- Irene Rethemiotaki
- School of Electrical and Computer Engineering, Technical University of Crete, Chania, Crete, Greece
| |
Collapse
|
20
|
Çetin-Kaya Y, Kaya M. A Novel Ensemble Framework for Multi-Classification of Brain Tumors Using Magnetic Resonance Imaging. Diagnostics (Basel) 2024; 14:383. [PMID: 38396422 PMCID: PMC10888105 DOI: 10.3390/diagnostics14040383] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 02/01/2024] [Accepted: 02/06/2024] [Indexed: 02/25/2024] Open
Abstract
Brain tumors can have fatal consequences, affecting many body functions. For this reason, it is essential to detect brain tumor types accurately and at an early stage to start the appropriate treatment process. Although convolutional neural networks (CNNs) are widely used in disease detection from medical images, they face the problem of overfitting when trained on limited labeled and insufficiently diverse datasets. Existing studies use transfer learning and ensemble models to overcome these problems. When the existing studies are examined, it is evident that there is no established guidance on which models and weight ratios to use with the ensemble technique. In the framework proposed in this study, several CNN models with different architectures are trained with transfer learning and fine-tuning on three brain tumor datasets. A particle swarm optimization-based algorithm determines the optimum weights for combining the five most successful CNN models with the ensemble technique. The results across the three datasets are as follows: Dataset 1, 99.35% accuracy and 99.20% F1-score; Dataset 2, 98.77% accuracy and 98.92% F1-score; and Dataset 3, 99.92% accuracy and 99.92% F1-score. We achieved successful performances on three brain tumor datasets, showing that the proposed framework is reliable in classification. As a result, the proposed framework outperforms existing studies, offering clinicians enhanced decision-making support through its high-accuracy classification performance.
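The ensemble idea can be illustrated with a weighted soft-voting step over per-model class probabilities, as in the hedged sketch below; in the proposed framework the weights are found by particle swarm optimization, whereas here they are fixed by hand purely for illustration.

```python
# Illustrative weighted soft voting; the paper derives the weights via PSO.
import numpy as np

def weighted_vote(prob_list, weights):
    """Combine per-model class probabilities with normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.stack(prob_list, axis=0)        # (n_models, n_samples, n_classes)
    combined = np.tensordot(w, stacked, axes=1)  # weighted sum over the model axis
    return combined.argmax(axis=1)

rng = np.random.default_rng(1)
prob_list = [rng.dirichlet(np.ones(4), size=6) for _ in range(5)]  # five toy models
print(weighted_vote(prob_list, weights=[0.3, 0.25, 0.2, 0.15, 0.1]))
```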
Collapse
Affiliation(s)
- Yasemin Çetin-Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
| | - Mahir Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpaşa University, Tokat 60250, Turkey
| |
Collapse
|
21
|
Sulaiman A, Anand V, Gupta S, Al Reshan MS, Alshahrani H, Shaikh A, Elmagzoub MA. An intelligent LinkNet-34 model with EfficientNetB7 encoder for semantic segmentation of brain tumor. Sci Rep 2024; 14:1345. [PMID: 38228639 PMCID: PMC10792164 DOI: 10.1038/s41598-024-51472-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 01/05/2024] [Indexed: 01/18/2024] Open
Abstract
A brain tumor is an uncontrolled growth of brain cells, making it one of the deadliest diseases of the nervous system. Segmenting brain tumors for early diagnosis is a difficult task in medical image analysis. Segmentation has traditionally been performed manually by radiologists, which requires considerable time and effort and leaves room for human error. Deep learning models have been shown to outperform human experts in diagnosing brain tumors from MRI images; these algorithms learn the complex patterns of brain tumors from large numbers of MRI scans and segment them automatically and accurately. Here, an encoder-decoder architecture based on a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part; to this end, a LinkNet-34 segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the EfficientNetB7 encoder within the LinkNet-34 model is compared with three alternative encoders, namely ResNet34, MobileNet_V2, and ResNet50. The proposed model is then optimized with three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, achieving a Jaccard index of 0.89 and a Dice coefficient of 0.915.
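A minimal sketch of a LinkNet decoder paired with an EfficientNetB7 encoder is shown below, using the segmentation_models_pytorch library as an assumed implementation route (the paper does not state which toolkit it uses); the Dice loss, Adamax learning rate, and input size are illustrative choices, not the study's settings.

```python
# Illustrative sketch, assuming segmentation_models_pytorch as a stand-in implementation.
import torch
import segmentation_models_pytorch as smp

model = smp.Linknet(
    encoder_name="efficientnet-b7",   # EfficientNetB7 as the encoder
    encoder_weights="imagenet",
    in_channels=3,
    classes=1,                        # single-channel tumor mask
)
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adamax(model.parameters(), lr=1e-3)  # Adamax, as in the study

x = torch.randn(2, 3, 256, 256)       # dummy batch of MRI slices
logits = model(x)
print(logits.shape)                   # expected: torch.Size([2, 1, 256, 256])
```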
Collapse
Affiliation(s)
- Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
| | - Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India.
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
| | - Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
| | - Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
| | - Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
| | - M A Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
| |
Collapse
|
22
|
Rahman A, Debnath T, Kundu D, Khan MSI, Aishi AA, Sazzad S, Sayduzzaman M, Band SS. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024; 11:58-109. [PMID: 38617415 PMCID: PMC11007421 DOI: 10.3934/publichealth.2024004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2023] [Accepted: 12/18/2023] [Indexed: 04/16/2024] Open
Abstract
In recent years, machine learning (ML) and deep learning (DL) have been the leading approaches to solving various challenges in intelligent healthcare applications, such as disease prediction, drug discovery, and medical image analysis. Given the current progress in ML and DL, both hold promising potential to support healthcare. This study offers an exhaustive survey of ML and DL for healthcare systems, concentrating on key state-of-the-art features, integration benefits, applications, prospects, and future guidelines. To conduct the research, we searched the most prominent journal and conference databases using distinct keywords to identify relevant scholarly work. First, we concisely summarize the most recent and cutting-edge progress in ML- and DL-based analysis for smart healthcare. Next, we cover the advancement of related services, including ML-healthcare, DL-healthcare, and combined ML-DL-healthcare. We then present ML- and DL-based applications in the healthcare industry. Finally, we highlight open research challenges and recommendations for further studies based on our observations.
Collapse
Affiliation(s)
- Anichur Rahman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Tanoy Debnath
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Green University of Bangladesh, 220/D, Begum Rokeya Sarani, Dhaka -1207, Bangladesh
| | - Dipanjali Kundu
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Md. Saikat Islam Khan
- Department of CSE, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Airin Afroj Aishi
- Department of Computing and Information System, Daffodil International University, Savar, Dhaka, Bangladesh
| | - Sadia Sazzad
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Mohammad Sayduzzaman
- Department of CSE, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka-1350
| | - Shahab S. Band
- Department of Information Management, International Graduate School of Artificial Intelligence, National Yunlin University of Science and Technology, Taiwan
| |
Collapse
|
23
|
Gokapay DK, Mohanty SN. Enhanced MRI-based brain tumor segmentation and feature extraction using Berkeley wavelet transform and ETCCNN. Digit Health 2024; 10:20552076241305282. [PMID: 39698507 PMCID: PMC11653464 DOI: 10.1177/20552076241305282] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2024] [Accepted: 11/19/2024] [Indexed: 12/20/2024] Open
Abstract
Objective Brain tumors are abnormal growths of brain cells that are typically diagnosed via magnetic resonance imaging (MRI), which helps discriminate between malignant and benign tumors. Using MRI image analysis, tumor sites are identified and classified into four distinct categories: meningioma, glioma, pituitary, and no tumor. If a brain tumor is not detected in its early stages, it can progress to a severe level or cause death. To address these issues, the proposed approach uses an efficient deep learning-based classifier for brain tumor detection. Methods This article describes the detection and classification of brain tumors by an efficient two-channel convolutional neural network. The input image is first rotated during the augmentation stage. Morphological operations, thresholding, and region filling are then applied in the pre-processing stage, and the output is segmented using the Berkeley Wavelet Transform. A two-channel convolutional neural network extracts features from the segmented objects, and a deep neural network then classifies the brain tumors. The classifier uses the Enhanced Serval Optimization Algorithm to determine the optimal gain parameters. The proposed model is implemented in MATLAB. Results Several performance metrics are calculated to assess the proposed brain tumor detection method, including accuracy, F-measure, kappa, precision, sensitivity, and specificity. The proposed model achieves a 98.8% detection accuracy for brain tumors. Conclusion The evaluation shows that the suggested strategy produces the best results.
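The two-channel feature extractor can be pictured as two parallel convolutional branches whose outputs are concatenated before classification, as in the hedged Keras sketch below; the Berkeley Wavelet Transform segmentation, the Enhanced Serval Optimization step, and the MATLAB implementation are not reproduced, and all layer sizes are assumptions.

```python
# Illustrative two-branch ("two-channel") convolutional feature extractor.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_branch(inp, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(filters * 2, 3, padding="same", activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

inp = layers.Input(shape=(128, 128, 1))                      # grayscale MRI patch
merged = layers.Concatenate()([conv_branch(inp, 16), conv_branch(inp, 32)])
out = layers.Dense(4, activation="softmax")(merged)          # meningioma/glioma/pituitary/no tumor
model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```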
Collapse
Affiliation(s)
- Dilip Kumar Gokapay
- School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
| | - Sachi Nandan Mohanty
- School of Computer Science & Engineering (SCOPE), VIT-AP University, Amaravati, Andhra Pradesh, India
| |
Collapse
|
24
|
Soleimani P, Farezi N. Utilizing deep learning via the 3D U-net neural network for the delineation of brain stroke lesions in MRI image. Sci Rep 2023; 13:19808. [PMID: 37957203 PMCID: PMC10643611 DOI: 10.1038/s41598-023-47107-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Accepted: 11/09/2023] [Indexed: 11/15/2023] Open
Abstract
The segmentation of acute stroke lesions plays a vital role in healthcare by assisting doctors in making prompt and well-informed treatment choices. Although Magnetic Resonance Imaging (MRI) is a time-intensive procedure, it produces high-fidelity images widely regarded as the most reliable diagnostic tool available. Employing deep learning techniques for automated stroke lesion segmentation can offer valuable insights into the precise location and extent of affected tissue, enabling medical professionals to effectively evaluate treatment risks and make informed assessments. In this research, a deep learning approach is introduced for segmenting acute and sub-acute stroke lesions from MRI images. To enhance feature learning through brain hemisphere symmetry, pre-processing techniques are applied to the data. To tackle the class imbalance challenge, we employed a strategy of using small patches with balanced sampling during training, along with a dynamically weighted loss function that incorporates the f1-score and IOU-score (Intersection over Union). Furthermore, the 3D U-Net architecture is used to generate predictions for complete patches, employing a high degree of overlap between patches to minimize the requirement for subsequent post-processing steps. The 3D U-Net model, utilizing ResnetV2 as the pre-trained encoder for IOU-score and Seresnext101 for f1-score, stands as the leading state-of-the-art (SOTA) model for segmentation tasks. However, recent research has introduced a novel model that surpasses these metrics and demonstrates superior performance compared to other backbone architectures. The f1-score and IOU-score were computed for various backbones, with Seresnext101 achieving the highest f1-score and ResnetV2 achieving the highest IOU-score. These calculations were conducted using a threshold value of 0.5. This research proposes a valuable model based on transfer learning for the classification of brain diseases in MRI scans. The achieved f1-score using the recommended classifiers demonstrates the effectiveness of the approach employed in this study. The findings indicate that Seresnext101 attains the highest f1-score of 0.94226, while ResnetV2 achieves the best IOU-score of 0.88342, making it the preferred architecture for segmentation methods. Furthermore, the study presents experimental results of the 3D U-Net model applied to brain stroke lesion segmentation, suggesting prospects for researchers interested in segmenting brain strokes and enhancing 3D U-Net models.
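The combined f1/IoU loss mentioned above can be sketched as a weighted sum of soft-Dice and soft-IoU terms, as in the PyTorch fragment below; the paper's dynamic weighting scheme and patch-sampling strategy are not reproduced, and the fixed 0.5/0.5 weights are an assumption.

```python
# Illustrative combined soft-Dice (F1) / soft-IoU loss with fixed weights.
import torch

def soft_dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def soft_iou(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return 1 - (inter + eps) / (union + eps)

def combined_loss(logits, target, w_dice=0.5, w_iou=0.5):
    pred = torch.sigmoid(logits)
    return w_dice * soft_dice(pred, target) + w_iou * soft_iou(pred, target)

logits = torch.randn(1, 1, 32, 32, 32)                 # dummy 3D patch prediction
target = (torch.rand(1, 1, 32, 32, 32) > 0.9).float()  # sparse dummy lesion mask
print(combined_loss(logits, target).item())
```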
Collapse
Affiliation(s)
- Parisa Soleimani
- Faculty of Physics, University of Tabriz, Tabriz, Iran.
- Department of Engineering Sciences, Faculty of Advanced Technologies, University of Mohaghegh Ardabili, Namin, Iran.
| | - Navid Farezi
- Faculty of Physics, University of Tabriz, Tabriz, Iran
| |
Collapse
|
25
|
Rahman MM, Nasir MK, Nur-A-Alam M, Khan MSI. Proposing a hybrid technique of feature fusion and convolutional neural network for melanoma skin cancer detection. J Pathol Inform 2023; 14:100341. [PMID: 38028129 PMCID: PMC10630642 DOI: 10.1016/j.jpi.2023.100341] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Revised: 09/20/2023] [Accepted: 10/09/2023] [Indexed: 12/01/2023] Open
Abstract
Skin cancer is among the most common cancer types worldwide. Automatic identification of skin cancer is complicated by the poor contrast and apparent resemblance between skin and lesions. The rate of death can be significantly reduced if melanoma skin cancer is detected quickly from dermoscopy images. This research applies an anisotropic diffusion filtering method to dermoscopy images to remove multiplicative speckle noise, and the fast bounding box (FBB) method is then applied to segment the skin cancer region. We employ two feature extractors to represent images. The first is the Hybrid Feature Extractor (HFE), and the second is a VGG19-based convolutional neural network (CNN). The HFE combines three feature extraction approaches, namely Histogram of Oriented Gradients (HOG), Local Binary Pattern (LBP), and Speeded Up Robust Features (SURF), into a single fused feature vector. The CNN is also used to extract additional features from the test and training datasets. These two feature vectors are then fused to design the classification model. The proposed method is evaluated on two datasets, namely ISIC 2017 and the Academic Torrents dataset. Our proposed method achieves 99.85%, 91.65%, and 95.70% in terms of accuracy, sensitivity, and specificity, respectively, making it more successful than previously proposed machine learning algorithms.
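Part of the hand-crafted feature fusion can be illustrated with HOG and LBP descriptors concatenated into one vector, as in the scikit-image sketch below; the SURF component and the VGG19 deep-feature branch are omitted, and the patch size and histogram binning are assumptions.

```python
# Illustrative HOG + LBP feature fusion (SURF and the VGG19 branch omitted).
import numpy as np
from skimage.feature import hog, local_binary_pattern

def fused_features(gray_patch):
    hog_vec = hog(gray_patch, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(gray_patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

patch = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in lesion patch
print(fused_features(patch).shape)
```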
Collapse
Affiliation(s)
- Md. Mahbubur Rahman
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Mirpur-2, Dhaka 1216, Bangladesh
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Mostofa Kamal Nasir
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
| | - Md. Nur-A-Alam
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Dhaka International University, Dhaka 1205, Bangladesh
| | - Md. Saikat Islam Khan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Department of CSE, Dhaka International University, Dhaka 1205, Bangladesh
| |
Collapse
|
26
|
Zulfiqar F, Ijaz Bajwa U, Mehmood Y. Multi-class classification of brain tumor types from MR images using EfficientNets. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104777] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
|
27
|
Anand V, Gupta S, Gupta D, Gulzar Y, Xin Q, Juneja S, Shah A, Shaikh A. Weighted Average Ensemble Deep Learning Model for Stratification of Brain Tumor in MRI Images. Diagnostics (Basel) 2023; 13:diagnostics13071320. [PMID: 37046538 PMCID: PMC10093740 DOI: 10.3390/diagnostics13071320] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2023] [Revised: 03/22/2023] [Accepted: 03/26/2023] [Indexed: 04/05/2023] Open
Abstract
Brain tumor diagnosis at an early stage can improve the chances of successful treatment and better patient outcomes. In the biomedical industry, non-invasive diagnostic procedures, such as magnetic resonance imaging (MRI), can be used to diagnose brain tumors. Deep learning, a type of artificial intelligence, can analyze MRI images in a matter of seconds, reducing the time it takes for diagnosis and potentially improving patient outcomes. Furthermore, an ensemble model can help increase the accuracy of classification by combining the strengths of multiple models and compensating for their individual weaknesses. Therefore, in this research, a weighted average ensemble deep learning model is proposed for the classification of brain tumors. For the weighted ensemble classification model, three different feature spaces are taken from a transfer-learning VGG19 model, a Convolutional Neural Network (CNN) model without augmentation, and a CNN model with augmentation. These three feature spaces are ensembled with the best combination of weights (weight1, weight2, and weight3), determined by grid search. The dataset used for the experiments is the lower-grade glioma collection of The Cancer Genome Atlas (TCGA), comprising 3929 MRI images from 110 patients. The ensemble model helps reduce overfitting by combining multiple models that have learned different aspects of the data. The proposed ensemble model outperforms the three individual models for detecting brain tumors in terms of accuracy, precision, and F1-score. Therefore, the proposed model can act as a second-opinion tool for radiologists to diagnose tumors from MRI images of the brain.
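The grid search over ensemble weights can be sketched as an exhaustive scan of (weight1, weight2, weight3) triples that sum to one, evaluated on validation predictions, as in the hedged fragment below; the probability arrays, labels, and step size are placeholders rather than the study's data.

```python
# Illustrative grid search over three ensemble weights that sum to one.
import itertools
import numpy as np

def best_weights(prob_sets, y_val, step=0.1):
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best, best_acc = None, -1.0
    for w1, w2 in itertools.product(grid, repeat=2):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue                       # skip combinations that exceed a total of 1
        w3 = max(w3, 0.0)
        combined = w1 * prob_sets[0] + w2 * prob_sets[1] + w3 * prob_sets[2]
        acc = float((combined.argmax(axis=1) == y_val).mean())
        if acc > best_acc:
            best, best_acc = (round(w1, 2), round(w2, 2), round(w3, 2)), acc
    return best, best_acc

rng = np.random.default_rng(2)
prob_sets = [rng.dirichlet(np.ones(2), size=50) for _ in range(3)]  # toy tumor/no-tumor probabilities
y_val = rng.integers(0, 2, size=50)
print(best_weights(prob_sets, y_val))
```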
Collapse
Affiliation(s)
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
| | - Deepali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, Punjab, India
| | - Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa 31982, Saudi Arabia
| | - Qin Xin
- Faculty of Science and Technology, University of the Faroe Islands, Vestarabryggja 15, FO 100 Torshavn, Faroe Islands, Denmark
| | - Sapna Juneja
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Gombak 53100, Selangor, Malaysia
| | - Asadullah Shah
- Kulliyyah of Information and Communication Technology, International Islamic University Malaysia, Gombak 53100, Selangor, Malaysia
| | - Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 55461, Saudi Arabia
| |
Collapse
|
28
|
Srinivasan S, Bai PSM, Mathivanan SK, Muthukumaran V, Babu JC, Vilcekova L. Grade Classification of Tumors from Brain Magnetic Resonance Images Using a Deep Learning Technique. Diagnostics (Basel) 2023; 13:diagnostics13061153. [PMID: 36980463 PMCID: PMC10046932 DOI: 10.3390/diagnostics13061153] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Revised: 02/14/2023] [Accepted: 03/14/2023] [Indexed: 03/22/2023] Open
Abstract
To improve the accuracy of tumor identification, it is necessary to develop a reliable automated diagnostic method. To precisely categorize brain tumors, researchers have developed a variety of segmentation algorithms. Segmentation of brain images is generally recognized as one of the most challenging tasks in medical image processing. In this article, a novel automated detection and classification method is proposed. The proposed approach consists of several phases, including pre-processing MRI images, segmenting images, extracting features, and classifying images. During the pre-processing of an MRI scan, an adaptive filter is utilized to eliminate background noise. For feature extraction, the local-binary grey level co-occurrence matrix (LBGLCM) is used, and for image segmentation, enhanced fuzzy c-means clustering (EFCMC) is used. After extracting the scan features, we use a deep learning model to classify MRI images into two groups: glioma and normal. The classification is performed using a convolutional recurrent neural network (CRNN). The proposed technique improves brain image classification on a defined input dataset. MRI scans from the REMBRANDT dataset, consisting of 620 testing and 2480 training sets, were used for the research. The results demonstrate that the newly proposed method outperforms its predecessors. The proposed CRNN strategy was compared against BP, U-Net, and ResNet, three of the most prevalent classification approaches currently in use. For brain tumor classification, the proposed system achieved 98.17% accuracy, 91.34% specificity, and 98.79% sensitivity.
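Grey-level co-occurrence texture features of the kind LBGLCM builds on can be computed with scikit-image as sketched below; this uses the standard GLCM rather than the paper's local-binary variant, and the distances, angles, and chosen properties are assumptions.

```python
# Illustrative GLCM texture features (standard GLCM, not the paper's LBGLCM variant).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_uint8):
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in MRI patch
print(glcm_features(img))
```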
Collapse
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
| | | | - Sandeep Kumar Mathivanan
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
| | - Venkatesan Muthukumaran
- Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Kattankulathur 603203, India
| | - Jyothi Chinna Babu
- Department of Electronics and Communications Engineering, Annamacharya Institute of Technology and Sciences, Rajampet 516126, India
| | - Lucia Vilcekova
- Faculty of Management, Comenius University Bratislava, Odbojarov 10, 820 05 Bratislava, Slovakia
| |
Collapse
|
29
|
Papadomanolakis TN, Sergaki ES, Polydorou AA, Krasoudakis AG, Makris-Tsalikis GN, Polydorou AA, Afentakis NM, Athanasiou SA, Vardiambasis IO, Zervakis ME. Tumor Diagnosis against Other Brain Diseases Using T2 MRI Brain Images and CNN Binary Classifier and DWT. Brain Sci 2023; 13:348. [PMID: 36831891 PMCID: PMC9954603 DOI: 10.3390/brainsci13020348] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 02/08/2023] [Accepted: 02/14/2023] [Indexed: 02/22/2023] Open
Abstract
PURPOSE Brain tumors are diagnosed and classified manually and noninvasively by radiologists using Magnetic Resonance Imaging (MRI) data. The risk of misdiagnosis exists due to human factors such as lack of time, fatigue, and relatively low experience. Deep learning methods have become increasingly important in MRI classification. To improve diagnostic accuracy, researchers emphasize the need to develop Computer-Aided Diagnosis (CAD) systems based on artificial intelligence (AI) using deep learning methods such as convolutional neural networks (CNN), and to improve CNN performance by combining it with other data analysis tools such as the wavelet transform. In this study, a novel diagnostic framework based on CNN and DWT data analysis is developed for the diagnosis of glioma tumors in the brain, among other tumors and diseases, from T2-SWI MRI scans. It is a binary CNN classifier that treats the disease "glioma tumor" as positive and the other pathologies as negative, resulting in a highly unbalanced binary problem. The study includes a comparative analysis of a CNN trained with wavelet transform data of the MRIs instead of their pixel intensity values, in order to demonstrate the increased performance of combined CNN and DWT analysis in diagnosing brain gliomas. The results of the proposed CNN architecture are also compared with a deep CNN pre-trained via VGG16 transfer learning and with an SVM machine learning method using DWT features. METHODS To improve the accuracy of the CNN classifier, the proposed CNN model uses as input the spatial and temporal features extracted by converting the original MRI images to the frequency domain via Discrete Wavelet Transformation (DWT), instead of the traditionally used original scans in the form of pixel intensities. No pre-processing was applied to the original images. The images used are T2-SWI MRI sequences parallel to the axial plane. First, a compression step is applied to each MRI scan using DWT up to three levels of decomposition. These data are used to train a 2D CNN to classify the scans as showing glioma or not. The proposed CNN model is trained on MRI slices originating from 382 male and female adult patients, covering healthy and pathological cases from a selection of conditions (glioma, meningioma, pituitary tumor, necrosis, edema, non-enhancing tumor, hemorrhagic foci, ischemic changes, cystic areas, etc.). The images are provided by the databases of the Medical Image Computing and Computer-Assisted Intervention (MICCAI) Brain Tumor Segmentation (BraTS) challenges 2016 and 2017 and the Ischemic Stroke Lesion Segmentation (ISLES) challenge, as well as by the numerous records kept at the public general hospital of Chania, Crete, "Saint George". RESULTS The proposed frameworks are experimentally evaluated on MRI slices originating from 190 different patients (not included in the training set), of which 56% show gliomas whose two longest axes are less than 2 cm and 44% show other pathological findings or healthy cases. The results show convincing performance when the spatial and temporal features extracted from the original scans are used as input. With the proposed CNN model and data in DWT format, we achieved the following statistics: accuracy 0.97, sensitivity (recall) 1, specificity 0.93, precision 0.95, FNR 0, and FPR 0.07. These figures are higher for this data format (accuracy by 6%, recall by 11%, specificity by 7%, precision by 5%, FNR by 0.1%, with FPR the same) than they would have been had we used the intensity values of the MRIs as input instead of their DWT analysis. Additionally, our study showed that when our CNN relies on transfer learning from the existing VGG network, the performance values are lower, as follows: accuracy 0.87, sensitivity (recall) 0.91, specificity 0.84, precision 0.86, FNR 0.08, and FPR 0.14. CONCLUSIONS The experimental results show that the CNN that is not based on transfer learning, but instead uses MRI brain scans decomposed into DWT information rather than the pixel intensities of the original scans, outperforms the alternatives. The results are promising for the proposed DWT-based CNN to serve as a binary diagnostic tool for glioma tumors among other tumors and diseases. Moreover, the SVM learning model using DWT data analysis performs with higher accuracy and sensitivity than when using pixel values.
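The DWT preprocessing idea can be illustrated with a three-level 2D wavelet decomposition whose approximation coefficients replace raw pixel intensities as CNN input, as in the PyWavelets sketch below; the wavelet family and the use of only the approximation sub-band are assumptions, since the abstract does not specify them.

```python
# Illustrative three-level 2D DWT; only the approximation sub-band is kept here.
import numpy as np
import pywt

def dwt_approximation(image, wavelet="haar", level=3):
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)
    return coeffs[0]                       # level-3 approximation coefficients

mri_slice = np.random.rand(256, 256)       # stand-in T2-SWI slice
approx = dwt_approximation(mri_slice)
print(approx.shape)                        # (32, 32) for a 256x256 input with 'haar'
```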
Collapse
Affiliation(s)
| | - Eleftheria S. Sergaki
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
| | - Andreas A. Polydorou
- Areteio Hospital, 2nd University Department of Surgery, Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
| | | | | | - Alexios A. Polydorou
- Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
| | - Nikolaos M. Afentakis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Sofia A. Athanasiou
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Ioannis O. Vardiambasis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Michail E. Zervakis
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
| |
Collapse
|
30
|
Investigating the Impact of Two Major Programming Environments on the Accuracy of Deep Learning-Based Glioma Detection from MRI Images. Diagnostics (Basel) 2023; 13:diagnostics13040651. [PMID: 36832138 PMCID: PMC9955350 DOI: 10.3390/diagnostics13040651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 02/04/2023] [Accepted: 02/07/2023] [Indexed: 02/12/2023] Open
Abstract
Brain tumors have been the subject of research for many years. They are typically classified into two main groups, benign and malignant, and the most common malignant brain tumor type is glioma. Different imaging technologies can be used in the diagnosis of glioma; among them, MRI is the preferred modality due to its high-resolution image data. However, detecting gliomas in a large set of MRI data can be challenging for practitioners. To address this concern, many Deep Learning (DL) models based on Convolutional Neural Networks (CNNs) have been proposed for glioma detection. However, which CNN architecture works efficiently under various conditions, including the development environment and other programming aspects, has not been studied so far. The purpose of this research work is therefore to investigate the impact of two major programming environments (MATLAB and Python) on the accuracy of CNN-based glioma detection from Magnetic Resonance Imaging (MRI) images. To this end, experiments on the Brain Tumor Segmentation (BraTS) dataset (2016 and 2017), consisting of multiparametric MRI images, are performed by implementing two popular CNN architectures, the three-dimensional (3D) U-Net and the V-Net, in both programming environments. From the results, it is concluded that the use of Python with Google Colaboratory (Colab) can be highly useful in the implementation of CNN-based models for glioma detection. Moreover, the 3D U-Net model is found to perform better, attaining high accuracy on the dataset. The authors believe that the results of this study will provide useful information to the research community for the appropriate implementation of DL approaches for brain tumor detection.
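As a small illustration of the kind of Python building block such implementations assemble, the Keras sketch below defines one 3D U-Net-style encoder block; filter counts, patch size, and the four-modality input are illustrative assumptions, not the study's configuration.

```python
# Illustrative single 3D U-Net-style encoder block in Keras.
import tensorflow as tf
from tensorflow.keras import layers

def encoder_block_3d(x, filters):
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    skip = x                                   # would feed the matching decoder level
    x = layers.MaxPooling3D(pool_size=2)(x)
    return x, skip

inp = layers.Input(shape=(64, 64, 64, 4))      # multiparametric MRI patch (4 modalities)
down, skip = encoder_block_3d(inp, 16)
print(down.shape, skip.shape)                  # (None, 32, 32, 32, 16) (None, 64, 64, 64, 16)
```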
Collapse
|