1. Jayaram K, Kumarganesh S, Immanuvel A, Ganesh C. Classifications of meningioma brain images using the novel Convolutional Fuzzy C Means (CFCM) architecture and performance analysis of hardware incorporated tumor segmentation module. Network (Bristol, England) 2025:1-22. [PMID: 40271969; DOI: 10.1080/0954898x.2025.2491537]
Abstract
In this paper, a meningioma detection and segmentation method is proposed. The work presents an effective approach to locating meningiomas in brain images through a novel CFCM classification scheme. The method consists of a Non-Subsampled Contourlet Transform decomposition module that decomposes the brain image into multi-scale sub-band images, from which heuristic and uniqueness features are computed individually. These heuristic and uniqueness features are then trained and classified using the Convolutional Fuzzy C Means (CFCM) classifier. The method is applied to two independent brain imaging datasets. The proposed meningioma identification system obtained 98.81% Se, 98.83% Sp, 99.04% Acc, 99.12% Pr, and 99.14% FIS on the Nanfang University dataset brain images, and 98.92% Se, 98.88% Sp, 98.9% Acc, 98.88% Pr, and 99.36% FIS on the BRATS 2021 brain images. Finally, the tumour segmentation module is designed in VLSI and simulated using the Xilinx Project Navigator.
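As a brief illustration of the evaluation metrics quoted in this abstract (Se, Sp, Acc, Pr), the following Python sketch computes them from a binary confusion matrix; the labels are placeholders and the code is not the authors' implementation.

```python
# Hedged sketch (not the paper's code): sensitivity, specificity, accuracy
# and precision computed from binary ground-truth and predicted labels.
import numpy as np

def binary_metrics(y_true, y_pred):
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    pr = tp / (tp + fp)                    # precision
    return se, sp, acc, pr

# Placeholder labels: 1 = meningioma slice, 0 = normal slice
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 1]
print(binary_metrics(y_true, y_pred))
```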
Affiliation(s)
- K Jayaram: Department of ECE, Kalaignarkarunanidhi Institute of Technology, Coimbatore, India
- S Kumarganesh: Department of ECE, Knowledge Institute of Technology, Salem, India
- A Immanuvel: Department of ECE, Paavai College of Engineering, Namakkal, India
- C Ganesh: Department of CCE, Sri Eshwar College of Engineering, Coimbatore, India
2. Lisha R, Agees Kumar C, Ajith Bosco Raj T. Deep Learning-Assisted Computer-Aided Diagnosis System for Early Detection of Lung Cancer. Journal of Clinical Ultrasound 2025. [PMID: 39887401; DOI: 10.1002/jcu.23929]
Abstract
PURPOSE Lung cancer is the leading cause of cancer-related fatalities worldwide. The dimensions and position of the primary tumor, the presence of lesions, whether the tumor is malignant or benign, and the patient's mental health all play a part in diagnosing the disease. METHODS Standard computer-assisted diagnosis (CAD) systems detect lung cancer in three stages: preprocessing, feature extraction, and classification. A fast non-local means (FNLM) filter is first used for preprocessing. The lung images are then processed with the Binary Grasshopper Optimization Algorithm (BGOA) to extract features. RESULTS The proposed model removes five layers of the ImageNet-based network and adds ten layers of a neural network architecture that automatically gathers and categorizes features. Using the same dataset, the proposed model was compared with existing deep learning techniques. CONCLUSION The suggested model outperformed existing techniques in accuracy and sensitivity (99.53% accuracy and 98.95% sensitivity), and its ROC curve lies closer to the true-positive region than those of alternative methods.
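For context, the FNLM preprocessing step described above corresponds to the fast non-local means filter available in scikit-image; the sketch below is an illustrative denoising pass with assumed parameter values, not the authors' configuration.

```python
# Hedged sketch: a fast non-local means (FNLM) denoising step using
# scikit-image; the image and filter settings are placeholders.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

ct_slice = img_as_float(data.camera())        # placeholder for a lung CT slice
sigma_est = np.mean(estimate_sigma(ct_slice))  # rough noise estimate
denoised = denoise_nl_means(
    ct_slice,
    h=1.15 * sigma_est,      # filtering strength tied to the noise estimate
    sigma=sigma_est,
    patch_size=5,
    patch_distance=6,
    fast_mode=True,          # the "fast" NLM variant
)
```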
Affiliation(s)
- R Lisha: Department of Electronics and Communication Engineering, Arunachala College of Engineering for Women, Kanyakumari, India
- C Agees Kumar: Department of Electrical and Electronics Engineering, Arunachala College of Engineering for Women, Kanyakumari, India
- T Ajith Bosco Raj: Department of Electronics and Communication Engineering, PSN College of Engineering and Technology, Tirunelveli, India
3. Bhagyalaxmi K, Dwarakanath B. CDCG-UNet: Chaotic Optimization Assisted Brain Tumor Segmentation Based on Dilated Channel Gate Attention U-Net Model. Neuroinformatics 2025; 23:12. [PMID: 39841321; DOI: 10.1007/s12021-024-09701-6]
Abstract
Brain tumours are one of the most deadly and noticeable types of cancer, affecting both children and adults. Major drawbacks in brain tumour identification are late diagnosis and the high cost of tumour-detecting devices. Most existing approaches use ML algorithms, but they suffer from low accuracy, high loss, and high computing cost. To address these challenges, a novel U-Net model for tumour segmentation in magnetic resonance images (MRI) is proposed. Initially, images are obtained from the dataset and pre-processed with the Probabilistic Hybrid Wiener filter (PHWF) to remove unwanted noise and improve image quality. To reduce model complexity, the pre-processed images are passed to a feature extraction stage based on a 3D Convolutional Vision Transformer (3D-VT). Segmentation is then performed with the chaotic optimization assisted Dilated Channel Gate attention U-Net (CDCG-UNet) model, which delineates tumour portions as whole tumour (WT), tumour core (TC), and enhancing tumour (ET) regions. The loss function is optimized using the Chaotic Harris Shrinking Spiral optimization algorithm (CHSOA). The proposed CDCG-UNet model is evaluated on three datasets: BRATS 2021, BRATS 2020, and BRATS 2023. On BRATS 2021 it obtained Dice scores of 0.972 for ET, 0.987 for TC, and 0.98 for WT; on BRATS 2020 it produced Dice scores of 98.87% for ET, 98.67% for TC, and 99.1% for WT; and on BRATS 2023 it yields 98.42% for ET, 98.08% for TC, and 99.3% for WT.
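For reference, the WT/TC/ET Dice scores reported above are conventionally computed from label maps as in the following sketch; the BraTS-style label values and random volumes are assumptions for illustration only.

```python
# Hedged sketch: per-region Dice scores from predicted and ground-truth label
# volumes; labels follow the common BraTS convention (1 = core, 2 = edema,
# 4 = enhancing), which is an assumption, not taken from the paper.
import numpy as np

def dice(pred_mask, true_mask, eps=1e-7):
    inter = np.sum(pred_mask & true_mask)
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

def region_dice(pred, true):
    regions = {
        "WT": lambda x: np.isin(x, [1, 2, 4]),   # whole tumour
        "TC": lambda x: np.isin(x, [1, 4]),      # tumour core
        "ET": lambda x: x == 4,                  # enhancing tumour
    }
    return {name: dice(sel(pred), sel(true)) for name, sel in regions.items()}

pred = np.random.randint(0, 5, size=(8, 8, 8))   # placeholder prediction volume
true = np.random.randint(0, 5, size=(8, 8, 8))   # placeholder ground truth
print(region_dice(pred, true))
```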
Affiliation(s)
- K Bhagyalaxmi: Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, 600089, India
- B Dwarakanath: Department of Information Technology, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Ramapuram, Chennai, 600089, India
4. Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Archives of Computational Methods in Engineering 2025. [DOI: 10.1007/s11831-024-10219-y]
5. Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024; 60:2272-2289. [PMID: 38243677; DOI: 10.1002/jmri.29247]
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri: Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri: Department of Neuroscience, University of California San Diego, San Diego, California, USA
6. Nazir MI, Akter A, Hussen Wadud MA, Uddin MA. Utilizing customized CNN for brain tumor prediction with explainable AI. Heliyon 2024; 10:e38997. [PMID: 39449697; PMCID: PMC11497403; DOI: 10.1016/j.heliyon.2024.e38997]
Abstract
Timely diagnosis of brain tumors using MRI and its potential impact on patient survival are critical issues addressed in this study. Traditional DL models often lack transparency, leading to skepticism among medical experts owing to their "black box" nature. This study addresses this gap by presenting an innovative approach for brain tumor detection. It utilizes a customized Convolutional Neural Network (CNN) model empowered by three advanced explainable artificial intelligence (XAI) techniques: Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM). The study utilized the BR35H dataset, which includes 3060 brain MRI images encompassing both tumorous and non-tumorous cases. The proposed model achieved a remarkable training accuracy of 100% and validation accuracy of 98.67%. Precision, recall, and F1 score metrics demonstrated exceptional performance at 98.50%, confirming the accuracy of the model in tumor detection. Detailed result analysis, including a confusion matrix, comparison with existing models, and generalizability tests on other datasets, establishes the superiority of the proposed approach and sets a new benchmark for accuracy. By integrating a customized CNN model with XAI techniques, this research enhances trust in AI-driven medical diagnostics and offers a promising pathway for early tumor detection and potentially life-saving interventions.
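As an illustrative sketch of the Grad-CAM technique used in the study, the snippet below computes a class activation map with a generic PyTorch backbone; the model, layer choice, and input are placeholders rather than the paper's customized CNN.

```python
# Hedged sketch: a minimal Grad-CAM pass in PyTorch; backbone, target layer
# and input size are assumptions, not the authors' architecture.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4[-1]
feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["act"] = output.detach()

def bwd_hook(_, grad_in, grad_out):
    grads["grad"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)                 # placeholder MRI slice as a tensor
logits = model(x)
logits[0, logits.argmax()].backward()           # gradient of the predicted class

weights = grads["grad"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
cam = F.relu((weights * feats["act"]).sum(dim=1))         # weighted sum + ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:], mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
```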
Affiliation(s)
- Md Imran Nazir: Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh
- Afsana Akter: Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka, Bangladesh
- Md Anwar Hussen Wadud: Department of Computer Science & Engineering, Sunamgonj Science and Technology University, Sunamganj, 3000, Bangladesh
- Md Ashraf Uddin: Department of Computer Science & Engineering, Jagannath University, Dhaka, Bangladesh
7. Yuan J. Brain tumor image segmentation method using hybrid attention module and improved mask RCNN. Sci Rep 2024; 14:20615. [PMID: 39232028; PMCID: PMC11375165; DOI: 10.1038/s41598-024-71250-4]
Abstract
To meet the needs of automated analysis of brain tumor magnetic resonance imaging, this study introduces an enhanced instance segmentation method built upon the mask region-based convolutional neural network. By incorporating squeeze-and-excitation networks (a channel attention mechanism) and a concatenated attention neural network (a spatial attention mechanism), the model can more adeptly focus on the critical regions and finer details of brain tumors. ResNet-50, combined with the attention module and a feature pyramid network, serves as the backbone to effectively capture multi-scale characteristics of brain tumors. At the same time, the region proposal network and region-of-interest alignment are used to ensure that the segmented area matches the actual tumor morphology. The originality of the research lies in replacing the Mask R-CNN backbone with a deep residual network that combines the attention mechanism and the feature pyramid network, improving the efficiency of brain tumor feature extraction. In experiments, the precision of the model is 90.72%, which is 0.76% higher than that of the original model; recall is 91.68%, an increase of 0.95%; and mean Intersection over Union is 94.56%, an increase of 1.39%. This method achieves precise segmentation of brain tumor MRI, so doctors can easily and accurately locate the tumor area through the segmentation results and quickly measure the diameter, area, and other properties of the tumor, providing more comprehensive diagnostic information.
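The squeeze-and-excitation (channel attention) module mentioned above has a standard textbook form; the PyTorch sketch below shows that generic form, not the paper's exact block.

```python
# Hedged sketch: a standard squeeze-and-excitation (SE) channel-attention block.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global context
        self.fc = nn.Sequential(                      # excitation: channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # re-weight feature channels

feat = torch.randn(2, 256, 32, 32)                     # placeholder feature map
print(SEBlock(256)(feat).shape)                        # torch.Size([2, 256, 32, 32])
```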
Affiliation(s)
- Jinglin Yuan: School of Applied Science, Macao Polytechnic University, Macau, 999078, China
8. Vossough A, Khalili N, Familiar AM, Gandhi D, Viswanathan K, Tu W, Haldar D, Bagheri S, Anderson H, Haldar S, Storm PB, Resnick A, Ware JB, Nabavizadeh A, Fathi Kazerooni A. Training and Comparison of nnU-Net and DeepMedic Methods for Autosegmentation of Pediatric Brain Tumors. AJNR Am J Neuroradiol 2024; 45:1081-1089. [PMID: 38724204; PMCID: PMC11383404; DOI: 10.3174/ajnr.a8293]
Abstract
BACKGROUND AND PURPOSE Tumor segmentation is essential in surgical and treatment planning and response assessment and monitoring in pediatric brain tumors, the leading cause of cancer-related death among children. However, manual segmentation is time-consuming and has high interoperator variability, underscoring the need for more efficient methods. After training, we compared 2 deep-learning-based 3D segmentation models, DeepMedic and nnU-Net, with pediatric-specific multi-institutional brain tumor data based on multiparametric MR images. MATERIALS AND METHODS Multiparametric preoperative MR imaging scans of 339 pediatric patients (n = 293 internal and n = 46 external cohorts) with a variety of tumor subtypes were preprocessed and manually segmented into 4 tumor subregions, ie, enhancing tumor, nonenhancing tumor, cystic components, and peritumoral edema. After training, performances of the 2 models on internal and external test sets were evaluated with reference to ground truth manual segmentations. Additionally, concordance was assessed by comparing the volume of the subregions as a percentage of the whole tumor between model predictions and ground truth segmentations using the Pearson or Spearman correlation coefficients and the Bland-Altman method. RESULTS The mean Dice score for nnU-Net internal test set was 0.9 (SD, 0.07) (median, 0.94) for whole tumor; 0.77 (SD, 0.29) for enhancing tumor; 0.66 (SD, 0.32) for nonenhancing tumor; 0.71 (SD, 0.33) for cystic components, and 0.71 (SD, 0.40) for peritumoral edema, respectively. For DeepMedic, the mean Dice scores were 0.82 (SD, 0.16) for whole tumor; 0.66 (SD, 0.32) for enhancing tumor; 0.48 (SD, 0.27) for nonenhancing tumor; 0.48 (SD, 0.36) for cystic components, and 0.19 (SD, 0.33) for peritumoral edema, respectively. Dice scores were significantly higher for nnU-Net (P ≤ .01). Correlation coefficients for tumor subregion percentage volumes were higher (0.98 versus 0.91 for enhancing tumor, 0.97 versus 0.75 for nonenhancing tumor, 0.98 versus 0.80 for cystic components, 0.95 versus 0.33 for peritumoral edema in the internal test set). Bland-Altman plots were better for nnU-Net compared with DeepMedic. External validation of the trained nnU-Net model on the multi-institutional Brain Tumor Segmentation Challenge in Pediatrics (BraTS-PEDs) 2023 data set revealed high generalization capability in the segmentation of whole tumor, tumor core (a combination of enhancing tumor, nonenhancing tumor, and cystic components), and enhancing tumor with mean Dice scores of 0.87 (SD, 0.13) (median, 0.91), 0.83 (SD, 0.18) (median, 0.89), and 0.48 (SD, 0.38) (median, 0.58), respectively. CONCLUSIONS The pediatric-specific data-trained nnU-Net model is superior to DeepMedic for whole tumor and subregion segmentation of pediatric brain tumors.
Affiliation(s)
- Arastoo Vossough: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Nastaran Khalili: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Ariana M Familiar: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Deep Gandhi: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Karthik Viswanathan: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Wenxin Tu: College of Arts and Sciences, University of Pennsylvania, Philadelphia, Pennsylvania
- Debanjan Haldar: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Sina Bagheri: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Hannah Anderson: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Shuvanjan Haldar: School of Engineering, Rutgers University, New Brunswick, New Jersey
- Phillip B Storm: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Adam Resnick: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania
- Jeffrey B Ware: Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Ali Nabavizadeh: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania
- Anahita Fathi Kazerooni: Center for Data Driven Discovery in Biomedicine, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania; Center for AI & Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, Pennsylvania; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania
9. Sarah P, Krishnapriya S, Saladi S, Karuna Y, Bavirisetti DP. A novel approach to brain tumor detection using K-Means++, SGLDM, ResNet50, and synthetic data augmentation. Front Physiol 2024; 15:1342572. [PMID: 39077759; PMCID: PMC11284281; DOI: 10.3389/fphys.2024.1342572]
Abstract
Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors accurately. To further improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Result: In terms of accuracy, sensitivity, and specificity, the evaluation on the Br35H::BrainTumorDetection2020 dataset showed superior performance of the suggested method compared to existing state-of-the-art approaches. This indicates its effectiveness in achieving higher precision in identifying and classifying brain tumors from MRI data, showcasing advancements in diagnostic reliability and efficacy. Discussion: The superior performance of the suggested method indicates its robustness in accurately classifying brain tumors from MRI images, achieving higher accuracy, sensitivity, and specificity compared to existing methods. The method's enhanced sensitivity ensures a greater detection rate of true positive cases, while its improved specificity reduces false positives, thereby optimizing clinical decision-making and patient care in neuro-oncology.
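As a minimal illustration of the K-Means++ segmentation stage described above, the sketch below clusters pixel intensities of a placeholder MR slice with scikit-learn; the cluster count and the assumption that the brightest cluster corresponds to the tumour are illustrative only.

```python
# Hedged sketch: intensity-based K-means++ clustering of an MR slice.
import numpy as np
from sklearn.cluster import KMeans

mri_slice = np.random.rand(128, 128)          # placeholder MRI slice
pixels = mri_slice.reshape(-1, 1)             # one intensity feature per pixel

km = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(pixels).reshape(mri_slice.shape)

# Illustrative assumption: the brightest cluster is taken as the tumour region.
tumour_cluster = np.argmax(km.cluster_centers_.ravel())
tumour_mask = labels == tumour_cluster
```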
Affiliation(s)
- Ponuku Sarah: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Srigiri Krishnapriya: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Saritha Saladi: School of Electronics Engineering, VIT-AP University, Amaravati, India
- Yepuganti Karuna: School of Electronics Engineering, VIT-AP University, Amaravati, India
- Durga Prasad Bavirisetti: Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
10. Zahoor MM, Khan SH, Alahmadi TJ, Alsahfi T, Mazroa ASA, Sakr HA, Alqahtani S, Albanyan A, Alshemaimri BK. Brain Tumor MRI Classification Using a Novel Deep Residual and Regional CNN. Biomedicines 2024; 12:1395. [PMID: 39061969; PMCID: PMC11274019; DOI: 10.3390/biomedicines12071395]
Abstract
Brain tumor classification is essential for clinical diagnosis and treatment planning. Deep learning models have shown great promise in this task, but they are often challenged by the complex and diverse nature of brain tumors. To address this challenge, we propose a novel deep residual and region-based convolutional neural network (CNN) architecture, called Res-BRNet, for brain tumor classification using magnetic resonance imaging (MRI) scans. Res-BRNet employs a systematic combination of regional and boundary-based operations within modified spatial and residual blocks. The spatial blocks extract homogeneity, heterogeneity, and boundary-related features of brain tumors, while the residual blocks significantly capture local and global texture variations. We evaluated the performance of Res-BRNet on a challenging dataset collected from Kaggle repositories, Br35H, and figshare, containing various tumor categories, including meningioma, glioma, pituitary, and healthy images. Res-BRNet outperformed standard CNN models, achieving excellent accuracy (98.22%), sensitivity (0.9811), F1-score (0.9841), and precision (0.9822). Our results suggest that Res-BRNet is a promising tool for brain tumor classification, with the potential to improve the accuracy and efficiency of clinical diagnosis and treatment planning.
Affiliation(s)
- Mirza Mumtaz Zahoor: Faculty of Computer Sciences, Ibadat International University, Islamabad 44000, Pakistan
- Saddam Hussain Khan: Department of Computer System Engineering, University of Engineering and Applied Science (UEAS), Swat 19060, Pakistan
- Tahani Jaser Alahmadi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tariq Alsahfi: Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah 21959, Saudi Arabia
- Alanoud S. Al Mazroa: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hesham A. Sakr: Nile Higher Institute for Engineering and Technology, Mansoura 35511, Dakahlia, Egypt
- Saeed Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Abdullah Albanyan: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj 16278, Saudi Arabia
11. Irastorza-Valera L, Soria-Gómez E, Benitez JM, Montáns FJ, Saucedo-Mora L. Review of the Brain's Behaviour after Injury and Disease for Its Application in an Agent-Based Model (ABM). Biomimetics (Basel) 2024; 9:362. [PMID: 38921242; PMCID: PMC11202129; DOI: 10.3390/biomimetics9060362]
Abstract
The brain is the most complex organ in the human body and, as such, its study entails great challenges (methodological, theoretical, etc.). Nonetheless, there is a remarkable amount of studies about the consequences of pathological conditions on its development and functioning. This bibliographic review aims to cover mostly findings related to changes in the physical distribution of neurons and their connections-the connectome-both structural and functional, as well as their modelling approaches. It does not intend to offer an extensive description of all conditions affecting the brain; rather, it presents the most common ones. Thus, here, we highlight the need for accurate brain modelling that can subsequently be used to understand brain function and be applied to diagnose, track, and simulate treatments for the most prevalent pathologies affecting the brain.
Affiliation(s)
- Luis Irastorza-Valera: E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain; PIMM Laboratory, ENSAM–Arts et Métiers ParisTech, 151 Bd de l'Hôpital, 75013 Paris, France
- Edgar Soria-Gómez: Achúcarro Basque Center for Neuroscience, Barrio Sarriena, s/n, 48940 Leioa, Spain; Ikerbasque, Basque Foundation for Science, Plaza Euskadi, 5, 48009 Bilbao, Spain; Department of Neurosciences, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Spain
- José María Benitez: E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain
- Francisco J. Montáns: E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain; Department of Mechanical and Aerospace Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL 32611, USA
- Luis Saucedo-Mora: E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain; Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PJ, UK; Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (MIT), 77 Massachusetts Ave, Cambridge, MA 02139, USA
12. Usha MP, Kannan G, Ramamoorthy M. Multimodal Brain Tumor Classification Using Convolutional Tumnet Architecture. Behav Neurol 2024; 2024:4678554. [PMID: 38882177; PMCID: PMC11178426; DOI: 10.1155/2024/4678554]
Abstract
Brain malignancy is the most common and aggressive tumor, with a short life expectancy at the fourth grade of the disease. A sound medical plan, comprising both diagnosis and therapy, is therefore a crucial step toward improving a patient's well-being. Brain tumors are commonly imaged with magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). In this paper, multimodal fused imaging with classification and segmentation of brain tumors is proposed using a deep learning method. MRI and CT brain tumor images of the same slices (308 slices of meningioma and sarcoma) are combined using three different pixel-level fusion methods. The presence or absence of a tumor is classified using the proposed Tumnet technique, and the tumor area is located accordingly. Tumnet is also applied to single-modal MRI/CT (561 image slices) for classification. The proposed Tumnet was modeled with 5 convolutional layers, 3 pooling layers with the ReLU activation function, and 3 fully connected layers. The first-order statistical fusion metrics for the average fusion method of MRI-CT images are SSIM tissue at 83%, SSIM bone at 84%, accuracy at 90%, sensitivity at 96%, and specificity at 95%, and the second-order statistical fusion metrics are a standard deviation of the fused images of 79% and an entropy of 0.99. The entropy value confirms the presence of additional features in the fused image. The proposed Tumnet yields a sensitivity of 96%, an accuracy of 98%, a specificity of 99%, normalized values of the mean of 0.75, a standard deviation of 0.4, a variance of 0.16, and an entropy of 0.90.
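For context, pixel-level average fusion and the SSIM/entropy fusion measures referred to above can be computed as in the following sketch; inputs and parameters are placeholders, not the study's data.

```python
# Hedged sketch: average fusion of registered MRI and CT slices, plus SSIM and
# Shannon entropy of the fused image.
import numpy as np
from skimage.metrics import structural_similarity as ssim

mri = np.random.rand(256, 256)            # placeholder registered MRI slice
ct = np.random.rand(256, 256)             # placeholder registered CT slice

fused = 0.5 * (mri + ct)                  # simple pixel-level average fusion

ssim_mri = ssim(fused, mri, data_range=1.0)
ssim_ct = ssim(fused, ct, data_range=1.0)

hist, _ = np.histogram(fused, bins=256, range=(0.0, 1.0), density=True)
p = hist / hist.sum()
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy of the fused image
print(ssim_mri, ssim_ct, entropy)
```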
Affiliation(s)
- M Padma Usha: Department of Electronics and Communication Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India
- G Kannan: Department of Electronics and Communication Engineering, B.S. Abdur Rahman Crescent Institute of Science and Technology, Vandalur, Chennai, India
- M Ramamoorthy: Department of Artificial Intelligence and Machine Learning, Saveetha School of Engineering, SIMATS, Chennai, 600124, India
13. Fiscone C, Curti N, Ceccarelli M, Remondini D, Testa C, Lodi R, Tonon C, Manners DN, Castellani G. Generalizing the Enhanced-Deep-Super-Resolution Neural Network to Brain MR Images: A Retrospective Study on the Cam-CAN Dataset. eNeuro 2024; 11:ENEURO.0458-22.2023. [PMID: 38729763; PMCID: PMC11140654; DOI: 10.1523/eneuro.0458-22.2023]
Abstract
The Enhanced-Deep-Super-Resolution (EDSR) model is a state-of-the-art convolutional neural network suitable for improving image spatial resolution. It was previously trained with general-purpose pictures and then, in this work, tested on biomedical magnetic resonance (MR) images, comparing the network outcomes with traditional up-sampling techniques. We explored possible changes in the model response when different MR sequences were analyzed. T1w and T2w MR brain images of 70 human healthy subjects (F:M, 40:30) from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) repository were down-sampled and then up-sampled using EDSR model and BiCubic (BC) interpolation. Several reference metrics were used to quantitatively assess the performance of up-sampling operations (RMSE, pSNR, SSIM, and HFEN). Two-dimensional and three-dimensional reconstructions were evaluated. Different brain tissues were analyzed individually. The EDSR model was superior to BC interpolation on the selected metrics, both for two- and three- dimensional reconstructions. The reference metrics showed higher quality of EDSR over BC reconstructions for all the analyzed images, with a significant difference of all the criteria in T1w images and of the perception-based SSIM and HFEN in T2w images. The analysis per tissue highlights differences in EDSR performance related to the gray-level values, showing a relative lack of outperformance in reconstructing hyperintense areas. The EDSR model, trained on general-purpose images, better reconstructs MR T1w and T2w images than BC, without any retraining or fine-tuning. These results highlight the excellent generalization ability of the network and lead to possible applications on other MR measurements.
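The bicubic baseline and reference metrics (RMSE, pSNR) used in this comparison can be reproduced generically as below; the tensor is a placeholder, not Cam-CAN data, and the data range is assumed to be [0, 1].

```python
# Hedged sketch: down-sample, bicubic up-sample, and compute RMSE / PSNR
# against the original image.
import torch
import torch.nn.functional as F

hr = torch.rand(1, 1, 128, 128)       # placeholder "high-resolution" T1w slice
lr = F.interpolate(hr, scale_factor=0.5, mode="bicubic", align_corners=False)
bc_up = F.interpolate(lr, size=hr.shape[-2:], mode="bicubic", align_corners=False)

mse = torch.mean((bc_up - hr) ** 2)
rmse = torch.sqrt(mse)
psnr = 10 * torch.log10(1.0 / mse)    # data range assumed to be [0, 1]
print(rmse.item(), psnr.item())
```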
Affiliation(s)
- Cristiana Fiscone: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy
- Nico Curti: Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy
- Mattia Ceccarelli: Department of Agricultural and Food Sciences, University of Bologna, Bologna 40127, Italy
- Daniel Remondini: Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy; INFN, Bologna 40127, Italy
- Claudia Testa: Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy; INFN, Bologna 40127, Italy
- Raffaele Lodi: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
- Caterina Tonon: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy; Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
- David Neil Manners: Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy; Department for Life Quality Studies, University of Bologna, Rimini 47921, Italy
- Gastone Castellani: Department of Medical and Surgical Sciences, University of Bologna, Bologna 40138, Italy
14. Rai HM, Yoo J, Dashkevych S. Two-headed UNetEfficientNets for parallel execution of segmentation and classification of brain tumors: incorporating postprocessing techniques with connected component labelling. J Cancer Res Clin Oncol 2024; 150:220. [PMID: 38684578; PMCID: PMC11058623; DOI: 10.1007/s00432-024-05718-1]
Abstract
PURPOSE The purpose of this study is to develop accurate and automated detection and segmentation methods for brain tumors, given their significant fatality rates, with aggressive malignant tumors like Glioblastoma Multiforme (GBM) having a five-year survival rate as low as 5 to 10%. This underscores the urgent need to improve diagnosis and treatment outcomes through innovative approaches in medical imaging and deep learning techniques. METHODS In this work, we propose a novel approach utilizing the two-headed UNetEfficientNets model for simultaneous segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) images. The model combines the strengths of EfficientNets and a modified two-headed Unet model. We utilized a publicly available dataset consisting of 3064 brain MR images classified into three tumor classes: Meningioma, Glioma, and Pituitary. To enhance the training process, we performed 12 types of data augmentation on the training dataset. We evaluated the methodology using six deep learning models, ranging from UNetEfficientNet-B0 to UNetEfficientNet-B5, optimizing the segmentation and classification heads using binary cross entropy (BCE) loss with Dice and BCE with focal loss, respectively. Post-processing techniques such as connected component labeling (CCL) and ensemble models were applied to improve segmentation outcomes. RESULTS The proposed UNetEfficientNet-B4 model achieved outstanding results, with an accuracy of 99.4% after postprocessing. Additionally, it obtained high scores for DICE (94.03%), precision (98.67%), and recall (99.00%) after post-processing. The ensemble technique further improved segmentation performance, with a global DICE score of 95.70% and Jaccard index of 91.20%. CONCLUSION Our study demonstrates the high efficiency and accuracy of the proposed UNetEfficientNet-B4 model in the automatic and parallel detection and segmentation of brain tumors from MRI images. This approach holds promise for improving diagnosis and treatment planning for patients with brain tumors, potentially leading to better outcomes and prognosis.
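As a hedged sketch of the connected component labelling (CCL) post-processing mentioned above, the snippet below keeps only the largest predicted component using SciPy; the binary mask is a placeholder and the keep-largest rule is an assumption, not necessarily the authors' exact rule.

```python
# Hedged sketch: connected-component labelling used to clean a binary
# segmentation mask by retaining only its largest component.
import numpy as np
from scipy import ndimage

pred_mask = np.random.rand(128, 128) > 0.7        # placeholder binary prediction

labels, n = ndimage.label(pred_mask)               # label connected components
if n > 0:
    sizes = ndimage.sum(pred_mask, labels, index=range(1, n + 1))
    largest = np.argmax(sizes) + 1                 # component ids start at 1
    cleaned = labels == largest                    # drop small spurious blobs
else:
    cleaned = pred_mask
```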
Affiliation(s)
- Hari Mohan Rai: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea
- Joon Yoo: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea
- Serhii Dashkevych: Department of Computer Engineering, Vistula University, Stokłosy 3, 02-787, Warszawa, Poland
15. Shams UA, Javed I, Fizan M, Shah AR, Mustafa G, Zubair M, Massoud Y, Mehmood MQ, Naveed MA. Bio-net dataset: AI-based diagnostic solutions using peripheral blood smear images. Blood Cells Mol Dis 2024; 105:102823. [PMID: 38241949; DOI: 10.1016/j.bcmd.2024.102823]
Abstract
Peripheral blood smear examination is one of the basic steps in the evaluation of different blood cells, serving as a confirmatory step after an automated complete blood count analysis. Manual microscopy is time-consuming and requires professional laboratory expertise, so the turn-around time for a peripheral smear in a health care center is approximately 3-4 hours. To avoid traditional manual counting under the microscope, computerized automation of peripheral blood smear examination has been adopted, which is a challenging task in medical diagnostics. In recent times, deep learning techniques have overcome the challenges associated with human microscopic evaluation of peripheral smears, leading to reduced cost and more precise diagnosis. However, their application can be significantly improved by the availability of annotated datasets. This study presents a large customized annotated blood cell dataset (named the Bio-Net dataset, from healthy individuals) together with blood cell detection and counting in peripheral blood smear images. A mini-version of the dataset for specialized WBC-based image processing tasks is also provided to classify healthy, mature WBCs into their respective classes. An object detection algorithm called You Only Look Once (YOLO), with a refashioned configuration, has been trained on the novel dataset to automatically detect and classify blood cells into RBCs, WBCs, and platelets, and the results are compared with other publicly available datasets to highlight the dataset's versatility. In short, the introduction of the Bio-Net dataset and AI-powered detection and counting offers significant potential for advancing biomedical research in analyzing and understanding biological data.
Affiliation(s)
- Usman Ali Shams: Department of Hematology, University of Health Sciences (UHS), Khayaban-e-Jamia Punjab, Lahore 54600, Pakistan
- Isma Javed: MicroNano Lab, Department of Electrical Engineering, Information Technology University (ITU) of Punjab, Ferozepur Road, Lahore 54600, Pakistan
- Muhammad Fizan: MicroNano Lab, Department of Electrical Engineering, Information Technology University (ITU) of Punjab, Ferozepur Road, Lahore 54600, Pakistan
- Aqib Raza Shah: MicroNano Lab, Department of Electrical Engineering, Information Technology University (ITU) of Punjab, Ferozepur Road, Lahore 54600, Pakistan
- Ghulam Mustafa: Department of Hematology, University of Health Sciences (UHS), Khayaban-e-Jamia Punjab, Lahore 54600, Pakistan
- Muhammad Zubair: Innovative Technologies Laboratories (ITL), King Abdullah University of Science and Technology (KAUST), Saudi Arabia
- Yehia Massoud: Innovative Technologies Laboratories (ITL), King Abdullah University of Science and Technology (KAUST), Saudi Arabia
- Muhammad Qasim Mehmood: MicroNano Lab, Department of Electrical Engineering, Information Technology University (ITU) of Punjab, Ferozepur Road, Lahore 54600, Pakistan
- Muhammad Asif Naveed: Department of Hematology, University of Health Sciences (UHS), Khayaban-e-Jamia Punjab, Lahore 54600, Pakistan
16. Fang Y, Wang H, Cao D, Cai S, Qian C, Feng M, Zhang W, Cao L, Chen H, Wei L, Mu S, Pei Z, Li J, Wang R, Wang S. Multi-center application of a convolutional neural network for preoperative detection of cavernous sinus invasion in pituitary adenomas. Neuroradiology 2024; 66:353-360. [PMID: 38236424; DOI: 10.1007/s00234-024-03287-1]
Abstract
OBJECTIVE Cavernous sinus invasion (CSI) plays a pivotal role in determining management of pituitary adenomas. The study aimed to develop a Convolutional Neural Network (CNN) model to diagnose CSI across multiple centers. METHODS A total of 729 cases with (n = 543) or without CSI (n = 186) were retrospectively obtained from five medical centers between January 2011 and December 2021. The CNN model was trained using T1-enhanced MRI from two pituitary centers of excellence (n = 647). Data from the other three municipal centers (n = 82) served as the external testing set for evaluating model performance. Area-under-the-receiver-operating-characteristic-curve (AUC-ROC) analyses were employed to evaluate predictive performance, and gradient-weighted class activation mapping (Grad-CAM) was used to determine the model's regions of interest. RESULTS The CNN model achieved high diagnostic accuracy (0.89) in identifying CSI in the external testing set, with an AUC-ROC value of 0.92 (95% CI, 0.88-0.97), better than the clinical CSI predictors of diameter (AUC-ROC: 0.75) and length (AUC-ROC: 0.80) and the three dichotomizations of the Knosp grading system (AUC-ROC: 0.70-0.82). In cases with Knosp grade 3A (n = 24; CSI rate, 0.35), the model's accuracy was 0.78, with sensitivity and specificity of 0.72 and 0.78, respectively. According to the Grad-CAM results, the model's attention was confirmed to center on the sellar region in cases with CSI. CONCLUSIONS The deep learning model accurately identifies CSI and satisfactorily localizes it across multiple centers.
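For reference, an AUC-ROC of the kind reported above can be computed from per-case probabilities as follows; the labels and scores are placeholders, not the study's data.

```python
# Hedged sketch: AUC-ROC from predicted probabilities with scikit-learn.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # 1 = CSI present
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])   # model probabilities

auc = roc_auc_score(y_true, y_prob)
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
print(f"AUC-ROC = {auc:.2f}")
```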
Affiliation(s)
- Yi Fang: Department of Neurosurgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dongcheng District, Beijing, 100730, China; Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- He Wang: Department of Neurosurgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dongcheng District, Beijing, 100730, China
- Demao Cao: Department of Neurosurgery, The Affiliated Hospital of Yangzhou University, Yangzhou University, Jiangsu, China
- Shengyu Cai: Department of Neurosurgery, the Second Affiliated Hospital, Fujian Medical University, Quanzhou, China
- Chengxing Qian: Department of Neurosurgery, the Tongling People's Hospital, Tongling, China
- Ming Feng: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Wentai Zhang: Department of Neurosurgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dongcheng District, Beijing, 100730, China
- Lei Cao: Department of Neurosurgery, the Tiantan Hospital, Capital Medical University, Beijing, China
- Hongjie Chen: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Liangfeng Wei: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Shuwen Mu: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Zhijie Pei: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Jun Li: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
- Renzhi Wang: Department of Neurosurgery, the Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No. 1 Shuai Fu Yuan, Dongcheng District, Beijing, 100730, China; Chinese University of Hong Kong (Shenzhen) School of Medicine, Shenzhen, Guangdong, People's Republic of China
- Shousen Wang: Department of Neurosurgery, Fuzhou 900TH Hospital, Fuzong Clinical Medical College of Fujian Medical University, No. 156, Xi'erhuanbei Road, Fuzhou, Fujian, China
17. Dakdareh SG, Abbasian K. Diagnosis of Alzheimer's Disease and Mild Cognitive Impairment Using Convolutional Neural Networks. J Alzheimers Dis Rep 2024; 8:317-328. [PMID: 38405350; PMCID: PMC10894608; DOI: 10.3233/adr-230118]
Abstract
Background Alzheimer's disease and mild cognitive impairment are common diseases in the elderly, affecting more than 50 million people worldwide in 2020. Early diagnosis is crucial for managing these diseases, but their complexity poses a challenge. Convolutional neural networks have shown promise in accurate diagnosis. Objective The main objective of this research is to diagnose Alzheimer's disease and mild cognitive impairment in healthy individuals using convolutional neural networks. Methods This study utilized three different convolutional neural network models, two of which were pre-trained models, namely AlexNet and DenseNet, while the third model was a CNN1D-LSTM neural network. Results Among the neural network models used, the AlexNet demonstrated the highest accuracy, exceeding 98%, in diagnosing mild cognitive impairment and Alzheimer's disease in healthy individuals. Furthermore, the accuracy of the DenseNet and CNN1D-LSTM models is 88% and 91.89%, respectively. Conclusions The research highlights the potential of convolutional neural networks in diagnosing mild cognitive impairment and Alzheimer's disease. The use of pre-trained neural networks and the integration of various patient data contribute to achieving accurate results. The high accuracy achieved by the AlexNet neural network underscores its effectiveness in disease classification. These findings pave the way for future research and improvements in the field of diagnosing these diseases using convolutional neural networks, ultimately aiding in early detection and effective management of mild cognitive impairment and Alzheimer's disease.
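As an illustration of the transfer-learning setup with a pre-trained AlexNet described above, the sketch below replaces the classifier head for a three-class problem; the class count and freezing strategy are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tuning a pre-trained AlexNet with torchvision by
# swapping the final classification layer.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False                  # freeze the convolutional backbone

model.classifier[6] = nn.Linear(4096, 3)          # AD vs MCI vs healthy head
# The new head (and optionally later layers) would then be trained on the MRI data.
```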
Affiliation(s)
- Karim Abbasian: Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
18. Vinta SR, Chintalapati PV, Babu GR, Tamma R, Sai Chaitanya Kumar G. EDLNet: ensemble deep learning network model for automatic brain tumor classification and segmentation. J Biomol Struct Dyn 2024:1-13. [PMID: 38345061; DOI: 10.1080/07391102.2024.2311343]
Abstract
The brain's abnormal and uncontrollable cell division is a severe form of cancer. The tissues around the brain or the skull can induce this tumor to develop spontaneously. Surgical techniques are typically preferred for the treatment of a brain tumor. Deep learning models in the biomedical field have recently attracted considerable attention for detecting and treating diseases. This article proposes a new Ensemble Deep Learning Network (EDLNet) model. The research uses a Modified Faster RCNN approach to classify brain MRI scan images as cancerous or non-cancerous, and a deep recurrent convolutional neural network (DRCNN)-based diagnostic method for early-stage brain tumor segmentation is presented. The evaluation outcomes show that the proposed classification and segmentation model accurately segments tissues from MRI images. Two publicly available datasets (D1 and D2) are used to analyse the proposed model, on which accuracies of 99.76% and 99.87% are achieved, respectively. The experimental results show that the proposed model is more effective than state-of-the-art network models.
Affiliation(s)
- Surendra Reddy Vinta: School of Computer Science and Engineering, VITAP University, Andhra Pradesh, India
- Phaneendra Varma Chintalapati: Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), Bhimavaram, Andhra Pradesh, India
- Gurujukota Ramesh Babu: Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), Bhimavaram, Andhra Pradesh, India
- Rajyalakshmi Tamma: Department of Computer Science and Engineering, University College of Narasaraopet (JNTUN), Narasaraopet, Andhra Pradesh, India
- Gunupudi Sai Chaitanya Kumar: Department of Computer Science and Engineering, DVR & Dr. HS MIC College of Technology, Kanchikacherla, Andhra Pradesh, India
19. Srinivasan S, Francis D, Mathivanan SK, Rajadurai H, Shivahare BD, Shah MA. A hybrid deep CNN model for brain tumor image multi-classification. BMC Med Imaging 2024; 24:21. [PMID: 38243215; PMCID: PMC10799524; DOI: 10.1186/s12880-024-01195-7]
Abstract
The current approach to diagnosing and classifying brain tumors relies on the histological evaluation of biopsy samples, which is invasive, time-consuming, and susceptible to manual errors. These limitations underscore the pressing need for a fully automated, deep-learning-based multi-classification system for brain malignancies. This article aims to leverage a deep convolutional neural network (CNN) to enhance early detection and presents three distinct CNN models designed for different types of classification tasks. The first CNN model achieves an impressive detection accuracy of 99.53% for brain tumors. The second CNN model, with an accuracy of 93.81%, proficiently categorizes brain tumors into five distinct types: normal, glioma, meningioma, pituitary, and metastatic. Furthermore, the third CNN model demonstrates an accuracy of 98.56% in accurately classifying brain tumors into their different grades. To ensure optimal performance, a grid search optimization approach is employed to automatically fine-tune all the relevant hyperparameters of the CNN models. The utilization of large, publicly accessible clinical datasets results in robust and reliable classification outcomes. This article conducts a comprehensive comparison of the proposed models against classical models, such as AlexNet, DenseNet121, ResNet-101, VGG-19, and GoogleNet, reaffirming the superiority of the deep CNN-based approach in advancing the field of brain tumor classification and early detection.
Collapse
Affiliation(s)
- Saravanan Srinivasan
- Department of Computer Science and Engineering, Vel Tech Rangarajan Dr.Sagunthala R&D Institute of Science and Technology, Chennai, 600062, India
| | - Divya Francis
- Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul, 624622, India
| | | | - Hariharan Rajadurai
- School of Computing Science and Engineering, VIT Bhopal University, Bhopal-Indore Highway Kothrikalan, Sehore, 466114, India
| | - Basu Dev Shivahare
- School of Computing Science and Engineering, Galgotias University, Greater Noida, 203201, India
| | - Mohd Asif Shah
- Department of Economics, Kabridahar University, Po Box 250, Kebri Dehar, Ethiopia.
- Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India.
- Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India.
| |
Collapse
|
20
|
Pitarch C, Ungan G, Julià-Sapé M, Vellido A. Advances in the Use of Deep Learning for the Analysis of Magnetic Resonance Image in Neuro-Oncology. Cancers (Basel) 2024; 16:300. [PMID: 38254790 PMCID: PMC10814384 DOI: 10.3390/cancers16020300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Revised: 12/28/2023] [Accepted: 01/08/2024] [Indexed: 01/24/2024] Open
Abstract
Machine Learning is entering a phase of maturity, but its medical applications still lag behind in terms of practical use. The field of oncological radiology (and neuro-oncology in particular) is at the forefront of these developments, now boosted by the success of Deep-Learning methods for the analysis of medical images. This paper reviews in detail some of the most recent advances in the use of Deep Learning in this field, from the broader topic of the development of Machine-Learning-based analytical pipelines to specific instantiations of the use of Deep Learning in neuro-oncology; the latter including its use in the groundbreaking field of ultra-low field magnetic resonance imaging.
Collapse
Affiliation(s)
- Carla Pitarch
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain;
- Eurecat, Digital Health Unit, Technology Centre of Catalonia, 08005 Barcelona, Spain
| | - Gulnur Ungan
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; (G.U.); (M.J.-S.)
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| | - Margarida Julià-Sapé
- Departament de Bioquímica i Biologia Molecular and Institut de Biotecnologia i Biomedicina (IBB), Universitat Autònoma de Barcelona (UAB), 08193 Barcelona, Spain; (G.U.); (M.J.-S.)
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| | - Alfredo Vellido
- Department of Computer Science, Universitat Politècnica de Catalunya (UPC BarcelonaTech) and Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, 08034 Barcelona, Spain;
- Centro de Investigación Biomédica en Red (CIBER), 28029 Madrid, Spain
| |
Collapse
|
21
|
Gupta I, Singh S, Gupta S, Ranjan Nayak S. Classification of Brain Tumours in MRI Images using a Convolutional Neural Network. Curr Med Imaging 2024; 20:e270323214998. [PMID: 37018519 DOI: 10.2174/1573405620666230327124902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 01/17/2023] [Accepted: 02/01/2023] [Indexed: 04/07/2023]
Abstract
INTRODUCTION Recent advances in deep learning have aided healthcare through medical imaging of numerous disorders such as brain tumours, a serious malignancy caused by unregulated and aberrant cell division. The convolutional neural network (CNN) is the most frequent and widely used machine learning algorithm for visual learning and image identification. METHODS In this article, a CNN is used together with data augmentation and image processing to classify brain MRI scans as malignant or benign. The performance of the proposed CNN model is compared with the pre-trained models VGG-16, ResNet-50, and Inception v3 using transfer learning. RESULTS Even though the experiment was conducted on a relatively limited dataset, the results reveal that the proposed from-scratch CNN model achieves an accuracy of 94 percent; VGG-16 was highly effective with a very low complexity rate and an accuracy of 90 percent, whereas ResNet-50 reached 86 percent and Inception v3 obtained 64 percent accuracy. CONCLUSION Compared with the pre-trained models, the proposed model consumes significantly fewer processing resources while achieving higher accuracy and lower losses.
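A minimal sketch of the transfer-learning comparison described above, assuming a Keras/TensorFlow environment: a frozen ImageNet VGG-16 backbone is topped with a small binary head. The input shape, head layers, and random stand-in data are illustrative, not the authors' setup.

import numpy as np
import tensorflow as tf

# Frozen ImageNet backbone; weights="imagenet" triggers a one-time download.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # malignant vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# stand-in arrays in place of the MRI dataset; real images should also pass through
# tf.keras.applications.vgg16.preprocess_input before training
x = np.random.rand(16, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=16)
model.fit(x, y, epochs=1, batch_size=4, verbose=0)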
Collapse
Affiliation(s)
- Isha Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
| | - Swati Singh
- University Institute of Technology, Himachal Pradesh University, Shimla, 171005, India
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
| | - Soumya Ranjan Nayak
- School of Computer Engineering, KIIT Deemed to be University, Bhubaneswar 751024, Odisha, India
| |
Collapse
|
22
|
Jennifer SS, Shamim MH, Reza AW, Siddique N. Sickle cell disease classification using deep learning. Heliyon 2023; 9:e22203. [PMID: 38045118 PMCID: PMC10692811 DOI: 10.1016/j.heliyon.2023.e22203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2023] [Revised: 10/24/2023] [Accepted: 11/06/2023] [Indexed: 12/05/2023] Open
Abstract
This paper presents a transfer and deep learning based approach to the classification of Sickle Cell Disease (SCD). Five transfer learning models such as ResNet-50, AlexNet, MobileNet, VGG-16 and VGG-19, and a sequential convolutional neural network (CNN) have been implemented for SCD classification. ErythrocytesIDB dataset has been used for training and testing the models. In order to make up for the data insufficiency of the erythrocytesIDB dataset, advanced image augmentation techniques are employed to ensure the robustness of the dataset, enhance dataset diversity and improve the accuracy of the models. An ablation experiment using Random Forest and Support Vector Machine (SVM) classifiers along with various hyperparameter tweaking was carried out to determine the contribution of different model elements on their predicted accuracy. A rigorous statistical analysis was carried out for evaluation and to further evaluate the model's robustness, an adversarial attack test was conducted. The experimental results demonstrate compelling performance across all models. After performing the statistical tests, it was observed that MobileNet showed a significant improvement (p = 0.0229), while other models (ResNet-50, AlexNet, VGG-16, VGG-19) did not (p > 0.05). Notably, the ResNet-50 model achieves remarkable precision, recall, and F1-score values of 100 % for circular, elongated, and other cell shapes when experimented with a smaller dataset. The AlexNet model achieves a balanced precision (98 %) and recall (99 %) for circular and elongated shapes. Meanwhile, the other models showcase competitive performance.
Collapse
Affiliation(s)
- Sanjeda Sara Jennifer
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
| | - Mahbub Hasan Shamim
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
| | - Ahmed Wasif Reza
- Department of Computer Science and Engineering, East West University, Dhaka, Bangladesh
| | - Nazmul Siddique
- School of Computing, Engineering and Intelligent Systems, Ulster University, UK
| |
Collapse
|
23
|
Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254 DOI: 10.1007/s00432-023-05216-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 07/26/2023] [Indexed: 08/05/2023]
Abstract
PURPOSE Millions of people lose their lives to several types of fatal diseases. Cancer is one of the most fatal and may result from obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is an abnormal and uncontrolled growth of tissue inside the body that may spread to parts of the body other than where it originated. It is therefore essential to diagnose cancer at an early stage so that correct and timely treatment can be provided. Because manual diagnosis and diagnostic errors may cause the death of many patients, considerable research is directed at the automatic and accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of diagnosis methods and recent advances in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. The study covers four types of cancer, brain, lung, skin, and breast, and their detection using ML and DL techniques. The extensive review includes a total of 130 pieces of literature, of which 56 concern ML-based and 74 concern DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent 5-year span (2018-2023) have been included, analysed with respect to year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques for cancer detection separately and used accuracy as the performance evaluation metric to maintain homogeneity while verifying classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest and lowest performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented. A comparative analysis between the best and worst performing models, along with the overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various other parameters, the results demonstrate significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed before these techniques can be widely implemented in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advances in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Collapse
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea.
| | - Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
| |
Collapse
|
24
|
Talukder MA, Islam MM, Uddin MA, Akhter A, Pramanik MAJ, Aryal S, Almoyad MAA, Hasan KF, Moni MA. An efficient deep learning model to categorize brain tumor using reconstruction and fine-tuning. EXPERT SYSTEMS WITH APPLICATIONS 2023; 230:120534. [DOI: 10.1016/j.eswa.2023.120534] [Citation(s) in RCA: 31] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2024]
|
25
|
Li F, Zhai P, Yang C, Feng G, Yang J, Yuan Y. Automated diagnosis of anterior cruciate ligament via a weighted multi-view network. Front Bioeng Biotechnol 2023; 11:1268543. [PMID: 37885456 PMCID: PMC10598377 DOI: 10.3389/fbioe.2023.1268543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Accepted: 09/22/2023] [Indexed: 10/28/2023] Open
Abstract
Objective: To build a three-dimensional (3D) deep learning-based computer-aided diagnosis (CAD) system and investigate its applicability for automatic detection of anterior cruciate ligament (ACL) of the knee joint in magnetic resonance imaging (MRI). Methods: In this study, we develop a 3D weighted multi-view convolutional neural network by fusing different views of MRI to detect ACL. The network is evaluated on two MRI datasets, the in-house MRI-ACL dataset and the publicly available MRNet-v1.0 dataset. In the MRI-ACL dataset, the retrospective study collects 100 cases, and four views per patient are included. There are 50 ACL patients and 50 normal patients, respectively. The MRNet-v1.0 dataset contains 1,250 cases with three views, of which 208 are ACL patients, and the rest are normal or other abnormal patients. Results: The area under the receiver operating characteristic curve (AUC) of the ACL diagnosis system is 97.00% and 92.86% at the optimal threshold for the MRI-ACL dataset and the MRNet-v1.0 dataset, respectively, indicating a high overall diagnostic accuracy. In comparison, the best AUC of the single-view diagnosis methods are 96.00% (MRI-ACL dataset) and 91.78% (MRNet-v1.0 dataset), and our method improves by about 1.00% and 1.08%. Furthermore, our method also improves by about 1.00% (MRI-ACL dataset) and 0.28% (MRNet-v1.0 dataset) compared with the multi-view network (i.e., MRNet). Conclusion: The presented 3D weighted multi-view network achieves superior AUC in diagnosing ACL, not only in the in-house MRI-ACL dataset but also in the publicly available MRNet-v1.0 dataset, which demonstrates its clinical applicability for the automatic detection of ACL.
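A hedged sketch of the weighted multi-view fusion idea: per-view scores are combined with fixed weights and compared against single-view AUCs. The scores, labels, and weights below are synthetic placeholders, not the MRI-ACL or MRNet data.

import numpy as np
from sklearn.metrics import roc_auc_score

# per-view tear probabilities for 8 hypothetical knees (e.g. sagittal/coronal/axial views)
rng = np.random.default_rng(1)
view_scores = rng.random((3, 8))          # (n_views, n_patients)
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])

view_weights = np.array([0.5, 0.3, 0.2])  # weights chosen on a validation set in practice
fused = view_weights @ view_scores        # weighted average over views

print("single-view AUCs:", [round(roc_auc_score(labels, s), 3) for s in view_scores])
print("fused AUC:       ", round(roc_auc_score(labels, fused), 3))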
Collapse
Affiliation(s)
- Feng Li
- Orthopedic Department, Ningbo No. 2 Hospital, Ningbo, China
| | - Penghua Zhai
- Center for Pattern Recognition and Intelligent Medicine, Guoke Ningbo Life science and Health industry Research Institute, Ningbo, China
| | - Chao Yang
- Orthopedic Department, Ningbo No. 2 Hospital, Ningbo, China
| | - Gong Feng
- Orthopedic Department, Ningbo No. 2 Hospital, Ningbo, China
| | - Ji Yang
- Orthopedic Department, Ningbo No. 2 Hospital, Ningbo, China
| | - Yi Yuan
- Orthopedic Department, Ningbo No. 2 Hospital, Ningbo, China
| |
Collapse
|
26
|
Yi M, Lou J, Cui R, Zhao J. Globus pallidus/putamen T 1WI signal intensity ratio in grading and predicting prognosis of neonatal acute bilirubin encephalopathy. Front Pediatr 2023; 11:1192126. [PMID: 37842026 PMCID: PMC10570546 DOI: 10.3389/fped.2023.1192126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 09/07/2023] [Indexed: 10/17/2023] Open
Abstract
Purpose This study sought to investigate the relationship between the globus pallidus/putamen T1 weighted image (T1WI) signal intensity ratio (G/P ratio) and the acute bilirubin encephalopathy (ABE) in neonates, and to develop a new strategy for the grading and prognosis of ABE based on the G/P ratio. Methods A total of 77 full-term neonates with ABE were scored according to bilirubin-induced neurological dysfunction and divided into mild, moderate, and severe groups. Cranial magnetic resonance imaging examinations were performed and the G/P ratio was recorded. The follow-up reexaminations were carried out at 6 months, 1 year, and 2 years after the initial examination. The neonates were then divided into two groups, the good prognosis group and the kernicterus spectrum disorder (KSD) group, according to the evaluation of Gesell Developmental Schedules and Brainstem Audio Electric Potential at 6 months. Main findings The differences of G/P ratios were statistically significant, not only among the mild, moderate, and severe ABE groups for the initial examinations but also between the KSD and the good prognosis groups for the follow-up reexaminations. Therefore, the ABE grading model and prognosis predicting model could be established based on the G/P ratio. In the KSD group, the area under the receiver operating characteristic curve of the G/P ratio-based predicting model was 93.5%, the optimal critical point was 1.29, the sensitivity was 88.2%, and the specificity was 93.3%. Conclusions The G/P ratio can be used as an indicating parameter for both the clinical grading of neonatal ABE and the assessment of neonatal ABE prognosis. Specifically, the G/P ratio greater than 1.29 indicates a KSD of neonatal ABE.
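The threshold selection described above can be illustrated with a standard ROC analysis using Youden's J statistic; the G/P ratios and outcomes below are invented placeholders rather than the study's data, so the printed cut-off will not reproduce the reported 1.29.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# hypothetical G/P T1WI signal-intensity ratios and outcomes (1 = KSD, 0 = good prognosis)
gp_ratio = np.array([1.05, 1.12, 1.18, 1.22, 1.27, 1.30, 1.33, 1.38, 1.41, 1.52])
outcome  = np.array([0,    0,    0,    0,    1,    0,    1,    1,    1,    1])

fpr, tpr, thresholds = roc_curve(outcome, gp_ratio)
youden_j = tpr - fpr                      # sensitivity + specificity - 1
best = np.argmax(youden_j)
print("AUC:", round(roc_auc_score(outcome, gp_ratio), 3))
print("optimal cut-off:", thresholds[best],
      "sensitivity:", round(tpr[best], 3), "specificity:", round(1 - fpr[best], 3))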
Collapse
Affiliation(s)
- Minggang Yi
- Department of Radiology, Children's Hospital Affiliated to Shandong University, Jinan, Shandong, China
- Department of Radiology, Jinan Children's Hospital, Jinan, Shandong, China
| | - Jing Lou
- Department of Radiology, Shandong Jinan Municipal Hospital of Traditional Chinese Medicine, Jinan, Shandong, China
| | - Ruodi Cui
- Department of Radiology, Children's Hospital Affiliated to Shandong University, Jinan, Shandong, China
- Department of Radiology, Jinan Children's Hospital, Jinan, Shandong, China
| | - Jianshe Zhao
- Department of Radiology, Children's Hospital Affiliated to Shandong University, Jinan, Shandong, China
- Department of Radiology, Jinan Children's Hospital, Jinan, Shandong, China
| |
Collapse
|
27
|
Ullah N, Javed A, Alhazmi A, Hasnain SM, Tahir A, Ashraf R. TumorDetNet: A unified deep learning model for brain tumor detection and classification. PLoS One 2023; 18:e0291200. [PMID: 37756305 PMCID: PMC10530039 DOI: 10.1371/journal.pone.0291200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 08/23/2023] [Indexed: 09/29/2023] Open
Abstract
Accurate diagnosis of the brain tumor type at an earlier stage is crucial for the treatment process and helps to save the lives of a large number of people worldwide. Because they are non-invasive and spare patients from having an unpleasant biopsy, magnetic resonance imaging (MRI) scans are frequently employed to identify tumors. The manual identification of tumors is difficult and requires considerable time due to the large number of three-dimensional images that an MRI scan of one patient's brain produces from various angles. Moreover, the variations in location, size, and shape of the brain tumor also make it challenging to detect and classify different types of tumors. Thus, computer-aided diagnostics (CAD) systems have been proposed for the detection of brain tumors. In this paper, we proposed a novel unified end-to-end deep learning model named TumorDetNet for brain tumor detection and classification. Our TumorDetNet framework employs 48 convolution layers with leaky ReLU (LReLU) and ReLU activation functions to compute the most distinctive deep feature maps. Moreover, average pooling and a dropout layer are also used to learn distinctive patterns and reduce overfitting. Finally, one fully connected and a softmax layer are employed to detect and classify the brain tumor into multiple types. We assessed the performance of our method on six standard Kaggle brain tumor MRI datasets for brain tumor detection and classification into (malignant and benign), and (glioma, pituitary, and meningioma). Our model successfully identified brain tumors with remarkable accuracy of 99.83%, classified benign and malignant brain tumors with an ideal accuracy of 100%, and meningiomas, pituitary, and gliomas tumors with an accuracy of 99.27%. These outcomes demonstrate the potency of the suggested methodology for the reliable identification and categorization of brain tumors.
Collapse
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Ali Javed
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
| | - Ali Alhazmi
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
| | - Syed M. Hasnain
- Department of Mathematics and Natural Sciences, Prince Mohammad Bin Fahd University, Al Kobar, Saudi Arabia
| | - Ali Tahir
- College of Computer Science and Information Technology, Jazan University, Jazan, Saudi Arabia
| | - Rehan Ashraf
- Department of Computer Science, National Textile University, Faisalabad, Pakistan
| |
Collapse
|
28
|
Ravinder M, Saluja G, Allabun S, Alqahtani MS, Abbas M, Othman M, Soufiene BO. Enhanced brain tumor classification using graph convolutional neural network architecture. Sci Rep 2023; 13:14938. [PMID: 37697022 PMCID: PMC10495443 DOI: 10.1038/s41598-023-41407-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2023] [Accepted: 08/25/2023] [Indexed: 09/13/2023] Open
Abstract
A brain tumor is a highly critical condition of the brain, characterized by the uncontrolled growth of an abnormal cell cluster. Early detection is essential for accurate diagnosis and effective treatment planning. In this paper, a novel Convolutional Neural Network (CNN)-based Graph Neural Network (GNN) model is proposed, using the publicly available Brain Tumor dataset from Kaggle, to predict whether a person has a brain tumor and, if so, of which type (meningioma, pituitary, or glioma). The objective of this research is to address the fact that non-Euclidean distances in image data are usually not considered and that conventional models cannot learn pixel similarity based on pixel proximity. To solve this problem, a Graph-based Convolutional Neural Network (GCNN) model is proposed, and it is found to account for non-Euclidean distances in images. We aimed to improve brain tumor detection and classification using a technique that combines a GNN with a 26-layer CNN that takes a graph input pre-convolved using a graph convolution operation. The objective of graph convolution is to modify the node features (the data linked to each node) by combining information from nearby nodes. A standard pre-computed adjacency matrix is used, and the input graphs are updated as the averaged sum of local neighbor nodes, which carry the regional information about the tumor. These modified graphs are given as input matrices to a standard 26-layer CNN with Batch Normalization and Dropout layers intact. Five different networks, Net-0, Net-1, Net-2, Net-3, and Net-4, are proposed, and Net-2 is found to outperform the others, achieving the highest accuracy of 95.01%. With its current effectiveness, the proposed model represents a valuable alternative for the statistical detection of brain tumors in patients suspected of having one.
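A minimal sketch of the neighbor-averaging graph convolution step described above, using a toy adjacency matrix and node features (both illustrative, not the paper's pre-computed matrix):

import numpy as np

def neighbor_average(adj, feats, include_self=True):
    """One graph-convolution step: replace each node's features with the
    mean of its (optionally self-included) neighbours' features."""
    a = adj.astype(float)
    if include_self:
        a = a + np.eye(a.shape[0])
    deg = a.sum(axis=1, keepdims=True)          # node degrees
    return (a @ feats) / np.clip(deg, 1, None)  # row-normalised aggregation

# toy 4-node graph (e.g. pixel-region nodes) with 3-dimensional node features
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]])
feats = np.arange(12, dtype=float).reshape(4, 3)
print(neighbor_average(adj, feats))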
Collapse
Affiliation(s)
- M Ravinder
- CSE, Indira Gandhi Delhi Technical University for Women, New Delhi, India
| | - Garima Saluja
- CSE, Indira Gandhi Delhi Technical University for Women, New Delhi, India
| | - Sarah Allabun
- Department of Medical Education, College of Medicine, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - Mohammed S Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, King Khalid University, 61421, Abha, Saudi Arabia
- BioImaging Unit, Space Research Centre, Michael Atiyah Building, University of Leicester, Leicester, LE1 7RH, UK
| | - Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
| | - Manal Othman
- Department of Medical Education, College of Medicine, Princess Nourah bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - Ben Othman Soufiene
- PRINCE Laboratory Research, ISITcom, University of Sousse, Hammam Sousse, Tunisia.
| |
Collapse
|
29
|
Abdusalomov AB, Mukhiddinov M, Whangbo TK. Brain Tumor Detection Based on Deep Learning Approaches and Magnetic Resonance Imaging. Cancers (Basel) 2023; 15:4172. [PMID: 37627200 PMCID: PMC10453020 DOI: 10.3390/cancers15164172] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 08/11/2023] [Accepted: 08/17/2023] [Indexed: 08/27/2023] Open
Abstract
The rapid development of abnormal brain cells that characterizes a brain tumor is a major health risk for adults since it can cause severe impairment of organ function and even death. These tumors come in a wide variety of sizes, textures, and locations. When trying to locate cancerous tumors, magnetic resonance imaging (MRI) is a crucial tool. However, detecting brain tumors manually is a difficult and time-consuming activity that might lead to inaccuracies. In order to solve this, we provide a refined You Only Look Once version 7 (YOLOv7) model for the accurate detection of meningioma, glioma, and pituitary gland tumors within an improved detection of brain tumors system. The visual representation of the MRI scans is enhanced by the use of image enhancement methods that apply different filters to the original pictures. To further improve the training of our proposed model, we apply data augmentation techniques to the openly accessible brain tumor dataset. The curated data include a wide variety of cases, such as 2548 images of gliomas, 2658 images of pituitary, 2582 images of meningioma, and 2500 images of non-tumors. We included the Convolutional Block Attention Module (CBAM) attention mechanism into YOLOv7 to further enhance its feature extraction capabilities, allowing for better emphasis on salient regions linked with brain malignancies. To further improve the model's sensitivity, we have added a Spatial Pyramid Pooling Fast+ (SPPF+) layer to the network's core infrastructure. YOLOv7 now includes decoupled heads, which allow it to efficiently glean useful insights from a wide variety of data. In addition, a Bi-directional Feature Pyramid Network (BiFPN) is used to speed up multi-scale feature fusion and to better collect features associated with tumors. The outcomes verify the efficiency of our suggested method, which achieves a higher overall accuracy in tumor detection than previous state-of-the-art models. As a result, this framework has a lot of potential as a helpful decision-making tool for experts in the field of diagnosing brain tumors.
Collapse
Affiliation(s)
| | | | - Taeg Keun Whangbo
- Department of Computer Engineering, Gachon University, Seongnam-si 13120, Republic of Korea;
| |
Collapse
|
30
|
Mahum R, Sharaf M, Hassan H, Liang L, Huang B. A Robust Brain Tumor Detector Using BiLSTM and Mayfly Optimization and Multi-Level Thresholding. Biomedicines 2023; 11:1715. [PMID: 37371810 DOI: 10.3390/biomedicines11061715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 06/09/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023] Open
Abstract
A brain tumor refers to an abnormal growth of cells in the brain that can be either benign or malignant. Oncologists typically use various methods such as blood or visual tests to detect brain tumors, but these approaches can be time-consuming, require additional human effort, and may not be effective in detecting small tumors. This work proposes an effective approach to brain tumor detection that combines segmentation and feature fusion. Segmentation is performed using the mayfly optimization algorithm with the multilevel Kapur's threshold technique to locate brain tumors in MRI scans. Key features are extracted from the tumors using Histogram of Oriented Gradients (HOG) and ResNet-V2, and a bidirectional long short-term memory (BiLSTM) network is used to classify tumors into three categories: pituitary, glioma, and meningioma. The suggested methodology is trained and tested on two datasets, Figshare and Harvard, achieving high accuracy, precision, recall, F1 score, and area under the curve (AUC). The results of a comparative analysis with existing DL and ML methods demonstrate that the proposed approach offers superior outcomes. This approach has the potential to improve brain tumor detection, particularly for small tumors, but further validation and testing are needed before clinical use.
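As a simplified stand-in for the mayfly-optimized multilevel Kapur thresholding used above, the sketch below applies multilevel Otsu thresholding to a synthetic intensity image; the substitution of Otsu for Kapur's entropy criterion and the synthetic data are assumptions made purely for illustration.

import numpy as np
from skimage.filters import threshold_multiotsu

# synthetic stand-in for a brain MRI slice: three intensity populations plus noise
rng = np.random.default_rng(2)
slice_img = np.concatenate([rng.normal(m, 5, 2000) for m in (40, 120, 200)]).reshape(60, 100)

thresholds = threshold_multiotsu(slice_img, classes=3)   # two cut points for three regions
regions = np.digitize(slice_img, bins=thresholds)        # label map with values 0, 1, 2
print("thresholds:", thresholds.round(1))
print("pixels per region:", np.bincount(regions.ravel()))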
Collapse
Affiliation(s)
- Rabbia Mahum
- Department of Computer Science, University of Engineering and Technology Taxila, Taxila 47050, Pakistan
| | - Mohamed Sharaf
- Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
| | - Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| | - Lixin Liang
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| | - Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University (SZTU), Shenzhen 518118, China
| |
Collapse
|
31
|
Ghamry FM, Emara HM, Hagag A, El-Shafai W, El-Banby GM, Dessouky MI, El-Fishawy AS, El-Hag NA, El-Samie FEA. Efficient algorithms for compression and classification of brain tumor images. JOURNAL OF OPTICS 2023; 52:818-830. [DOI: 10.1007/s12596-022-01040-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 11/29/2022] [Indexed: 09/02/2023]
|
32
|
Sistaninejhad B, Rasi H, Nayeri P. A Review Paper about Deep Learning for Medical Image Analysis. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2023; 2023:7091301. [PMID: 37284172 PMCID: PMC10241570 DOI: 10.1155/2023/7091301] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Revised: 02/12/2023] [Accepted: 04/21/2023] [Indexed: 06/08/2023]
Abstract
Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and supporting clinicians in achieving diagnostic perfection. It has proven to be the most effective method for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review the work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey by providing a synopsis of research works in medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and general adversarial networks that aid in improving convolutional networks' performance. Finally, to ease direct evaluation, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.
Collapse
Affiliation(s)
| | - Habib Rasi
- Sahand University of Technology, East Azerbaijan, New City of Sahand, Iran
| | - Parisa Nayeri
- Khoy University of Medical Sciences, West Azerbaijan, Khoy, Iran
| |
Collapse
|
33
|
Asif S, Zhao M, Chen X, Zhu Y. BMRI-NET: A Deep Stacked Ensemble Model for Multi-class Brain Tumor Classification from MRI Images. Interdiscip Sci 2023:10.1007/s12539-023-00571-1. [PMID: 37171681 DOI: 10.1007/s12539-023-00571-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 04/26/2023] [Accepted: 04/27/2023] [Indexed: 05/13/2023]
Abstract
Brain tumors are one of the most dangerous health problems for adults and children in many countries. Any failure in the diagnosis of brain tumors may lead to a shortening of human life. Accurate and timely diagnosis of brain tumors enables appropriate treatment and increases the patient's chances of survival. Owing to the different characteristics of tumors, one of the challenging problems is the classification of three types of brain tumors. With the advent of deep learning (DL) models, three-class brain tumor classification has been addressed. However, the accuracy of these methods requires significant improvement for brain image classification. The main goal of this article is to design a new method for classifying the three types of brain tumors with extremely high accuracy. In this paper, we propose a novel deep stacked ensemble model called "BMRI-NET" that can detect brain tumors from MR images with high accuracy and recall. The stacked ensemble proposed in this article adapts three pre-trained models, namely DenseNet201, ResNet152V2, and InceptionResNetV2, to improve the generalization capability. We combine decisions from the three models using the stacking technique to obtain final results that are much more accurate than the individual models at detecting brain tumors. The efficacy of the proposed model is evaluated on the Figshare brain MRI dataset of three types of brain tumors consisting of 3064 images. The experimental results clearly highlight the robustness of the proposed BMRI-NET model, which achieves an overall classification accuracy of 98.69% and an average recall, F1-score, and MCC of 98.33%, 98.40%, and 97.95%, respectively. The results indicate that the proposed BMRI-NET model is superior to existing methods and can assist healthcare professionals in the diagnosis of brain tumors.
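A hedged sketch of the stacking idea behind BMRI-NET: hold-out class probabilities from several base models are concatenated and fed to a meta-learner. Here the base CNN outputs are simulated with noisy one-hot vectors, and a logistic-regression meta-learner stands in for the authors' combiner; none of this reproduces the actual BMRI-NET pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, n_classes = 300, 3
y = rng.integers(0, n_classes, size=n)

def fake_model_probs(labels, noise):
    """Stand-in for a base CNN's softmax output: noisy one-hot of the true label."""
    logits = np.eye(n_classes)[labels] * 2.0 + rng.normal(0, noise, (len(labels), n_classes))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# concatenate the three base models' class probabilities into one meta-feature vector
meta_X = np.hstack([fake_model_probs(y, noise) for noise in (1.0, 1.2, 1.5)])
X_tr, X_te, y_tr, y_te = train_test_split(meta_X, y, test_size=0.3, random_state=0)

meta = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # stacking meta-learner
print("stacked accuracy:", round(meta.score(X_te, y_te), 3))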
Collapse
Affiliation(s)
- Sohaib Asif
- School of Computer Science and Engineering, Central South University, Changsha, China
| | - Ming Zhao
- School of Computer Science and Engineering, Central South University, Changsha, China.
| | - Xuehan Chen
- School of Computer Science and Engineering, Central South University, Changsha, China.
| | - Yusen Zhu
- School of Mathematics, Hunan University, Changsha, China
| |
Collapse
|
34
|
Mohamad Mostafa A, El-Meligy MA, Abdullah Alkhayyal M, Alnuaim A, Sharaf M. A framework for brain tumor detection based on segmentation and features fusion using MRI images. Brain Res 2023; 1806:148300. [PMID: 36842569 DOI: 10.1016/j.brainres.2023.148300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Revised: 02/17/2023] [Accepted: 02/21/2023] [Indexed: 02/26/2023]
Abstract
Irregular growth of cells in the skull is recognized as a brain tumor that can have two types such as benign and malignant. There exist various methods which are used by oncologists to assess the existence of brain tumors such as blood tests or visual assessments. Moreover, the noninvasive magnetic resonance imaging (MRI) technique without ionizing radiation has been commonly utilized for diagnosis. However, the segmentation in 3-dimensional MRI is time-consuming and the outcomes mainly depend on the operator's experience. Therefore, a novel and robust automated brain tumor detector has been suggested based on segmentation and fusion of features. To improve the localization results, we pre-processed the images using Gaussian Filter (GF), and SynthStrip: a tool for brain skull stripping. We utilized two known benchmarks for training and testing i.e., Figshare and Harvard. The proposed methodology attained 99.8% accuracy, 99.3% recall, 99.4% precision, 99.5% F1 score, and 0.989 AUC. We performed the comparative analysis of our approach with prevailing DL, classical, and segmentation-based approaches. Additionally, we also performed the cross-validation using Harvard dataset attaining 99.3% identification accuracy. The outcomes exhibit that our approach offers significant outcomes than existing methods and outperforms them.
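A minimal sketch of the Gaussian-filter pre-processing step named above (skull stripping with SynthStrip would follow as a separate tool); the image is a random stand-in rather than real MRI data.

import numpy as np
from scipy.ndimage import gaussian_filter

mri = np.random.rand(64, 64).astype("float32")     # stand-in for an MRI slice
denoised = gaussian_filter(mri, sigma=1.0)         # Gaussian smoothing before segmentation
print("intensity std before/after:", round(float(mri.std()), 3), round(float(denoised.std()), 3))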
Collapse
Affiliation(s)
- Almetwally Mohamad Mostafa
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, P.O. BOX 51178, Riyadh 11543, Saudi Arabia.
| | - Mohammed A El-Meligy
- Advanced Manufacturing Institute, King Saud University, Riyadh 11421, Saudi Arabia.
| | - Maram Abdullah Alkhayyal
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, P.O. BOX 51178, Riyadh 11543, Saudi Arabia.
| | - Abeer Alnuaim
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. BOX 22459, Riyadh 11495, Saudi Arabia.
| | - Mohamed Sharaf
- Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia.
| |
Collapse
|
35
|
Emam MM, Samee NA, Jamjoom MM, Houssein EH. Optimized deep learning architecture for brain tumor classification using improved Hunger Games Search Algorithm. Comput Biol Med 2023; 160:106966. [PMID: 37141655 DOI: 10.1016/j.compbiomed.2023.106966] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 04/05/2023] [Accepted: 04/19/2023] [Indexed: 05/06/2023]
Abstract
One of the worst diseases is a brain tumor, which is defined by abnormal development of synapses in the brain. Early detection of brain tumors is essential for improving prognosis, and classifying tumors is a vital step in the disease's treatment. Different classification strategies using deep learning have been presented for the diagnosis of brain tumors. However, several challenges exist, such as the need for a competent specialist in classifying brain cancers by deep learning models and the problem of building the most precise deep learning model for categorizing brain tumors. We propose an evolved and highly efficient model based on deep learning and improved metaheuristic algorithms to address these challenges. Specifically, we develop an optimized residual learning architecture for classifying multiple brain tumors and propose an improved variant of the Hunger Games Search algorithm (I-HGS) based on combining two enhancing strategies: Local Escaping Operator (LEO) and Brownian motion. These two strategies balance solution diversity and convergence speed, boosting the optimization performance and staying away from the local optima. First, we have evaluated the I-HGS algorithm on the IEEE Congress on Evolutionary Computation held in 2020 (CEC'2020) test functions, demonstrating that I-HGS outperformed the basic HGS and other popular algorithms regarding statistical convergence, and various measures. The suggested model is then applied to the optimization of the hyperparameters of the Residual Network 50 (ResNet50) model (I-HGS-ResNet50) for brain cancer identification, proving its overall efficacy. We utilize several publicly available, gold-standard datasets of brain MRI images. The proposed I-HGS-ResNet50 model is compared with other existing studies as well as with other deep learning architectures, including Visual Geometry Group 16-layer (VGG16), MobileNet, and Densely Connected Convolutional Network 201 (DenseNet201). The experiments demonstrated that the proposed I-HGS-ResNet50 model surpasses the previous studies and other well-known deep learning models. I-HGS-ResNet50 acquired an accuracy of 99.89%, 99.72%, and 99.88% for the three datasets. These results efficiently prove the potential of the proposed I-HGS-ResNet50 model for accurate brain tumor classification.
Collapse
Affiliation(s)
- Marwa M Emam
- Faculty of Computers and Information, Minia University, Minia, Egypt.
| | - Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia.
| | - Mona M Jamjoom
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia.
| | - Essam H Houssein
- Faculty of Computers and Information, Minia University, Minia, Egypt.
| |
Collapse
|
36
|
Ali MU, Hussain SJ, Zafar A, Bhutta MR, Lee SW. WBM-DLNets: Wrapper-Based Metaheuristic Deep Learning Networks Feature Optimization for Enhancing Brain Tumor Detection. Bioengineering (Basel) 2023; 10:bioengineering10040475. [PMID: 37106662 PMCID: PMC10135892 DOI: 10.3390/bioengineering10040475] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 04/07/2023] [Accepted: 04/11/2023] [Indexed: 04/29/2023] Open
Abstract
This study presents wrapper-based metaheuristic deep learning networks (WBM-DLNets) feature optimization algorithms for brain tumor diagnosis using magnetic resonance imaging. Herein, 16 pretrained deep learning networks are used to compute the features. Eight metaheuristic optimization algorithms, namely, the marine predator algorithm, atom search optimization algorithm (ASOA), Harris hawks optimization algorithm, butterfly optimization algorithm, whale optimization algorithm, grey wolf optimization algorithm (GWOA), bat algorithm, and firefly algorithm, are used to evaluate the classification performance using a support vector machine (SVM)-based cost function. A deep-learning network selection approach is applied to determine the best deep-learning network. Finally, all deep features of the best deep learning networks are concatenated to train the SVM model. The proposed WBM-DLNets approach is validated based on an available online dataset. The results reveal that the classification accuracy is significantly improved by utilizing the features selected using WBM-DLNets relative to those obtained using the full set of deep features. DenseNet-201-GWOA and EfficientNet-b0-ASOA yield the best results, with a classification accuracy of 95.7%. Additionally, the results of the WBM-DLNets approach are compared with those reported in the literature.
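A hedged sketch of wrapper-based feature selection with an SVM cost function, using greedy sequential forward selection as a simpler stand-in for the metaheuristic search algorithms named above; the feature matrix is synthetic rather than deep features from pretrained networks.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# stand-in for deep features extracted by a pretrained network (rows = scans)
X, y = make_classification(n_samples=200, n_features=40, n_informative=8, random_state=0)

svm = SVC(kernel="rbf")
selector = SequentialFeatureSelector(svm, n_features_to_select=8, direction="forward", cv=3)
selector.fit(X, y)
mask = selector.get_support()

full_acc = cross_val_score(svm, X, y, cv=5).mean()
sel_acc  = cross_val_score(svm, X[:, mask], y, cv=5).mean()
print(f"all 40 features: {full_acc:.3f} | 8 selected features: {sel_acc:.3f}")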
Collapse
Affiliation(s)
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
| | - Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Muhammad Raheel Bhutta
- Department of Electrical and Computer Engineering, University of UTAH Asia Campus, Incheon 21985, Republic of Korea
| | - Seung Won Lee
- Department of Precision Medicine, Sungkyunkwan University School of Medicine, Suwon 16419, Republic of Korea
| |
Collapse
|
37
|
Özbay E, Altunbey Özbay F. Interpretable features fusion with precision MRI images deep hashing for brain tumor detection. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 231:107387. [PMID: 36738605 DOI: 10.1016/j.cmpb.2023.107387] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 12/30/2022] [Accepted: 01/29/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Brain tumor is a deadly disease that can affect people of all ages. Radiologists play a critical role in the early diagnosis and treatment of the 14,000 persons diagnosed with brain tumors on average each year. The best method for tumor detection with computer-aided diagnosis systems (CADs) is Magnetic Resonance Imaging (MRI). However, manual evaluation using conventional approaches may result in a number of inaccuracies due to the complicated tissue properties of a large number of images. Therefore, a precision medical image hashing approach that combines interpretability and feature fusion using MRI images of brain tumors is proposed to address the problem of medical image retrieval. METHODS A precision hashing method combining interpretability and feature fusion is proposed to address the problem of low image resolution in brain tumor detection on the Brain-Tumor-MRI (BT-MRI) dataset. First, the dataset is pre-trained with the DenseNet201 network using the Comparison-to-Learn method. Then, a global network is created that generates the salience map to yield a mask crop with local region discrimination. Finally, the local network feature inputs and the public features expressing the local discriminant regions are concatenated for the pooling layer. A hash layer is added between the fully connected layer and the classification layer of the backbone network to generate high-quality hash codes. The final result is obtained by comparing the hash codes with a similarity metric. RESULTS Experimental results with the BT-MRI dataset showed that the proposed method can effectively identify tumor regions and that more accurate hash codes can be generated by using the three loss functions in feature fusion. It has been demonstrated that the accuracy of medical image retrieval is effectively increased when our method is compared with existing image retrieval approaches. CONCLUSIONS Our method demonstrates that the accuracy of medical image retrieval can be effectively increased, and it can potentially be applied in CADs.
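A minimal sketch of the retrieval step in a deep-hashing pipeline: real-valued hash-layer outputs are binarized and gallery items are ranked by Hamming distance. The embeddings here are random placeholders rather than features from a trained DenseNet201 backbone.

import numpy as np

def to_hash(embeddings):
    """Binarise real-valued embeddings (e.g. hash-layer outputs) into 0/1 codes."""
    return (embeddings > 0).astype(np.uint8)

def hamming_retrieve(query_code, gallery_codes, top_k=3):
    """Rank gallery items by Hamming distance to the query code."""
    dists = (gallery_codes != query_code).sum(axis=1)
    order = np.argsort(dists)[:top_k]
    return order, dists[order]

rng = np.random.default_rng(4)
gallery = to_hash(rng.normal(size=(1000, 48)))   # 48-bit codes for 1000 stored MRIs
query   = to_hash(rng.normal(size=48))
idx, d  = hamming_retrieve(query, gallery)
print("nearest gallery indices:", idx, "Hamming distances:", d)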
Collapse
Affiliation(s)
- Erdal Özbay
- Firat University, Faculty of Engineering, Computer Engineering, 23119, Elazig, Turkey.
| | - Feyza Altunbey Özbay
- Firat University, Faculty of Engineering, Software Engineering, 23119, Elazig, Turkey
| |
Collapse
|
38
|
Bhandari M, Shahi TB, Neupane A, Walsh KB. BotanicX-AI: Identification of Tomato Leaf Diseases Using an Explanation-Driven Deep-Learning Model. J Imaging 2023; 9:jimaging9020053. [PMID: 36826972 PMCID: PMC9964407 DOI: 10.3390/jimaging9020053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 02/13/2023] [Accepted: 02/14/2023] [Indexed: 02/23/2023] Open
Abstract
Early and accurate tomato disease detection using easily available leaf photos is essential for farmers and stakeholders, as it helps reduce yield loss due to possible disease epidemics. This paper aims to visually identify nine different infectious diseases (bacterial spot, early blight, Septoria leaf spot, late blight, leaf mold, two-spotted spider mite, mosaic virus, target spot, and yellow leaf curl virus) in tomato leaves in addition to healthy leaves. We implemented EfficientNetB5 with a tomato leaf disease (TLD) dataset without any segmentation, and the model achieved an average training accuracy of 99.84% ± 0.10%, average validation accuracy of 98.28% ± 0.20%, and average test accuracy of 99.07% ± 0.38% over 10 cross folds. The use of gradient-weighted class activation mapping (GradCAM) and local interpretable model-agnostic explanations is proposed to provide model interpretability, which is essential to predictive performance, helpful in building trust, and required for integration into agricultural practice.
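The Grad-CAM interpretability step mentioned above can be sketched as follows, assuming a Keras/TensorFlow environment; a tiny stand-in CNN replaces EfficientNetB5 purely to keep the example self-contained, and the layer name and input are illustrative.

import numpy as np
import tensorflow as tf

# tiny stand-in classifier; a pretrained EfficientNetB5 would slot in the same way
inputs = tf.keras.Input(shape=(96, 96, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", name="last_conv")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

def grad_cam(model, image, conv_name="last_conv"):
    """Class-activation heatmap for the top predicted class (Grad-CAM)."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        top_class = tf.argmax(preds[0])
        score = preds[:, top_class]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))            # channel importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)        # normalise to [0, 1]
    return cam.numpy()

heatmap = grad_cam(model, np.random.rand(96, 96, 3).astype("float32"))
print(heatmap.shape)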
Collapse
Affiliation(s)
- Mohan Bhandari
- Department of Science and Technology, Samriddhi College, Bhaktapur 44800, Nepal
| | - Tej Bahadur Shahi
- School of Engineering and Technology, Central Queensland University, Norman Gardens, Rockhampton 4701, Australia
- Central Department of Computer Science and IT, Tribhuvan University, Kathmandu 44600, Nepal
| | - Arjun Neupane
- School of Engineering and Technology, Central Queensland University, Norman Gardens, Rockhampton 4701, Australia
- Correspondence:
| | - Kerry Brian Walsh
- Institute for Future Farming Systems, Central Queensland University, Rockhampton 4701, Australia
| |
Collapse
|
39
|
Papadomanolakis TN, Sergaki ES, Polydorou AA, Krasoudakis AG, Makris-Tsalikis GN, Polydorou AA, Afentakis NM, Athanasiou SA, Vardiambasis IO, Zervakis ME. Tumor Diagnosis against Other Brain Diseases Using T2 MRI Brain Images and CNN Binary Classifier and DWT. Brain Sci 2023; 13:348. [PMID: 36831891 PMCID: PMC9954603 DOI: 10.3390/brainsci13020348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 02/08/2023] [Accepted: 02/14/2023] [Indexed: 02/22/2023] Open
Abstract
PURPOSE Brain tumors are diagnosed and classified manually and noninvasively by radiologists using Magnetic Resonance Imaging (MRI) data. The risk of misdiagnosis may exist due to human factors such as lack of time, fatigue, and relatively low experience. Deep learning methods have become increasingly important in MRI classification. To improve diagnostic accuracy, researchers emphasize the need to develop Computer-Aided Diagnosis (CAD) computational diagnostics based on artificial intelligence (AI) systems by using deep learning methods such as convolutional neural networks (CNN) and improving the performance of CNN by combining it with other data analysis tools such as wavelet transform. In this study, a novel diagnostic framework based on CNN and DWT data analysis is developed for the diagnosis of glioma tumors in the brain, among other tumors and other diseases, with T2-SWI MRI scans. It is a binary CNN classifier that treats the disease "glioma tumor" as positive and the other pathologies as negative, resulting in a very unbalanced binary problem. The study includes a comparative analysis of a CNN trained with wavelet transform data of MRIs instead of their pixel intensity values in order to demonstrate the increased performance of the CNN and DWT analysis in diagnosing brain gliomas. The results of the proposed CNN architecture are also compared with a deep CNN pre-trained on VGG16 transfer learning network and with the SVM machine learning method using DWT knowledge. METHODS To improve the accuracy of the CNN classifier, the proposed CNN model uses as knowledge the spatial and temporal features extracted by converting the original MRI images to the frequency domain by performing Discrete Wavelet Transformation (DWT), instead of the traditionally used original scans in the form of pixel intensities. Moreover, no pre-processing was applied to the original images. The images used are MRIs of type T2-SWI sequences parallel to the axial plane. Firstly, a compression step is applied for each MRI scan applying DWT up to three levels of decomposition. These data are used to train a 2D CNN in order to classify the scans as showing glioma or not. The proposed CNN model is trained on MRI slices originated from 382 various male and female adult patients, showing healthy and pathological images from a selection of diseases (showing glioma, meningioma, pituitary, necrosis, edema, non-enchasing tumor, hemorrhagic foci, edema, ischemic changes, cystic areas, etc.). The images are provided by the database of the Medical Image Computing and Computer-Assisted Intervention (MICCAI) and the Ischemic Stroke Lesion Segmentation (ISLES) challenges on Brain Tumor Segmentation (BraTS) challenges 2016 and 2017, as well as by the numerous records kept in the public general hospital of Chania, Crete, "Saint George". RESULTS The proposed frameworks are experimentally evaluated by examining MRI slices originating from 190 different patients (not included in the training set), of which 56% are showing gliomas by the longest two axes less than 2 cm and 44% are showing other pathological effects or healthy cases. Results show convincing performance when using as information the spatial and temporal features extracted by the original scans. With the proposed CNN model and with data in DWT format, we achieved the following statistic percentages: accuracy 0.97, sensitivity (recall) 1, specificity 0.93, precision 0.95, FNR 0, and FPR 0.07. 
These numbers are higher for this data format (respectively: accuracy by 6% higher, recall by 11%, specificity by 7%, precision by 5%, FNR by 0.1%, and FPR is the same) than it would be, had we used as input data the intensity values of the MRIs (instead of the DWT analysis of the MRIs). Additionally, our study showed that when our CNN takes into account the TL of the existing network VGG, the performance values are lower, as follows: accuracy 0.87, sensitivity (recall) 0.91, specificity 0.84, precision 0.86, FNR of 0.08, and FPR 0.14. CONCLUSIONS The experimental results show the outperformance of the CNN, which is not based on transfer learning, but is using as information the MRI brain scans decomposed into DWT information instead of the pixel intensity of the original scans. The results are promising for the proposed CNN based on DWT knowledge to serve for binary diagnosis of glioma tumors among other tumors and diseases. Moreover, the SVM learning model using DWT data analysis performs with higher accuracy and sensitivity than using pixel values.
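A minimal sketch of the three-level 2-D DWT compression step described above, assuming the PyWavelets package; the Haar wavelet and the random stand-in slice are illustrative choices, not necessarily the authors'.

import numpy as np
import pywt

slice_img = np.random.rand(128, 128)     # stand-in for a T2-SWI MRI slice

# three-level 2-D discrete wavelet decomposition (the compression step described above)
coeffs = pywt.wavedec2(slice_img, wavelet="haar", level=3)
approx = coeffs[0]                        # coarsest approximation sub-band
detail_bands = [band for level in coeffs[1:] for band in level]  # (cH, cV, cD) per level
print("approximation:", approx.shape,
      "| detail sub-bands:", [b.shape for b in detail_bands])

# the flattened coefficients can replace raw pixel intensities as CNN/SVM input
feature_vector = np.concatenate([approx.ravel()] + [b.ravel() for b in detail_bands])
print("feature vector length:", feature_vector.size)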
Collapse
Affiliation(s)
| | - Eleftheria S. Sergaki
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
| | - Andreas A. Polydorou
- Areteio Hospital, 2nd University Department of Surgery, Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
| | | | | | - Alexios A. Polydorou
- Medical School, National and Kapodistrian University of Athens, 11528 Athens, Greece
| | - Nikolaos M. Afentakis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Sofia A. Athanasiou
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Ioannis O. Vardiambasis
- Department of Electronic Engineering, Hellenic Mediterranean University, 73133 Chania, Greece
| | - Michail E. Zervakis
- School of Electrical and Computer Engineering, Technical University of Crete, 73100 Chania, Greece
| |
Collapse
|
40
|
Liu X, Wang F, Liu L, Li T, Zhong X, Lin H, Zhang Y, Xue W. Functionalized polydopamine nanospheres as in situ spray for photothermal image-guided tumor precise surgical resection. Biosens Bioelectron 2023; 222:114995. [PMID: 36516631 DOI: 10.1016/j.bios.2022.114995] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 11/18/2022] [Accepted: 12/06/2022] [Indexed: 12/12/2022]
Abstract
Surgical resection is a critical procedure for treatment of solid tumor, which commonly suffers from postoperative local recurrence due to the possibility of positive surgical margin. Although the widely used clinical imaging techniques (CT, MRI, PET, etc.) show beneficial effects in providing a macroscopic view of preoperative tumor position, they are still failing to provide intraoperative real-time imaging navigation during the surgery and need oral or intravenous injection contrast agents with risk of adverse effects. In this work, we present a nano-spray assisted photothermal imaging system for in vitro cells discrimination as well as in vivo visualization of tumor position and border that guides real-time precise tumor resection during surgery (even for tiny tumor less than 3 mm). Herein, the nano-spray were prepared by RGD peptide functionalized polydopamine (PDA-RGD) nanospheres with excellent photothermal conversion efficiency (54.27%), stability and reversibility, which target ανβ3 integrin overexpressed tumor cells. Such PDA-RGD serve as nanothermometers that convert and amplify biological signal to intuitive thermal image signal, depicting the tumor margin in situ. In comparison to conventional imaging techniques, our approach through topical spraying together with portable infrared camera has the characteristics of low cost, convenient, no radiation hazard, real-time intraoperative imaging-guidance and avoiding the adverse effects risk of oral or intravenous contrast agent. This technology provides a new universal tool for potentially assisting surgeons' decision in real-time during surgery and aiding to improved outcome.
Collapse
Affiliation(s)
- Xin Liu
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China; Center for Hybrid Nanostructure (CHyN), Department of Physics, University of Hamburg, Hamburg, 22761, Germany
| | - Fan Wang
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China
| | - Li Liu
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China
| | - Tiantian Li
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China
| | - Xiangyu Zhong
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China
| | - Hongsheng Lin
- The First Affiliated Hospital of Jinan University, Jinan University, Guangzhou, 510632, China
| | - Yi Zhang
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China.
| | - Wei Xue
- Key Laboratory of Biomaterials of Guangdong Higher Education Institutes, Department of Biomedical Engineering, Jinan University, Guangzhou, 510632, China; MOE Key Laboratory of Tumor Molecular Biology, Jinan University, Guangzhou, 510632, China.
| |
Collapse
|
41
|
Rath A, Mohanty DK, Mishra BSP, Bagal DK. A Bibliometric Review: Brain Tumor Magnetic Resonance Imagings Using Different Convolutional Neural Network Architectures. World Neurosurg 2023; 170:e681-e694. [PMID: 36442778 DOI: 10.1016/j.wneu.2022.11.091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 11/20/2022] [Accepted: 11/21/2022] [Indexed: 11/27/2022]
Abstract
BACKGROUND Numerous scientists and researchers have been developing advanced procedures and methods for diagnosing the kind and phase of a human tumor. Brain tumors, which are neoplastic and abnormal developments of brain cells, are one of the most prominent causes of death. Brain tumors, also known as lesions or neoplasia, may be roughly classified as either primary or metastatic. Primary brain tumors arise from brain tissue and its surrounding environment. The recognition of brain tumors from magnetic resonance images via a deep learning technique such as the convolutional neural network (CNN) has garnered significant academic interest over the last few decades. METHODS In this study, a detailed bibliometric evaluation is conducted to synthesize and organize the available academic literature and to identify current research trends and hotspots. We used bibliometric methodologies and a literature review of CNN-based brain tumor research to synthesize and evaluate prior studies. RESULTS For this bibliometric analysis, we applied the Visualization of Similarity viewer (VOSviewer) program to classify the major publications, notable journals, financial sponsors, and affiliations. CONCLUSIONS We suggest that one of the next paths of study will be the incorporation of other databases to advance CNN-based brain tumor identification from magnetic resonance images. No drug dosages are applied in this work.
Collapse
Affiliation(s)
- Arati Rath
- School of Computer Engineering, KIIT Deemed to be University, Odisha, India.
| |
Collapse
|
42
|
Tumor Localization and Classification from MRI of Brain using Deep Convolution Neural Network and Salp Swarm Algorithm. Cognit Comput 2023. [DOI: 10.1007/s12559-022-10096-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
43
|
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
44
|
Khayretdinova M, Shovkun A, Degtyarev V, Kiryasov A, Pshonkovskaya P, Zakharov I. Predicting age from resting-state scalp EEG signals with deep convolutional neural networks on TD-brain dataset. Front Aging Neurosci 2022; 14:1019869. [PMID: 36561135 PMCID: PMC9764861 DOI: 10.3389/fnagi.2022.1019869] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 11/02/2022] [Indexed: 12/12/2022] Open
Abstract
Introduction Brain age prediction has been shown to be clinically relevant, with errors in its prediction associated with various psychiatric and neurological conditions. While prediction from structural and functional magnetic resonance imaging data has been feasible with high accuracy, whether the same results can be achieved with electroencephalography is unclear. Methods The current study aimed to create a new deep learning solution for brain age prediction using raw resting-state scalp EEG. To this end, we utilized the TD-BRAIN dataset, including 1,274 subjects (both healthy controls and individuals with various psychiatric disorders, with a total of 1,335 recording sessions). To achieve the best age prediction, we used data augmentation techniques to increase the diversity of the training set and developed a deep convolutional neural network model. Results The model was trained with 10-fold cross-subject cross-validation, so that EEG recordings of subjects used for training were never used to test the model. In training, using a relative rather than an absolute loss function led to a better mean absolute error of 5.96 years in cross-validation. We found that the best performance could be achieved when both eyes-open and eyes-closed states are used simultaneously. The frontocentral electrodes played the most important role in age prediction. Discussion The architecture and training method of the proposed deep convolutional neural network (DCNN) improve state-of-the-art metrics in the age prediction task using raw resting-state EEG data by 13%. Given that brain age prediction might be a potential biomarker of numerous brain diseases, inexpensive and precise EEG-based estimation of brain age will be in demand for clinical practice.
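As an illustration of the loss-function comparison described above, the following is a minimal sketch of an absolute (MAE) loss versus one common form of relative loss. The exact formulation used in the cited study is not reported in the abstract, so this is an assumption.

```python
# Hedged sketch: a "relative" age-prediction loss of the kind contrasted with an
# absolute (L1) loss in the abstract. The study's exact formulation is not given
# there; this shows one common interpretation.
import torch

def absolute_loss(pred_age: torch.Tensor, true_age: torch.Tensor) -> torch.Tensor:
    """Mean absolute error in years."""
    return (pred_age - true_age).abs().mean()

def relative_loss(pred_age: torch.Tensor, true_age: torch.Tensor,
                  eps: float = 1e-6) -> torch.Tensor:
    """Absolute error normalized by the subject's true age."""
    return ((pred_age - true_age).abs() / (true_age + eps)).mean()

if __name__ == "__main__":
    pred = torch.tensor([25.0, 60.0, 40.0])
    true = torch.tensor([30.0, 55.0, 45.0])
    print(absolute_loss(pred, true).item())   # 5.0 years MAE
    print(relative_loss(pred, true).item())   # error weighted by subject age
```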
Collapse
|
45
|
Ali MU, Kallu KD, Masood H, Hussain SJ, Ullah S, Byun JH, Zafar A, Kim KS. A Robust Computer-Aided Automated Brain Tumor Diagnosis Approach Using PSO-ReliefF Optimized Gaussian and Non-Linear Feature Space. LIFE (BASEL, SWITZERLAND) 2022; 12:life12122036. [PMID: 36556401 PMCID: PMC9782364 DOI: 10.3390/life12122036] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 11/22/2022] [Accepted: 11/28/2022] [Indexed: 12/12/2022]
Abstract
Brain tumors are among the deadliest diseases in the modern world. This study proposes an optimized machine-learning approach for the detection and identification of the type of brain tumor (glioma, meningioma, or pituitary tumor) in brain images recorded using magnetic resonance imaging (MRI). The Gaussian features of the image are extracted using speeded-up robust features (SURF), whereas its non-linear features are obtained using KAZE, owing to their high performance against rotation, scaling, and noise problems. To retrieve local-level information, all brain MRI images are segmented into an 8 × 8 pixel grid. To enhance the accuracy and reduce the computational time, variance-based k-means clustering and the PSO-ReliefF algorithm are employed to eliminate the redundant features of the brain MRI images. Finally, the performance of the proposed hybrid optimized feature vector is evaluated using various machine learning classifiers. An accuracy of 96.30% is obtained with 169 features using a support vector machine (SVM). Furthermore, the computational time is reduced to 1 min, compared with training the SVM on the non-optimized features. The findings are also compared with previous research, demonstrating that the suggested approach might assist physicians in the timely detection of brain tumors.
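A minimal sketch of the descriptor-to-SVM part of such a pipeline is shown below, using OpenCV's KAZE and scikit-learn's SVC. SURF (contrib-only in OpenCV), the 8 × 8 grid, variance-based k-means, and PSO-ReliefF selection are omitted, so this is only an illustrative skeleton, not the cited method.

```python
# Hedged sketch of the feature-to-classifier step: KAZE descriptors mean-pooled
# per image and fed to an SVM. Feature selection steps from the cited pipeline
# are intentionally omitted.
import cv2
import numpy as np
from sklearn.svm import SVC

def kaze_descriptor(gray: np.ndarray, dim: int = 64) -> np.ndarray:
    """Mean-pool KAZE descriptors into one fixed-length vector per image."""
    kaze = cv2.KAZE_create()
    _, desc = kaze.detectAndCompute(gray, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(dim, np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in data: random "MRI" patches with dummy 0/1 labels (no real dataset).
    images = [rng.integers(0, 255, (64, 64), dtype=np.uint8) for _ in range(20)]
    labels = [i % 2 for i in range(20)]
    X = np.array([kaze_descriptor(img) for img in images])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.score(X, labels))
```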
Collapse
Affiliation(s)
- Muhammad Umair Ali
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
| | - Karam Dad Kallu
- Department of Robotics & Artificial Intelligence (R&AI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST) H−12, Islamabad 44000, Pakistan
| | - Haris Masood
- Electrical Engineering Department, Wah Engineering College, University of Wah, Wah Cantt 47040, Pakistan
| | - Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
| | - Safee Ullah
- Department of Electrical Engineering HITEC University, Taxila 47080, Pakistan
| | - Jong Hyuk Byun
- Department of Mathematics, College of Natural Sciences, Pusan National University, Busan 46241, Republic of Korea
| | - Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (A.Z.); (K.S.K.)
| | - Kwang Su Kim
- Department of Scientific computing, Pukyong National University, Busan 48513, Republic of Korea
- Interdisciplinary Biology Laboratory (iBLab), Division of Biological Science, Graduate School of Science, Nagoya University, Nagoya 464-8602, Japan
- Correspondence: (A.Z.); (K.S.K.)
| |
Collapse
|
46
|
Lin J, Pan Y, Xu J, Bao Y, Zhuo H. A meta-fusion RCNN network for endoscopic visual bladder lesions intelligent detection. Comput Med Imaging Graph 2022; 102:102138. [PMID: 36444783 DOI: 10.1016/j.compmedimag.2022.102138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 09/26/2022] [Accepted: 10/24/2022] [Indexed: 11/11/2022]
Abstract
This study investigates a visual object detection technology to help doctors diagnose bladder lesions with endoscopy. A new deep-learning-based object detection approach is presented, derived from cascade R-CNN and extending the network's ability to adapt to insufficient endoscopic lesion samples when training a deep neural network. We propose a feature-adaptive fusion model to increase the network's flexibility and reduce the possibility of overfitting, and use a task-adaptation meta-learning approach to train both the feature fusion process of the entire model and the target-network update process, completing task-adaptive classification and detection. The new model has been evaluated on the challenging Pascal VOC object detection dataset and its Microsoft COCO-format conversion, and the results show that the performance of the proposed method is superior to that of the original method. We therefore apply the proposed method to a custom bladder-lesion dataset to solve the auxiliary detection problem in the intelligent diagnosis of bladder lesions and demonstrate its effectiveness.
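To make the notion of feature-adaptive fusion concrete, here is a minimal, hypothetical PyTorch block that fuses two feature maps with learned, input-dependent weights. The cited model embeds such fusion in a cascade R-CNN and trains it with task-adaptation meta-learning, neither of which is reproduced here.

```python
# Hedged sketch of a feature-adaptive fusion block: two backbone feature maps
# are combined with gates predicted from their pooled statistics. This only
# illustrates the idea of adaptive fusion, not the cited detector.
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict one scalar weight per feature map from globally pooled features.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels // 2), nn.ReLU(),
            nn.Linear(channels // 2, 2), nn.Softmax(dim=-1),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        pooled = torch.cat([feat_a.mean(dim=(2, 3)), feat_b.mean(dim=(2, 3))], dim=1)
        w = self.gate(pooled)                                  # (N, 2), sums to 1
        return (w[:, 0, None, None, None] * feat_a
                + w[:, 1, None, None, None] * feat_b)

if __name__ == "__main__":
    a, b = torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32)
    print(AdaptiveFusion(256)(a, b).shape)                     # torch.Size([2, 256, 32, 32])
```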
Collapse
Affiliation(s)
- Jie Lin
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China.
| | - Yulong Pan
- Department of Urology, The Third People's Hospital of Chengdu/The Affiliated Hospital of Southwest Jiaotong University, Chengdu, Sichuan, 610014, China.
| | - Jiajun Xu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, 611731, China.
| | - Yige Bao
- West China Hospital, Sichuan University, China.
| | - Hui Zhuo
- Department of Urology, The Third People's Hospital of Chengdu/The Affiliated Hospital of Southwest Jiaotong University, Chengdu, Sichuan, 610014, China.
| |
Collapse
|
47
|
Krishnakumar S, Manivannan K. Detection of meningioma tumor images using Modified Empirical Mode Decomposition (MEMD) and convolutional neural networks. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-222172] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Meningioma brain tumor detection is more critical than the detection of other tumors such as glioma and glioblastoma, owing to its high severity level. The tumor pixel density of a meningioma is high, and it can lead to sudden death if not detected in time. Meningioma images are detected using a Modified Empirical Mode Decomposition-Convolutional Neural Network (MEMD-CNN) classification approach. This method has the following stages: data augmentation, spatial-frequency transformation, feature computation, classification, and segmentation. The number of brain image samples is increased through data augmentation to improve the meningioma detection rate. The augmented images are transformed from the spatial to the frequency domain using the MEMD transformation method. Then, the external empirical mode features are computed from the transformed image and fed into a CNN architecture to classify the source brain image as either meningioma or non-meningioma. The pixels belonging to the tumor category are segmented using morphological opening-closing functions. The meningioma detection system obtains a 99.4% Meningioma Classification Rate (MCR) and a 99.3% Non-Meningioma Classification Rate (NMCR) on the meningioma and non-meningioma images. This MEMD-CNN technique for meningioma identification attains 98.93% SET, 99.13% SPT, 99.18% MSA, 99.14% PR, and 99.13% FS. The statistical comparative analysis of the proposed MEMD-CNN system against other conventional detection systems shows that the proposed method provides optimal tumor segmentation results.
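As an illustration of the final segmentation step, the sketch below applies morphological opening and closing to a thresholded tumor-probability mask with OpenCV. The kernel size and threshold are assumed values, not parameters reported in the cited paper.

```python
# Hedged sketch of the segmentation step: a binary tumor mask cleaned with
# morphological opening followed by closing. Threshold and kernel size are
# illustrative choices only.
import cv2
import numpy as np

def refine_tumor_mask(prob_map: np.ndarray, thresh: float = 0.5,
                      kernel_size: int = 5) -> np.ndarray:
    """Threshold a per-pixel tumor probability map, then open and close it."""
    mask = (prob_map >= thresh).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small specks
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel) # fill small holes
    return closed

if __name__ == "__main__":
    prob = np.zeros((128, 128), np.float32)
    prob[40:90, 50:100] = 0.9                 # stand-in for a classifier's tumor region
    prob += np.random.rand(128, 128) * 0.3    # add background noise
    print(refine_tumor_mask(prob).sum() // 255, "tumor pixels after cleanup")
```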
Collapse
Affiliation(s)
- S. Krishnakumar
- Department of Electronics and Communication Engineering, Theni Kammavar Sangam College of Technology, Theni, Tamilnadu, India
| | - K. Manivannan
- Department of Computer Science and Engineering, PSNA College of Engineering and Technology, Dindigul, Tamilnadu, India
| |
Collapse
|
48
|
Ullah N, Khan MS, Khan JA, Choi A, Anwar MS. A Robust End-to-End Deep Learning-Based Approach for Effective and Reliable BTD Using MR Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:7575. [PMID: 36236674 PMCID: PMC9570935 DOI: 10.3390/s22197575] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 10/01/2022] [Accepted: 10/02/2022] [Indexed: 06/16/2023]
Abstract
Detection of a brain tumor in the early stages is critical for clinical practice and the survival rate. Brain tumors arise in multiple shapes, sizes, and features, with various treatment options. Manual tumor detection is challenging, time-consuming, and prone to error. Magnetic resonance imaging (MRI) scans are mostly used for tumor detection due to their non-invasive nature, which also avoids a painful biopsy. MRI scanning of one patient's brain generates many 3D images from multiple directions, making manual detection of tumors very difficult, error-prone, and time-consuming. Therefore, there is a considerable need for autonomous diagnostic tools to detect brain tumors accurately. In this research, we present a novel TumorResNet deep learning (DL) model for brain tumor detection, i.e., binary classification. The TumorResNet model employs 20 convolution layers with a leaky ReLU (LReLU) activation function for feature map activation to compute the most distinctive deep features. Finally, three fully connected classification layers are used to classify brain MRIs as normal or tumorous. The performance of the proposed TumorResNet architecture is evaluated on a standard Kaggle brain tumor MRI dataset for brain tumor detection (BTD), which contains brain tumor and normal MR images. The proposed model achieved a good accuracy of 99.33% for BTD. These experimental results, including the cross-dataset setting, validate the superiority of the TumorResNet model over contemporary frameworks. This study offers an automated BTD method that aids in the early diagnosis of brain cancers. This procedure has a substantial impact on improving treatment options and patient survival.
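A heavily scaled-down sketch of the described layer types (convolutions with leaky ReLU followed by three fully connected layers and a binary output) is given below in PyTorch. The real TumorResNet has 20 convolution layers and residual connections, so this stand-in only mirrors the structure, not the model itself.

```python
# Hedged sketch: convolutional blocks with leaky ReLU activations feeding three
# fully connected layers that end in a binary (normal vs. tumorous) output.
# Much shallower than the 20-layer network described in the abstract.
import torch
import torch.nn as nn

class TinyTumorNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.01), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.LeakyReLU(0.01), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.01),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(                 # three fully connected layers
            nn.Linear(64, 32), nn.LeakyReLU(0.01),
            nn.Linear(32, 16), nn.LeakyReLU(0.01),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.head(self.backbone(x).flatten(1))

if __name__ == "__main__":
    mri_batch = torch.randn(4, 3, 224, 224)        # stand-in for MRI scans
    print(TinyTumorNet()(mri_batch).shape)         # torch.Size([4, 2])
```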
Collapse
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
| | - Mohammad Sohail Khan
- Department of Computer Software Engineering, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
| | - Javed Ali Khan
- Department of Software Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
| | - Ahyoung Choi
- Department of AI Software, Gachon University, Seongnam-si 13120, Korea
| |
Collapse
|
49
|
Masood M, Maham R, Javed A, Tariq U, Khan MA, Kadry S. Brain MRI analysis using deep neural network for medical internet of things applications. COMPUTERS AND ELECTRICAL ENGINEERING 2022; 103:108386. [DOI: 10.1016/j.compeleceng.2022.108386] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]
|
50
|
Subha Darathy C, Agees Kumar C. A novel deep neural network with adaptive sine cosine crow search (DNN-ASCCS) model for content based medical image retrieval. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-222872] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Breast tumors are the second major cause of death in women worldwide. Breast cancer diagnosis and treatment can be difficult for radiologists. As a result, primary care helps to avoid disease and mortality. The study's main goal is to improve treatment choices and to save lives by detecting breast cancer earlier. For classification problems, we propose a DNN-ASCCS architecture in this study. A fast non-local means filter completes the initial preprocessing stage. The binary grasshopper optimization algorithm (BGOA) and the grey-level run-length matrix are utilized to choose the best features in the feature extraction stage. The suggested hybrid classifier (DNN-ASCCS) is critical for distinguishing normal tissue from malignant tumors, and it detects breast cancer accurately. The recommended DNN-ASCCS was developed using MATLAB and datasets from the BIDCIDRI. The simulation results showed that the proposed technique achieves accurate classification (99.17%), and a robustness analysis is also performed. When compared to alternative approaches, the experimental results show that the suggested method is efficient.
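As an illustration of the preprocessing stage, the sketch below applies OpenCV's fast non-local means filter to a grayscale image. The filter strength and window sizes are assumed defaults rather than values from the cited study.

```python
# Hedged sketch of the preprocessing step: fast non-local means denoising of an
# 8-bit grayscale image. Parameter values are illustrative defaults only.
import cv2
import numpy as np

def fnlm_denoise(gray: np.ndarray, h: float = 10.0) -> np.ndarray:
    """Apply fast non-local means denoising to an 8-bit grayscale image."""
    return cv2.fastNlMeansDenoising(gray, None, h, templateWindowSize=7,
                                    searchWindowSize=21)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy = np.clip(rng.normal(128, 25, (256, 256)), 0, 255).astype(np.uint8)
    clean = fnlm_denoise(noisy)
    print(noisy.std(), "->", clean.std())     # intensity spread drops after denoising
```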
Collapse
Affiliation(s)
- C. Subha Darathy
- Department of CSE, Arunachala College of Engineering for Women, Vellichanthai
| | - C. Agees Kumar
- Department of EEE, Arunachala College of Engineering for Women, Vellichanthai
| |
Collapse
|