1
Jayaram K, Kumarganesh S, Immanuvel A, Ganesh C. Classifications of meningioma brain images using the novel Convolutional Fuzzy C Means (CFCM) architecture and performance analysis of hardware incorporated tumor segmentation module. Network (Bristol, England) 2025:1-22. [PMID: 40271969] [DOI: 10.1080/0954898x.2025.2491537] [Received: 12/04/2023; Revised: 03/03/2025; Accepted: 04/01/2025]
Abstract
This paper proposes a meningioma detection and segmentation method built on a novel Convolutional Fuzzy C Means (CFCM) classification approach. A Non-Subsampled Contourlet Transform module decomposes each brain image into multi-scale sub-band images, from which heuristic and uniqueness features are computed individually. These features are then trained and classified with the CFCM classifier. The method was applied to two independent brain imaging datasets: on the Nanfang University brain images it obtained 98.81% Se, 98.83% Sp, 99.04% Acc, 99.12% Pr, and 99.14% FIS, and on the BRATS 2021 brain images it obtained 98.92% Se, 98.88% Sp, 98.9% Acc, 98.88% Pr, and 99.36% FIS. Finally, the tumour segmentation module is designed in VLSI and simulated using the Xilinx project navigator.
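The abstract does not spell out the CFCM update rules, but the fuzzy C-means step at its core follows the standard alternating updates of soft memberships and cluster centers. A minimal NumPy sketch under that assumption (parameter names and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means: alternate membership and center updates.
    X: (n_samples, n_features); c clusters; m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)  # each sample's memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        # centers are membership-weighted means of the data
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distances to every center; epsilon guards against division by zero
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership is inversely proportional to d^(2/(m-1)), renormalized
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

In the paper's pipeline these soft memberships would be computed on sub-band features rather than raw pixels, with a convolutional stage in front.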
Affiliation(s)
- K Jayaram, Department of ECE, Kalaignarkarunanidhi Institute of Technology, Coimbatore, India
- S Kumarganesh, Department of ECE, Knowledge Institute of Technology, Salem, India
- A Immanuvel, Department of ECE, Paavai College of Engineering, Namakkal, India
- C Ganesh, Department of CCE, Sri Eshwar College of Engineering, Coimbatore, India
2
Afzal S, Rauf M, Ashraf S, Bin Md Ayob S, Ahmad Arfeen Z. CART-ANOVA-Based Transfer Learning Approach for Seven Distinct Tumor Classification Schemes with Generalization Capability. Diagnostics (Basel) 2025; 15:378. [PMID: 39941307] [PMCID: PMC11816775] [DOI: 10.3390/diagnostics15030378] [Received: 12/10/2024; Revised: 12/31/2024; Accepted: 01/22/2025]
Abstract
Background/Objectives: Deep transfer learning, leveraging convolutional neural networks (CNNs), has become a pivotal tool for brain tumor detection. However, key challenges include optimizing hyperparameter selection and enhancing the generalization capabilities of models. This study introduces a novel CART-ANOVA (Cartesian-ANOVA) hyperparameter tuning framework, which differs from traditional optimization methods by systematically integrating statistical significance testing (ANOVA) with the Cartesian product of hyperparameter values. This approach ensures robust and precise parameter tuning by evaluating the interaction effects between hyperparameters, such as batch size and learning rate, rather than relying solely on grid or random search. Additionally, it implements seven distinct classification schemes for brain tumors, aimed at improving diagnostic accuracy and robustness. Methods: The proposed framework employs a ResNet18-based knowledge transfer learning (KTL) model trained on a primary dataset, with 20% allocated for testing. Hyperparameters were optimized using CART-ANOVA analysis, and statistical validation ensured robust parameter selection. The model's generalization and robustness were evaluated on an independent second dataset. Performance metrics, including precision, accuracy, sensitivity, and F1 score, were compared against other pre-trained CNN models. Results: The framework achieved exceptional testing accuracy of 99.65% for four-class classification and 98.05% for seven-class classification on the source 1 dataset. It also maintained high generalization capabilities, achieving accuracies of 98.77% and 96.77% on the source 2 datasets for the same tasks. The incorporation of seven distinct classification schemes further enhanced variability and diagnostic capability, surpassing the performance of other pre-trained models. 
Conclusions: The CART-ANOVA hyperparameter tuning framework, combined with a ResNet18-based KTL approach, significantly improves brain tumor classification accuracy, robustness, and generalization. These advancements demonstrate strong potential for enhancing diagnostic precision and informing effective treatment strategies, contributing to advancements in medical imaging and AI-driven healthcare solutions.
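The exact CART-ANOVA procedure is not given here, but its two ingredients, a Cartesian product of hyperparameter values and a one-way ANOVA over the resulting scores, can be sketched as follows. The grid values and the fake score table are invented for illustration; in the real framework each combination would train a model:

```python
from itertools import product
import numpy as np

def f_oneway(groups):
    """One-way ANOVA F statistic over groups of accuracy scores."""
    groups = [np.asarray(g, float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_b, df_w = len(groups) - 1, len(all_x) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

batch_sizes = [16, 32, 64]
learning_rates = [1e-4, 1e-3]
grid = list(product(batch_sizes, learning_rates))  # every (batch, lr) pair

# Placeholder scores: repeated accuracies per combination.
rng = np.random.default_rng(0)
scores = {combo: rng.normal(0.95, 0.01, size=5) for combo in grid}

# Group scores by batch size and ask whether that factor matters.
by_batch = [np.concatenate([scores[(b, lr)] for lr in learning_rates])
            for b in batch_sizes]
F = f_oneway(by_batch)
```

A large F for a factor means its levels produce significantly different accuracies, which is the statistical signal the framework uses instead of picking the single best grid cell.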
Affiliation(s)
- Shiraz Afzal, Department of Electronic Engineering, Dawood University of Engineering and Technology, Karachi 74800, Pakistan
- Muhammad Rauf, Department of Electronic Engineering, Dawood University of Engineering and Technology, Karachi 74800, Pakistan
- Shahzad Ashraf, Department of Computer Science, DHA Suffa University, Karachi 75500, Pakistan
- Shahrin Bin Md Ayob, Faculty of Electrical Engineering, Universiti Teknologi Malaysia, Johor Bahru 81310, Malaysia
- Zeeshan Ahmad Arfeen, Department of Electrical Engineering, The Islamia University of Bahawalpur (IUB), Bahawalpur 63100, Pakistan
3
Biradar S, Virupakshappa. AG-MSTLN-EL: A Multi-source Transfer Learning Approach to Brain Tumor Detection. Journal of Imaging Informatics in Medicine 2025; 38:245-261. [PMID: 39060764] [PMCID: PMC11810865] [DOI: 10.1007/s10278-024-01199-3] [Received: 03/05/2024; Revised: 06/29/2024; Accepted: 07/05/2024]
Abstract
The analysis of medical images (MI) is an important part of advanced medicine, as it helps detect and diagnose various diseases early. Classifying brain tumors through magnetic resonance imaging (MRI) poses a challenge demanding accurate models for effective diagnosis and treatment planning. This paper introduces AG-MSTLN-EL, an attention-aided multi-source transfer-learning ensemble model that combines knowledge transferred from multiple pre-trained networks (Visual Geometry Group (VGG), ResNet, and GoogLeNet), attention mechanisms, and ensemble learning to achieve robust and accurate brain tumor classification. Multi-source transfer learning allows knowledge extraction from diverse domains, enhancing generalization. The attention mechanism focuses on specific MRI regions, increasing interpretability and classification performance. Ensemble learning combines k-nearest neighbor, Softmax, and support vector machine classifiers, improving both accuracy and reliability. Evaluated on a dataset of 3,064 brain tumor MRI images, AG-MSTLN-EL outperforms state-of-the-art models on all classification measures. The model's innovative combination of transfer learning, attention mechanism, and ensemble learning provides a reliable solution for brain tumor classification. Its superior performance and high interpretability make AG-MSTLN-EL a valuable tool for clinicians and researchers in medical image analysis.
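In its simplest hard-voting form, the ensemble step that combines the k-NN, Softmax, and SVM predictions reduces to a per-sample majority vote over class labels. A small sketch (the three base classifiers are omitted; the label arrays stand in for their outputs):

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) integer labels -> fused labels."""
    P = np.asarray(predictions)
    n_classes = P.max() + 1
    # per-sample vote counts, shape (n_classes, n_samples)
    votes = np.apply_along_axis(np.bincount, 0, P, minlength=n_classes)
    # ties resolve to the lowest class index via argmax
    return votes.argmax(axis=0)
```

Soft voting (averaging class probabilities instead of counting labels) is the common alternative when, as with Softmax and SVM decision scores, the base models expose confidences.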
Affiliation(s)
- Shivaprasad Biradar, Department of Computer Science & Engineering, Sharnbasva University, Kalaburagi, Karnataka, India
- Virupakshappa, Department of Computer Science & Engineering, Sharnbasva University, Kalaburagi, Karnataka, India
4
Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2025; 43:730-741. [PMID: 37979152] [DOI: 10.1080/07391102.2023.2283164] [Received: 10/09/2023; Accepted: 11/06/2023]
Abstract
There has been an abrupt increase in brain tumor (BT) related medical cases during the past ten years. The tenth most typical type of tumor affecting millions of people is the BT. The cure rate can, however, rise if it is found early. When evaluating BT diagnosis and treatment options, MRI is a crucial tool. However, segmenting the tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to the development of numerous automatic segmentation and classification approaches. However, most need improvement since they are limited to 2D images. So, this article proposes a novel and optimal DL system for segmenting and classifying the BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. The tumor segmentation uses a spatial and channel attention-based three-dimensional u-shaped network (SC3DUNet) to segment the tumor lesions from the preprocessed data. After that, the feature extraction is done based on a dilated convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using the diagonal linear uniform and tangent flight included butterfly optimization algorithm. Finally, the proposed system applies an optimal hyperparameters-based deep neural network to classify the tumor classes. The experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than the existing state-of-the-art mechanisms.
Communicated by Ramaswamy H. Sarma.
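The bilateral-filter preprocessing step weighs each neighboring pixel by both spatial closeness and intensity similarity, smoothing noise while keeping tumor boundaries sharp. A direct, unoptimized NumPy sketch; `radius` and the two sigmas are illustrative defaults, not values from the paper:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Edge-preserving smoothing: each output pixel is a weighted mean of
    its neighborhood, with weights that decay over both spatial distance
    (sigma_s) and intensity difference (sigma_r)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rangew
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

Because the range weight collapses across large intensity jumps, a step edge passes through almost untouched while flat noisy regions are averaged, which is exactly why it is preferred over a plain Gaussian blur before segmentation.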
Affiliation(s)
- Parvathy Jyothi, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- S Dhanasekaran, Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
5
Li X, Ouyang X, Zhang J, Ding Z, Zhang Y, Xue Z, Shi F, Shen D. Carotid Vessel Wall Segmentation Through Domain Aligner, Topological Learning, and Segment Anything Model for Sparse Annotation in MR Images. IEEE Transactions on Medical Imaging 2024; 43:4483-4495. [PMID: 38976464] [DOI: 10.1109/tmi.2024.3424884]
Abstract
Medical image analysis poses significant challenges due to limited availability of clinical data, which is crucial for training accurate models. This limitation is further compounded by the specialized and labor-intensive nature of the data annotation process. For example, despite the popularity of computed tomography angiography (CTA) in diagnosing atherosclerosis with an abundance of annotated datasets, magnetic resonance (MR) images stand out with better visualization for soft plaque and vessel wall characterization. However, the higher cost and limited accessibility of MR, as well as time-consuming nature of manual labeling, contribute to fewer annotated datasets. To address these issues, we formulate a multi-modal transfer learning network, named MT-Net, designed to learn from unpaired CTA and sparsely-annotated MR data. Additionally, we harness the Segment Anything Model (SAM) to synthesize additional MR annotations, enriching the training process. Specifically, our method first segments vessel lumen regions followed by precise characterization of carotid artery vessel walls, thereby ensuring both segmentation accuracy and clinical relevance. Validation of our method involved rigorous experimentation on publicly available datasets from COSMOS and CARE-II challenge, demonstrating its superior performance compared to existing state-of-the-art techniques.
6
Ullah Z, Jamjoom M, Thirumalaisamy M, Alajmani SH, Saleem F, Sheikh-Akbari A, Khan UA. A Deep Learning Based Intelligent Decision Support System for Automatic Detection of Brain Tumor. Biomed Eng Comput Biol 2024; 15:11795972241277322. [PMID: 39238891] [PMCID: PMC11375672] [DOI: 10.1177/11795972241277322] [Received: 03/12/2024; Accepted: 08/06/2024]
Abstract
Brain tumor (BT) is an awful disease and one of the foremost causes of death in human beings. BT develops mainly in 2 stages, varies by volume, form, and structure, and can be cured with special clinical procedures such as chemotherapy, radiotherapy, and surgical mediation. With revolutionary advancements in radiomics and research in medical imaging in the past few years, computer-aided diagnostic (CAD) systems, especially deep learning, have played a key role in the automatic detection and diagnosis of various diseases and have provided accurate decision support systems for medical clinicians. The convolutional neural network (CNN) is a commonly utilized methodology for detecting various diseases from medical images because it is capable of extracting distinct features from an image under investigation. In this study, a deep learning approach is utilized to extract distinct features from brain images in order to detect BT. Hence, a CNN trained from scratch and transfer learning models (VGG-16, VGG-19, and LeNet-5) are developed and tested on brain images to build an intelligent decision support system for detecting BT. Since deep learning models require large volumes of data, data augmentation is used to populate the existing dataset synthetically in order to utilize the best-fit detecting models. Hyperparameter tuning was conducted to set the optimum parameters for training the models. The achieved results show that the VGG models outperformed the others with an accuracy rate of 99.24%, average precision of 99%, average recall of 99%, average specificity of 99%, and average F1 score of 99%. Compared to other state-of-the-art models in the literature, the proposed models perform better in terms of accuracy, sensitivity, specificity, and F1 score. Moreover, comparative analysis shows that the proposed models are reliable in that they can be used for detecting BT as well as helping medical practitioners to diagnose BT.
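The data augmentation mentioned above can be as simple as label-preserving geometric transforms that multiply the effective dataset size. A minimal sketch that expands one MRI slice into six variants; the specific transform set is an assumption, not the paper's:

```python
import numpy as np

def augment(img):
    """Return simple label-preserving variants of a 2-D image."""
    return [
        img,
        np.fliplr(img),    # horizontal mirror
        np.flipud(img),    # vertical mirror
        np.rot90(img, 1),  # rotate 90 degrees
        np.rot90(img, 2),  # rotate 180 degrees
        np.rot90(img, 3),  # rotate 270 degrees
    ]
```

Each variant keeps the same class label, so a dataset of N labeled slices becomes 6N training pairs with no new annotation effort.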
Affiliation(s)
- Zahid Ullah, Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
- Mona Jamjoom, Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Samah H Alajmani, Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Farrukh Saleem, School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Akbar Sheikh-Akbari, School of Built Environment, Engineering, and Computing, Leeds Beckett University, Leeds, UK
- Usman Ali Khan, Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
7
Tamilarasi M, Kumarganesh S, Sagayam KM, Andrew J. Detection and Segmentation of Glioma Tumors Utilizing a UNet Convolutional Neural Network Approach with Non-Subsampled Shearlet Transform. J Comput Biol 2024; 31:757-768. [PMID: 38934096] [DOI: 10.1089/cmb.2023.0339]
Abstract
The prompt and precise identification and delineation of tumor regions within glioma brain images are critical for mitigating the risks associated with this life-threatening ailment. In this study, we employ the UNet convolutional neural network (CNN) architecture for glioma tumor detection. Our proposed methodology comprises a transformation module, a feature extraction module, and a tumor segmentation module. The spatial domain representation of brain magnetic resonance imaging images undergoes decomposition into low- and high-frequency subbands via a non-subsampled shearlet transform. Leveraging the selective and directive characteristics of this transform enhances the classification efficacy of our proposed system. Shearlet features are extracted from both low- and high-frequency subbands and subsequently classified using the UNet-CNN architecture to identify tumor regions within glioma brain images. We validate our proposed glioma tumor detection methodology using publicly available datasets, namely Brain Tumor Segmentation (BRATS) 2019 and The Cancer Genome Atlas (TCGA). The mean classification rates achieved by our system are 99.1% for the BRATS 2019 dataset and 97.8% for the TCGA dataset. Furthermore, our system demonstrates notable performance metrics on the BRATS 2019 dataset, including 98.2% sensitivity, 98.7% specificity, 98.9% accuracy, 98.7% intersection over union, and 98.5% Dice similarity coefficient. Similarly, on the TCGA dataset, our system achieves 97.7% sensitivity, 98.2% specificity, 98.7% accuracy, 98.6% intersection over union, and 98.4% Dice similarity coefficient. Comparative analysis against state-of-the-art methods underscores the efficacy of our proposed glioma brain tumor detection approach.
Affiliation(s)
- M Tamilarasi, Department of Electronics and Communication Engineering, Sasurie College of Engineering, Tirupur, India
- S Kumarganesh, Department of Electronics and Communication Engineering, Knowledge Institute of Technology, Salem, India
- K Martin Sagayam, Department of Electronics and Communication Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- J Andrew, Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, India
8
Amin J, Almas Anjum M, Ahmad A, Sharif MI, Kadry S, Kim J. Microscopic parasite malaria classification using best feature selection based on generalized normal distribution optimization. PeerJ Comput Sci 2024; 10:e1744. [PMID: 38196949] [PMCID: PMC10773915] [DOI: 10.7717/peerj-cs.1744] [Received: 05/01/2023; Accepted: 11/16/2023]
Abstract
Malaria disease can indeed be fatal if not identified and treated promptly. Due to advancements in the malaria diagnostic process, microscopy techniques are employed for blood cell analysis. Unfortunately, the diagnostic process of malaria via microscopy depends on microscopic skills. To overcome such issues, machine/deep learning algorithms can be proposed for more accurate and efficient detection of malaria. Therefore, a method is proposed for classifying malaria parasites that consists of three phases. A bilateral filter is applied to enhance image quality. After that, shape-based and deep features are extracted. For the shape-based features, pyramid histograms of oriented gradients (PHOG) are derived with a dimension of N × 300. Deep features are derived from the residual network (ResNet)-50 and ResNet-18 at the fully connected layers, each with a dimension of N × 1,000. The features obtained are fused serially, resulting in a dimensionality of N × 2,300. From this set, N × 498 features are chosen using the generalized normal distribution optimization (GNDO) method. The proposed method is assessed on a microscopic malaria parasite imaging dataset, providing 99% classification accuracy, which is better than recently published work.
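Serial feature fusion as described, PHOG (N × 300) concatenated with the two ResNet FC-layer outputs (N × 1,000 each), is a column-wise concatenation. A sketch with random stand-in features; the GNDO selection step is replaced by a simple variance ranking purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
phog = rng.random((N, 300))       # shape-based PHOG features
resnet50 = rng.random((N, 1000))  # ResNet-50 FC-layer features
resnet18 = rng.random((N, 1000))  # ResNet-18 FC-layer features

# Serial fusion: stack feature blocks side by side -> N x 2300
fused = np.concatenate([phog, resnet50, resnet18], axis=1)

# Stand-in for GNDO: keep the 498 highest-variance columns.
keep = np.sort(np.argsort(fused.var(axis=0))[-498:])
selected = fused[:, keep]  # N x 498
```

Any wrapper-style optimizer (GNDO included) would instead search over column subsets and score each candidate with a classifier, but the fuse-then-select data flow is the same.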
Affiliation(s)
- Javeria Amin, Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Abraz Ahmad, Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Irfan Sharif, Department of Information Sciences, University of Education Lahore, Jauharabad Campus, Jauharabad, Pakistan
- Seifedine Kadry, Noroff University College, Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, UAE; MEU Research Unit, Middle East University, Amman, Jordan; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Jungeun Kim, Department of Software, Kongju National University, Cheonan, Korea
9
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153] [DOI: 10.1615/critrevoncog.2023050852]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response in the setting of often confusing imaging findings provides a unique niche for CNNs to help distinguish the treatment response to gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to separate this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNN in the field of neuro-oncology and challenges in model interpretability, data availability, and computation efficiency.
Affiliation(s)
- Radka Stoyanova, Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon, Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
10
Prakash BV, Kannan AR, Santhiyakumari N, Kumarganesh S, Raja DSS, Hephzipah JJ, MartinSagayam K, Pomplun M, Dang H. Meningioma brain tumor detection and classification using hybrid CNN method and Ridgelet transform. Sci Rep 2023; 13:14522. [PMID: 37666922] [PMCID: PMC10477173] [DOI: 10.1038/s41598-023-41576-6] [Received: 02/09/2023; Accepted: 08/29/2023]
Abstract
The detection of meningioma tumors is the most crucial task compared with other tumors because of their lower pixel intensity. Modern medical platforms require a fully automated system for meningioma detection. Hence, this study proposes a novel and highly efficient hybrid convolutional neural network (HCNN) classifier to distinguish meningioma brain images from non-meningioma brain images. The HCNN classification technique consists of the Ridgelet transform, feature computations, a classifier module, and a segmentation algorithm. Pixel stability during the decomposition process was improved by the Ridgelet transform, and the features were computed from the Ridgelet coefficients. These features were classified using the HCNN classification approach, and tumor pixels were detected using the segmentation algorithm. The experimental results were analyzed for meningioma tumor images by applying the proposed method to the BRATS 2019 and Nanfang datasets. The proposed HCNN-based meningioma detection system achieved 99.31% sensitivity, 99.37% specificity, and 99.24% segmentation accuracy on the BRATS 2019 dataset, and 99.35% sensitivity, 99.22% specificity, and 99.04% segmentation accuracy on brain Magnetic Resonance Imaging (MRI) in the Nanfang dataset. On the BRATS 2022 dataset, it obtained 99.81% classification accuracy, 99.2% sensitivity, 99.7% specificity, and 99.8% segmentation accuracy. The experimental results of the proposed HCNN algorithm were compared with those of state-of-the-art meningioma detection algorithms.
Affiliation(s)
- B V Prakash, Faculty of Information Technology, Government College of Engineering, Erode, Tamil Nadu, India
- A Rajiv Kannan, Faculty of Computer Science and Engineering, K.S.R College of Engineering, Namakkal, India
- N Santhiyakumari, Department of ECE, Knowledge Institute of Technology, Salem, Tamil Nadu, India
- S Kumarganesh, Department of ECE, Knowledge Institute of Technology, Salem, Tamil Nadu, India
- D Siva Sundhara Raja, Faculty of Electronics and Communication Engineering, SACS MAVMM Engineering College, Madurai, Tamil Nadu, India
- J Jasmine Hephzipah, Faculty of Electronics and Communication Engineering, R.M.K. Engineering College, Kavaraipettai, Tamil Nadu, India
- K MartinSagayam, Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
- Marc Pomplun, Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA
- Hien Dang, Department of Mathematics and Computer Science, Molloy University, Rockville Centre, NY, USA; Faculty of Computer Science and Engineering, Thuyloi University, Hanoi, Vietnam
11
Amin J, Sharif M, Mallah GA, Fernandes SL. An optimized features selection approach based on Manta Ray Foraging Optimization (MRFO) method for parasite malaria classification. Front Public Health 2022; 10:969268. [PMID: 36148344] [PMCID: PMC9486170] [DOI: 10.3389/fpubh.2022.969268] [Received: 06/14/2022; Accepted: 08/03/2022]
Abstract
Malaria is a serious and lethal disease; the World Health Organization (WHO) has reported an estimated 219 million new cases and 435,000 deaths globally. The most frequent malaria detection method relies mainly on specialists who examine the samples under a microscope. Therefore, a computerized malaria diagnosis system is required. In this article, malaria cell segmentation and classification methods are proposed. The malaria cells are segmented using a color-based k-means clustering approach on the selected number of clusters. After segmentation, deep features are extracted using pre-trained models such as EfficientNet-B0 and ShuffleNet, and the best features are selected using the Manta-Ray Foraging Optimization (MRFO) method. Two experiments are performed for classification using 10-fold cross-validation: the first is based on the best features selected from each pre-trained model individually, while the second is based on the best features selected from the fusion of the features extracted by both pre-trained models. The proposed method provided an accuracy of 99.2% for classification using the linear kernel of the SVM classifier. An empirical study demonstrates that the fused feature vector results are better than both the individual best-selected feature vectors and the latest methods published so far.
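Color-based k-means segmentation treats each pixel's RGB value as a point to cluster, so cells separate from background by color alone. A minimal NumPy sketch; the deterministic farthest-point initialization stands in for whatever seeding the paper uses:

```python
import numpy as np

def kmeans_colors(pixels, k=2, n_iter=20):
    """Cluster (n, 3) color vectors; returns (labels, centers)."""
    pixels = np.asarray(pixels, float)
    # deterministic farthest-point initialization
    centers = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        # assign each pixel to its nearest center, then recompute means
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pixels[labels == c].mean(axis=0)
                            if np.any(labels == c) else centers[c]
                            for c in range(k)])
    return labels, centers
```

In the full pipeline, the cluster whose center best matches stained-parasite color would be kept as the cell mask before feature extraction.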
Affiliation(s)
- Javeria Amin, Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
- Ghulam Ali Mallah, Department of Computer Science, Shah Abdul Latif University, Khairpur, Pakistan
- Steven L. Fernandes, Department of Computer Science, Design and Journalism, Creighton University, Omaha, NE, United States
12
Alsubai S, Khan HU, Alqahtani A, Sha M, Abbas S, Mohammad UG. Ensemble deep learning for brain tumor detection. Front Comput Neurosci 2022; 16:1005617. [PMID: 36118133] [PMCID: PMC9480978] [DOI: 10.3389/fncom.2022.1005617] [Received: 07/28/2022; Accepted: 08/18/2022]
Abstract
With the quick evolution of medical technology, the era of big data in medicine is quickly approaching. The analysis and mining of these data significantly influence the prediction, monitoring, diagnosis, and treatment of tumor disorders. Since it has a wide range of traits, a low survival rate, and an aggressive nature, the brain tumor is regarded as the deadliest and most devastating disease. Misdiagnosed brain tumors lead to inadequate medical treatment, reducing the patient's life chances. Brain tumor detection is highly challenging due to the difficulty of distinguishing between aberrant and normal tissues. Effective therapy and long-term survival are made possible for the patient by a correct diagnosis. Despite extensive research, there are still certain limitations in detecting brain tumors because of the unusual distribution pattern of the lesions. Finding a region with a small number of lesions can be difficult because small areas tend to look healthy, which directly reduces classification accuracy, and extracting and choosing informative features is challenging. Automatically classifying early-stage brain tumors using deep and machine learning approaches therefore plays a significant role. This paper proposes a hybrid deep learning model, Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), for classifying and predicting brain tumors through Magnetic Resonance Images (MRI). We experiment on an MRI brain image dataset. First, the data is preprocessed efficiently; then, the Convolutional Neural Network (CNN) is applied to extract the significant features from images. The proposed model predicts the brain tumor with a significant classification accuracy of 99.1%, a precision of 98.8%, recall of 98.9%, and F1-measure of 99.0%.
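In a CNN-LSTM hybrid like the one described, the CNN turns each image (or slice) into a feature vector and the LSTM consumes those vectors as a sequence. A single LSTM step in NumPy; the layer sizes and the stacked i/f/o/g gate ordering are conventional choices, not details taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step over a feature vector x.
    W: (4n, d), U: (4n, n), b: (4n,) hold the stacked gate parameters."""
    n = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell state
    c = f * c + i * g             # new cell state
    h = o * np.tanh(c)            # new hidden state, fed to the classifier
    return h, c
```

Feeding the per-slice CNN features through this recurrence lets the final hidden state summarize the whole scan before the classification layer.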
Affiliation(s)
- Shtwai Alsubai, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Habib Ullah Khan, Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha, Qatar
- Abdullah Alqahtani, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Mohemmed Sha, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Sidra Abbas, Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Uzma Ghulam Mohammad, Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
13
Quesada J, Sathidevi L, Liu R, Ahad N, Jackson JM, Azabou M, Xiao J, Liding C, Jin M, Urzay C, Gray-Roncal W, Johnson EC, Dyer EL. MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction. Advances in Neural Information Processing Systems 2022; 35:5299-5314. [PMID: 38414814] [PMCID: PMC10898440]
Abstract
There are multiple scales of abstraction from which we can describe the same image, depending on whether we are focusing on fine-grained details or a more global attribute of the image. In brain mapping, learning to automatically parse images to build representations of both small-scale features (e.g., the presence of cells or blood vessels) and global properties of an image (e.g., which brain region the image comes from) is a crucial and open challenge. However, most existing datasets and benchmarks for neuroanatomy consider only a single downstream task at a time. To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image. Our multi-task neuroimaging benchmark (MTNeuro) is built on volumetric, micrometer-resolution X-ray microtomography images spanning a large thalamocortical section of mouse brain, encompassing multiple cortical and subcortical regions. We generated a number of different prediction challenges and evaluated several supervised and self-supervised models for brain-region prediction and pixel-level semantic segmentation of microstructures. Our experiments not only highlight the rich heterogeneity of this dataset, but also provide insights into how self-supervised approaches can be used to learn representations that capture multiple attributes of a single image and perform well on a variety of downstream tasks. Datasets, code, and pre-trained baseline models are provided at: https://mtneuro.github.io/.
Affiliation(s)
- Ran Liu, Georgia Institute of Technology