1. Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2025;43:730-741. PMID: 37979152. DOI: 10.1080/07391102.2023.2283164.
Abstract
There has been an abrupt increase in brain tumor (BT) related medical cases during the past ten years. BT is the tenth most common type of tumor, affecting millions of people; the cure rate can, however, rise if it is found early. MRI is a crucial tool when evaluating BT diagnosis and treatment options, but segmenting tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to numerous automatic segmentation and classification approaches; however, most need improvement since they are limited to 2D images. This article therefore proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. Tumor segmentation uses a spatial and channel attention-based three-dimensional u-shaped network (SC3DUNet) to segment the tumor lesions from the preprocessed data. After that, feature extraction is done based on dilated convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using a diagonal linear uniform and tangent flight included butterfly optimization algorithm. Finally, the proposed system applies a deep neural network with optimal hyperparameters to classify the tumor classes. The experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than the existing state-of-the-art mechanisms.
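The bilateral-filter preprocessing step mentioned above combines a spatial Gaussian with an intensity (range) Gaussian, so noise is smoothed while tumor boundaries are preserved. A minimal 2-D grayscale sketch of the idea follows; the paper applies it to 3-D MR volumes, and the parameter names (`sigma_s`, `sigma_r`) are illustrative choices, not the authors' settings.

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving smoothing: each pixel becomes a weighted average of its
    neighbours, with weights that decay with both spatial distance (sigma_s)
    and intensity difference (sigma_r), so strong edges are not blurred."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        ws = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        wr = math.exp(-((img[ny][nx] - img[y][x]) ** 2) / (2 * sigma_r ** 2))
                        acc += ws * wr * img[ny][nx]
                        norm += ws * wr
            out[y][x] = acc / norm
    return out
```

With a small `sigma_r`, pixels across a sharp intensity step receive near-zero range weight, which is why the tumor/background boundary survives the smoothing.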
Affiliations:
- Parvathy Jyothi: Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
- S Dhanasekaran: Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
2. Rai HM, Yoo J, Dashkevych S. Transformative Advances in AI for Precise Cancer Detection: A Comprehensive Review of Non-Invasive Techniques. Archives of Computational Methods in Engineering 2025. DOI: 10.1007/s11831-024-10219-y.
3. Saeed T, Khan MA, Hamza A, Shabaz M, Khan WZ, Alhayan F, Jamel L, Baili J. Neuro-XAI: Explainable deep learning framework based on DeepLabV3+ and Bayesian optimization for segmentation and classification of brain tumor in MRI scans. J Neurosci Methods 2024;410:110247. PMID: 39128599. DOI: 10.1016/j.jneumeth.2024.110247.
Abstract
The prevalence of brain tumor disorders is currently a global issue. In general, radiography, which includes a large number of images, is an efficient method for diagnosing these life-threatening disorders. The biggest issue in this area is that looking at all the images takes a radiologist a long time and is physically strenuous. As a result, research into developing machine learning systems that assist radiologists in diagnosis continues to rise daily. Convolutional neural networks (CNNs), one type of deep learning approach, have been pivotal in achieving state-of-the-art results in several medical imaging applications, including the identification of brain tumors. CNN hyperparameters are typically set manually for segmentation and classification, which can take a while and increases the chance of using suboptimal hyperparameters for both tasks. Bayesian optimization is a useful method for finding the deep CNN's optimal hyperparameters. The CNN, however, can be considered a "black box" model because its complexity makes the information it stores difficult to comprehend. This problem can be addressed with Explainable Artificial Intelligence (XAI) tools, which provide doctors with a realistic explanation of the CNN's assessments. Implementation of deep learning-based systems in real-time diagnosis is still rare. One cause could be that these methods do not quantify the uncertainty in their predictions, which could undermine trust in AI-based diagnosis of diseases. To be used in real-time medical diagnosis, CNN-based models must be realistic and appealing, and uncertainty needs to be evaluated. So, a novel three-phase strategy is proposed for segmenting and classifying brain tumors. Segmentation of brain tumors using the DeepLabV3+ model is first performed, with hyperparameters tuned via Bayesian optimization. For classification, features from the state-of-the-art deep learning models DarkNet53 and MobileNetV2 are extracted and fed to an SVM, whose hyperparameters are also optimized using a Bayesian approach. The second step is to understand which portions of the images the CNN uses for feature extraction, using XAI algorithms. Finally, the uncertainty of the Bayesian-optimized classifier is quantified using confusion entropy. The experimental findings demonstrate that the proposed Bayesian-optimized deep learning framework outperforms earlier techniques, achieving 97% classification accuracy and 0.98 global accuracy.
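The confusion-entropy idea used in this paper quantifies how consistently a classifier maps each true class to predictions. A minimal illustration of the general notion follows: Shannon entropy of each row-normalized confusion-matrix distribution. This is a simplified stand-in, not necessarily the exact confusion-entropy (CEN) formulation the authors use.

```python
import math

def row_entropy_uncertainty(cm):
    """Per-class uncertainty from a confusion matrix: normalise each true-class
    row into a distribution over predicted classes and take its Shannon entropy
    (in bits). 0 means perfectly consistent predictions for that class;
    log2(K) means maximal confusion across K classes."""
    entropies = []
    for row in cm:
        total = sum(row)
        probs = [c / total for c in row if c > 0]
        entropies.append(-sum(p * math.log2(p) for p in probs))
    return entropies
```

A perfectly diagonal confusion matrix yields zero entropy for every class, while a classifier that guesses uniformly yields the maximum, which is what makes such a score usable as a trust signal alongside accuracy.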
Affiliations:
- Tallha Saeed: Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan
- Muhammad Attique Khan: Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, P.O. Box 1664, AlKhobar 31952, Saudi Arabia
- Ameer Hamza: Department of Computer Science and Mathematics, Lebanese American University, Lebanon; Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Mohammad Shabaz: Model Institute of Engineering and Technology, Jammu, J&K, India
- Wazir Zada Khan: Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan
- Fatimah Alhayan: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Leila Jamel: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Jamel Baili: Department of Computer Engineering, College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
4. Ahmed MM, Hossain MM, Islam MR, Ali MS, Nafi AAN, Ahmed MF, Ahmed KM, Miah MS, Rahman MM, Niu M, Islam MK. Brain tumor detection and classification in MRI using hybrid ViT and GRU model with explainable AI in Southern Bangladesh. Sci Rep 2024;14:22797. PMID: 39354009. PMCID: PMC11445444. DOI: 10.1038/s41598-024-71893-3.
Abstract
Brain tumor, a leading cause of uncontrolled cell growth in the central nervous system, presents substantial challenges in medical diagnosis and treatment. Early and accurate detection is essential for effective intervention. This study aims to enhance the detection and classification of brain tumors in Magnetic Resonance Imaging (MRI) scans using an innovative framework combining Vision Transformer (ViT) and Gated Recurrent Unit (GRU) models. We utilized primary MRI data from Bangabandhu Sheikh Mujib Medical College Hospital (BSMMCH) in Faridpur, Bangladesh. Our hybrid ViT-GRU model extracts essential features via ViT and identifies relationships between these features using GRU, addressing class imbalance and outperforming existing diagnostic methods. We extensively processed the dataset, then trained the model using various optimizers (SGD, Adam, AdamW) and evaluated it through rigorous 10-fold cross-validation. Additionally, we incorporated Explainable Artificial Intelligence (XAI) techniques (Attention Map, SHAP, and LIME) to enhance the interpretability of the model's predictions. For the primary dataset, BrTMHD-2023, the ViT-GRU model achieved precision, recall, and F1-score of 97%. The highest accuracies obtained with the SGD, Adam, and AdamW optimizers were 81.66%, 96.56%, and 98.97%, respectively. Our model outperformed existing transfer learning models by 1.26%, as validated through comparative analysis and cross-validation. The proposed model also performs well on a separate brain tumor Kaggle dataset, outperforming existing research on that dataset with 96.08% accuracy. The proposed ViT-GRU framework significantly improves the detection and classification of brain tumors in MRI scans, and the integration of XAI techniques enhances the model's transparency and reliability, fostering trust among clinicians and facilitating clinical application. Future work will expand the dataset and apply the findings to real-time diagnostic devices.
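The 10-fold cross-validation protocol used above partitions the data into ten folds, each serving once as the held-out validation set. A minimal, framework-free sketch of the index bookkeeping (the function name and round-robin fold assignment are illustrative, not the authors' implementation):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices into k folds; fold i serves once as the
    validation set while the remaining k-1 folds form the training set.
    Assigns indices round-robin so fold sizes differ by at most one."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits
```

Reported cross-validated metrics are then averages over the k validation folds, which is what makes them less sensitive to one lucky train/test split.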
Affiliations:
- Md Mahfuz Ahmed: Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China; Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh; Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Md Maruf Hossain: Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh; Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Md Rakibul Islam: Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh; Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh; Department of Computer Science and Engineering, Northern University Bangladesh, 1230, Dhaka, Bangladesh
- Md Shahin Ali: Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh; Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
- Abdullah Al Noman Nafi: Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Md Faisal Ahmed: Ship International Hospital, 1230, Uttara, Dhaka, Bangladesh
- Kazi Mowdud Ahmed: Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Md Sipon Miah: Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China; Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh; Wireless Communications with Machine Learning (WCML) Laboratory, Islamic University, 7003, Kushtia, Bangladesh
- Md Mahbubur Rahman: Department of Information and Communication Technology, Islamic University, 7003, Kushtia, Bangladesh
- Mingbo Niu: Shaanxi Int'l Innovation Center for Transportation-Energy-Information Fusion and Sustainability, Chang'an University, Xi'an, 710064, China
- Md Khairul Islam: Department of Biomedical Engineering, Islamic University, 7003, Kushtia, Bangladesh; Bio-Imaging Research Lab, Islamic University, 7003, Kushtia, Bangladesh
5. Shaheema SB, K. SD, Muppalaneni NB. Explainability based panoptic brain tumor segmentation using a hybrid PA-NET with GCNN-ResNet50. Biomed Signal Process Control 2024;94:106334. DOI: 10.1016/j.bspc.2024.106334.
6. Saran Raj S, Surendiran B, Raja SP. Designing a deep hybridized residual and SE model for MRI image-based brain tumor prediction. J Clin Ultrasound 2024;52:588-599. PMID: 38567722. DOI: 10.1002/jcu.23679.
Abstract
Deep learning techniques have become crucial in the detection of brain tumors, but classifying numerous images is time-consuming and error-prone, which can hinder timely diagnosis. To address this limitation, this study introduces a novel brain tumor detection system. The main objective is to overcome the challenges associated with acquiring a large and well-classified dataset. The proposed approach involves generating synthetic Magnetic Resonance Imaging (MRI) images that mimic the patterns commonly found in brain MRI images. The system utilizes a dataset consisting of small images with an unbalanced class distribution. To enhance the accuracy of tumor detection, two deep learning models are employed. Using a hybrid ResNet+SE model, we capture feature distributions within the unbalanced classes, creating a more balanced dataset. The second model, a tailored classifier, identifies brain tumors in MRI images. The proposed method has shown promising results, achieving a high detection accuracy of 98.79%, which highlights the model's potential as an efficient and cost-effective system for brain tumor detection.
Affiliations:
- S Saran Raj: Computer Science and Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, India
- B Surendiran: Department of Computer Science and Engineering, National Institute of Technology Puducherry, Puducherry, India
- S P Raja: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, India
7. Rai HM, Yoo J, Dashkevych S. Two-headed UNetEfficientNets for parallel execution of segmentation and classification of brain tumors: incorporating postprocessing techniques with connected component labelling. J Cancer Res Clin Oncol 2024;150:220. PMID: 38684578. PMCID: PMC11058623. DOI: 10.1007/s00432-024-05718-1.
Abstract
PURPOSE: The purpose of this study is to develop accurate and automated detection and segmentation methods for brain tumors, given their significant fatality rates, with aggressive malignant tumors like Glioblastoma Multiforme (GBM) having a five-year survival rate as low as 5 to 10%. This underscores the urgent need to improve diagnosis and treatment outcomes through innovative approaches in medical imaging and deep learning techniques.
METHODS: In this work, we propose a novel approach utilizing the two-headed UNetEfficientNets model for simultaneous segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) images. The model combines the strengths of EfficientNets and a modified two-headed Unet model. We utilized a publicly available dataset consisting of 3064 brain MR images classified into three tumor classes: Meningioma, Glioma, and Pituitary. To enhance the training process, we performed 12 types of data augmentation on the training dataset. We evaluated the methodology using six deep learning models, ranging from UNetEfficientNet-B0 to UNetEfficientNet-B5, optimizing the segmentation and classification heads using binary cross entropy (BCE) loss with Dice and BCE with focal loss, respectively. Post-processing techniques such as connected component labeling (CCL) and ensemble models were applied to improve segmentation outcomes.
RESULTS: The proposed UNetEfficientNet-B4 model achieved outstanding results, with an accuracy of 99.4% after postprocessing. Additionally, it obtained high scores for DICE (94.03%), precision (98.67%), and recall (99.00%) after post-processing. The ensemble technique further improved segmentation performance, with a global DICE score of 95.70% and Jaccard index of 91.20%.
CONCLUSION: Our study demonstrates the high efficiency and accuracy of the proposed UNetEfficientNet-B4 model in the automatic and parallel detection and segmentation of brain tumors from MRI images. This approach holds promise for improving diagnosis and treatment planning for patients with brain tumors, potentially leading to better outcomes and prognosis.
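The connected component labeling (CCL) post-processing named in this entry groups foreground pixels into blobs so spurious small detections can be discarded. A minimal 2-D, 4-connected BFS sketch follows (the paper's own implementation and connectivity choice may differ; `keep_largest_component` illustrates one common clean-up policy, not necessarily theirs):

```python
from collections import deque

def label_components(mask):
    """4-connected component labelling on a binary mask via BFS.
    Returns a label map (0 = background) and the size of each component."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                next_label += 1
                labels[y][x] = next_label
                q = deque([(y, x)])
                size = 0
                while q:
                    cy, cx = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                sizes[next_label] = size
    return labels, sizes

def keep_largest_component(mask):
    """Typical segmentation clean-up: drop everything except the largest blob."""
    labels, sizes = label_components(mask)
    if not sizes:
        return mask
    keep = max(sizes, key=sizes.get)
    return [[1 if labels[y][x] == keep else 0 for x in range(len(mask[0]))]
            for y in range(len(mask))]
```

Applying this after thresholding a segmentation probability map removes isolated false-positive specks, which is one way such post-processing can raise DICE scores.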
Affiliations:
- Hari Mohan Rai: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea
- Joon Yoo: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-Gu, Seongnam-Si, 13120, Gyeonggi-Do, Republic of Korea
- Serhii Dashkevych: Department of Computer Engineering, Vistula University, Stokłosy 3, 02-787, Warszawa, Poland
8. Ilesanmi AE, Ilesanmi TO, Ajayi BO. Reviewing 3D convolutional neural network approaches for medical image segmentation. Heliyon 2024;10:e27398. PMID: 38496891. PMCID: PMC10944240. DOI: 10.1016/j.heliyon.2024.e27398.
Abstract
Background: Convolutional neural networks (CNNs) assume pivotal roles in aiding clinicians in diagnosis and treatment decisions. The rapid evolution of imaging technology has established three-dimensional (3D) CNNs as a formidable framework for delineating organs and anomalies in medical images. The prominence of 3D CNN frameworks is steadily growing within medical image segmentation and classification. Thus, we propose a comprehensive review encapsulating diverse 3D CNN algorithms for the segmentation of medical image anomalies and organs.
Methods: This study systematically presents an exhaustive review of recent 3D CNN methodologies. Rigorous screening of abstracts and titles was carried out to establish their relevance. Research papers disseminated across academic repositories were meticulously chosen, analyzed, and appraised against specific criteria. Insights into anomaly and organ segmentation were derived, encompassing details such as network architecture and achieved accuracies.
Results: This paper offers an all-encompassing analysis, unveiling the prevailing trends in 3D CNN segmentation. In-depth elucidations encompass essential insights, constraints, observations, and avenues for future exploration. A discerning examination indicates the preponderance of the encoder-decoder network in segmentation tasks, which affords a coherent methodology for the segmentation of medical images.
Conclusion: The findings of this study are poised to find application in clinical diagnosis and therapeutic interventions. Despite inherent limitations, CNN algorithms showcase commendable accuracy levels, solidifying their potential in medical image segmentation and classification endeavors.
Affiliations:
- Ademola E. Ilesanmi: University of Pennsylvania, 3710 Hamilton Walk, 6th Floor, Philadelphia, PA, 19104, United States
- Babatunde O. Ajayi: National Astronomical Research Institute of Thailand, Chiang Mai 50180, Thailand
9. Senthil Pandi S, Senthilselvi A, Kumaragurubaran T, Dhanasekaran S. Self-attention-based generative adversarial network optimized with color harmony algorithm for brain tumor classification. Electromagn Biol Med 2024:1-15. PMID: 38369844. DOI: 10.1080/15368378.2024.2312363.
Abstract
This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a self-attention-based generative adversarial network (SAGAN) optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, especially in the case of brain tumors, necessitates more accurate and efficient classification methods. While existing deep learning approaches for brain tumor classification have been suggested, they often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a Mean Curvature Flow-based approach to eliminate noise. The pre-processed images then undergo the Improved Non-Subsampled Shearlet Transform (INSST) for extracting radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including gliomas, meningioma, and pituitary tumors. This approach shows promise in enhancing the precision and efficiency of brain tumor classification, holding potential for improved diagnostic outcomes in medical imaging. The proposed method achieves 99.29% accuracy for brain tumor identification. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy, and 67.92%, 54.04%, and 59.08% less computation time, compared with existing models: brain tumor diagnosis utilizing deep learning convolutional neural network with transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization under metaheuristic deep neural network features optimization (BTC-CNN-DEMFOA-MRI); and an efficient method depending upon hierarchical deep learning neural network classifier for brain tumour categorization (BTC-Hie DNN-MRI), respectively.
Affiliations:
- Senthil Pandi S: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, Tamil Nadu, India
- Senthilselvi A: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Kumaragurubaran T: Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Dhanasekaran S: Department of Information Technology, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputtur, Tamilnadu, India
10. Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024;169:637-659. PMID: 37972509. DOI: 10.1016/j.neunet.2023.11.006.
Abstract
Cancer is a condition in which abnormal cells split uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers with a keen interest in developing CNN-based models for cancer detection.
Affiliations:
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
11. Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023;149:14365-14408. PMID: 37540254. DOI: 10.1007/s00432-023-05216-w.
Abstract
PURPOSE: Millions of people lose their lives to several types of fatal diseases. Cancer is one of the most fatal, and may be caused by obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth inside the body which may spread to body parts other than where it originated. Hence it is very much required to diagnose cancer at an early stage to provide correct and timely treatment. Since manual diagnosis and diagnostic error may cause the death of many patients, much research is ongoing into the automatic and accurate detection of cancer at an early stage.
METHODS: In this paper, we present a comparative analysis of diagnosis and recent advancements in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. We include four types of cancer, brain, lung, skin, and breast, and their detection using ML and DL techniques. The extensive review covers a total of 130 studies, of which 56 concern ML-based and 74 concern DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent 5-year span (2018-2023) have been included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We reviewed ML- and DL-based techniques for cancer detection separately and used accuracy as the performance evaluation metric to maintain homogeneity while verifying classifier efficiency.
RESULTS: Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest and lowest performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented. The comparative analysis between the best and worst performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various parameters, the results demonstrate significant scope for improvement in classification efficiency.
CONCLUSION: The paper concludes that both ML and DL techniques hold promise in the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed for the widespread implementation of these techniques in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliations:
- Hari Mohan Rai: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
- Joon Yoo: School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
12. Kaifi R. A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based Classification. Diagnostics (Basel) 2023;13:3007. PMID: 37761373. PMCID: PMC10527911. DOI: 10.3390/diagnostics13183007.
Abstract
Uncontrolled and fast cell proliferation is the cause of brain tumors. Early cancer detection is vitally important to save many lives. Brain tumors can be divided into several categories depending on the kind, place of origin, pace of development, and stage of progression; as a result, tumor classification is crucial for targeted therapy. Brain tumor segmentation aims to accurately delineate the areas of brain tumors. A specialist with a thorough understanding of brain illnesses is needed to manually identify the proper type of brain tumor, and processing many images takes time and is tiresome. Therefore, automatic segmentation and classification techniques are required to speed up and enhance the diagnosis of brain tumors. Tumors can be quickly and safely detected by brain scans using imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown promise in developing algorithms that aid in automatic classification and segmentation utilizing various imaging modalities. The right segmentation method must be used to precisely classify patients with brain tumors to enhance diagnosis and treatment. This review describes multiple types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature extraction, classification, machine learning techniques, deep learning, and transfer learning for studying brain tumors. In this study, we attempted to synthesize brain cancer imaging modalities with automatic computer-assisted methodologies for brain cancer characterization in ML and DL frameworks. Identifying the current problems with the engineering methodologies in use and predicting a future paradigm are other goals of this article.
Affiliation(s)
- Reham Kaifi
- Department of Radiological Sciences, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Jeddah City 22384, Saudi Arabia;
- King Abdullah International Medical Research Center, Jeddah City 22384, Saudi Arabia
- Medical Imaging Department, Ministry of the National Guard—Health Affairs, Jeddah City 11426, Saudi Arabia
13
Zhou T, Zhu S. Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation. Comput Biol Med 2023; 163:107142. [PMID: 37331100 DOI: 10.1016/j.compbiomed.2023.107142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Revised: 05/17/2023] [Accepted: 06/05/2023] [Indexed: 06/20/2023]
Abstract
Brain tumor is one of the most aggressive cancers in the world, and accurate brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning. Although deep learning models have achieved remarkable success in medical segmentation, they can only produce a segmentation map without capturing the segmentation uncertainty. To achieve accurate and safe clinical results, it is necessary to produce additional uncertainty maps to assist subsequent segmentation revision. To this end, we propose to exploit uncertainty quantification in the deep learning model and apply it to multi-modal brain tumor segmentation. In addition, we develop an effective attention-aware multi-modal fusion method to learn complementary feature information from the multiple MR modalities. First, a multi-encoder-based 3D U-Net is proposed to obtain the initial segmentation results. Then, an estimated Bayesian model is presented to measure the uncertainty of the initial segmentation results. Finally, the obtained uncertainty maps are integrated into a deep learning-based segmentation network, serving as additional constraint information to further refine the segmentation results. The proposed network is evaluated on the publicly available BraTS 2018 and BraTS 2019 datasets. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art methods on the Dice score, Hausdorff distance, and sensitivity metrics. Furthermore, the proposed components can easily be applied to other network architectures and other computer vision fields.
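A common, lightweight way to realize voxel-wise uncertainty maps of the kind described above is to average several stochastic forward passes (e.g. Monte Carlo dropout) and score each voxel's mean prediction by its predictive entropy. The sketch below illustrates that idea on toy probabilities; it is not the authors' Bayesian estimator, and the sample values are invented:

```python
import math

def mc_uncertainty(prob_samples):
    """Voxel-wise mean probability and predictive entropy from
    T stochastic forward passes (e.g. Monte Carlo dropout).
    prob_samples: list of T lists of foreground probabilities."""
    T = len(prob_samples)
    n = len(prob_samples[0])
    mean_p = [sum(s[i] for s in prob_samples) / T for i in range(n)]

    def entropy(p):
        # Binary predictive entropy in bits; 0 at fully confident voxels.
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    return mean_p, [entropy(p) for p in mean_p]

# Three simulated passes over four voxels; the third voxel is ambiguous,
# so its entropy (uncertainty) should be the highest.
samples = [[0.9, 0.1, 0.6, 0.95],
           [0.8, 0.2, 0.4, 0.90],
           [1.0, 0.0, 0.5, 0.85]]
mean_p, unc = mc_uncertainty(samples)
```

The resulting `unc` list is exactly the kind of map that can be fed back into a segmentation network as extra constraint information.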
Affiliation(s)
- Tongxue Zhou
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Shan Zhu
- School of Life and Environmental Science, Hangzhou Normal University, Hangzhou, 311121, China.
14
Mowlani K, Jafari Shahbazzadeh M, Hashemipour M. Segmentation and classification of brain tumors using fuzzy 3D highlighting and machine learning. J Cancer Res Clin Oncol 2023; 149:9025-9041. [PMID: 37166578 DOI: 10.1007/s00432-023-04754-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2023] [Accepted: 04/08/2023] [Indexed: 05/12/2023]
Abstract
PURPOSE Brain tumors are among the most lethal forms of cancer, so early diagnosis is crucial. Thanks to machine learning algorithms, radiologists can now make accurate tumor diagnoses without resorting to invasive procedures. There are, however, a number of obstacles to overcome. First, designing the most effective deep learning framework for classifying brain tumors is a significant challenge. Furthermore, manually segmenting a brain tumor is a time-consuming and difficult process that requires the expertise of medical professionals. METHODS Here, we discuss the use of a fuzzy 3D highlighting method for the segmentation of brain tumors and the selection of suspect tumor areas based on the geometric characteristics of MRI scans. After features were extracted from the brain tumor section, the images were classified using two machine learning methods: a support vector machine optimized with the grasshopper optimization algorithm (GOA-SVM), and a deep neural network based on features selected with the genetic algorithm (GA-DNN). These classify brain tumors as benign or malignant. Implemented on the MATLAB platform, the proposed method is evaluated using performance metrics such as sensitivity, accuracy, specificity, and the Youden index. RESULTS The results show that the proposed strategy is significantly superior to the alternatives. The average classification accuracy was 97.53% for GA-DNN and 97.65% for GOA-SVM. CONCLUSION These findings may represent a quick and important step toward detecting lesions alongside cancerous tumors in neurological diagnosis.
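The Youden index listed among the evaluation metrics is simply J = sensitivity + specificity − 1. A minimal sketch with hypothetical confusion-matrix counts (the counts are invented, not from the paper):

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1.
    Ranges from 0 (chance-level classifier) to 1 (perfect)."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity + specificity - 1.0

# Hypothetical counts for a benign/malignant classifier.
j = youden_index(tp=90, fn=10, tn=95, fp=5)
```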
Affiliation(s)
- Khalil Mowlani
- Department of Computer Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran
- Maliheh Hashemipour
- Department of Computer Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran
15
Rai HM. Cancer detection and segmentation using machine learning and deep learning techniques: a review. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 83:27001-27035. [DOI: 10.1007/s11042-023-16520-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 05/12/2023] [Accepted: 08/13/2023] [Indexed: 09/16/2023]
16
Sreekumar SP, Palanisamy R, Swaminathan R. Semantic Segmentation of Cell Painted Organelles using DeepLabv3plus Model. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38082807 DOI: 10.1109/embc40787.2023.10340728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Cell painting-based high-content fluorescence imaging offers deep insight into functional and biological changes in subcellular structures. However, the need for advanced instrumentation and the limited availability of suitable fluorescent dyes restrict the technique's ability to comprehensively characterize cell morphology. Therefore, generating fluorescence-specific organelle images from transmitted light microscopy provides an alternative solution for clinical applications. In this work, the utility of a semantic segmentation deep network for predicting the Endoplasmic Reticulum (ER), cytoplasm, and nuclei from a composite image is investigated. A public dataset of 3456 composite images from the Broad Bioimage Benchmark Collection is considered. Pixel-wise labeling is carried out with the generated binary masks for ER, cytoplasm, and nuclei. The DeepLabv3plus architecture with Atrous Spatial Pyramid Pooling (ASPP) and depthwise separable convolution is used as the learning model for semantic segmentation. The accuracy and loss at different learning rates are analyzed, and the segmentation results are validated using the Jaccard index, mean Boundary F (BF) score, and Dice index. The trained model achieved 97.86% accuracy with a loss of 0.07 at a learning rate of 0.01. The mean BF score, Dice index, and Jaccard index for nuclei, ER, and cytoplasm are (0.98, 0.94, 0.88), (0.97, 0.82, 0.7), and (0.95, 0.88, 0.66), respectively. The results indicate that the adopted methodology can delineate subcellular structures by accurately detecting sharp object boundaries. Therefore, this study could be useful for predicting cell-painted images from transmitted light microscopy without the requirement of fluorescent labeling.
17
Ottom MA, Abdul Rahman H, Alazzam IM, Dinov ID. Multimodal Stereotactic Brain Tumor Segmentation Using 3D-Znet. Bioengineering (Basel) 2023; 10:bioengineering10050581. [PMID: 37237652 DOI: 10.3390/bioengineering10050581] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/12/2023] [Accepted: 04/19/2023] [Indexed: 05/28/2023] Open
Abstract
Stereotactic brain tumor segmentation based on 3D neuroimaging data is a challenging task due to the complexity of the brain architecture, extreme heterogeneity of tumor malformations, and the extreme variability of intensity signal and noise distributions. Early tumor diagnosis can help medical professionals to select optimal medical treatment plans that can potentially save lives. Artificial intelligence (AI) has previously been used for automated tumor diagnostics and segmentation models. However, the model development, validation, and reproducibility processes are challenging. Often, cumulative efforts are required to produce a fully automated and reliable computer-aided diagnostic system for tumor segmentation. This study proposes an enhanced deep neural network approach, the 3D-Znet model, based on the variational autoencoder-autodecoder Znet method, for segmenting 3D MR (magnetic resonance) volumes. The 3D-Znet artificial neural network architecture relies on fully dense connections to enable the reuse of features on multiple levels to improve model performance. It consists of four encoders and four decoders along with the initial input and the final output blocks. Encoder-decoder blocks in the network include double convolutional 3D layers, 3D batch normalization, and an activation function. These are followed by size normalization between inputs and outputs and network concatenation across the encoding and decoding branches. The proposed deep convolutional neural network model was trained and validated using a multimodal stereotactic neuroimaging dataset (BraTS2020) that includes multimodal tumor masks. Evaluation of the pretrained model resulted in the following dice coefficient scores: Whole Tumor (WT) = 0.91, Tumor Core (TC) = 0.85, and Enhanced Tumor (ET) = 0.86. The performance of the proposed 3D-Znet method is comparable to other state-of-the-art methods. 
Our protocol demonstrates the importance of data augmentation to avoid overfitting and enhance model performance.
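The Dice coefficient used to score WT, TC, and ET above can be computed directly from a pair of binary masks; a minimal illustrative sketch (toy masks, not BraTS data):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flattened lists of 0/1 voxels:
    DSC = 2 * |pred ∩ truth| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks are conventionally a perfect match.
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
score = dice(pred, truth)  # 2*2 / (3+3) = 0.666...
```

In practice each tumor sub-region (WT, TC, ET) gets its own binary mask, and this score is computed per region.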
Affiliation(s)
- Mohammad Ashraf Ottom
- Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI 48104, USA
- Department of Information Systems, Yarmouk University, Irbid 21163, Jordan
- Hanif Abdul Rahman
- Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI 48104, USA
- PAPRSB Institute of Health Sciences, Universiti Brunei Darussalam, Gadong BE1410, Brunei
- Iyad M Alazzam
- Department of Information Systems, Yarmouk University, Irbid 21163, Jordan
- Ivo D Dinov
- Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI 48104, USA
18
Anusooya G, Bharathiraja S, Mahdal M, Sathyarajasekaran K, Elangovan M. Self-Supervised Wavelet-Based Attention Network for Semantic Segmentation of MRI Brain Tumor. SENSORS (BASEL, SWITZERLAND) 2023; 23:2719. [PMID: 36904923 PMCID: PMC10007092 DOI: 10.3390/s23052719] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 02/28/2023] [Accepted: 02/28/2023] [Indexed: 06/18/2023]
Abstract
To determine the appropriate treatment plan for patients, radiologists must reliably detect brain tumors. Although manual segmentation requires a great deal of knowledge and skill, it may still be inaccurate. By evaluating the size, location, structure, and grade of the tumor, automatic tumor segmentation in MRI images aids in a more thorough analysis of pathological conditions. Due to intensity differences in MRI images, gliomas may spread out, have low contrast, and are therefore difficult to detect. As a result, segmenting brain tumors is a challenging process. Several methods for segmenting brain tumors in MRI scans have been created in the past. However, because of their susceptibility to noise and distortion, the usefulness of these approaches is limited. We propose the Self-Supervised Wavelet-based Attention Network (SSW-AN), a new attention module with adjustable self-supervised activation functions and dynamic weights, to collect global context information. In particular, this network's inputs and labels are made up of the four sub-bands produced by the two-dimensional (2D) wavelet transform, which simplifies training by neatly separating the data into low-frequency and high-frequency channels. More precisely, we make use of the channel attention and spatial attention modules of the self-supervised attention block (SSAB). As a result, this method can more easily focus on crucial underlying channels and spatial patterns. The proposed SSW-AN is shown to outperform current state-of-the-art algorithms on medical image segmentation tasks, with higher accuracy, better reliability, and less redundancy.
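The four low-/high-frequency channels that a network like SSW-AN consumes come from a one-level 2D wavelet decomposition. A minimal pure-Python sketch using the Haar wavelet (the abstract does not name its wavelet, so Haar is an assumption) shows how an image splits into LL, LH, HL, and HH sub-bands:

```python
def haar2d(img):
    """One-level 2D Haar transform. Returns the four sub-bands
    (LL, LH, HL, HH): a low-frequency approximation plus three
    high-frequency detail channels. img: 2D list, even dimensions."""
    def rows_pass(m):
        # Average / difference adjacent pairs within each row.
        lo, hi = [], []
        for r in m:
            lo.append([(r[i] + r[i + 1]) / 2 for i in range(0, len(r), 2)])
            hi.append([(r[i] - r[i + 1]) / 2 for i in range(0, len(r), 2)])
        return lo, hi

    def cols_pass(m):
        # Same pass along columns, via transpose.
        t = [list(c) for c in zip(*m)]
        lo, hi = rows_pass(t)
        return [list(c) for c in zip(*lo)], [list(c) for c in zip(*hi)]

    L, H = rows_pass(img)
    LL, LH = cols_pass(L)
    HL, HH = cols_pass(H)
    return LL, LH, HL, HH

img = [[4, 4, 2, 2],
       [4, 4, 2, 2],
       [8, 0, 6, 0],
       [8, 0, 6, 0]]
LL, LH, HL, HH = haar2d(img)
```

The smooth top half of `img` lands entirely in LL, while the alternating bottom half produces non-zero detail coefficients, which is exactly the low-/high-frequency channel split the abstract describes.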
Affiliation(s)
- Miroslav Mahdal
- Department of Control Systems and Instrumentation, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
19
Padma Usha M, Kannan G, Ramamoorthy M. An efficient Berkeley’s wavelet convolutional transfer learning and local binary Gabor fuzzy C-means clustering for brain tumour detection. THE IMAGING SCIENCE JOURNAL 2023. [DOI: 10.1080/13682199.2023.2166805] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/04/2023]
Affiliation(s)
- M. Padma Usha
- Department of Electronics and Communication Engineering, B.S. Abdur Rahman, Crescent Institute of Science and Technology, Chennai, India
- G. Kannan
- Department of Electronics and Communication Engineering, B.S. Abdur Rahman, Crescent Institute of Science and Technology, Chennai, India
- M. Ramamoorthy
- Department of Artificial Intelligence and Machine Learning, Saveetha School of Engineering, SIMATS, Chennai, India
20
Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D Medical Image Transformation using CSGAN. Comput Biol Med 2023; 153:106541. [PMID: 36652868 DOI: 10.1016/j.compbiomed.2023.106541] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 11/30/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computer vision techniques are a rapidly growing means of transforming medical images for specific medical applications. This paper proposes an end-to-end 3D medical image transformation model based on a CSGAN, named 3D-MedTranCSGAN. The 3D-MedTranCSGAN model integrates non-adversarial loss components with Cyclic Synthesized Generative Adversarial Networks. The proposed model utilizes PatchGAN's discriminator network to penalize the difference between the synthesized image and the original image. The model also computes non-adversarial loss functions such as content, perceptual, and style transfer losses. 3DCascadeNet, a new generator architecture introduced in the paper, is used to enhance the perceptiveness of the transformed medical image through encoding-decoding pairs. We use the 3D-MedTranCSGAN model for various tasks without application-specific modification: PET to CT image transformation; reconstruction of CT to PET; correction of motion artefacts in MR images; and denoising of PET images. We found that 3D-MedTranCSGAN outperformed other transformation methods in our experiments. For the first task, the proposed model yields an SSIM of 0.914, PSNR of 26.12, MSE of 255.5, VIF of 0.4862, UQI of 0.9067, and LPIPS of 0.2284. For the second task, the corresponding values are 0.9197, 25.7, 257.56, 0.4962, 0.9027, and 0.2262. For the third task, the model yields 0.8862, 24.94, 0.4071, 0.6410, and 0.2196. For the final task, it yields 0.9521, 33.67, 33.57, 0.6091, 0.9255, and 0.0244. Based on this analysis, the proposed model outperforms the other techniques.
Affiliation(s)
- S Poonkodi
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
- M Kanchana
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India.
21
Rehman MU, Ryu J, Nizami IF, Chong KT. RAAGR2-Net: A brain tumor segmentation network using parallel processing of multiple spatial frames. Comput Biol Med 2023; 152:106426. [PMID: 36565485 DOI: 10.1016/j.compbiomed.2022.106426] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 11/16/2022] [Accepted: 12/13/2022] [Indexed: 12/24/2022]
Abstract
Brain tumors are one of the most fatal cancers. Magnetic Resonance Imaging (MRI) is a non-invasive method that provides multi-modal images containing important information regarding the tumor. Many contemporary techniques employ four modalities: T1-weighted (T1), T1-weighted with contrast (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (FLAIR), each of which provides unique and important characteristics for the location of each tumor. Although several modern procedures provide decent segmentation results on the multimodal brain tumor image segmentation benchmark (BraTS) dataset, they lack performance when evaluated simultaneously on all the regions of MRI images. Furthermore, there is still room for improvement due to parameter limitations and computational complexity. Therefore, in this work, a novel encoder-decoder-based architecture is proposed for the effective segmentation of brain tumor regions. Data pre-processing is performed by applying N4 bias field correction, z-score, and 0 to 1 resampling to facilitate model training. To minimize the loss of location information in different modules, a residual spatial pyramid pooling (RASPP) module is proposed. RASPP is a set of parallel layers using dilated convolution. In addition, an attention gate (AG) module is used to efficiently emphasize and restore the segmented output from extracted feature maps. The proposed modules attempt to acquire rich feature representations by combining knowledge from diverse feature maps and retaining their local information. The performance of the proposed deep network based on RASPP, AG, and recursive residual (R2) blocks, termed RAAGR2-Net, is evaluated on the BraTS benchmarks. The experimental results show that the suggested network outperforms existing networks, demonstrating the usefulness of the proposed modules for "fine" segmentation. The code for this work is made available online at: https://github.com/Rehman1995/RAAGR2-Net.
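Of the pre-processing steps listed (N4 bias field correction, z-score, 0-to-1 resampling), the last two are simple enough to sketch; the intensities below are toy values, and N4 correction is omitted since it requires a dedicated library:

```python
def zscore_then_unit(voxels):
    """Z-score normalize a list of voxel intensities, then rescale
    the result to [0, 1], mirroring two of the pre-processing steps
    described for RAAGR2-Net (N4 bias correction omitted)."""
    n = len(voxels)
    mean = sum(voxels) / n
    std = (sum((v - mean) ** 2 for v in voxels) / n) ** 0.5
    z = [(v - mean) / std for v in voxels]        # z-score
    lo, hi = min(z), max(z)
    return [(v - lo) / (hi - lo) for v in z]      # min-max to [0, 1]

out = zscore_then_unit([10.0, 20.0, 30.0, 40.0])
```

In real pipelines the statistics are usually computed over nonzero (brain) voxels only, so the background does not skew the mean and standard deviation.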
Affiliation(s)
- Mobeen Ur Rehman
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea.
- Jihyoung Ryu
- Electronics and Telecommunications Research Institute, 176-11 Cheomdan Gwagi-ro, Buk-gu, Gwangju 61012, Republic of Korea.
- Imran Fareed Nizami
- Department of Electrical Engineering, Bahria University, Islamabad, Pakistan.
- Kil To Chong
- Department of Electronics and Information Engineering, Jeonbuk National University, Jeonju 54896, South Korea; Advances Electronics and Information Research Center, Jeonbuk National University, Jeonju 54896, South Korea.
22
Sahli H, Ben Slama A, Zeraii A, Labidi S, Sayadi M. ResNet-SVM: Fusion based glioblastoma tumor segmentation and classification. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:27-48. [PMID: 36278391 DOI: 10.3233/xst-221240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Computerized segmentation of brain tumors based on magnetic resonance imaging (MRI) data is an important and challenging task in computer vision. In image segmentation, numerous studies have explored the feasibility and advantages of employing deep neural network methods to automatically detect and segment brain tumors depicted on MRI. Training deeper neural networks usually requires extensive computational power and is very time-consuming due to model complexity and the gradient diffusion problem. To help address this challenge, we present an automatic approach for glioblastoma brain tumor segmentation based on a deep Residual Learning Network (ResNet), which overcomes the gradient problem of deep Convolutional Neural Networks (CNNs). Using the extra layers added to a deep neural network, the ResNet algorithm can effectively improve accuracy and performance, which is useful for solving complex problems with a much faster training process. An additional method is then proposed to fully automatically classify different brain tumor categories (necrosis, edema, and enhancing regions). Results confirm that the proposed fusion method (ResNet-SVM) improves classification, with an accuracy of 89.36%, specificity of 92.52%, and precision of 90.12%, using 260 MRI scans for training and 112 for testing and validation of glioblastoma tumor cases. Compared to state-of-the-art methods, the proposed scheme provides higher performance in identifying the glioblastoma tumor type.
Affiliation(s)
- Hanene Sahli
- Laboratory of Signal Image and Energy Mastery, National Higher Engineering School of Tunis, University of Tunis, Tunis, Tunisia
- Amine Ben Slama
- Laboratory of Biophysics and Medical Technologies, Higher Institute of Medical Technologies of Tunis, University of Tunis El Manar, Tunis, Tunisia
- Abderrazek Zeraii
- Laboratory of Biophysics and Medical Technologies, Higher Institute of Medical Technologies of Tunis, University of Tunis El Manar, Tunis, Tunisia
- Salam Labidi
- Laboratory of Biophysics and Medical Technologies, Higher Institute of Medical Technologies of Tunis, University of Tunis El Manar, Tunis, Tunisia
- Mounir Sayadi
- Laboratory of Signal Image and Energy Mastery, National Higher Engineering School of Tunis, University of Tunis, Tunis, Tunisia
23
Brain tumor detection using deep ensemble model with wavelet features. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00699-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
24
Sekaran R, Munnangi AK, Ramachandran M, Gandomi AH. 3D brain slice classification and feature extraction using Deformable Hierarchical Heuristic Model. Comput Biol Med 2022; 149:105990. [PMID: 36030723 DOI: 10.1016/j.compbiomed.2022.105990] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Revised: 08/07/2022] [Accepted: 08/14/2022] [Indexed: 11/18/2022]
Abstract
Brain tumors are the most frequently occurring and severe type of cancer, with a life expectancy of only a few months in the most advanced stages. As a result, planning the best course of therapy is critical to improving a patient's ability to fight cancer and their quality of life. Various imaging modalities, such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound imaging, are commonly employed to assess a brain tumor. This research proposes a novel technique for extracting and classifying tumor features in 3D brain slice images. After the input images are processed for noise removal, resizing, and smoothing, features of the brain tumor are extracted using a Volume of Interest (VOI). The extracted features are then classified using the Deformable Hierarchical Heuristic Model-Deep Deconvolutional Residual Network (DHHM-DDRN) based on surfaces, curves, and geometric patterns. Experimental results show that the proposed approach obtained an accuracy of 95%, DSC of 83%, precision of 80%, recall of 85%, and F1 score of 55% for classifying brain cancer features.
Affiliation(s)
- Ramesh Sekaran
- Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
- Ashok Kumar Munnangi
- Department of Information Technology, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
- Amir H Gandomi
- Faculty of Engineering & Information Systems, University of Technology Sydney, Sydney, Australia.
25
An Efficient Plant Disease Recognition System Using Hybrid Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs) for Smart IoT Applications in Agriculture. INT J COMPUT INT SYS 2022. [DOI: 10.1007/s44196-022-00129-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
In recent times, the Internet of Things (IoT) and Deep Learning Models (DLMs) can be utilized to develop smart agriculture that efficiently determines the exact location of the diseased part of a leaf on farmland. Convolutional neural networks (CNNs) have achieved the latest milestones in many aspects of human life and in the farming sector. Semantic image segmentation is considered a central problem in computer vision. Despite tremendous progress in applications, nearly all semantic image segmentation algorithms fall short because of a lack of sensitivity to detail, problems in assessing the global similarity of image pixels, or both. Post-processing refinement methods, a critical means of correcting the underlying flaws mentioned above, rely almost entirely on Conditional Random Fields (CRFs). Plant disease prediction therefore plays an important role in the early notification of disease, mitigating its effects, for disease forecasting in the smart farming arena. Hence, this work proposes an efficient IoT-based plant disease recognition system using semantic segmentation methods such as FCN-8s, CED-Net, SegNet, DeepLabv3, and U-Net, combined with the CRF method, to localize diseased parts of leaf crops. The networks are evaluated and compared with the state of the art. Experimental results are reported and compared in terms of F1-score, sensitivity, and intersection over union (IoU). The proposed system with SegNet and CRFs achieves higher results than the other methods. The superiority and effectiveness of the mentioned refinement method, as well as its range of applicability, are confirmed through experiments.
26
Kursad Poyraz A, Dogan S, Akbal E, Tuncer T. Automated brain disease classification using exemplar deep features. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103448] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
27
Liu Y, Du J, Vong CM, Yue G, Yu J, Wang Y, Lei B, Wang T. Scale-adaptive super-feature based MetricUNet for brain tumor segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103442] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
28
Interpretable Model Based on Pyramid Scene Parsing Features for Brain Tumor MRI Image Segmentation. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:8000781. [PMID: 35140806 PMCID: PMC8820931 DOI: 10.1155/2022/8000781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 01/15/2022] [Indexed: 12/05/2022]
Abstract
Due to the black-box nature of convolutional neural networks, computer-aided diagnosis methods based on deep learning are usually poorly interpretable. The diagnostic results obtained by these unexplained methods therefore struggle to gain the trust of patients and doctors, which limits their application in the medical field. To solve this problem, an interpretable deep learning image segmentation framework is proposed in this paper for processing brain tumor magnetic resonance images. A gradient-based class activation mapping method is introduced into the pyramid-structure-based segmentation model to explain it visually. The pyramid structure builds global context information from features after multiple pooling layers to improve image segmentation performance. Class activation mapping is therefore used to visualize the features attended to by each layer of the pyramid structure and to interpret PSPNet. After training and testing the model on the public dataset BraTS2018, several sets of visualization results were obtained. Analysis of these visualization results demonstrates the effectiveness of the pyramid structure in the brain tumor segmentation task, and some improvements are made to the pyramid model's structure based on the shortcomings revealed in the visualizations. In summary, the interpretable brain tumor image segmentation method proposed in this paper explains the role of the pyramid structure in brain tumor image segmentation, providing a direction for applying interpretable methods to brain tumor segmentation, with practical value for the evaluation and optimization of brain tumor segmentation models.
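The gradient-based class activation mapping referred to here follows the Grad-CAM recipe: each feature map is weighted by the spatial average of its gradient, the weighted maps are summed over channels, and a ReLU keeps only positive evidence. A toy sketch of that core formula (illustrative values, not the paper's model):

```python
def grad_cam(activations, gradients):
    """Gradient-based class activation map: weight each feature
    map A_k by alpha_k (the global-average-pooled gradient for
    channel k), sum over channels, then apply ReLU.
    activations, gradients: K feature maps, each an HxW list of lists."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for A, G in zip(activations, gradients):
        alpha = sum(sum(row) for row in G) / (h * w)  # global average pool
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * A[i][j]
    # ReLU: keep only features that positively support the class.
    return [[max(0.0, v) for v in row] for row in cam]

# Two 2x2 feature maps with opposite gradient signs (toy values):
# the second channel's negative weight is suppressed by the ReLU.
acts  = [[[1.0, 0.0], [0.0, 0.0]], [[0.0, 2.0], [0.0, 0.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
cam = grad_cam(acts, grads)
```

Upsampling `cam` to the input resolution and overlaying it on the MR slice gives the per-layer visualizations the abstract describes.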
29
Aju D., Joseph SS. 3D Reconstruction Methods Purporting 3D Visualization and Volume Estimation of Brain Tumors. INTERNATIONAL JOURNAL OF E-COLLABORATION 2022. [DOI: 10.4018/ijec.290296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This work proposes the Crust algorithm for 3D reconstruction of brain tumors, an effective mechanism for visualizing tumors for presurgical planning and radiation dose calculation. Despite the promising performance of the Crust algorithm in reconstructing the Stanford models, it had not previously been considered for 3D reconstruction of brain tumors. The results are validated by comparing the 3D models from two cutting-edge techniques, namely the Marching Cubes and Alpha Shape algorithms. The results show that the Crust algorithm produces brain tumor models with an average triangle-mesh quality ranging from 0.85 to 0.95. Concerning visual realism, the quality of the Crust algorithm models is higher than that of the other models. The precision of tumor volume measurement by the convex hull method is analyzed via repeatability and reproducibility; the standard deviations of repeatability were between 2.03% and 3.97%. The experimental results show that the Linear Crust algorithm produces high-quality meshes with an average quality of equilateral triangles close to 1.
Affiliation(s)
- Aju D.
- Vellore Institute of Technology, India
|
30
|
Luo D, Zeng W, Chen J, Tang W. Deep Learning for Automatic Image Segmentation in Stomatology and Its Clinical Application. FRONTIERS IN MEDICAL TECHNOLOGY 2021; 3:767836. [PMID: 35047964 PMCID: PMC8757832 DOI: 10.3389/fmedt.2021.767836] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/29/2021] [Indexed: 11/16/2022] Open
Abstract
Deep learning has become an active research topic in the field of medical image analysis. In particular, great advances have been made in segmentation performance for the automatic segmentation of stomatological images. In this paper, we systematically reviewed the recent literature on deep learning-based segmentation methods for stomatological images and their clinical applications. We categorized them into different tasks and analyzed their advantages and disadvantages. The main categories that we explored were data sources, backbone networks, and task formulations. We categorized data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguished methods based on convolutional neural networks from those based on transformers. We divided task formulations into semantic segmentation and instance segmentation tasks. Toward the end of the paper, we discussed the challenges and provided several directions for further research on the automatic segmentation of stomatological images.
Affiliation(s)
- Wei Tang
- The State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases & Department of Oral and Maxillofacial Surgery, West China College of Stomatology, Sichuan University, Chengdu, China
|
31
|
Valizadeh A, Shariatee M. The Progress of Medical Image Semantic Segmentation Methods for Application in COVID-19 Detection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:7265644. [PMID: 34840563 PMCID: PMC8611358 DOI: 10.1155/2021/7265644] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 10/18/2021] [Indexed: 11/17/2022]
Abstract
Medical image semantic segmentation has been employed in various areas, including medical imaging, computer vision, and intelligent transportation. In this study, semantic segmentation methods are split into two groups: deep neural network methods and earlier traditional methods. The traditional methods and the published segmentation datasets are reviewed in the first step. Current deep neural network methods are then thoroughly explored, covering fully convolutional networks, sampling methods, FCNs combined with CRF methods, dilated convolutional neural network methods, improvements in network structure, pyramid methods, multistage and multifeature methods, and supervised, semi-supervised, and unsupervised methods. Finally, a general conclusion on the use of these deep neural network advances in semantic segmentation is presented.
Affiliation(s)
- Amin Valizadeh
- Department of Mechanical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Morteza Shariatee
- Department of Mechanical Engineering, Iowa State University, Ames, IA, USA
|
32
|
Abstract
Brain tumors occur owing to uncontrolled and rapid growth of cells. If not treated at an initial phase, they may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, this survey provides all the important literature on the detection of brain tumors, with their advantages, limitations, developments, and future trends.
|