1
Verma A, Yadav AK. Brain tumor segmentation with deep learning: Current approaches and future perspectives. J Neurosci Methods 2025; 418:110424. [PMID: 40122469] [DOI: 10.1016/j.jneumeth.2025.110424]
Abstract
BACKGROUND Accurate brain tumor segmentation from MRI images is critical in the medical field, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. METHOD This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, as well as machine learning and deep learning approaches within supervised techniques. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer learning-based, transformer-based, and hybrid transformer-based methods. SCOPE AND COVERAGE This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks. It provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. COMPARISON WITH EXISTING METHODS The analysis highlights the current challenges in computer-aided diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance in medical applications, focusing on their potential to improve diagnostic and treatment planning procedures. CONCLUSION The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS dataset, the benchmark dataset of the annual MICCAI Multimodal Brain Tumor Segmentation Challenge. This evaluation provides insights into the current state of the art and identifies key areas for future research and development.
Affiliation(s)
- Akash Verma
- Department of Computer Science & Engineering, NIT Hamirpur (HP), India.
- Arun Kumar Yadav
- Department of Computer Science & Engineering, NIT Hamirpur (HP), India.
2
Zhou J, Wang S, Wang H, Li Y, Li X. Multi-Modality Fusion and Tumor Sub-Component Relationship Ensemble Network for Brain Tumor Segmentation. Bioengineering (Basel) 2025; 12:159. [PMID: 40001679] [PMCID: PMC11851405] [DOI: 10.3390/bioengineering12020159]
Abstract
Deep learning technology has been widely used in brain tumor segmentation with multi-modality magnetic resonance imaging, helping doctors achieve faster and more accurate diagnoses. Previous studies have demonstrated that the weighted fusion segmentation method effectively extracts modality importance, laying a solid foundation for multi-modality magnetic resonance imaging segmentation. However, the challenge of fusing multi-modality features with single-modality features remains unresolved, which motivated us to explore an effective fusion solution. We propose a multi-modality and single-modality feature recalibration network for magnetic resonance imaging brain tumor segmentation. Specifically, we designed a dual recalibration module that achieves accurate feature calibration by integrating the complementary features of multi-modality with the specific features of a single modality. Experimental results on the BraTS 2018 dataset showed that the proposed method outperformed existing multi-modal network methods across multiple evaluation metrics, with spatial recalibration significantly improving the results, including Dice score increases of 1.7%, 0.5%, and 1.6% for the enhancing tumor, whole tumor, and tumor core regions, respectively.
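For readers outside the BraTS community, the three sub-regions reported above (enhancing tumor, whole tumor, tumor core) are typically composed from the label map; a minimal sketch assuming the common BraTS 2018 label convention (1 = necrotic/non-enhancing core, 2 = peritumoral edema, 4 = enhancing tumor), not code from the paper:

```python
import numpy as np

def brats_subregions(label_map: np.ndarray) -> dict:
    """Compose the evaluated sub-regions from a BraTS-style label map.

    Assumes the common BraTS 2018 convention (1 = necrotic/non-enhancing core,
    2 = peritumoral edema, 4 = enhancing tumor); an assumption, not from the paper.
    """
    return {
        "whole tumor": np.isin(label_map, (1, 2, 4)),
        "tumor core": np.isin(label_map, (1, 4)),
        "enhancing tumor": label_map == 4,
    }
```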
Affiliation(s)
- Jinyan Zhou
- Basic Medical College, Heilongjiang University of Chinese Medicine, Harbin 150040, China
- Shuwen Wang
- Basic Medical College, Heilongjiang University of Chinese Medicine, Harbin 150040, China
- Hao Wang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yaxue Li
- Basic Medical College, Heilongjiang University of Chinese Medicine, Harbin 150040, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Hebei Key Laboratory of Micro-Nano Precision Optical Sensing and Measurement Technology, Qinhuangdao 066004, China
3
Li R, Liao Y, Huang Y, Ma X, Zhao G, Wang Y, Song C. DeepGlioSeg: advanced glioma MRI data segmentation with integrated local-global representation architecture. Front Oncol 2025; 15:1449911. [PMID: 39968077] [PMCID: PMC11832817] [DOI: 10.3389/fonc.2025.1449911]
Abstract
Introduction Glioma segmentation is vital for diagnostic decision-making, monitoring disease progression, and surgical planning. However, this task is hindered by substantial heterogeneity within gliomas and imbalanced region distributions, posing challenges to existing segmentation methods. Methods To address these challenges, we propose the DeepGlioSeg network, a U-shaped architecture with skip connections for continuous contextual feature integration. The model includes two primary components. First, a CTPC (CNN-Transformer Parallel Combination) module leverages parallel branches of CNN and Transformer networks to fuse local and global features of glioma images, enhancing feature representation. Second, the model computes a region-based probability by comparing the number of pixels in tumor and background regions and assigns greater weight to regions with lower probabilities, thereby focusing on the tumor segment. Test-time augmentation (TTA) and volume-constrained (VC) post-processing are subsequently applied to refine the final segmentation outputs. Results Extensive experiments were conducted on three publicly available glioma MRI datasets and one privately owned clinical dataset. The quantitative and qualitative findings consistently show that DeepGlioSeg achieves superior segmentation performance over other state-of-the-art methods. Discussion By integrating CNN- and Transformer-based features in parallel and adaptively emphasizing underrepresented tumor regions, DeepGlioSeg effectively addresses the challenges associated with glioma heterogeneity and imbalanced region distributions. The final pipeline, augmented with TTA and VC post-processing, demonstrates robust segmentation capabilities. The source code for this work is publicly available at https://github.com/smallboy-code/Brain-tumor-segmentation.
Affiliation(s)
- Ruipeng Li
- Department of Urology, Hangzhou Third People’s Hospital, Hangzhou, China
- Yuehui Liao
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Yueqi Huang
- Department of Psychiatry, Hangzhou Seventh People’s Hospital, Hangzhou, China
- Xiaofei Ma
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Guohua Zhao
- Department of Magnetic Resonance Imaging, First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yanbin Wang
- Department of Urology, Hangzhou Third People’s Hospital, Hangzhou, China
- Chen Song
- Department of Urology, Hangzhou Third People’s Hospital, Hangzhou, China
4
Zhou T. M2GCNet: Multi-Modal Graph Convolution Network for Precise Brain Tumor Segmentation Across Multiple MRI Sequences. IEEE Trans Image Process 2024; 33:4896-4910. [PMID: 39236123] [DOI: 10.1109/tip.2024.3451936]
Abstract
Accurate segmentation of brain tumors across multiple MRI sequences is essential for diagnosis, treatment planning, and clinical decision-making. In this paper, I propose a cutting-edge framework, named multi-modal graph convolution network (M2GCNet), to explore the relationships across different MR modalities, and address the challenge of brain tumor segmentation. The core of M2GCNet is the multi-modal graph convolution module (M2GCM), a pivotal component that represents MR modalities as graphs, with nodes corresponding to image pixels and edges capturing latent relationships between pixels. This graph-based representation enables the effective utilization of both local and global contextual information. Notably, M2GCM comprises two important modules: the spatial-wise graph convolution module (SGCM), adept at capturing extensive spatial dependencies among distinct regions within an image, and the channel-wise graph convolution module (CGCM), dedicated to modelling intricate contextual dependencies among different channels within the image. Additionally, acknowledging the intrinsic correlation present among different MR modalities, a multi-modal correlation loss function is introduced. This novel loss function aims to capture specific nonlinear relationships between correlated modality pairs, enhancing the model's ability to achieve accurate segmentation results. The experimental evaluation on two brain tumor datasets demonstrates the superiority of the proposed M2GCNet over other state-of-the-art segmentation methods. Furthermore, the proposed method paves the way for improved tumor diagnosis, multi-modal information fusion, and a deeper understanding of brain tumor pathology.
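M2GCNet's graph modules are learned end to end; the underlying idea of treating pixels as graph nodes with feature-derived edge weights can be illustrated by a single dense graph-convolution step. A rough sketch only (Gaussian affinities stand in for the learned edges; this is not the paper's SGCM/CGCM):

```python
import numpy as np

def pixel_graph_conv(feat: np.ndarray, weight: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """One dense graph-convolution step over a pixel graph.

    feat: (num_pixels, C) per-pixel features; weight: (C, C_out) projection matrix.
    Edge weights use a Gaussian kernel on feature distance (illustrative only).
    """
    sq = np.sum(feat ** 2, axis=1)
    dist2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * feat @ feat.T, 0.0)
    adj = np.exp(-dist2 / (2.0 * sigma ** 2))        # dense affinity matrix
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))      # symmetric degree normalisation
    adj_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return adj_norm @ feat @ weight                  # propagate over the graph, then project
```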
5
Huang L, Zhang N, Yi Y, Zhou W, Zhou B, Dai J, Wang J. SAMCF: Adaptive global style alignment and multi-color spaces fusion for joint optic cup and disc segmentation. Comput Biol Med 2024; 178:108639. [PMID: 38878394] [DOI: 10.1016/j.compbiomed.2024.108639]
Abstract
The optic cup (OC) and optic disc (OD) are two critical structures in retinal fundus images, and their relative positions and sizes are essential for effectively diagnosing eye diseases. With the success of deep learning in computer vision, deep learning-based segmentation models have been widely used for joint optic cup and disc segmentation. However, there are three prominent issues that impact segmentation performance. First, significant differences among datasets collected from various institutions, protocols, and devices lead to performance degradation of models. Second, we find that images with only RGB information struggle to counteract the interference caused by brightness variations, affecting color representation capability. Finally, existing methods typically ignore edge perception and face challenges in obtaining clear and smooth edge segmentation results. To address these drawbacks, we propose a novel framework based on Style Alignment and Multi-Color Fusion (SAMCF) for joint OC and OD segmentation. Initially, we introduce a domain generalization method to generate uniformly styled images without damaging image content, mitigating domain shift issues. Next, based on multiple color spaces, we propose a feature extraction and fusion network aiming to handle brightness variation interference and improve color representation capability. Lastly, an edge-aware loss is designed to generate fine edge segmentation results. Our experiments conducted on three public datasets, DGS, RIM, and REFUGE, demonstrate that our proposed SAMCF achieves superior performance to existing state-of-the-art methods. Moreover, SAMCF exhibits remarkable generalization ability across multiple retinal fundus image datasets, showcasing its strong generality.
Affiliation(s)
- Longjun Huang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Ningyi Zhang
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Yugen Yi
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Wei Zhou
- College of Computer Science, Shenyang Aerospace University, Shenyang, 110136, China
- Bin Zhou
- School of Software, Nanchang Key Laboratory for Blindness and Visual Impairment Prevention Technology and Equipment, Jiangxi Normal University, Nanchang, 330022, China
- Jiangyan Dai
- School of Computer Engineering, Weifang University, 261061, China
- Jianzhong Wang
- College of Information Science and Technology, Northeast Normal University, Changchun, 130117, China
6
Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085] [DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, Chongqing
- Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou
- School of Computer Science, Chongqing University, Chongqing, Chongqing
- Xin Ning
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari
- School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang
- Department of Bioengineering, Imperial College London, London, UK
- Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
- National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, London, UK
7
Wang X, Liu S, Yang N, Chen F, Ma L, Ning G, Zhang H, Qiu X, Liao H. A Segmentation Framework With Unsupervised Learning-Based Label Mapper for the Ventricular Target of Intracranial Germ Cell Tumor. IEEE J Biomed Health Inform 2023; 27:5381-5392. [PMID: 37651479] [DOI: 10.1109/jbhi.2023.3310492]
Abstract
Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment methods. Radiation of the whole ventricle system and the local tumor can reduce complications in the late stage of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shapes and the hydrocephalus-induced ventricle dilation increase the difficulty for automatic segmentation algorithms. Therefore, this study proposed a fully automatic segmentation framework. Firstly, we designed a novel unsupervised learning-based label mapper, which is used to handle ventricle shape variations and obtain the preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growing algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method is beneficial for physicians delineating radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
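The paper's improved region growing and conditional-random-field refinement are not reproduced here, but the plain intensity-based region growing they build on can be sketched as follows (a minimal sketch assuming a 3D volume and a single seed voxel):

```python
import numpy as np
from collections import deque

def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from `seed`, adding 6-connected voxels whose intensity
    stays within `tol` of the seed intensity (illustrative baseline only)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    seed_val = float(volume[seed])
    queue = deque([seed])
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(float(volume[nz, ny, nx]) - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```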
8
Berkley A, Saueressig C, Shukla U, Chowdhury I, Munoz-Gauna A, Shehu O, Singh R, Munbodh R. Clinical capability of modern brain tumor segmentation models. Med Phys 2023; 50:4943-4959. [PMID: 36847185] [DOI: 10.1002/mp.16321]
Abstract
PURPOSE State-of-the-art automated segmentation methods achieve exceptionally high performance on the Brain Tumor Segmentation (BraTS) challenge, a dataset of uniformly processed and standardized magnetic resonance images (MRIs) of gliomas. However, a reasonable concern is that these models may not fare well on clinical MRIs that do not belong to the specially curated BraTS dataset. Research using the previous generation of deep learning models indicates significant performance loss on cross-institutional predictions. Here, we evaluate the cross-institutional applicability and generalizability of state-of-the-art deep learning models on new clinical data. METHODS We train a state-of-the-art 3D U-Net model on the conventional BraTS dataset comprising low- and high-grade gliomas. We then evaluate the performance of this model for automatic tumor segmentation of brain tumors on in-house clinical data. This dataset contains MRIs of different tumor types, resolutions, and standardization from those found in the BraTS dataset. Ground truth segmentations to validate the automated segmentation for in-house clinical data were obtained from expert radiation oncologists. RESULTS We report average Dice scores of 0.764, 0.648, and 0.61 for the whole tumor, tumor core, and enhancing tumor, respectively, in the clinical MRIs. These means are higher than those previously reported on same-institution and cross-institution datasets of different origin using different methods. There is no statistically significant difference when comparing the Dice scores to the inter-annotation variability between two expert clinical radiation oncologists. Although performance on the clinical data is lower than on the BraTS data, these numbers indicate that models trained on the BraTS dataset have impressive segmentation performance on previously unseen images obtained at a separate clinical institution. These images differ in imaging resolution, standardization pipelines, and tumor types from the BraTS data. CONCLUSIONS State-of-the-art deep learning models demonstrate promising performance on cross-institutional predictions. They considerably improve on previous models and can transfer knowledge to new types of brain tumors without additional modeling.
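The reported comparison rests on per-region Dice scores and a test against inter-annotator variability; the metric itself, plus one common choice of paired test on hypothetical per-case scores (the paper's exact statistical test is not reproduced here), can be sketched as:

```python
import numpy as np
from scipy import stats

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Hypothetical per-case whole-tumor Dice scores (illustrative numbers only).
model_vs_expert1 = np.array([0.79, 0.74, 0.77, 0.72, 0.80])
expert1_vs_expert2 = np.array([0.81, 0.73, 0.78, 0.75, 0.79])
t_stat, p_value = stats.ttest_rel(model_vs_expert1, expert1_vs_expert2)
```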
Affiliation(s)
- Adam Berkley
- Department of Computer Science, Brown University, Providence, Rhode Island, USA
- Camillo Saueressig
- Department of Computer Science, Brown University, Providence, Rhode Island, USA
- Utkarsh Shukla
- Department of Radiation Oncology, Rhode Island Hospital, Providence, Rhode Island, USA
- Department of Radiation Oncology, Tufts Medical Center, Boston, Massachusetts, USA
- Department of Radiation Oncology, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Imran Chowdhury
- Department of Radiation Oncology, Rhode Island Hospital, Providence, Rhode Island, USA
- Department of Radiation Oncology, Tufts Medical Center, Boston, Massachusetts, USA
- Department of Radiation Oncology, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Anthony Munoz-Gauna
- Department of Radiation Oncology, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Olalekan Shehu
- Department of Physics, University of Rhode Island, Kingston, Rhode Island, USA
- Ritambhara Singh
- Department of Computer Science, Brown University, Providence, Rhode Island, USA
- Center for Computational Molecular Biology, Brown University, Providence, Rhode Island, USA
- Reshma Munbodh
- Department of Radiation Oncology, Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Department of Radiation Oncology, Columbia University Irving Medical Center, New York, New York, USA
9
Yang H, Zhou T, Zhou Y, Zhang Y, Fu H. Flexible Fusion Network for Multi-Modal Brain Tumor Segmentation. IEEE J Biomed Health Inform 2023; 27:3349-3359. [PMID: 37126623] [DOI: 10.1109/jbhi.2023.3271808]
Abstract
Automated brain tumor segmentation is crucial for aiding brain disease diagnosis and evaluating disease progress. Currently, magnetic resonance imaging (MRI) is a routinely adopted approach in the field of brain tumor segmentation that can provide different modality images. It is critical to leverage multi-modal images to boost brain tumor segmentation performance. Existing works commonly concentrate on generating a shared representation by fusing multi-modal data, while few methods take into account modality-specific characteristics. Besides, how to efficiently fuse arbitrary numbers of modalities is still a difficult task. In this study, we present a flexible fusion network (termed F2Net) for multi-modal brain tumor segmentation, which can flexibly fuse an arbitrary number of modalities to explore complementary information while maintaining the specific characteristics of each modality. Our F2Net is based on the encoder-decoder structure, which utilizes two Transformer-based feature learning streams and a cross-modal shared learning network to extract individual and shared feature representations. To effectively integrate the knowledge from the multi-modality data, we propose a cross-modal feature-enhanced module (CFM) and a multi-modal collaboration module (MCM), which aim at fusing the multi-modal features into the shared learning network and incorporating the features from encoders into the shared decoder, respectively. Extensive experimental results on multiple benchmark datasets demonstrate the effectiveness of our F2Net over other state-of-the-art segmentation methods.
10
Ali MU, Hussain SJ, Zafar A, Bhutta MR, Lee SW. WBM-DLNets: Wrapper-Based Metaheuristic Deep Learning Networks Feature Optimization for Enhancing Brain Tumor Detection. Bioengineering (Basel) 2023; 10:475. [PMID: 37106662] [PMCID: PMC10135892] [DOI: 10.3390/bioengineering10040475]
Abstract
This study presents wrapper-based metaheuristic deep learning networks (WBM-DLNets) feature optimization algorithms for brain tumor diagnosis using magnetic resonance imaging. Herein, 16 pretrained deep learning networks are used to compute the features. Eight metaheuristic optimization algorithms, namely, the marine predator algorithm, atom search optimization algorithm (ASOA), Harris hawks optimization algorithm, butterfly optimization algorithm, whale optimization algorithm, grey wolf optimization algorithm (GWOA), bat algorithm, and firefly algorithm, are used to evaluate the classification performance using a support vector machine (SVM)-based cost function. A deep-learning network selection approach is applied to determine the best deep-learning network. Finally, all deep features of the best deep learning networks are concatenated to train the SVM model. The proposed WBM-DLNets approach is validated based on an available online dataset. The results reveal that the classification accuracy is significantly improved by utilizing the features selected using WBM-DLNets relative to those obtained using the full set of deep features. DenseNet-201-GWOA and EfficientNet-b0-ASOA yield the best results, with a classification accuracy of 95.7%. Additionally, the results of the WBM-DLNets approach are compared with those reported in the literature.
Affiliation(s)
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Raheel Bhutta
- Department of Electrical and Computer Engineering, University of Utah Asia Campus, Incheon 21985, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, Sungkyunkwan University School of Medicine, Suwon 16419, Republic of Korea
11
Shiny KV, Sugitha N. BirCat Optimization for Automatic Segmentation of Brain Tumors and Pixel Change Detection Using Post-operative MRI Images. J Digit Imaging 2023; 36:647-665. [PMID: 36544068] [PMCID: PMC10039146] [DOI: 10.1007/s10278-022-00704-w]
Abstract
There is an emerging need for medical imaging data to provide patients with timely diagnosis. Brain tumor segmentation approaches based on magnetic resonance imaging (MRI) are of great importance in treatment planning. However, automating the process across different imaging conditions while maintaining accuracy is a major challenge due to variations in tumor structures. Hence, an efficient optimization-driven classifier, called BirCat optimization-based deep belief network (BirCat-based DBN), is developed to detect brain tumors. The introduced BirCat is devised by incorporating the bird swarm algorithm (BSA) into the cat swarm optimization (CSO) algorithm and is employed in tuning the DBN classifier. The first step is pre-processing, in which noise and artifacts in the input image are eliminated by means of ROI extraction and filtering. Then, for segmentation, a region growing algorithm is used in which the distance is calculated by the modified Bhattacharyya measure. Afterward, segment-based and pixel-based features are extracted from each segment and used for classification. The feature vector is then formed and given to the DBN classifier, which is tuned with the help of the introduced BirCat for brain tumor detection. The introduced technique effectively determines the regions with tumor in the input MRI image. Finally, change detection is evaluated by analyzing the post-operative MRI image and the segmented image by means of a pixel mapping strategy based on SURF features. The pixel mapping is utilized to evaluate the percentage change in tumor pixels. The proposed BirCat surpassed other prevailing approaches by producing maximal values of specificity, accuracy, sensitivity, F1-score, and Dice score of 0.92, 0.927, 0.938, 0.909, and 0.937, respectively, for dataset 2.
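Once the post-operative image and the segmented image are aligned by the pixel-mapping step, the reported percentage change in tumor pixels reduces to a voxel-count ratio; a simplified sketch (registration and SURF matching are omitted and assumed already done):

```python
import numpy as np

def tumor_pixel_change(pre_mask: np.ndarray, post_mask: np.ndarray) -> float:
    """Percentage change in tumor pixels between two aligned binary masks."""
    pre = int(np.count_nonzero(pre_mask))
    post = int(np.count_nonzero(post_mask))
    return 0.0 if pre == 0 else 100.0 * (pre - post) / pre
```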
Affiliation(s)
- Shiny K V
- Research Scholar, Department of Computer Science and Engineering, Noorul Islam Centre for Higher Education, Kumaracoil, Kanyakumari, Tamil Nadu, 629173, India
- N Sugitha
- Professor, Department of Electronics and Communication Engineering, Sri Krishna College of Technology, Kovaipudur, Coimbatore, Tamil Nadu, 641042, India
12
Li X, Jiang Y, Li M, Zhang J, Yin S, Luo H. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation. Med Phys 2023; 50:2249-2262. [PMID: 35962724] [DOI: 10.1002/mp.15933]
Abstract
BACKGROUND Accurate and automated brain tumor segmentation from multi-modality MR images plays a significant role in tumor treatment. However, existing approaches mainly focus on the fusion of multiple modalities while ignoring the correlation between individual modalities and tumor subcomponents. For example, T2-weighted images show good visualization of edema, and T1-contrast images have good contrast between the enhancing tumor core and necrosis. In the actual clinical process, professional physicians also label tumors according to these characteristics. We design a method for brain tumor segmentation that utilizes both multi-modality fusion and single-modality characteristics. METHODS A multi-modality and single-modality feature recalibration network (MSFR-Net) is proposed for brain tumor segmentation from MR images. Specifically, multi-modality information and single-modality information are assigned to independent pathways. The multi-modality network explicitly learns the relationship between all modalities and all tumor sub-components. The single-modality network learns the relationship between each single modality and its highly correlated tumor subcomponents. Then, a dual recalibration module (DRM) is designed to connect the parallel single-modality and multi-modality networks at multiple stages. The function of the DRM is to unify the two types of features into the same feature space. RESULTS Experiments on the BraTS 2015 and BraTS 2018 datasets show that the proposed method is competitive and superior to other state-of-the-art methods. The proposed method achieved segmentation results with a Dice coefficient of 0.86 and a Hausdorff distance of 4.82 on the BraTS 2018 dataset, and a Dice coefficient of 0.80, positive predictive value of 0.76, and sensitivity of 0.78 on the BraTS 2015 dataset. CONCLUSIONS This work mirrors the manual labeling process of physicians and introduces the correlation between single modalities and tumor subcomponents into the segmentation network. The method improves the segmentation performance of brain tumors and can be applied in clinical practice. The code of the proposed method is available at: https://github.com/xiangQAQ/MSFR-Net.
Affiliation(s)
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, China
13
Xie Y, Sun J. Robust lockwire segmentation with multiscale boundary-driven regional stability. J Opt Soc Am A Opt Image Sci Vis 2023; 40:397-410. [PMID: 37133006] [DOI: 10.1364/josaa.472215]
Abstract
Lockwire segmentation plays a vital role in ensuring mechanical safety in industrial fields. Aiming at the missed detection problem encountered in blurred and low-contrast situations, we propose a robust lockwire segmentation method based on multiscale boundary-driven regional stability. We first design a novel multiscale boundary-driven stability criterion to generate a blur-robustness stability map. Then, the curvilinear structure enhancement metric and linearity measurement function are defined to compute the likeliness of stable regions to belong to lockwires. Finally, the closed boundaries of lockwires are determined to achieve accurate segmentation. Experimental results demonstrate that our proposed method outperforms state-of-the-art object segmentation methods.
14
Sundarasekar R, Appathurai A. FMTM-feature-map-based transform model for brain image segmentation in tumor detection. Network (Bristol, England) 2023; 34:1-25. [PMID: 36514820] [DOI: 10.1080/0954898x.2022.2110620]
Abstract
The segmentation of brain images is a leading quantitative measure for detecting physiological changes and for analysing structural functions. Depending on trends and the dimensions of the brain, the images exhibit heterogeneity. Accurate brain tumour segmentation remains a critical challenge despite the persistent efforts of researchers, owing to a variety of obstacles. This impacts the outcome of tumour detection, causing errors. To address this issue, a feature-map-based transform model (FMTM) is introduced that focuses on the heterogeneous features of the input image and maps differences and intensity based on the Fourier transform. Unsupervised machine learning is used for reliable feature-map recognition in this mapping process. For the determination of severity and variability, the identification method depends on symmetry and texture. Learning instances are trained to improve precision using predefined datasets, regardless of label loss. The process recurs until the maximum precision of tumour detection is achieved at low convergence. In this research, FMTM has been applied to brain tumour segmentation to automatically extract feature representations and produce accurate and stable performance, building on the promising performance of powerful Fourier transform methods. The proposed model's performance is reported using the metrics of processing time, precision, accuracy, and F1-score.
15
Liu S, Xin J, Wu J, Deng Y, Su R, Niessen WJ, Zheng N, van Walsum T. Multi-view Contour-constrained Transformer Network for Thin-cap Fibroatheroma Identification. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.041]
16
Popat M, Patel S. Research perspective and review towards brain tumour segmentation and classification using different image modalities. Comput Methods Biomech Biomed Eng Imaging Vis 2022. [DOI: 10.1080/21681163.2022.2124546]
Affiliation(s)
- Mayuri Popat
- U & P.U. Patel Department of Computer Engineering, Chandubhai S Patel Institute of Technology (CSPIT), Charotar University of Science and Technology (CHARUSAT), Gujarat, India
- Sanskruti Patel
- Smt. Chandaben Mohanbhai Patel Institute of Computer Applications (CMPICA), Charotar University of Science and Technology (CHARUSAT), Gujarat, India
17
Leena B, Jayanthi AN. Automatic Brain Tumor Classification via Lion Plus Dragonfly Algorithm. J Digit Imaging 2022; 35:1382-1408. [PMID: 35711072] [PMCID: PMC9582188] [DOI: 10.1007/s10278-022-00635-6]
Abstract
Denoising, skull stripping, segmentation, feature extraction, and classification are the five key processes in this paper's brain tumor classification model. The brain tumor image is first denoised using an entropy-based trilateral filter, and the denoised image then undergoes skull stripping by means of morphological partitioning and Otsu thresholding. Adaptive contrast-limited fuzzy adaptive histogram equalization (CLFAHE) is also used in the segmentation process. Gray-level co-occurrence matrix (GLCM) features are derived from the segmented image. The collected GLCM features are used in a hybrid classifier that combines neural network (NN) and deep belief network (DBN) ideas. As an innovation, the hidden neurons of the two classifiers are optimally tuned to improve the prediction model's accuracy. The hidden neurons are optimized using a hybrid optimization technique known as lion with dragonfly separation update (L-DSU), which integrates approaches from both the dragonfly algorithm (DA) and the lion algorithm (LA). Finally, the suggested model's performance is compared to that of standard models with respect to several performance measures.
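GLCM texture features of the kind fed to the NN/DBN classifier can be computed with standard tooling; a minimal sketch using recent scikit-image versions (the specific distances, angles, and properties below are assumptions, not the paper's exact feature set):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(segment: np.ndarray) -> dict:
    """Average GLCM texture properties over four directions for a 2D uint8 segment."""
    glcm = graycomatrix(segment, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```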
Affiliation(s)
- B Leena
- KGiSL Institute of Technology, Coimbatore, India
- A N Jayanthi
- Sri Ramakrishna Institute of Technology, Coimbatore, India
18
Particle Swarm Optimization and Two-Way Fixed-Effects Analysis of Variance for Efficient Brain Tumor Segmentation. Cancers (Basel) 2022; 14:4399. [PMID: 36139559] [PMCID: PMC9496881] [DOI: 10.3390/cancers14184399]
Abstract
Simple Summary Segmentation of brain tumor images from magnetic resonance imaging (MRI) is a challenging topic in medical image analysis. The brain tumor can take many shapes, and MRI images vary considerably in intensity, making lesion detection difficult for radiologists. This paper proposes a three-step approach to solving this problem: (1) pre-processing, based on morphological operations, is applied to remove the skull bone from the image; (2) the particle swarm optimization (PSO) algorithm, with a two-way fixed-effects analysis of variance (ANOVA)-based fitness function, is used to find the optimal block containing the brain lesion; (3) the K-means clustering algorithm is adopted, to classify the detected block as tumor or non-tumor. An extensive experimental analysis, including visual and statistical evaluations, was conducted, using two MRI databases: a private database provided by the Kouba imaging center—Algiers (KICA)—and the multimodal brain tumor segmentation challenge (BraTS) 2015 database. The results show that the proposed methodology achieved impressive performance, compared to several competing approaches. Abstract Segmentation of brain tumor images, to refine the detection and understanding of abnormal masses in the brain, is an important research topic in medical imaging. This paper proposes a new segmentation method, consisting of three main steps, to detect brain lesions using magnetic resonance imaging (MRI). In the first step, the parts of the image delineating the skull bone are removed, to exclude insignificant data. In the second step, which is the main contribution of this study, the particle swarm optimization (PSO) technique is applied, to detect the block that contains the brain lesions. The fitness function, used to determine the best block among all candidate blocks, is based on a two-way fixed-effects analysis of variance (ANOVA). In the last step of the algorithm, the K-means segmentation method is used in the lesion block, to classify it as a tumor or not. A thorough evaluation of the proposed algorithm was performed, using: (1) a private MRI database provided by the Kouba imaging center—Algiers (KICA); (2) the multimodal brain tumor segmentation challenge (BraTS) 2015 database. Estimates of the selected fitness function were first compared to those based on the sum-of-absolute-differences (SAD) dissimilarity criterion, to demonstrate the efficiency and robustness of the ANOVA. The performance of the optimized brain tumor segmentation algorithm was then compared to the results of several state-of-the-art techniques. The results obtained, by using the Dice coefficient, Jaccard distance, correlation coefficient, and root mean square error (RMSE) measurements, demonstrated the superiority of the proposed optimized segmentation algorithm over equivalent techniques.
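The second step above is a standard PSO search over candidate blocks, with the two-way ANOVA criterion acting as the fitness function. A generic sketch of that loop (the `fitness` callback is a placeholder for any block-scoring criterion, not the paper's ANOVA implementation):

```python
import numpy as np

def pso_block_search(fitness, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.5, c2=1.5):
    """Particle swarm search for the block position maximising `fitness`.

    `bounds` is a sequence of (low, high) pairs, one per block coordinate.
    """
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    pos = np.random.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = np.random.rand(n_particles, dim), np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, pbest_val.max()
```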
19
Sunsuhi G, Albin Jose S. An Adaptive Eroded Deep Convolutional neural network for brain image segmentation and classification using Inception ResnetV2. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103863]
20
RVLSM: Robust variational level set method for image segmentation with intensity inhomogeneity and high noise. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.03.035]
21
Zhang T, Zhang J, Xue T, Rashid MH. A Brain Tumor Image Segmentation Method Based on Quantum Entanglement and Wormhole Behaved Particle Swarm Optimization. Front Med (Lausanne) 2022; 9:794126. [PMID: 35620714] [PMCID: PMC9127532] [DOI: 10.3389/fmed.2022.794126]
Abstract
Purpose Although classical techniques for image segmentation may work well for some images, they may perform poorly or not work at all for others. It often depends on the properties of the particular image segmentation task under study. The reliable segmentation of brain tumors in medical images represents a particularly challenging and essential task. For example, some brain tumors may exhibit complex so-called “bottle-neck” shapes which are essentially circles with long indistinct tapering tails, known as a “dual tail.” Such challenging conditions may not be readily segmented, particularly in the extended tail region or around the so-called “bottle-neck” area. In those cases, existing image segmentation techniques often fail to work well. Methods Existing research on image segmentation using wormhole and entangle theory is first analyzed. Next, a random positioning search method that uses a quantum-behaved particle swarm optimization (QPSO) approach is improved by using a hyperbolic wormhole path measure for seeding and linking particles. Finally, our novel quantum and wormhole-behaved particle swarm optimization (QWPSO) is proposed. Results Experimental results show that our QWPSO algorithm can better cluster complex “dual tail” regions into groupings with greater adaptability than conventional QPSO. Experimental work also improves operational efficiency and segmentation accuracy compared with current competing reference methods. Conclusion Our QWPSO method appears extremely promising for isolating smeared/indistinct regions of complex shape typical of medical image segmentation tasks. The technique is especially advantageous for segmentation in the so-called “bottle-neck” and “dual tail”-shaped regions appearing in brain tumor images.
Affiliation(s)
- Tianchi Zhang
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
- Jing Zhang
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Teng Xue
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Mohammad Hasanur Rashid
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
22
A novel 2-phase residual U-net algorithm combined with optimal mass transportation for 3D brain tumor detection and segmentation. Sci Rep 2022; 12:6452. [PMID: 35440793] [PMCID: PMC9018750] [DOI: 10.1038/s41598-022-10285-x]
Abstract
Utilizing the optimal mass transportation (OMT) technique to convert an irregular 3D brain image into a cube, a required input format for a U-net algorithm, is a brand new idea for medical imaging research. We develop a cubic volume-measure-preserving OMT (V-OMT) model for the implementation of this conversion. The contrast-enhanced histogram equalization grayscale of fluid-attenuated inversion recovery (FLAIR) in a brain image creates the corresponding density function. We then propose an effective two-phase residual U-net algorithm combined with the V-OMT algorithm for training and validation. First, we use the residual U-net and V-OMT algorithms to precisely predict the whole tumor (WT) region. Second, we expand this predicted WT region with dilation and create a smooth function by convolving the step-like function associated with the WT region in the brain image with a 5×5×5 blur tensor. Then, a new V-OMT algorithm with mesh refinement is constructed to allow the residual U-net algorithm to effectively train Net1–Net3 models. Finally, we propose ensemble voting postprocessing to validate the final labels of brain images. We randomly chose 1000 and 251 brain samples from the Brain Tumor Segmentation (BraTS) 2021 training dataset, which contains 1251 samples, for training and validation, respectively. The Dice scores of the WT, tumor core (TC) and enhanced tumor (ET) regions for validation computed by Net1–Net3 were 0.93705, 0.90617 and 0.87470, respectively. A significant improvement in brain tumor detection and segmentation with higher accuracy is achieved.
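The smoothing of the predicted WT region described above amounts to local averaging of a binary mask; a minimal SciPy stand-in for the 5×5×5 blur tensor (the mean filter below is one simple choice of kernel, not the paper's exact tensor):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_wt_mask(wt_mask: np.ndarray) -> np.ndarray:
    """Convert a binary whole-tumor mask into a smooth [0, 1] field with a 5x5x5 mean filter."""
    return uniform_filter(wt_mask.astype(float), size=5, mode="constant")
```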
23
Cao T, Wang G, Ren L, Li Y, Wang H. Brain tumor magnetic resonance image segmentation by a multiscale contextual attention module combined with a deep residual UNet (MCA-ResUNet). Phys Med Biol 2022; 67. [PMID: 35294935] [DOI: 10.1088/1361-6560/ac5e5c]
Abstract
Background and Objective. Automatic segmentation of the MRI brain tumor area is a key step in the diagnosis and treatment of brain tumors. In recent years, improved networks based on the UNet encoding-decoding structure have been widely used in brain tumor segmentation. However, due to continuous convolution and pooling operations, some spatial context information in existing networks becomes discontinuous or is even lost, which affects the segmentation accuracy of the model. The method proposed in this paper therefore aims to alleviate the loss of spatial context information and improve the accuracy of the model. Approach. This paper proposes a contextual attention module (multiscale contextual attention) to capture and filter high-level features carrying spatial context information, which addresses the problem of context information loss during feature extraction. A channel attention mechanism is introduced into the decoding structure to realize the fusion of high-level and low-level features. The standard convolution blocks in the encoding and decoding structure are replaced by pre-activated residual blocks to optimize network training and improve network performance. Results. Two public datasets (BraTS 2017 and BraTS 2019) are used to evaluate and verify the proposed method. Experimental results show that the proposed method effectively alleviates the loss of spatial context information, and its segmentation performance is better than that of other existing methods. Significance. The method improves the segmentation performance of the model. It will assist doctors in making accurate diagnoses and provide a reference basis for tumor resection. As a result, the proposed method will reduce the operative risk for patients and the postoperative recurrence rate.
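The channel attention used here to fuse high- and low-level decoder features is described only at a high level; a generic squeeze-and-excitation-style block conveys the idea (a minimal PyTorch sketch, not the MCA-ResUNet implementation):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight feature channels using globally pooled descriptors (SE-style)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape                   # x: (N, C, H, W)
        weights = self.fc(x.mean(dim=(2, 3)))  # global average pool -> (N, C)
        return x * weights.view(n, c, 1, 1)
```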
Affiliation(s)
- Tianyi Cao
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei 071002, People's Republic of China
- Guanglei Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei 071002, People's Republic of China
- Lili Ren
- The Affiliated Hospital of Hebei University, Baoding, Hebei 071002, People's Republic of China
- Yan Li
- College of Electronic and Information Engineering, Hebei University, Baoding, Hebei 071002, People's Republic of China
- Hongrui Wang
- Hebei University, Baoding, Hebei 071002, People's Republic of China
24
Zhang TC, Zhang J, Chen SC, Saada B. A Novel Prediction Model for Brain Glioma Image Segmentation Based on the Theory of Bose-Einstein Condensate. Front Med (Lausanne) 2022; 9:794125. [PMID: 35372409] [PMCID: PMC8971582] [DOI: 10.3389/fmed.2022.794125]
Abstract
Background The input image in blurry glioma image segmentation is usually very unclear, and it is difficult to obtain an accurate contour line for the segmentation. The main challenge facing researchers is to correctly determine the region to which the points on the contour line belong in the glioma image. This article highlights the mechanism of glioma formation and provides an image segmentation prediction model to assist in the accurate division of glioma contour points. The proposed prediction model of segmentation, associated with the process of glioma formation, is innovative and challenging. Bose-Einstein Condensate (BEC) is a microscopic quantum phenomenon in which atoms condense to the ground state of energy as the temperature approaches absolute zero. In this article, we propose a BEC kernel function and a novel prediction model based on the BEC kernel to detect the relationship between the process of the BEC and the formation of a brain glioma. Furthermore, the theoretical derivation and proof of the prediction model are given, from micro to macro, through quantum mechanics, waves, the oscillation of glioma, and statistical distribution laws. The prediction model is a distinct segmentation model guided by BEC theory for blurry glioma image segmentation. Results Our approach is based on five tests. The first three tests aimed at confirming the measurement range of T and μ in the BEC kernel; the range is extended from −10 to 10, approximating the standard range to T ≤ 0 and μ from 0 to 6.7. Tests 4 and 5 are comparison tests. The comparison in Test 4 was based on various established clustering methods. The results show that our prediction model achieves the best image evaluation parameters P, R, and F among all ten existing approaches, except for a single reference whose mean F value lies between 0.88 and 0.93, while our approach returns values between 0.85 and 0.99. Test 5 further compared our results, especially with CNN (convolutional neural network) methods, on the Brain Tumor Segmentation (BraTS) and clinical patient datasets. Our results were again better than all reference methods. In addition, the proposed prediction model with the BEC kernel is feasible and shows comparative validity in glioma image segmentation. Conclusions Theoretical derivation and experimental verification show that the prediction model based on the BEC kernel can solve the problem of accurate segmentation of blurry glioma images. It demonstrates that the BEC kernel is a more feasible, valid, and accurate approach than many recent segmentation methods, and is an advanced and innovative prediction model deduced from micro-level BEC theory to macro-level glioma image segmentation.
Affiliation(s)
- Tian Chi Zhang
- School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing, China
- Jing Zhang
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Correspondence: Jing Zhang
- Shou Cun Chen
- School of Information Science and Engineering, University of Jinan, Jinan, China
- Shandong Provincial Key Laboratory of Network-Based Intelligent Computing, Jinan, China
- Bacem Saada
- Cancer Institute, Eighth Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- Department of Animal Biosciences, University of Guelph, Guelph, ON, Canada
| |
Collapse
|
25
|
Guo S, Wang L, Chen Q, Wang L, Zhang J, Zhu Y. Multimodal MRI Image Decision Fusion-Based Network for Glioma Classification. Front Oncol 2022; 12:819673. [PMID: 35280828 PMCID: PMC8907622 DOI: 10.3389/fonc.2022.819673] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 01/24/2022] [Indexed: 12/12/2022] Open
Abstract
Purpose Glioma is the most common primary brain tumor, with varying degrees of aggressiveness and prognosis. Accurate glioma classification is very important for treatment planning and prognosis prediction. The main purpose of this study is to design a novel, effective algorithm for further improving the performance of glioma subtype classification using multimodal MRI images. Method MRI images of four modalities for 221 glioma patients were collected from the Computational Precision Medicine: Radiology-Pathology 2020 challenge, including T1, T2, T1ce, and fluid-attenuated inversion recovery (FLAIR) MRI images, to classify astrocytoma, oligodendroglioma, and glioblastoma. We proposed a multimodal MRI image decision fusion-based network for improving the glioma classification accuracy. First, the MRI images of each modality were input into a pre-trained tumor segmentation model to delineate the regions of tumor lesions. Then, the whole tumor regions were centrally clipped from the original MRI images followed by max-min normalization. Subsequently, a deep learning-based network was designed based on a unified DenseNet structure, which extracts features through a series of dense blocks. After that, two fully connected layers were used to map the features into three glioma subtypes. During the training stage, we used the images of each modality after tumor segmentation to train the network to obtain its best accuracy on our testing set. During the inference stage, a linear weighted module based on a decision fusion strategy was applied to assemble the predicted probabilities of the pre-trained models obtained in the training stage. Finally, the performance of our method was evaluated in terms of accuracy, area under the curve (AUC), sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), etc. Results The proposed method achieved an accuracy of 0.878, an AUC of 0.902, a sensitivity of 0.772, a specificity of 0.930, a PPV of 0.862, an NPV of 0.949, and a Cohen's Kappa of 0.773, showing a significantly higher performance than existing state-of-the-art methods. Conclusion Compared with current studies, this study demonstrated the effectiveness and superiority of the overall performance of the proposed multimodal MRI image decision fusion-based network for glioma subtype classification, which would be of enormous potential value in clinical practice.
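The decision-fusion step described above, a linear weighting of the per-modality predicted probabilities, can be sketched as follows; the weights, array shapes, and function name are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def decision_fusion(prob_maps, weights=None):
    """Linearly combine per-modality class probabilities.

    prob_maps : list of arrays, each of shape (n_samples, n_classes),
                e.g. softmax outputs of one DenseNet trained per modality
                (T1, T1ce, T2, FLAIR).
    weights   : optional per-modality weights; uniform if omitted.
    """
    prob_maps = [np.asarray(p, dtype=float) for p in prob_maps]
    if weights is None:
        weights = np.full(len(prob_maps), 1.0 / len(prob_maps))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # keep a convex combination
    fused = sum(w * p for w, p in zip(weights, prob_maps))
    return fused.argmax(axis=1), fused

# toy usage: 4 modalities, 2 samples, 3 glioma subtypes
rng = np.random.default_rng(0)
mods = [rng.dirichlet(np.ones(3), size=2) for _ in range(4)]
labels, fused = decision_fusion(mods, weights=[0.3, 0.3, 0.2, 0.2])
print(labels, fused.sum(axis=1))   # fused probabilities still sum to 1
```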
Collapse
Affiliation(s)
- Shunchao Guo
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China.,College of Computer and Information, Qiannan Normal University for Nationalities, Duyun, China
| | - Lihui Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Qijian Chen
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Li Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Jian Zhang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Yuemin Zhu
- CREATIS, CNRS UMR 5220, Inserm U1044, INSA Lyon, University of Lyon, Lyon, France
| |
Collapse
|
26
|
A Robust Accuracy Weighted Random Forests Algorithm for IGBTs Fault Diagnosis in PWM Converters without Additional Sensors. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12042121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
When an insulated-gate bipolar transistor (IGBT) open-circuit fault occurs, a three-phase pulse-width modulated (PWM) converter can usually keep working, which will lead to system instability and more serious secondary faults. Fault detection and diagnosis of the converter is therefore extremely necessary to improve the reliability of the power supply system. In order to solve the problem of fault misdiagnosis caused by parameter disturbances, this paper proposes a robust accuracy-weighted random forests online fault diagnosis model to accurately locate various IGBT open-circuit faults. Firstly, the fault signal features are preprocessed using the three-phase current signal and a normalization method. Based on the test accuracy of the perturbed out-of-bag data and the multiple-converter test data, a robust accuracy-weighted random forests algorithm is proposed for extracting the mapping relationship between fault modes and the current signal. In order to further improve the fault diagnosis performance, a parameter optimization model is built to optimize the hyper-parameters of the proposed method. Finally, comparative simulations and online fault diagnosis experiments are carried out, and the results demonstrate the effectiveness and superiority of the method.
Collapse
|
27
|
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021] [Indexed: 01/10/2023] Open
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital to method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method as per the utilised magnetic resonance imaging sequences, study population, technical approach (such as deep learning) and performance score measures (such as Dice score). Statistical tests We compared median Dice scores in segmenting the whole tumour, tumour core and enhanced tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in various segmentation algorithms. However, there is limited use of perfusion-weighted and diffusion-weighted magnetic resonance imaging. Moreover, we found that the U-Net deep learning technology is cited the most, and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
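Since the review compares methods by their Dice scores on the whole tumour, tumour core and enhancing tumour, a short sketch of how such region-wise scores are typically computed may help; the BraTS-style label convention used here (1 = necrotic/core, 2 = edema, 4 = enhancing) is an assumption of this sketch, not something stated in the abstract.

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

def region_dice(pred_labels, true_labels):
    """Dice for whole tumour (any tumour label), tumour core (labels 1 and 4)
    and enhancing tumour (label 4), using a common BraTS-style convention."""
    regions = {
        "whole_tumour":     lambda x: x > 0,
        "tumour_core":      lambda x: np.isin(x, (1, 4)),
        "enhancing_tumour": lambda x: x == 4,
    }
    return {name: dice_score(f(pred_labels), f(true_labels))
            for name, f in regions.items()}

pred = np.array([[0, 1, 4], [2, 2, 4]])
true = np.array([[0, 1, 4], [2, 0, 1]])
print(region_dice(pred, true))
```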
Collapse
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| | - Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
| |
Collapse
|
28
|
He X, Xu W, Yang J, Mao J, Chen S, Wang Z. Deep Convolutional Neural Network With a Multi-Scale Attention Feature Fusion Module for Segmentation of Multimodal Brain Tumor. Front Neurosci 2021; 15:782968. [PMID: 34899175 PMCID: PMC8662724 DOI: 10.3389/fnins.2021.782968] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Accepted: 11/02/2021] [Indexed: 12/21/2022] Open
Abstract
As a non-invasive, low-cost medical imaging technology, magnetic resonance imaging (MRI) has become an important tool for brain tumor diagnosis. Many scholars have carried out related research on MRI brain tumor segmentation based on deep convolutional neural networks and have achieved good performance. However, due to the large spatial and structural variability of brain tumors and low image contrast, the segmentation of MRI brain tumors is challenging. Deep convolutional neural networks often lose low-level details as the network structure deepens and cannot effectively utilize multi-scale feature information. Therefore, a deep convolutional neural network with a multi-scale attention feature fusion module (MAFF-ResUNet) is proposed to address these issues. The MAFF-ResUNet consists of a U-Net with residual connections and a MAFF module. The combination of residual connections and skip connections fully retains low-level detailed information and improves the global feature extraction capability of the encoding block. Besides, the MAFF module selectively extracts useful information from the multi-scale hybrid feature map based on the attention mechanism to optimize the features of each layer and make full use of the complementary feature information of different scales. The experimental results on the BraTs 2019 MRI dataset show that the MAFF-ResUNet learns the edge structure of brain tumors better and achieves high accuracy.
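A rough sketch of what a multi-scale attention feature fusion module can look like is given below: encoder features from several scales are projected, resized, concatenated, and re-weighted by an SE-style channel attention before a final fusion convolution. This is a generic construction assuming standard building blocks; it is not the MAFF-ResUNet authors' exact design, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionFusion(nn.Module):
    """Fuse multi-scale encoder features with channel attention.

    Each input feature map is projected to `out_channels`, resized to the
    spatial size of the finest map, and the concatenated stack is re-weighted
    by an SE-style attention vector before a 1x1 fusion convolution.
    """

    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        self.projections = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels_list
        )
        hidden = max(out_channels // 4, 4)
        n = len(in_channels_list)
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(n * out_channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, n * out_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(n * out_channels, out_channels, kernel_size=1)

    def forward(self, features):
        target_size = features[0].shape[-2:]          # finest resolution
        resized = [
            F.interpolate(proj(f), size=target_size, mode="bilinear",
                          align_corners=False)
            for proj, f in zip(self.projections, features)
        ]
        stacked = torch.cat(resized, dim=1)
        weighted = stacked * self.attention(stacked)  # channel re-weighting
        return self.fuse(weighted)

# toy usage: three encoder levels of a U-Net-like backbone
maff = MultiScaleAttentionFusion([32, 64, 128], out_channels=32)
feats = [torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32),
         torch.randn(1, 128, 16, 16)]
print(maff(feats).shape)   # torch.Size([1, 32, 64, 64])
```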
Collapse
Affiliation(s)
- Xueqin He
- School of Informatics, Xiamen University, Xiamen, China
| | - Wenjie Xu
- School of Informatics, Xiamen University, Xiamen, China
| | - Jane Yang
- Department of Cognitive Science, University of California, San Diego, San Diego, CA, United States
| | - Jianyao Mao
- Department of Neurosurgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
| | - Sifang Chen
- Department of Neurosurgery, The First Affiliated Hospital of Xiamen University, Xiamen, China
| | - Zhanxiang Wang
- Xiamen Key Laboratory of Brain Center, Department of Neurosurgery, The First Affiliated Hospital of Xiamen University, Xiamen, China.,Department of Neuroscience, School of Medicine, Institute of Neurosurgery, Xiamen University, Xiamen, China
| |
Collapse
|
29
|
Cai Q, Qian Y, Zhou S, Li J, Yang YH, Wu F, Zhang D. AVLSM: Adaptive Variational Level Set Model for Image Segmentation in the Presence of Severe Intensity Inhomogeneity and High Noise. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 31:43-57. [PMID: 34793300 DOI: 10.1109/tip.2021.3127848] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Intensity inhomogeneity and noise are two common issues in images that inevitably lead to significant challenges for image segmentation; the effect is particularly pronounced when the two issues appear simultaneously in one image. As a result, most existing level set models yield poor performance when applied to such images. To this end, this paper proposes a novel hybrid level set model, named the adaptive variational level set model (AVLSM), by integrating an adaptive scale bias field correction term and a denoising term into one level set framework, which can simultaneously correct severe inhomogeneous intensity and denoise during segmentation. Specifically, an adaptive scale bias field correction term is first defined to correct the severe inhomogeneous intensity by adaptively adjusting the scale according to the degree of intensity inhomogeneity during segmentation. More importantly, the proposed adaptive scale truncation function in this term is model-agnostic: it can be applied to most off-the-shelf models and improves their performance for image segmentation with severe intensity inhomogeneity. Then, a denoising energy term is constructed based on the variational model, which can remove not only common additive noise but also the multiplicative noise that often occurs in medical images during segmentation. Finally, by integrating the two proposed energy terms into a variational level set framework, the AVLSM is obtained. The experimental results on synthetic and real images demonstrate the superiority of AVLSM over most state-of-the-art level set models in terms of accuracy, robustness and running time.
Collapse
|
30
|
Wang P, Chung ACS. Relax and focus on brain tumor segmentation. Med Image Anal 2021; 75:102259. [PMID: 34800788 DOI: 10.1016/j.media.2021.102259] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 03/15/2021] [Accepted: 09/28/2021] [Indexed: 11/25/2022]
Abstract
In this paper, we present a deep convolutional neural network (CNN) for fully automatic brain tumor segmentation of both high- and low-grade gliomas in MRI images. Unlike normal tissues or organs that usually have a fixed location or shape, brain tumors of different grades show great variation in terms of location, size, structure, and morphological appearance. Moreover, severe data imbalance exists not only between the brain tumor and non-tumor tissues, but also among the different sub-regions inside the brain tumor (e.g., enhancing tumor, necrotic, edema, and non-enhancing tumor). Therefore, we introduce a hybrid model to address the challenges in the multi-modality multi-class brain tumor segmentation task. First, we propose the dynamic focal Dice loss function, which is able to focus more on the smaller tumor sub-regions with more complex structures during training; the learning capacity of the model is dynamically distributed to each class independently based on its training performance in different training stages. Besides, to better recognize the overall structure of the brain tumor and the morphological relationship among different tumor sub-regions, we relax the boundary constraints for the inner tumor regions in a coarse-to-fine fashion. Additionally, a symmetric attention branch is proposed to highlight the possible location of the brain tumor from the asymmetric features caused by the growth and expansion of the abnormal tissues in the brain. Generally, to balance the learning capacity of the model between spatial details and high-level morphological features, the proposed model relaxes the constraints on the inner boundary and complex details and enforces more attention on the tumor shape, location, and the harder classes of the tumor sub-regions. The proposed model is validated on the publicly available brain tumor dataset from real patients, BRATS 2019. The experimental results reveal that our model improves the overall segmentation performance in comparison with the state-of-the-art methods, with major progress on the recognition of the tumor shape, the structural relationship of tumor sub-regions, and the segmentation of the more challenging tumor sub-regions, e.g., the tumor core and enhancing tumor.
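The "focal Dice" idea, weighting poorly segmented classes more heavily, can be sketched with a per-class Dice loss raised to a focal exponent. The dynamic, stage-dependent redistribution of capacity described in the paper is not reproduced here; the value of gamma and the class layout are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def focal_dice_loss(logits, target, gamma=2.0, eps=1e-6):
    """Per-class Dice loss with a focal-style exponent.

    logits : (N, C, ...) raw network outputs.
    target : (N, ...) integer class labels.
    Classes that are currently segmented poorly (low Dice) contribute more,
    mimicking the idea of focusing capacity on harder tumour sub-regions.
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).movedim(-1, 1).float()

    dims = tuple(range(2, probs.ndim))               # spatial dimensions
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)   # (N, C)

    return ((1.0 - dice) ** gamma).mean()

# toy usage: 4 classes (background + three tumour sub-regions) on a 2D patch
logits = torch.randn(2, 4, 32, 32, requires_grad=True)
target = torch.randint(0, 4, (2, 32, 32))
loss = focal_dice_loss(logits, target)
loss.backward()
print(float(loss))
```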
Collapse
Affiliation(s)
- Pei Wang
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
| | - Albert C S Chung
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
| |
Collapse
|
31
|
Research Trends of Human-Computer Interaction Studies in Construction Hazard Recognition: A Bibliometric Review. SENSORS 2021; 21:s21186172. [PMID: 34577380 PMCID: PMC8471763 DOI: 10.3390/s21186172] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 09/03/2021] [Accepted: 09/04/2021] [Indexed: 11/17/2022]
Abstract
Human-computer interaction, an interdisciplinary discipline, has become a frontier research topic in recent years. In the fourth industrial revolution, human-computer interaction has been increasingly applied to construction safety management, which has significantly promoted the progress of hazard recognition in the construction industry. However, few scholars have systematically reviewed the development of human-computer interaction in construction hazard recognition. In this study, we analyzed 274 related papers published in the ACM Digital Library, Web of Science, Google Scholar, and Scopus between 2000 and 2021 using bibliometric methods, systematically identified the research progress, key topics, and future research directions in this field, and proposed a research framework for human-computer interaction in construction hazard recognition (CHR-HCI). The results showed that, in the past 20 years, the application of human-computer interaction not only made significant contributions to the development of hazard recognition, but also generated a series of new research subjects, such as multimodal physiological data analysis in hazard recognition experiments, the development of intuitive devices and sensors, and human-computer interaction safety management platforms based on big data. Future research modules include computer vision, computer simulation, virtual reality, and ergonomics. In this study, we drew a theoretical map reflecting the existing research results and the relationships between them, and provided suggestions for the future development of human-computer interaction in the field of hazard recognition from a practical perspective.
Collapse
|
32
|
Biratu ES, Schwenker F, Ayano YM, Debelee TG. A Survey of Brain Tumor Segmentation and Classification Algorithms. J Imaging 2021; 7:jimaging7090179. [PMID: 34564105 PMCID: PMC8465364 DOI: 10.3390/jimaging7090179] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 08/25/2021] [Accepted: 08/28/2021] [Indexed: 01/16/2023] Open
Abstract
A brain magnetic resonance imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view. Therefore, manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, so it avoids biopsy and makes the diagnosis process safer. Since the late nineties and the beginning of this millennium, the effort of the research community to come up with automatic brain tumor segmentation and classification methods has been tremendous. As a result, there is ample literature in the area focusing on segmentation using region growing, traditional machine learning, and deep learning methods. Similarly, a number of studies have addressed brain tumor classification into the respective histological types, and impressive performance results have been obtained. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three major classes of recently proposed brain tumor segmentation and classification techniques, namely region growing, shallow machine learning, and deep learning. The works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models' performance evaluation metrics.
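Of the three method families surveyed, region growing is the simplest to illustrate; a minimal intensity-based region-growing sketch follows (2D, 4-connectivity, tolerance on the running region mean, all chosen for brevity and not taken from any particular paper).

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    """Simple intensity-based region growing from a seed pixel.

    A pixel joins the region if its intensity is within `tol` of the running
    region mean; 4-connectivity, 2D image for brevity.
    """
    image = np.asarray(image, dtype=float)
    grown = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    region_sum, region_n = image[seed], 1

    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not grown[nr, nc]
                    and abs(image[nr, nc] - region_sum / region_n) <= tol):
                grown[nr, nc] = True
                region_sum += image[nr, nc]
                region_n += 1
                queue.append((nr, nc))
    return grown

img = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.85, 0.9],
                [0.2, 0.9, 0.95]])
print(region_grow(img, seed=(0, 2), tol=0.15).astype(int))
```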
Collapse
Affiliation(s)
- Erena Siyoum Biratu
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia; (E.S.B.); (T.G.D.)
| | - Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
- Correspondence:
| | | | - Taye Girma Debelee
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia; (E.S.B.); (T.G.D.)
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia;
| |
Collapse
|
33
|
Fang L, Zhang L, Yao Y. Integrating a learned probabilistic model with energy functional for ultrasound image segmentation. Med Biol Eng Comput 2021; 59:1917-1931. [PMID: 34383220 DOI: 10.1007/s11517-021-02411-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 07/03/2021] [Indexed: 11/26/2022]
Abstract
The segmentation of ultrasound (US) images is steadily growing in popularity, owing to the necessity of computer-aided diagnosis (CAD) systems and the advantages this technique offers, such as safety and efficiency. The objective of this work is to separate the lesion from its background in US images. However, most US images are of poor quality, affected by noise, ambiguous boundaries, and heterogeneity. Moreover, the lesion region may not be salient amid the other normal tissues, which makes its segmentation a challenging problem. In this paper, a US image segmentation algorithm that combines a learned probabilistic model with energy functionals is proposed. Firstly, a learned probabilistic model based on the generalized linear model (GLM) reduces false positives and increases the likelihood energy term of the lesion region. It yields a new probability projection that attracts the energy functional toward the desired region of interest. Then, a boundary indicator and a probability statistical-based energy functional are used to provide a reliable boundary for the lesion. Integrating probabilistic information into the energy functional framework can effectively overcome the impact of poor image quality and further improve the accuracy of segmentation. To verify the performance of the proposed algorithm, 40 images were randomly selected from three databases for evaluation. The values of the DICE coefficient, Jaccard distance, root-mean-square error, and mean absolute error are 0.96, 0.91, 0.059, and 0.042, respectively. Besides, the initialization of the segmentation algorithm and the influence of noise are also analyzed. The experiments show a significant improvement in performance.
Collapse
Affiliation(s)
- Lingling Fang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China.
- Nanchang Institute of Technology, Nanchang City, Jiangxi Province, China.
| | - Lirong Zhang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
| | - Yibo Yao
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
| |
Collapse
|
34
|
Fawzi A, Achuthan A, Belaton B. Brain Image Segmentation in Recent Years: A Narrative Review. Brain Sci 2021; 11:1055. [PMID: 34439674 PMCID: PMC8392552 DOI: 10.3390/brainsci11081055] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 07/10/2021] [Accepted: 07/19/2021] [Indexed: 11/17/2022] Open
Abstract
Brain image segmentation is one of the most time-consuming and challenging procedures in a clinical environment. Recently, a drastic increase in the number of brain disorders has been noted. This has indirectly led to an increased demand for automated brain segmentation solutions to assist medical experts in early diagnosis and treatment interventions. This paper aims to present a critical review of the recent trend in segmentation and classification methods for brain magnetic resonance images. Various segmentation methods ranging from simple intensity-based to high-level segmentation approaches such as machine learning, metaheuristic, deep learning, and hybridization are included in the present review. Common issues, advantages, and disadvantages of brain image segmentation methods are also discussed to provide a better understanding of the strengths and limitations of existing methods. From this review, it is found that deep learning-based and hybrid-based metaheuristic approaches are more efficient for the reliable segmentation of brain tumors. However, these methods fall behind in terms of computation and memory complexity.
Collapse
Affiliation(s)
| | - Anusha Achuthan
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia; (A.F.); (B.B.)
| | | |
Collapse
|
35
|
MSDS-UNet: A multi-scale deeply supervised 3D U-Net for automatic segmentation of lung tumor in CT. Comput Med Imaging Graph 2021; 92:101957. [PMID: 34325225 DOI: 10.1016/j.compmedimag.2021.101957] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 03/05/2021] [Accepted: 07/08/2021] [Indexed: 11/20/2022]
Abstract
Lung cancer is one of the most common and deadly malignant cancers. Accurate lung tumor segmentation from CT is therefore very important for correct diagnosis and treatment planning. Automated lung tumor segmentation is challenging due to the high variance in appearance and shape of the target tumors. To overcome the challenge, we present an effective 3D U-Net equipped with a ResNet architecture and a two-pathway deep supervision mechanism to increase the network's capacity for learning richer representations of lung tumors from global and local perspectives. Extensive experiments were conducted on two real medical datasets: the lung CT dataset from Liaoning Cancer Hospital in China with 220 cases and the public TCIA dataset with 422 cases. Our experiments demonstrate that our model achieves an average dice score (0.675), sensitivity (0.731) and F1-score (0.682) on the dataset from Liaoning Cancer Hospital, and an average dice score (0.691), sensitivity (0.746) and F1-score (0.724) on the TCIA dataset, respectively. The results demonstrate that the proposed 3D MSDS-UNet outperforms the state-of-the-art segmentation models in segmenting tumors of all scales, especially small tumors. Moreover, we evaluated our proposed MSDS-UNet on another challenging volumetric medical image segmentation task, COVID-19 lung infection segmentation, which shows consistent improvement in segmentation performance.
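The deep supervision mechanism mentioned above amounts to attaching auxiliary prediction heads at coarser decoder levels and summing weighted losses; a minimal sketch follows, assuming cross-entropy losses, nearest-neighbour downsampling of the labels, and fixed per-head weights (all assumptions of this sketch, not details from the paper).

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(outputs, target, weights=(1.0, 0.5, 0.25)):
    """Combine losses from the main output and auxiliary decoder heads.

    outputs : list of logits [(N, C, D, H, W), ...] from fine to coarse.
    target  : (N, D, H, W) integer labels at full resolution; it is
              nearest-neighbour downsampled to match each auxiliary head.
    """
    total = 0.0
    for out, w in zip(outputs, weights):
        if out.shape[2:] != target.shape[1:]:
            scaled = F.interpolate(target[:, None].float(), size=out.shape[2:],
                                   mode="nearest")[:, 0].long()
        else:
            scaled = target
        total = total + w * F.cross_entropy(out, scaled)
    return total

# toy usage: full-resolution head plus two coarser auxiliary heads
outs = [torch.randn(1, 2, 16, 32, 32, requires_grad=True),
        torch.randn(1, 2, 8, 16, 16),
        torch.randn(1, 2, 4, 8, 8)]
tgt = torch.randint(0, 2, (1, 16, 32, 32))
loss = deep_supervision_loss(outs, tgt)
loss.backward()
print(float(loss))
```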
Collapse
|
36
|
Morphological active contour model for automatic brain tumor extraction from multimodal magnetic resonance images. J Neurosci Methods 2021; 362:109296. [PMID: 34302860 DOI: 10.1016/j.jneumeth.2021.109296] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2019] [Revised: 07/18/2021] [Accepted: 07/19/2021] [Indexed: 11/22/2022]
Abstract
BACKGROUND Brain tumor extraction from magnetic resonance (MR) images is challenging due to variations in the location, shape, size and intensity of tumors. Manual delineation of brain tumors from MR images is time-consuming and prone to human errors. METHOD In this paper, we present a method for automatic tumor extraction from multimodal MR images. Brain tumors are first detected using k-means clustering. A morphological region-based active contour model is then used for tumor extraction using an initial contour defined based on the boundary of the detected brain tumor regions. The contour evolution for tumor extraction was performed using successive application of morphological operators. In our model, a Gaussian distribution was used to model local image intensities. The spatial correlation between neighboring voxels was also modeled using Markov random field. RESULTS The proposed method was evaluated on BraTS 2013 dataset including patients with high-grade and low-grade tumors. In comparison with other active contour based methods, the proposed method yielded better performance on tumor segmentation with mean Dice similarity coefficients of 0.9179 ( ± 0.025) and 0.8910 ( ± 0.042) obtained on high-grade and low-grade tumors, respectively. CONCLUSION The proposed method achieved higher accuracies for brain tumor extraction in comparison to other contour-based methods.
Collapse
|
37
|
Fu X, Fang B, Zhou M, Kwong S. Active contour driven by adaptively weighted signed pressure force combined with Legendre polynomial for image segmentation. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2021.02.019] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
38
|
Chen H, Ban D, Qi XS, Pan X, Qiang Y, Yang Q. A Hybrid Feature Selection based Brain Tumor Detection and Segmentation in Multiparametric Magnetic Resonance Imaging. Med Phys 2021; 48:6614-6626. [PMID: 34089524 DOI: 10.1002/mp.15026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 03/29/2021] [Accepted: 05/24/2021] [Indexed: 11/10/2022] Open
Abstract
PURPOSE To develop a novel method based on feature selection, combining a convolutional neural network (CNN) and ensemble learning (EL), to achieve high accuracy and efficiency of glioma detection and segmentation using multiparametric MRIs. METHODS We proposed an evolutionary feature selection-based hybrid approach for glioma detection and segmentation on 4 MR sequences (T2-FLAIR, T1, T1Gd, and T2). First, we trained a lightweight CNN to detect glioma and mask the suspected region in order to process large batches of MRI images. Second, we employed a differential evolution algorithm to search a feature space, composed of 416-dimensional radiomics features extracted from the 4 MRI sequences and 128-dimensional high-order features extracted by the CNN, to generate an optimal feature combination for pixel classification. Finally, we trained an EL classifier using the optimal feature combination to segment the whole tumor (WT) and its subregions, including non-enhancing tumor (NET), peritumoral edema (ED), and enhancing tumor (ET), in the suspected region. Experiments were carried out on 300 glioma patients from the BraTS2019 dataset using 5-fold cross-validation, and the model was independently validated using the remaining 35 patients from the same database. RESULTS The approach achieved a detection accuracy of 98.8% using four MRIs. The Dice coefficients (and standard deviations) were 0.852±0.057, 0.844±0.046, and 0.799±0.053 for segmentation of the WT (NET+ET+ED), tumor core (NET+ET), and ET, respectively. The sensitivities were 0.873±0.074, 0.863±0.072, and 0.852±0.082, and the specificities were 0.994±0.005, 0.994±0.005, and 0.995±0.004 for the WT, tumor core, and ET, respectively. The performance and calculation times were compared with state-of-the-art approaches; our approach yielded a better overall performance with an average processing time of 139.5 sec per set of four sequence MRIs. CONCLUSIONS We demonstrated a robust and computationally cost-effective hybrid segmentation approach for glioma and its subregions on multi-sequence MR images. The proposed approach can be used for automated target delineation for glioma patients.
Collapse
Affiliation(s)
- Hao Chen
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China.,Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, University of Posts and Telecommunications, Xi'an, 710121, China
| | - Duo Ban
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China
| | - X Sharon Qi
- Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, 90095, United States
| | - Xiaoying Pan
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an 710121, China.,Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, University of Posts and Telecommunications, Xi'an, 710121, China.,First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
| | - Yongqian Qiang
- First Affiliated Hospital of Xi'an Jiaotong University, Xi'an 710061, China
| | - Qing Yang
- School of Sport and Health Sciences, Xi'an Physical Education University, Xi'an, 710068, China
| |
Collapse
|
39
|
Peng B, Liu B, Bin Y, Shen L, Lei J. Multi-Modality MR Image Synthesis via Confidence-Guided Aggregation and Cross-Modality Refinement. IEEE J Biomed Health Inform 2021; 26:27-35. [PMID: 34018939 DOI: 10.1109/jbhi.2021.3082541] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Magnetic resonance imaging (MRI) can provide multi-modality MR images by setting task-specific scan parameters and has been widely used in various disease diagnoses and treatment planning. However, in practical clinical applications, it is often difficult to obtain multi-modality MR images simultaneously due to patient discomfort, scanning costs, etc. Therefore, how to effectively utilize the existing modality images to synthesize the missing modality image has become a hot research topic. In this paper, we propose a novel confidence-guided aggregation and cross-modality refinement network (CACR-Net) for multi-modality MR image synthesis, which effectively utilizes the complementary and correlative information of multiple modalities to synthesize high-quality target-modality images. Specifically, to effectively utilize the complementary modality-specific characteristics, a confidence-guided aggregation module is proposed to adaptively aggregate the multiple target-modality images generated from multiple source-modality images by using the corresponding confidence maps. Based on the aggregated target-modality image, a cross-modality refinement module is presented to further refine the target-modality image by mining correlative information among the multiple source-modality images and the aggregated target-modality image. By training the proposed CACR-Net in an end-to-end manner, high-quality and sharp target-modality MR images are effectively synthesized. Experimental results on a widely used benchmark demonstrate that the proposed method outperforms state-of-the-art methods.
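The confidence-guided aggregation step can be sketched as a pixel-wise softmax over per-candidate confidence maps followed by a weighted sum of the candidate target-modality images; the tensor shapes and the use of softmax normalisation are assumptions of this sketch, not the published architecture.

```python
import torch

def confidence_guided_aggregation(candidates, confidences):
    """Aggregate several candidate target-modality images pixel-wise.

    candidates  : (K, N, 1, H, W) images synthesised from K source modalities.
    confidences : (K, N, 1, H, W) unnormalised confidence maps, one per
                  candidate; a pixel-wise softmax over K turns them into
                  aggregation weights.
    """
    weights = torch.softmax(confidences, dim=0)
    return (weights * candidates).sum(dim=0)

# toy usage: 3 source modalities, one 2D slice
cands = torch.rand(3, 1, 1, 64, 64)
confs = torch.randn(3, 1, 1, 64, 64)
fused = confidence_guided_aggregation(cands, confs)
print(fused.shape)   # torch.Size([1, 1, 64, 64])
```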
Collapse
|
40
|
Buchlak QD, Esmaili N, Leveque JC, Bennett C, Farrokhi F, Piccardi M. Machine learning applications to neuroimaging for glioma detection and classification: An artificial intelligence augmented systematic review. J Clin Neurosci 2021; 89:177-198. [PMID: 34119265 DOI: 10.1016/j.jocn.2021.04.043] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Accepted: 04/30/2021] [Indexed: 12/13/2022]
Abstract
Glioma is the most common primary intraparenchymal tumor of the brain, and the 5-year survival rate of high-grade glioma is poor. Magnetic resonance imaging (MRI) is essential for detecting, characterizing and monitoring brain tumors, but definitive diagnosis still relies on surgical pathology. Machine learning has been applied to the analysis of MRI data in glioma research and has the potential to change clinical practice and improve patient outcomes. This systematic review synthesizes and analyzes the current state of machine learning applications to glioma MRI data and explores the use of machine learning for systematic review automation. Various datapoints were extracted from the 153 studies that met inclusion criteria and analyzed. Natural language processing (NLP) analysis involved keyword extraction, topic modeling and document classification. Machine learning has been applied to tumor grading and diagnosis, tumor segmentation, non-invasive genomic biomarker identification, detection of progression and patient survival prediction. Model performance was generally strong (AUC = 0.87 ± 0.09; sensitivity = 0.87 ± 0.10; specificity = 0.86 ± 0.10; precision = 0.88 ± 0.11). Convolutional neural network, support vector machine and random forest algorithms were top performers. Deep learning document classifiers yielded acceptable performance (mean 5-fold cross-validation AUC = 0.71). Machine learning tools and data resources were synthesized and summarized to facilitate future research. Machine learning has been widely applied to the processing of MRI data in glioma research and has demonstrated substantial utility. NLP and transfer learning resources enabled the successful development of a replicable method for automating the systematic review article screening process, which has potential for shortening the time from discovery to clinical application in medicine.
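The review's automated article screening can be illustrated with a minimal text-classification sketch (TF-IDF features plus logistic regression, evaluated by cross-validation). The tiny corpus, labels, and model choice are illustrative only and do not reproduce the authors' deep learning pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# tiny illustrative corpus: titles labelled include (1) / exclude (0)
docs = [
    "machine learning for glioma mri segmentation",
    "deep learning glioma grading from multimodal mri",
    "random forest survival prediction in glioblastoma imaging",
    "surgical technique for meningioma resection",
    "case report of spinal cord injury rehabilitation",
    "anaesthesia protocol for paediatric dental surgery",
]
labels = [1, 1, 1, 0, 0, 0]

screener = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
# with a real corpus one would report cross-validated AUC, as in the review
scores = cross_val_score(screener, docs, labels, cv=3)
print(scores.mean())

screener.fit(docs, labels)
print(screener.predict(["cnn based brain tumor segmentation of mri scans"]))
```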
Collapse
Affiliation(s)
- Quinlan D Buchlak
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia.
| | - Nazanin Esmaili
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia; Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW, Australia
| | | | - Christine Bennett
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW, Australia
| | - Farrokh Farrokhi
- Neuroscience Institute, Virginia Mason Medical Center, Seattle, WA, USA
| | - Massimo Piccardi
- Faculty of Engineering and IT, University of Technology Sydney, Ultimo, NSW, Australia
| |
Collapse
|
41
|
Aswani K, Menaka D. A dual autoencoder and singular value decomposition based feature optimization for the segmentation of brain tumor from MRI images. BMC Med Imaging 2021; 21:82. [PMID: 33985449 PMCID: PMC8117624 DOI: 10.1186/s12880-021-00614-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Accepted: 04/28/2021] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND A brain tumor is a growth of abnormal cells inside the brain; these cells can grow into malignant or benign tumors. Segmentation of tumors from MRI images using image processing techniques started decades ago. Image processing based brain tumor segmentation can be divided into three categories: conventional image processing methods, machine learning methods, and deep learning methods. Conventional methods lack segmentation accuracy due to the complex spatial variation of tumors. Machine learning methods stand as a good alternative; methods like SVM, KNN, fuzzy clustering, and combinations of these provide good accuracy with reasonable processing speed, but the difficulty of handling the various feature extraction methods while maintaining accuracy to medical standards remains a limitation. In deep learning, features are extracted automatically in the various stages of the network while maintaining accuracy to medical standards, yet the huge database requirement and high computational time still pose a problem. To overcome these limitations, we propose an unsupervised dual autoencoder with latent space optimization. The model requires only normal MRI images for its training, thus reducing the huge tumor database requirement. With a set of normal-class data, an autoencoder can reproduce the feature vector at the output layer; this trained autoencoder works well with normal data but fails to reproduce an anomaly at the output layer. A classical autoencoder, however, suffers from poor latent space optimization. The latent space loss of the classical autoencoder is reduced using an auxiliary encoder along with feature optimization based on singular value decomposition (SVD). The patches used for training are not traditional square patches; both horizontal and vertical patches are taken to keep local and global appearance features in the training set, and an autoencoder is applied separately for learning each. During training, a logistic sigmoid transfer function is used for both the encoder and decoder parts; the SGD optimizer is used with an initial learning rate of 0.001 and a maximum of 4000 epochs. The network is trained in MATLAB 2018a on a 3.7 GHz processor with an NVIDIA GPU and 16 GB of RAM. RESULTS The results are obtained using patch sizes of 16 × 64 and 64 × 16 for horizontal and vertical patches, respectively. In glioma images, the tumor does not grow from a single point; rather, it spreads randomly. Region filling and connectivity operations are performed to obtain the final tumor segmentation. Overall, the method segments meningiomas better than gliomas. Three evaluation metrics are considered to measure the performance of the proposed system: Dice similarity coefficient, positive predictive value, and sensitivity. CONCLUSION An unsupervised method for the segmentation of brain tumors from MRI images is proposed. The proposed dual autoencoder with SVD-based feature optimization reduces the latent space loss of the classical autoencoder. The proposed method has advantages in computational efficiency, avoids the need for a huge tumor database, and achieves better accuracy than machine learning methods. The method is compared with machine learning methods such as SVM and KNN and supervised deep learning methods such as CNN, and comparable results are obtained.
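The core mechanism, training an autoencoder only on normal patches and flagging tumour tissue by its high reconstruction error, can be sketched as below. The auxiliary encoder and SVD-based latent optimization are omitted, and the synthetic data, Adam optimizer, and iteration count are assumptions chosen for brevity (the paper reports sigmoid activations, SGD with a 0.001 learning rate, and up to 4000 epochs).

```python
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Small dense autoencoder for flattened 16 x 64 image patches."""
    def __init__(self, patch_dim=16 * 64, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 256), nn.Sigmoid(),
            nn.Linear(256, latent_dim), nn.Sigmoid(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.Sigmoid(),
            nn.Linear(256, patch_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, patches):
    """Per-patch mean squared reconstruction error; unusually high values
    flag patches that deviate from the 'normal tissue' training data."""
    with torch.no_grad():
        return ((patches - model(patches)) ** 2).mean(dim=1)

# toy training on synthetic 'normal' (dark, low-contrast) patches
model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
normal = torch.rand(256, 16 * 64) * 0.3
for _ in range(300):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(normal), normal)
    loss.backward()
    optimizer.step()

bright = torch.rand(4, 16 * 64) * 0.3 + 0.7       # 'tumour-like' patches
print(reconstruction_error(model, normal[:4]))
print(reconstruction_error(model, bright))        # typically much larger
```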
Collapse
Affiliation(s)
- K Aswani
- Noorul Islam Centre for Higher Education, Kumrancoil, Tamil Nadu, India.
- Malappuram, India.
| | - D Menaka
- Department of Applied Electronics, Noorul Islam Center for Higher Education, Kumrancoil, Tamil Nadu, India
| |
Collapse
|
42
|
Computational Complexity Reduction of Neural Networks of Brain Tumor Image Segmentation by Introducing Fermi-Dirac Correction Functions. ENTROPY 2021; 23:e23020223. [PMID: 33670368 PMCID: PMC7918890 DOI: 10.3390/e23020223] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/29/2021] [Accepted: 02/07/2021] [Indexed: 11/16/2022]
Abstract
Nowadays, deep learning methods with high structural complexity and flexibility inevitably lean on the computational capability of the hardware. A platform with high-performance GPUs and large amounts of memory can support neural networks with large numbers of layers and kernels. However, naively pursuing high-cost hardware would probably hold back the technical development of deep learning methods. In this article, we thus establish a new preprocessing method to reduce the computational complexity of the neural networks. Inspired by the band theory of solids in physics, we map the image space isomorphically into a non-interacting physical system and then treat image voxels as particle-like clusters. Then, we reconstruct the Fermi-Dirac distribution as a correction function for the normalization of the voxel intensity and as a filter of insignificant cluster components. The filtered clusters can then delineate the morphological heterogeneity of the image voxels. We used the BraTS 2019 datasets and the dimensional fusion U-net for the algorithmic validation, and the proposed Fermi-Dirac correction function exhibited comparable performance to the other employed preprocessing methods. Compared to the conventional z-score normalization function and the Gamma correction function, the proposed algorithm can save at least 38% of the computational time cost under a low-cost hardware architecture. Even though the global histogram equalization correction function has the lowest computational time among the employed correction functions, the proposed Fermi-Dirac correction function exhibits better capabilities for image augmentation and segmentation.
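A Fermi-Dirac-style intensity correction can be sketched as a smooth step function applied to normalised voxel intensities, with a cutoff that discards insignificant components. The sign convention (bright voxels mapped towards 1), the median-based choice of μ, and the cutoff value are assumptions of this sketch, not the published formulation.

```python
import numpy as np

def fermi_dirac_correction(volume, mu=None, T=0.1, cutoff=0.05):
    """Fermi-Dirac-style intensity normalisation for an MRI volume or slice.

    Intensities are first scaled to [0, 1]; each voxel is then mapped through
    f(I) = 1 / (exp((mu - I) / T) + 1), a smooth step centred at `mu`
    (defaults to the median non-zero intensity). Voxels whose corrected
    value falls below `cutoff` are treated as insignificant and zeroed,
    acting as a filter of insignificant cluster components.
    """
    v = np.asarray(volume, dtype=float)
    v = (v - v.min()) / (v.max() - v.min() + 1e-8)
    if mu is None:
        mu = np.median(v[v > 0])
    corrected = 1.0 / (np.exp((mu - v) / T) + 1.0)
    corrected[corrected < cutoff] = 0.0
    return corrected

# toy usage on a synthetic slice with a bright 'lesion'
rng = np.random.default_rng(1)
slice_ = rng.normal(0.3, 0.05, size=(64, 64))
slice_[20:30, 20:30] += 0.5
out = fermi_dirac_correction(slice_)
print(out.min(), out.max())
```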
Collapse
|
43
|
Muhammad K, Khan S, Ser JD, Albuquerque VHCD. Deep Learning for Multigrade Brain Tumor Classification in Smart Healthcare Systems: A Prospective Survey. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2021; 32:507-522. [PMID: 32603291 DOI: 10.1109/tnnls.2020.2995800] [Citation(s) in RCA: 94] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Brain tumor is one of the most dangerous cancers in people of all ages, and its grade recognition is a challenging problem for radiologists in health monitoring and automated diagnosis. Recently, numerous methods based on deep learning have been presented in the literature for brain tumor classification (BTC) in order to assist radiologists for a better diagnostic analysis. In this overview, we present an in-depth review of the surveys published so far and recent deep learning-based methods for BTC. Our survey covers the main steps of deep learning-based BTC methods, including preprocessing, features extraction, and classification, along with their achievements and limitations. We also investigate the state-of-the-art convolutional neural network models for BTC by performing extensive experiments using transfer learning with and without data augmentation. Furthermore, this overview describes available benchmark data sets used for the evaluation of BTC. Finally, this survey does not only look into the past literature on the topic but also steps on it to delve into the future of this area and enumerates some research directions that should be followed in the future, especially for personalized and smart healthcare.
Collapse
|
44
|
Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021; 11:e042660. [PMID: 33514580 PMCID: PMC7849889 DOI: 10.1136/bmjopen-2020-042660] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/11/2020] [Revised: 01/09/2021] [Accepted: 01/12/2021] [Indexed: 12/11/2022] Open
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Collapse
Affiliation(s)
- Emilia Gryska
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
| | - Justin Schneiderman
- Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
| | | | - Rolf A Heckemann
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
| |
Collapse
|
45
|
A level set method based on domain transformation and bias correction for MRI brain tumor segmentation. J Neurosci Methods 2021; 352:109091. [PMID: 33515604 DOI: 10.1016/j.jneumeth.2021.109091] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Revised: 01/18/2021] [Accepted: 01/21/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Intensity inhomogeneity is one of the common artifacts in image processing. This artifact makes image segmentation more challenging and adversely affects the performance of intensity-based image processing algorithms. NEW METHOD In this paper, a novel region-based level set method is proposed for segmenting images with intensity inhomogeneity, with application to brain tumor segmentation in magnetic resonance imaging (MRI) scans. For this purpose, the inhomogeneous regions are first modeled as Gaussian distributions with different means and variances and then transferred into a new domain, which preserves the Gaussian intensity distribution of each region but with better separation. Moreover, our method can perform bias field correction. To this end, the bias field is represented by a linear combination of smooth basis functions, which enables better intensity inhomogeneity modeling. The fundamental level set formulation and the bias field are therefore modified in the proposed approach. RESULTS To assess the performance of the proposed method, different inhomogeneous images, including synthetic images as well as real brain magnetic resonance images from the BraTS 2017 dataset, were segmented. Evaluated by the Dice, Jaccard, Sensitivity, and Specificity metrics, the results show that the proposed method suppresses the side effect of over-smoothing the object boundary and has good accuracy in the segmentation of images with extreme intensity non-uniformity. The mean values of these metrics in brain tumor segmentation are 0.86 ± 0.03, 0.77 ± 0.05, 0.94 ± 0.04, and 0.99 ± 0.003, respectively. COMPARISON WITH EXISTING METHOD(S) Our method was compared with six state-of-the-art image segmentation methods: the Chan-Vese (CV), Local Intensity Clustering (LIC), Local iNtensity Clustering (LINC), Global inhomogeneous intensity clustering (GINC), Multiplicative Intrinsic Component Optimization (MICO), and Local Statistical Active Contour Model (LSACM) models. We used qualitative and quantitative comparisons for segmenting synthetic and real images. Experiments indicate that our proposed method is robust to noise and intensity non-uniformity and outperforms the other state-of-the-art segmentation methods in terms of bias field correction, noise resistance, and segmentation accuracy. CONCLUSIONS Experimental results show that the proposed model is capable of accurate segmentation and bias field estimation simultaneously, suppresses the side effect of over-smoothing the object boundary, and has good accuracy in the segmentation of images with extreme intensity non-uniformity.
Collapse
|
46
|
Kadry S, Rajinikanth V, Raja NSM, Jude Hemanth D, Hannon NMS, Raj ANJ. Evaluation of brain tumor using brain MRI with modified-moth-flame algorithm and Kapur’s thresholding: a study. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-020-00539-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
|
47
|
Bennai MT, Guessoum Z, Mazouzi S, Cormier S, Mezghiche M. A stochastic multi-agent approach for medical-image segmentation: Application to tumor segmentation in brain MR images. Artif Intell Med 2020; 110:101980. [DOI: 10.1016/j.artmed.2020.101980] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Revised: 07/04/2020] [Accepted: 10/25/2020] [Indexed: 10/23/2022]
|
48
|
Zhou T, Fu H, Chen G, Shen J, Shao L. Hi-Net: Hybrid-Fusion Network for Multi-Modal MR Image Synthesis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2772-2781. [PMID: 32086202 DOI: 10.1109/tmi.2020.2975344] [Citation(s) in RCA: 106] [Impact Index Per Article: 21.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Magnetic resonance imaging (MRI) is a widely used neuroimaging technique that can provide images of different contrasts (i.e., modalities). Fusing this multi-modal data has proven particularly effective for boosting model performance in many tasks. However, due to poor data quality and frequent patient dropout, collecting all modalities for every patient remains a challenge. Medical image synthesis has been proposed as an effective solution, whereby any missing modalities are synthesized from the existing ones. In this paper, we propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis, which learns a mapping from multi-modal source images (i.e., existing modalities) to target images (i.e., missing modalities). In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality, and a fusion network is employed to learn the common latent representation of the multi-modal data. Then, a multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality, acting as a generator to synthesize the target images. Moreover, a layer-wise multi-modal fusion strategy effectively exploits the correlations among multiple modalities, where a Mixed Fusion Block (MFB) is proposed to adaptively weight different fusion strategies. Extensive experiments demonstrate that the proposed model outperforms other state-of-the-art medical image synthesis methods.
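The hybrid-fusion idea, modality-specific encoders whose latent features are fused and decoded into the missing modality, can be sketched as follows. The tiny two-encoder network, concatenation-plus-1x1-convolution fusion, and L1 reconstruction loss are stand-ins chosen for brevity; Hi-Net's actual layer-wise fusion strategy, Mixed Fusion Block, and adversarial training are not reproduced here.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoToOneSynthesis(nn.Module):
    """Toy hybrid-fusion synthesiser: two source modalities -> one target.

    Each source modality gets its own encoder; the latent features are fused
    (here by concatenation + 1x1 conv) and decoded into the missing modality.
    """
    def __init__(self, base=16):
        super().__init__()
        self.enc_a = nn.Sequential(conv_block(1, base), conv_block(base, base))
        self.enc_b = nn.Sequential(conv_block(1, base), conv_block(base, base))
        self.fuse = nn.Conv2d(2 * base, base, kernel_size=1)
        self.decoder = nn.Sequential(
            conv_block(base, base),
            nn.Conv2d(base, 1, kernel_size=1), nn.Tanh(),
        )

    def forward(self, mod_a, mod_b):
        fused = self.fuse(torch.cat([self.enc_a(mod_a), self.enc_b(mod_b)], dim=1))
        return self.decoder(fused)

# toy usage: synthesise a missing T2 slice from T1 and FLAIR slices
net = TwoToOneSynthesis()
t1, flair = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
fake_t2 = net(t1, flair)
loss = nn.functional.l1_loss(fake_t2, torch.randn(1, 1, 64, 64))  # vs. real T2
loss.backward()
print(fake_t2.shape)   # torch.Size([1, 1, 64, 64])
```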
Collapse
|
49
|
ZHANG CHONG, SHEN XUANJING, CHEN HAIPENG. BRAIN TUMOR SEGMENTATION BASED ON SUPERPIXELS AND HYBRID CLUSTERING WITH FAST GUIDED FILTER. J MECH MED BIOL 2020. [DOI: 10.1142/s0219519420500323] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Brain tumor segmentation from magnetic resonance (MR) images is vital for both the diagnosis and treatment of brain cancers. To alleviate noise sensitivity and improve the stability of segmentation, an effective hybrid clustering algorithm combined with a fast guided filter is proposed for brain tumor segmentation in this paper. Preprocessing is performed using adaptive Wiener filtering combined with a fast guided filter. Then simple linear iterative clustering (SLIC) is utilized for pre-segmentation to effectively remove scatter. During the clustering, the K-means and Gaussian kernel-based fuzzy C-means (GKFCM) clustering algorithms are combined for segmentation, and the fast guided filter is introduced into the clustering. The proposed algorithm not only improves the robustness of the algorithm to noise, but also improves the stability of the segmentation. In addition, the proposed algorithm is compared with other current segmentation algorithms. The results show that the proposed algorithm performs better in terms of accuracy, sensitivity, specificity, and recall.
Collapse
Affiliation(s)
- CHONG ZHANG
- College of Software, Jilin University, Changchun, P. R. China
- College of Computer Science and Technology, Jilin University, Changchun, P. R. China
| | - XUANJING SHEN
- College of Computer Science and Technology, Jilin University, Changchun, P. R. China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, P. R. China
| | - HAIPENG CHEN
- College of Computer Science and Technology, Jilin University, Changchun, P. R. China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, P. R. China
| |
Collapse
|
50
|
3D-MRI Brain Tumor Detection Model Using Modified Version of Level Set Segmentation Based on Dragonfly Algorithm. Symmetry (Basel) 2020. [DOI: 10.3390/sym12081256] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
Accurate brain tumor segmentation from 3D Magnetic Resonance Imaging (3D-MRI) is an important method for obtaining information required for diagnosis and disease therapy planning. Variation in the brain tumor’s size, structure, and form is one of the main challenges in tumor segmentation, and selecting the initial contour plays a significant role in reducing the segmentation error and the number of iterations in the level set method. To overcome this issue, this paper suggests a two-step dragonfly algorithm (DA) clustering technique to extract initial contour points accurately. The brain is extracted from the head in the preprocessing step, then tumor edges are extracted using the two-step DA, and these extracted edges are used as an initial contour for the MRI sequence. Lastly, the tumor region is extracted from all volume slices using a level set segmentation method. The results of applying the proposed technique on 3D-MRI images from the multimodal brain tumor segmentation challenge (BRATS) 2017 dataset show that the proposed method for brain tumor segmentation is comparable to the state-of-the-art methods.
Collapse
|