1. Dong J, Zhang G, Hu Y, Wu Y, Rong H. An Optimization Numerical Spiking Neural Membrane System with Adaptive Multi-Mutation Operators for Brain Tumor Segmentation. Int J Neural Syst 2024:2450036. [PMID: 38686911] [DOI: 10.1142/s0129065724500369] [Indexed: 05/02/2024]
Abstract
Magnetic Resonance Imaging (MRI) is an important diagnostic technique for brain tumors due to its ability to generate images without tissue damage or skull artifacts, so MRI images are widely used for brain tumor segmentation. This paper is the first attempt to discuss the use of optimization spiking neural P systems to improve the threshold segmentation of brain tumor images. Specifically, a threshold segmentation approach based on an optimization numerical spiking neural P system with adaptive multi-mutation operators (ONSNPSamo) is proposed to segment brain tumor images. The multi-mutation strategy of the ONSNPSamo balances its exploration and exploitation abilities, and an approach combining the ONSNPSamo with connectivity algorithms is proposed to address the brain tumor segmentation problem. Experimental results on the CEC 2017 benchmarks (basic, shifted and rotated, hybrid, and composition function optimization problems) demonstrate that the ONSNPSamo outperforms or is comparable to 12 optimization algorithms. Furthermore, case studies on BraTS 2019 show that the approach combining the ONSNPSamo with connectivity algorithms segments brain tumor images more effectively than most of the algorithms compared.
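The threshold-plus-connectivity pipeline the abstract describes can be sketched with classical stand-ins: Otsu thresholding in place of the paper's ONSNPSamo optimizer, followed by connected-component filtering. This is only an illustrative sketch of the two-stage idea, not the authors' method; the function names are ours.

```python
import numpy as np
from collections import deque

def otsu_threshold(img):
    """Pick the gray level that maximizes between-class variance (Otsu)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def largest_component(mask):
    """Keep only the largest 4-connected foreground component (BFS labeling)."""
    labels = np.zeros(mask.shape, dtype=int)
    best, cur = [], 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        cur += 1
        labels[sy, sx] = cur
        comp, q = [], deque([(sy, sx)])
        while q:
            y, x = q.popleft()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    q.append((ny, nx))
        if len(comp) > len(best):
            best = comp
    out = np.zeros_like(mask)
    for y, x in best:
        out[y, x] = True
    return out
```

On a toy image with two bright blobs, thresholding keeps both and the connectivity step retains only the larger one — the same role connectivity plays in pruning spurious tumor candidates.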
Affiliation(s)
- Jianping Dong
- School of Automation, Chengdu University of Information Technology, Chengdu 610225, China
- Gexiang Zhang
- School of Automation, Chengdu University of Information Technology, Chengdu 610225, China
- Yangheng Hu
- School of Automation, Chengdu University of Information Technology, Chengdu 610225, China
- Yijin Wu
- School of Automation, Chengdu University of Information Technology, Chengdu 610225, China
- Haina Rong
- School of Electrical Engineering, Southwest Jiaotong University, Chengdu 611756, China
2. Batool A, Byun YC. Brain tumor detection with integrating traditional and computational intelligence approaches across diverse imaging modalities - Challenges and future directions. Comput Biol Med 2024; 175:108412. [PMID: 38691914] [DOI: 10.1016/j.compbiomed.2024.108412] [Received: 10/19/2023] [Revised: 03/18/2024] [Accepted: 04/02/2024] [Indexed: 05/03/2024]
Abstract
Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors, and accurate, efficient methods for identifying tumor regions and classifying tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning; it discusses their advantages and limitations and highlights recent advancements in the field. The impact of existing segmentation and classification techniques for automated brain tumor detection is critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. This study also highlights the challenges posed by segmentation and classification techniques and by datasets with various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results contribute to the development of automated, robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.
Affiliation(s)
- Amreen Batool
- Department of Electronic Engineering, Institute of Information Science & Technology, Jeju National University, Jeju, 63243, South Korea
- Yung-Cheol Byun
- Department of Computer Engineering, Major of Electronic Engineering, Jeju National University, Institute of Information Science Technology, Jeju, 63243, South Korea
3. Zhu Z, Sun M, Qi G, Li Y, Gao X, Liu Y. Sparse Dynamic Volume TransUNet with multi-level edge fusion for brain tumor segmentation. Comput Biol Med 2024; 172:108284. [PMID: 38503086] [DOI: 10.1016/j.compbiomed.2024.108284] [Received: 01/17/2024] [Revised: 02/19/2024] [Accepted: 03/12/2024] [Indexed: 03/21/2024]
Abstract
3D MRI brain tumor segmentation is of great significance in clinical diagnosis and treatment, and accurate segmentation results are critical for localizing brain tumors and mapping their spatial distribution in 3D MRI. However, most existing methods mainly extract global semantic features from the spatial and depth dimensions of a 3D volume while ignoring voxel information, inter-layer connections, and detailed features. SDV-TUNet (Sparse Dynamic Volume TransUNet), a 3D brain tumor segmentation network based on an encoder-decoder architecture, is proposed to achieve accurate segmentation by effectively combining voxel information, inter-layer feature connections, and intra-axis information. Volumetric data are fed into a 3D network that performs extended depth modeling for dense prediction using two modules: a sparse dynamic (SD) encoder-decoder module and a multi-level edge feature fusion (MEFF) module. The SD encoder-decoder module extracts global spatial semantic features for brain tumor segmentation, employing multi-head self-attention and sparse dynamic adaptive fusion in a 3D extended shifted-window strategy. In the encoding stage, dynamic perception of regional connections and multi-axis information interactions are realized through local tight correlations and long-range sparse correlations. The MEFF module fuses multi-level local edge information in a layer-by-layer incremental manner and connects the fused features to the decoder through skip connections to enhance the propagation of spatial edge information. The proposed method is applied to the BraTS2020 and BraTS2021 benchmarks, and the experimental results show its superior performance compared with state-of-the-art brain tumor segmentation methods. The source code of the proposed method is available at https://github.com/SunMengw/SDV-TUNet.
Affiliation(s)
- Zhiqin Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Mengwei Sun
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Guanqiu Qi
- Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA.
- Yuanyuan Li
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Xinbo Gao
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China.
- Yu Liu
- Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China.
4. Kim DD, Chandra RS, Yang L, Wu J, Feng X, Atalay M, Bettegowda C, Jones C, Sair H, Liao WH, Zhu C, Zou B, Kazerooni AF, Nabavizadeh A, Jiao Z, Peng J, Bai HX. Active Learning in Brain Tumor Segmentation with Uncertainty Sampling and Annotation Redundancy Restriction. J Imaging Inform Med 2024:10.1007/s10278-024-01037-6. [PMID: 38514595] [DOI: 10.1007/s10278-024-01037-6] [Received: 10/29/2023] [Revised: 01/30/2024] [Accepted: 02/01/2024] [Indexed: 03/23/2024]
Abstract
Deep learning models have demonstrated great potential in medical imaging but are limited by the large volume of expensive annotations they require. To address this, we compared different active learning strategies by training models on subsets of the most informative images from real-world clinical brain tumor segmentation datasets, and we propose a framework that minimizes the data needed while maintaining performance. A total of 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques, including Bayesian estimation with dropout, bootstrapping, and margin sampling, were compared to random query. Strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing achieved performance equivalent to that of the full-data model (target) with around 30% of the training data needed by random query to reach target performance (p = 0.018). Annotation redundancy restriction techniques can reduce the training data needed by random query to achieve target performance by 20%. In summary, we investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation; dropout uncertainty estimation achieved target performance with the least annotated data.
Affiliation(s)
- Daniel D Kim
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Rajat S Chandra
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
- Li Yang
- Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
- Clinical Medical Research Center for Stroke Prevention and Treatment of Hunan Province, Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
- Jing Wu
- Department of Radiology, Second Xiangya Hospital, Central South University, Changsha, China
- Xue Feng
- Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
- Michael Atalay
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Chetan Bettegowda
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Craig Jones
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Haris Sair
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
- Wei-Hua Liao
- Department of Radiology, Xiangya Hospital, Central South University, Changsha, China
- Chengzhang Zhu
- College of Literature and Journalism, Central South University, Changsha, China
- Beiji Zou
- School of Computer Science and Engineering, Central South University, Changsha, China
- Anahita Fathi Kazerooni
- Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ali Nabavizadeh
- Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Zhicheng Jiao
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
- Jian Peng
- Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
- Clinical Medical Research Center for Stroke Prevention and Treatment of Hunan Province, Department of Neurology, Second Xiangya Hospital, Central South University, Changsha, China
- Harrison X Bai
- Department of Neurosurgery, Johns Hopkins University, Baltimore, MD, USA
5. Zhang D, Wang C, Chen T, Chen W, Shen Y. Scalable Swin Transformer network for brain tumor segmentation from incomplete MRI modalities. Artif Intell Med 2024; 149:102788. [PMID: 38462288] [DOI: 10.1016/j.artmed.2024.102788] [Received: 06/12/2023] [Revised: 12/19/2023] [Accepted: 01/25/2024] [Indexed: 03/12/2024]
Abstract
BACKGROUND Deep learning methods have shown great potential in processing multi-modal Magnetic Resonance Imaging (MRI) data, enabling improved accuracy in brain tumor segmentation. However, the performance of these methods can suffer when dealing with incomplete modalities, a common issue in clinical practice. Existing solutions, such as missing-modality synthesis, knowledge distillation, and architecture-based methods, suffer from drawbacks such as long training times, high model complexity, and poor scalability. METHOD This paper proposes IMS2Trans, a novel lightweight and scalable Swin Transformer network that utilizes a single encoder to extract latent feature maps from all available modalities. This unified feature extraction process enables efficient information sharing and fusion among the modalities, yielding efficiency without compromising segmentation performance even in the presence of missing modalities. RESULTS Two datasets containing incomplete modalities for brain tumor segmentation, BraTS 2018 and BraTS 2020, were evaluated against popular benchmarks. On the BraTS 2018 dataset, our model achieved higher average Dice similarity coefficient (DSC) scores for the whole tumor, tumor core, and enhancing tumor regions (86.57, 75.67, and 58.28, respectively) than a state-of-the-art model, mmFormer (86.45, 75.51, and 57.79, respectively). Similarly, on the BraTS 2020 dataset, our model scored higher DSCs in these three regions (87.33, 79.09, and 62.11, respectively) compared to mmFormer (86.17, 78.34, and 60.36, respectively). We also conducted a Wilcoxon test on the experimental results, and the resulting p-value confirmed that the improvement was statistically significant. Moreover, our model exhibits significantly reduced complexity with only 4.47 M parameters, 121.89 G FLOPs, and a model size of 77.13 MB, whereas mmFormer comprises 34.96 M parameters, 265.79 G FLOPs, and a model size of 559.74 MB. These results indicate that our model, despite its significantly reduced parameter count, still achieves better performance than a state-of-the-art model. CONCLUSION By leveraging a single encoder to process the available modalities, IMS2Trans offers notable scalability advantages over methods that rely on multiple encoders. This streamlined approach eliminates the need to maintain separate encoders for each modality, resulting in a lightweight and scalable network architecture. The source code of IMS2Trans and the associated weights are publicly available at https://github.com/hudscomdz/IMS2Trans.
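The Dice similarity coefficient (DSC) reported throughout these entries is straightforward to compute for binary masks; a minimal sketch (our own helper, with a small epsilon to guard the empty-mask case):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2|P ∩ T| / (|P| + |T|), in [0, 1]."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score ~0; the per-region DSCs above (whole tumor, tumor core, enhancing tumor) are this quantity averaged over cases.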
Affiliation(s)
- Dongsong Zhang
- School of Big Data and Artificial Intelligence, Xinyang College, Xinyang, 464000, Henan, China; School of Computing and Engineering, University of Huddersfield, Huddersfield, HD13DH, UK
- Changjian Wang
- National Key Laboratory of Parallel and Distributed Computing, Changsha, 410073, Hunan, China
- Tianhua Chen
- School of Computing and Engineering, University of Huddersfield, Huddersfield, HD13DH, UK
- Weidao Chen
- Beijing Infervision Technology Co., Ltd., Beijing, 100020, China
- Yiqing Shen
- Department of Computer Science, Johns Hopkins University, Baltimore, 21218, MD, USA.
6. Zhang W, Chen S, Ma Y, Liu Y, Cao X. ETUNet: Exploring efficient transformer enhanced UNet for 3D brain tumor segmentation. Comput Biol Med 2024; 171:108005. [PMID: 38340437] [DOI: 10.1016/j.compbiomed.2024.108005] [Received: 08/26/2023] [Revised: 01/03/2024] [Accepted: 01/13/2024] [Indexed: 02/12/2024]
Abstract
Medical image segmentation is a crucial topic in medical image processing, and accurately segmenting brain tumor regions from multimodal MRI scans is essential for clinical diagnosis and survival prediction. However, similar intensity distributions, variable tumor shapes, and fuzzy boundaries pose severe challenges for brain tumor segmentation. Traditional segmentation networks based on UNet struggle to establish explicit long-range dependencies in the feature space due to the limited receptive field of CNNs, which is particularly crucial for dense prediction tasks such as brain tumor segmentation. Recent works have incorporated the powerful global modeling capability of the Transformer into UNet to achieve more precise segmentation results. Nevertheless, these methods encounter two issues: (1) global information is often modeled by simply stacking Transformer layers in a specific module, resulting in high computational complexity and underutilization of the potential of the UNet architecture; (2) the rich boundary information of tumor subregions in multi-scale features is often overlooked. Motivated by these challenges, we propose an advanced fusion of the Transformer with UNet by reexamining its three core parts (encoder, bottleneck, and skip connections). First, we introduce a CNN-Transformer module in the encoder to replace the traditional CNN module, enabling the capture of deep spatial dependencies from input images. To address high-level semantic information, we incorporate a computationally efficient spatial-channel attention layer in the bottleneck for global interaction, highlighting important semantic features from the encoder path output. For irregular lesions, we fuse the multi-scale features from the encoder output with the decoder features in the skip connections by calculating cross-attention. This adaptive querying of valuable information from multi-scale features enhances the boundary localization ability of the decoder path and suppresses redundant features with low correlation. Compared to existing methods, our model further enhances the learning capacity of the overall UNet architecture while maintaining low computational complexity. Experimental results on the BraTS2018 and BraTS2020 brain tumor segmentation datasets demonstrate that our model achieves results comparable or superior to recent CNN- and Transformer-based models, with average DSC and HD95 of 0.854 and 6.688 on BraTS2018 and 0.862 and 5.455 on BraTS2020, respectively. At the same time, our model achieves the best segmentation of enhancing tumors, showcasing the effectiveness of our method. Our code will be made publicly available at https://github.com/wzhangck/ETUnet.
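The cross-attention used in the skip connections above is, at its core, scaled dot-product attention where decoder features act as queries over encoder multi-scale features. A minimal single-head sketch (our own simplification, without the learned projection matrices a real layer would have):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: each decoder token (query) forms a
    weighted combination of encoder tokens (values), with weights from
    query-key similarity."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (Nq, Nk) similarity matrix
    weights = softmax(scores, axis=-1)       # each query's weights sum to 1
    return weights @ values, weights
```

Each query row thus "adaptively queries" the multi-scale features: tokens with higher similarity contribute more to the fused output.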
Affiliation(s)
- Wang Zhang
- School of Computer and Information Science, SouthWest University, China.
- Shanxiong Chen
- School of Computer and Information Science, SouthWest University, China.
- Yuqi Ma
- School of Computer and Information Science, SouthWest University, China.
- Yu Liu
- School of Electronic Information and Electrical Engineering, TianShui Normal University, China.
- Xu Cao
- Department of Radiology, Shifang People's Hospital, China.
7. Khalil YA, Ayaz A, Lorenz C, Weese J, Pluim J, Breeuwer M. Multi-modal brain tumor segmentation via conditional synthesis with Fourier domain adaptation. Comput Med Imaging Graph 2024; 112:102332. [PMID: 38245925] [DOI: 10.1016/j.compmedimag.2024.102332] [Received: 07/04/2023] [Revised: 10/31/2023] [Accepted: 12/13/2023] [Indexed: 01/23/2024]
Abstract
Accurate brain tumor segmentation is critical for diagnosis and treatment planning, for which multi-modal magnetic resonance imaging (MRI) is typically used. However, obtaining all required sequences and expertly labeled data for training is challenging and can result in decreased quality of segmentation models developed through automated algorithms. In this work, we examine the possibility of employing a conditional generative adversarial network (GAN) approach for synthesizing multi-modal images to train deep learning-based neural networks aimed at high-grade glioma (HGG) segmentation. The proposed GAN is conditioned on auxiliary brain tissue and tumor segmentation masks, allowing us to attain better accuracy and control of tissue appearance during synthesis. To reduce the domain shift between synthetic and real MR images, we additionally adapt the low-frequency Fourier space components of synthetic data, reflecting the style of the image, to those of real data. We demonstrate the impact of Fourier domain adaptation (FDA) on the training of 3D segmentation networks and attain significant improvements in both the segmentation performance and prediction confidence. Similar outcomes are seen when such data is used as a training augmentation alongside the available real images. In fact, experiments on the BraTS2020 dataset reveal that models trained solely with synthetic data exhibit an improvement of up to 4% in Dice score when using FDA, while training with both real and FDA-processed synthetic data through augmentation results in an improvement of up to 5% in Dice compared to using real data alone. This study highlights the importance of considering image frequency in generative approaches for medical image synthesis and offers a promising approach to address data scarcity in medical imaging segmentation.
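The core FDA operation — replacing the low-frequency amplitude spectrum of a source (synthetic) image with that of a target (real) image while keeping the source phase — can be sketched in 2D with NumPy. This is a generic illustration of the technique, not the paper's 3D implementation; `beta` (our name) controls the size of the swapped low-frequency window.

```python
import numpy as np

def fourier_domain_adaptation(src, tgt, beta=0.05):
    """Swap the low-frequency amplitude of `src` with that of `tgt`,
    keeping `src`'s phase (image content) intact."""
    fft_src = np.fft.fft2(src)
    fft_tgt = np.fft.fft2(tgt)
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)
    # centre the spectra so low frequencies sit in the middle
    amp_src = np.fft.fftshift(amp_src)
    amp_tgt = np.fft.fftshift(amp_tgt)
    h, w = src.shape
    b = int(min(h, w) * beta)           # half-width of the swapped window
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]
    amp_src = np.fft.ifftshift(amp_src)
    # recombine swapped amplitude with the original phase
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src))
    return np.real(out)
```

With `beta=0` the image passes through unchanged; a larger `beta` transfers more of the target's "style" (global intensity statistics) onto the source image.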
Affiliation(s)
- Yasmina Al Khalil
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Aymen Ayaz
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Jürgen Weese
- Philips Research Laboratories, Hamburg, Germany.
- Josien Pluim
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Marcel Breeuwer
- Biomedical Engineering Department, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Healthcare, Best, The Netherlands.
8. Li P, Li Z, Wang Z, Li C, Wang M. mResU-Net: multi-scale residual U-Net-based brain tumor segmentation from multimodal MRI. Med Biol Eng Comput 2024; 62:641-651. [PMID: 37981627] [DOI: 10.1007/s11517-023-02965-1] [Received: 06/03/2023] [Accepted: 11/01/2023] [Indexed: 11/21/2023]
Abstract
Brain tumor segmentation is an important direction in medical image processing, and its main goal is to accurately mark the tumor region in brain MRI. This study proposes a new end-to-end model for brain tumor segmentation: a multi-scale deep residual convolutional neural network called mResU-Net. The semantic gap between the encoder and decoder is bridged by the skip connections of the U-Net structure, and the residual structure alleviates the vanishing-gradient problem during training, ensuring sufficient information flow in deep networks. On this basis, multi-scale convolution kernels are used to improve the segmentation accuracy for targets of different sizes, and channel attention modules are integrated into the network to further improve accuracy. The proposed model achieves average Dice scores of 0.9289, 0.9277, and 0.8965 for the tumor core (TC), whole tumor (WT), and enhancing tumor (ET) on the BraTS 2021 dataset, respectively. Comparing these segmentation results with existing techniques shows that mResU-Net can significantly improve the segmentation performance of brain tumor subregions.
Affiliation(s)
- Pengcheng Li
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, Heilongjiang, 150000, China.
- Zhihao Li
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, Heilongjiang, 150000, China
- Zijian Wang
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, Heilongjiang, 150000, China
- Chaoxiang Li
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, Heilongjiang, 150000, China
- Monan Wang
- School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, Heilongjiang, 150000, China
9. Ghazouani F, Vera P, Ruan S. Efficient brain tumor segmentation using Swin transformer and enhanced local self-attention. Int J Comput Assist Radiol Surg 2024; 19:273-281. [PMID: 37796413] [DOI: 10.1007/s11548-023-03024-8] [Received: 01/12/2023] [Accepted: 09/12/2023] [Indexed: 10/06/2023]
Abstract
PURPOSE Fully convolutional neural network architectures have proven useful for brain tumor segmentation tasks, but their ability to learn long-range dependencies is limited by their localized receptive fields. Vision transformers (ViTs), by contrast, are built on a multi-head self-attention mechanism that generates attention maps to aggregate spatial information dynamically, and they have outperformed convolutional neural networks (CNNs). Inspired by the recent success of ViT models for medical image segmentation, we propose in this paper a new network based on the Swin transformer for semantic brain tumor segmentation. METHODS The proposed method combines Transformer and CNN modules in an encoder-decoder structure. The encoder incorporates ELSA transformer blocks to enhance the extraction of local detailed features; the extracted feature representations are fed to the decoder via skip connections. The encoder also includes channel squeeze and spatial excitation blocks, which make the extracted features more informative both spatially and channel-wise. RESULTS The method is evaluated on the public BraTS 2021 dataset containing 1251 cases of brain images, each with four 3D MRI modalities. Our proposed approach achieved excellent segmentation results, with an average Dice score of 89.77% and an average Hausdorff distance of 8.90 mm. CONCLUSION We developed an automated framework for brain tumor segmentation using the Swin transformer and enhanced local self-attention. Experimental results show that our method outperforms state-of-the-art 3D algorithms for brain tumor segmentation.
Affiliation(s)
- Fethi Ghazouani
- Department of Nuclear Medicine, Henri Becquerel Center, 76038, Rouen, France.
- LITIS-QuantIF Laboratory, University of Rouen Normandy, 76183, Rouen, France.
- Pierre Vera
- Department of Nuclear Medicine, Henri Becquerel Center, 76038, Rouen, France
- LITIS-QuantIF Laboratory, University of Rouen Normandy, 76183, Rouen, France
- Su Ruan
- LITIS-QuantIF Laboratory, University of Rouen Normandy, 76183, Rouen, France
10. Liu H, Huang J, Li Q, Guan X, Tseng M. A deep convolutional neural network for the automatic segmentation of glioblastoma brain tumor: Joint spatial pyramid module and attention mechanism network. Artif Intell Med 2024; 148:102776. [PMID: 38325925] [DOI: 10.1016/j.artmed.2024.102776] [Received: 03/10/2023] [Revised: 12/20/2023] [Accepted: 01/14/2024] [Indexed: 02/09/2024]
Abstract
This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing manual segmentation, which is both time-consuming and labor-intensive. Finely segmenting sub-regions from multi-sequence magnetic resonance images is challenging for automatic methods because of the complexity and variability of glioblastomas, with difficulties such as loss of boundary information, misclassified regions, and variable subregion sizes. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the segmentation algorithm, focusing on multi-scale spatial details and context information. The proposed method has been tested on the public BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 benchmark datasets. The Dice scores on the enhancing tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on BraTS 2018; 77.14%, 89.58%, and 83.33% on BraTS 2019; 77.80%, 90.04%, and 83.18% on BraTS 2020; and 83.48%, 90.70%, and 88.94% on BraTS 2021, respectively, offering performance on par with state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduces the requirements for experimental equipment, and the average time taken to segment one case is only 1.48 s; these two benefits render the proposed network highly competitive for clinical practice.
Affiliation(s)
- Hengxin Liu
- School of Microelectronics, Tianjin University, Tianjin, China
- Jingteng Huang
- School of Microelectronics, Tianjin University, Tianjin, China
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin, China
- Xin Guan
- School of Microelectronics, Tianjin University, Tianjin, China
- Minglang Tseng
- Institute of Innovation and Circular Economy, Asia University, Taichung, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan; UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, 43000 Bangi, Selangor, Malaysia; Department of Industrial Engineering, Khon Kaen University, 40002, Thailand
11. Deng Z, Huang G, Yuan X, Zhong G, Lin T, Pun CM, Huang Z, Liang Z. QMLS: quaternion mutual learning strategy for multi-modal brain tumor segmentation. Phys Med Biol 2023; 69:015014. [PMID: 38061066] [DOI: 10.1088/1361-6560/ad135e] [Received: 08/28/2023] [Accepted: 12/07/2023] [Indexed: 12/27/2023]
Abstract
Objective. Due to non-invasive imaging and the multimodality of magnetic resonance imaging (MRI) images, MRI-based multi-modal brain tumor segmentation (MBTS) studies have attracted increasing attention in recent years. With the great success of convolutional neural networks in various computer vision tasks, many MBTS models have been proposed to address the technical challenges of MBTS. However, limited data collection is a common problem in MBTS tasks, so existing studies typically have difficulty fully exploring the multi-modal MRI images to mine complementary information among different modalities. Approach. We propose a novel quaternion mutual learning strategy (QMLS), which consists of a voxel-wise lesion knowledge mutual learning mechanism (VLKML mechanism) and a quaternion multi-modal feature learning module (QMFL module). Specifically, the VLKML mechanism allows the networks to converge to a robust minimum so that aggressive data augmentation techniques can be applied to fully expand the limited data. In particular, the quaternion-valued QMFL module treats different modalities as components of quaternions to sufficiently learn complementary information among different modalities in the hypercomplex domain while significantly reducing the number of parameters by about 75%. Main results. Extensive experiments on the BraTS 2020 and BraTS 2019 datasets indicate that QMLS achieves superior results to current popular methods at less computational cost. Significance. We propose a novel algorithm for the brain tumor segmentation task that achieves better performance with fewer parameters, which helps the clinical application of automatic brain tumor segmentation.
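The roughly 75% parameter reduction comes from the algebra of the Hamilton product: a quaternion-valued layer shares four n-by-n real matrices across all four components instead of learning one dense 4n-by-4n matrix. The sketch below (plain numpy, names are my own, not from the paper) shows the weight-sharing pattern and the count.

```python
import numpy as np

def quaternion_linear(a, W):
    """Hamilton-product 'linear layer': input a = (ar, ax, ay, az) and
    weight W = (Wr, Wx, Wy, Wz), each an (n, n) real matrix. The four
    shared matrices replace one dense (4n, 4n) matrix, i.e. 4*n*n
    parameters instead of 16*n*n -- a 75% reduction."""
    ar, ax, ay, az = a
    Wr, Wx, Wy, Wz = W
    return (
        Wr @ ar - Wx @ ax - Wy @ ay - Wz @ az,  # real part
        Wr @ ax + Wx @ ar + Wy @ az - Wz @ ay,  # i part
        Wr @ ay - Wx @ az + Wy @ ar + Wz @ ax,  # j part
        Wr @ az + Wx @ ay - Wy @ ax + Wz @ ar,  # k part
    )

n = 8
rng = np.random.default_rng(0)
# e.g. the four MRI modalities mapped to the four quaternion components
a = tuple(rng.normal(size=(n, 1)) for _ in range(4))
W = tuple(rng.normal(size=(n, n)) for _ in range(4))
out = quaternion_linear(a, W)
quat_params, dense_params = 4 * n * n, (4 * n) ** 2
```

Because every output component mixes all four input components through the same four matrices, cross-modal interactions are modeled without paying for a full dense weight.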
Collapse
Affiliation(s)
- Zhengnan Deng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Xiaochen Yuan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao, People's Republic of China
| | - Guo Zhong
- School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou, 510006, People's Republic of China
| | - Tongxu Lin
- School of Automation, Guangdong University of Technology, Guangzhou, 510006, People's Republic of China
| | - Chi-Man Pun
- Department of Computer and Information Science, University of Macau, Macao, People's Republic of China
| | - Zhixin Huang
- Department of Neurology, Guangdong Second Provincial General Hospital, Guangzhou, 510317, People's Republic of China
| | - Zhixin Liang
- Department of Nuclear Medicine, Jinshazhou Hospital, Guangzhou University of Chinese Medicine, Guangzhou, 510168, People's Republic of China
| |
Collapse
|
12
|
Jyothi P, Dhanasekaran S. An attention 3DUNET and visual geometry group-19 based deep neural network for brain tumor segmentation and classification from MRI. J Biomol Struct Dyn 2023:1-12. [PMID: 37979152 DOI: 10.1080/07391102.2023.2283164] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Accepted: 11/06/2023] [Indexed: 11/20/2023]
Abstract
There has been an abrupt increase in brain tumor (BT) related medical cases during the past ten years. The BT is the tenth most common type of tumor, affecting millions of people. The cure rate can, however, rise if it is found early. When evaluating BT diagnosis and treatment options, MRI is a crucial tool. However, segmenting the tumors from magnetic resonance (MR) images is complex. The advancement of deep learning (DL) has led to the development of numerous automatic segmentation and classification approaches. However, most need improvement since they are limited to 2D images. Therefore, this article proposes a novel and optimal DL system for segmenting and classifying BTs from 3D brain MR images. Preprocessing, segmentation, feature extraction, feature selection, and tumor classification are the main phases of the proposed work. Preprocessing, such as noise removal, is performed on the collected brain MR images using bilateral filtering. The tumor segmentation uses a spatial- and channel-attention-based three-dimensional U-shaped network (SC3DUNet) to segment the tumor lesions from the preprocessed data. After that, feature extraction is done based on dilated-convolution-based visual geometry group-19 (DCVGG-19), making the classification task more manageable. The optimal features are selected from the extracted feature sets using the diagonal linear uniform and tangent flight included butterfly optimization algorithm. Finally, the proposed system applies an optimal hyperparameters-based deep neural network to classify the tumor classes. The experiments conducted on the BraTS2020 dataset show that the suggested method can segment tumors and categorize them more accurately than the existing state-of-the-art mechanisms. Communicated by Ramaswamy H. Sarma.
Collapse
Affiliation(s)
- Parvathy Jyothi
- Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, India
| | - S Dhanasekaran
- Department of Information Technology, Kalasalingam Academy of Research and Education, Krishnankoil, India
| |
Collapse
|
13
|
Sun H, Yang S, Chen L, Liao P, Liu X, Liu Y, Wang N. Brain tumor image segmentation based on improved FPN. BMC Med Imaging 2023; 23:172. [PMID: 37904116 PMCID: PMC10617057 DOI: 10.1186/s12880-023-01131-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Accepted: 10/19/2023] [Indexed: 11/01/2023] Open
Abstract
PURPOSE Automatic segmentation of brain tumors by deep learning algorithms is one of the research hotspots in the field of medical image segmentation. An improved FPN network for brain tumor segmentation is proposed to improve the segmentation of brain tumors. MATERIALS AND METHODS Because the traditional fully convolutional neural network (FCN) has weak processing ability, which leads to the loss of details in tumor segmentation, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN) convolutional neural network. To improve the segmentation of brain tumors, we introduced the FPN structure into the U-Net structure, captured multi-scale context information by using the different-scale information in the U-Net model and the multi-receptive-field high-level features in the FPN convolutional neural network, and improved the adaptability of the model to features of different scales. RESULTS Performance evaluation indicators show that the proposed improved FPN model achieves 99.1% accuracy, a 92% Dice score, and an 86% Jaccard index, outperforming other segmentation models in each metric. In addition, the schematic diagram of the segmentation results shows that the segmentation results of our algorithm are closer to the ground truth, showing more brain tumor details, while the segmentation results of other algorithms are smoother. CONCLUSIONS The experimental results show that this method can effectively segment brain tumor regions and generalizes to a certain extent, with a segmentation effect better than that of other networks. It has positive significance for the clinical diagnosis of brain tumors.
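The Dice score and Jaccard index reported throughout these entries are standard overlap metrics for binary segmentation masks. For reference, a minimal numpy implementation (my own helper, not code from any of the cited papers):

```python
import numpy as np

def dice_jaccard(pred, gt):
    """Dice and Jaccard indices for binary segmentation masks.
    Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    jaccard = inter / union
    return dice, jaccard

pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0]])
dice, jac = dice_jaccard(pred, gt)  # intersection 2, |A|=|B|=4
```

Note that Dice is always at least as large as Jaccard for the same masks (Dice = 2J/(1+J)), which is why BraTS leaderboard Dice values look higher than IoU-style numbers for the same predictions.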
Collapse
Affiliation(s)
- Haitao Sun
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Shuai Yang
- Department of Radiotherapy and Minimally Invasive Surgery, The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519020, China
| | - Lijuan Chen
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Pingyan Liao
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Xiangping Liu
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
| | - Ying Liu
- Department of the Radiotherapy, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510060, China
| | - Ning Wang
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China.
| |
Collapse
|
14
|
Feng X, Ghimire K, Kim DD, Chandra RS, Zhang H, Peng J, Han B, Huang G, Chen Q, Patel S, Bettagowda C, Sair HI, Jones C, Jiao Z, Yang L, Bai H. Brain Tumor Segmentation for Multi-Modal MRI with Missing Information. J Digit Imaging 2023; 36:2075-2087. [PMID: 37340197 PMCID: PMC10501967 DOI: 10.1007/s10278-023-00860-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Revised: 05/22/2023] [Accepted: 05/24/2023] [Indexed: 06/22/2023] Open
Abstract
Deep convolutional neural networks (DCNNs) have shown promise in brain tumor segmentation from multi-modal MRI sequences, accommodating heterogeneity in tumor shape and appearance. The fusion of multiple MRI sequences allows networks to explore complementary tumor information for segmentation. However, developing a network that maintains clinical relevance in situations where certain MRI sequence(s) might be unavailable or unusable poses a significant challenge. While one solution is to train multiple models with different MRI sequence combinations, it is impractical to train a model for every possible sequence combination. In this paper, we propose a DCNN-based brain tumor segmentation framework incorporating a novel sequence dropout technique in which networks are trained to be robust to missing MRI sequences while employing all other available sequences. Experiments were performed on the RSNA-ASNR-MICCAI BraTS 2021 Challenge dataset. When all MRI sequences were available, there were no significant differences in performance of the model with and without dropout for enhancing tumor (ET), tumor core (TC), and whole tumor (WT) (p-values 1.000, 1.000, 0.799, respectively), demonstrating that the addition of dropout improves robustness without hindering overall performance. When key sequences were unavailable, the network with sequence dropout performed significantly better. For example, when tested on only the T1, T2, and FLAIR sequences together, the DSC for ET, TC, and WT increased from 0.143 to 0.486, 0.431 to 0.680, and 0.854 to 0.901, respectively. Sequence dropout represents a relatively simple yet effective approach for brain tumor segmentation with missing MRI sequences.
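Sequence dropout amounts to zeroing entire MRI sequences (input channels) at random during training so the network learns not to depend on any single one. The abstract does not give implementation details, so the following numpy sketch is only an illustrative approximation; the drop probability and the keep-at-least-one rule are my assumptions.

```python
import numpy as np

def sequence_dropout(x, p=0.25, rng=None):
    """Randomly zero out whole MRI sequences (channels) of a
    (C, D, H, W) volume during training, keeping at least one
    sequence so the network always sees some input."""
    rng = rng or np.random.default_rng()
    c = x.shape[0]
    keep = rng.random(c) >= p
    if not keep.any():                       # never drop everything
        keep[rng.integers(c)] = True
    mask = keep.astype(x.dtype).reshape(c, 1, 1, 1)
    return x * mask, keep

rng = np.random.default_rng(42)
vol = np.ones((4, 2, 3, 3))                  # e.g. T1, T1ce, T2, FLAIR
dropped, keep = sequence_dropout(vol, p=0.5, rng=rng)
```

At inference time the mask is simply set to whatever sequences are actually available, which is what makes the trained network usable on incomplete clinical studies.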
Collapse
Affiliation(s)
- Xue Feng
- Biomedical Engineering, University of Virginia, 22903, Charlottesville, VA, USA
- Carina Medical LLC, Lexington, KY, 40513, USA
| | | | - Daniel D Kim
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Rajat S Chandra
- Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, USA
| | - Helen Zhang
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Jian Peng
- Department of Neurology, Second Xiangya Hospital, Changsha, China
| | - Binghong Han
- Department of Neurology, Second Xiangya Hospital, Changsha, China
| | | | - Quan Chen
- Carina Medical LLC, Lexington, KY, 40513, USA
- Radiation Medicine, University of Kentucky, Lexington, KY, 40536, USA
| | - Sohil Patel
- Radiology and Medical Imaging, University of Virginia, 22903, Charlottesville, VA, USA
| | - Chetan Bettagowda
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
| | - Haris I Sair
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Craig Jones
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA
- The Malone Center for Engineering in Healthcare, The Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Zhicheng Jiao
- Warren Alpert Medical School of Brown University, Providence, RI, USA
- Department of Diagnostic Imaging, Rhode Island Hospital, Providence, RI, USA
| | - Li Yang
- Department of Neurology, Second Xiangya Hospital, Changsha, China.
| | - Harrison Bai
- Department of Radiology and Radiological Science, Johns Hopkins University, 601 N Caroline St, Baltimore, MD, 21287, USA.
| |
Collapse
|
15
|
Choi Y, Al-Masni MA, Jung KJ, Yoo RE, Lee SY, Kim DH. A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities. Comput Methods Programs Biomed 2023; 240:107644. [PMID: 37307766 DOI: 10.1016/j.cmpb.2023.107644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 05/14/2023] [Accepted: 06/03/2023] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Precisely segmenting brain tumors using multimodal Magnetic Resonance Imaging (MRI) is an essential task for early diagnosis, disease monitoring, and surgical planning. Unfortunately, the complete four image modalities utilized in the well-known BraTS benchmark dataset, T1, T2, Fluid-Attenuated Inversion Recovery (FLAIR), and T1 Contrast-Enhanced (T1CE), are not regularly acquired in clinical practice due to the high cost and long acquisition time. Rather, it is common to utilize limited image modalities for brain tumor segmentation. METHODS In this paper, we propose a single-stage knowledge distillation algorithm that derives information from the missing modalities for better segmentation of brain tumors. Unlike previous works that adopted a two-stage framework to distill the knowledge from a pre-trained network into a student network, where the latter network is trained on limited image modalities, we train both models simultaneously using a single-stage knowledge distillation algorithm. We transfer the information by reducing the redundancy from a teacher network trained on full image modalities to the student network using Barlow Twins loss on the latent-space level. To distill the knowledge on the pixel level, we further employ a deep supervision idea that trains the backbone networks of both the teacher and student paths using Cross-Entropy loss. RESULTS We demonstrate that the proposed single-stage knowledge distillation approach improves the performance of the student network in each tumor category, with overall Dice scores of 91.11% for Tumor Core, 89.70% for Enhancing Tumor, and 92.20% for Whole Tumor when using only the FLAIR and T1CE images, outperforming state-of-the-art segmentation methods. CONCLUSIONS The outcomes of this work prove the feasibility of exploiting knowledge distillation for segmenting brain tumors using limited image modalities, bringing it closer to clinical practice.
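The Barlow Twins loss used for the latent-space transfer builds a cross-correlation matrix between the two embedding batches (here, teacher and student latents), driving its diagonal toward 1 (agreement) and its off-diagonal toward 0 (redundancy reduction). A minimal numpy sketch of the standard Barlow Twins objective, assuming the usual formulation rather than any paper-specific variant:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Barlow Twins redundancy-reduction loss between two (N, D)
    embedding batches (e.g. teacher and student latents): standardize
    each dimension over the batch, build the D x D cross-correlation
    matrix, then push its diagonal to 1 and its off-diagonal to 0."""
    n = z1.shape[0]
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = z1.T @ z2 / n                        # cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))
# identical inputs: diagonal of c is exactly 1, so only the small
# off-diagonal redundancy term remains
same = barlow_twins_loss(z, z)
```

Because the invariance term vanishes only when the student's standardized features correlate perfectly with the teacher's, minimizing this loss pulls the limited-modality latent space toward the full-modality one.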
Collapse
Affiliation(s)
- Yoonseok Choi
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
| | - Mohammed A Al-Masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
| | - Kyu-Jin Jung
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
| | - Roh-Eul Yoo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
| | - Seong-Yeong Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
| | - Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea.
| |
Collapse
|
16
|
Diao Y, Li F, Li Z. Joint learning-based feature reconstruction and enhanced network for incomplete multi-modal brain tumor segmentation. Comput Biol Med 2023; 163:107234. [PMID: 37450967 DOI: 10.1016/j.compbiomed.2023.107234] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 06/12/2023] [Accepted: 07/01/2023] [Indexed: 07/18/2023]
Abstract
Multimodal Magnetic Resonance Imaging (MRI) can provide valuable complementary information and substantially enhance the performance of brain tumor segmentation. However, it is common for certain modalities to be absent or missing during clinical diagnosis, which can significantly impair segmentation techniques that rely on complete modalities. Current advanced methods attempt to address this challenge by developing shared feature representations via modal fusion to handle different missing-modality situations. Considering the importance of missing-modality information in multimodal segmentation, this paper utilizes a feature reconstruction method to recover the missing information and proposes a joint learning-based feature reconstruction and enhancement method for incomplete-modality brain tumor segmentation. The method leverages an information learning mechanism to transfer information from the complete modality to a single modality, enabling it to obtain complete brain tumor information even without the support of other modalities. Additionally, the method incorporates a module for reconstructing missing-modality features, which recovers fused features of the absent modality by utilizing the abundant potential information obtained from the available modalities. Furthermore, the feature enhancement mechanism improves the shared feature representation by utilizing the information obtained from the reconstructed missing modalities. These processes enable the method to obtain more comprehensive information regarding brain tumors in various missing-modality circumstances, thereby enhancing the model's robustness. The performance of the proposed model was evaluated on BraTS datasets and compared with other deep learning algorithms using Dice similarity scores. On the BraTS2018 dataset, the proposed algorithm achieved Dice similarity scores of 86.28%, 77.02%, and 59.64% for whole tumors, tumor cores, and enhanced tumors, respectively. These results demonstrate the superiority of our framework over state-of-the-art methods in missing-modality situations.
Collapse
Affiliation(s)
- Yueqin Diao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
| | - Fan Li
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
| | - Zhiyuan Li
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; Yunnan Key Laboratory of Artificial Intelligence, Kunming 650500, China.
| |
Collapse
|
17
|
Zhou T, Zhu S. Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation. Comput Biol Med 2023; 163:107142. [PMID: 37331100 DOI: 10.1016/j.compbiomed.2023.107142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Revised: 05/17/2023] [Accepted: 06/05/2023] [Indexed: 06/20/2023]
Abstract
Brain tumor is one of the most aggressive cancers in the world, and accurate brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning. Although deep learning models have achieved remarkable success in medical segmentation, they can only produce the segmentation map without capturing the segmentation uncertainty. To achieve accurate and safe clinical results, it is necessary to produce extra uncertainty maps to assist the subsequent segmentation revision. To this end, we propose to exploit uncertainty quantification in the deep learning model and apply it to multi-modal brain tumor segmentation. In addition, we develop an effective attention-aware multi-modal fusion method to learn the complementary feature information from the multiple MR modalities. First, a multi-encoder-based 3D U-Net is proposed to obtain the initial segmentation results. Then, an estimated Bayesian model is presented to measure the uncertainty of the initial segmentation results. Finally, the obtained uncertainty maps are integrated into a deep learning-based segmentation network, serving as additional constraint information to further refine the segmentation results. The proposed network is evaluated on the publicly available BraTS 2018 and BraTS 2019 datasets. The experimental results demonstrate that the proposed method outperforms the previous state-of-the-art methods on the Dice score, Hausdorff distance, and Sensitivity metrics. Furthermore, the proposed components could easily be applied to other network architectures and other computer vision fields.
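A common way to obtain such uncertainty maps from an approximately Bayesian model is to draw several stochastic forward passes (e.g. with dropout active at test time) and summarize per-voxel disagreement, for instance as predictive entropy. The paper's exact estimator is not given in the abstract, so this numpy sketch only illustrates the general mean-plus-entropy summary step:

```python
import numpy as np

def predictive_uncertainty(prob_samples):
    """Given T stochastic forward passes of per-voxel foreground
    probabilities (shape (T, ...)), return the mean segmentation map
    and a binary predictive-entropy uncertainty map, which is high
    where the passes disagree."""
    p = prob_samples.mean(axis=0)
    eps = 1e-12                       # avoid log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return p, entropy

# voxel 0: all passes agree it is tumor; voxel 1: passes disagree
samples = np.array([[0.95, 0.90],
                    [0.97, 0.10],
                    [0.96, 0.85],
                    [0.94, 0.20]])
mean_map, unc_map = predictive_uncertainty(samples)
```

Feeding `unc_map` back into a refinement network, as the paper describes, lets the model spend extra capacity exactly on the voxels where its initial prediction is least trustworthy.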
Collapse
Affiliation(s)
- Tongxue Zhou
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
| | - Shan Zhu
- School of Life and Environmental Science, Hangzhou Normal University, Hangzhou, 311121, China.
| |
Collapse
|
18
|
Zhang G, Zhou J, He G, Zhu H. Deep fusion of multi-modal features for brain tumor image segmentation. Heliyon 2023; 9:e19266. [PMID: 37664757 PMCID: PMC10468380 DOI: 10.1016/j.heliyon.2023.e19266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 08/09/2023] [Accepted: 08/17/2023] [Indexed: 09/05/2023] Open
Abstract
Accurate segmentation of pathological regions in brain magnetic resonance images (MRI) is essential for the diagnosis and treatment of brain tumors. Multi-modality MRIs, which offer diverse feature information, are commonly utilized in brain tumor image segmentation. Deep neural networks have become prevalent in this field; however, many approaches simply concatenate different modalities and input them directly into the neural network for segmentation, disregarding the unique characteristics and complementarity of each modality. In this study, we propose a brain tumor image segmentation method that leverages deep residual learning with multi-modality image feature fusion. Our approach involves extracting and fusing distinct and complementary features from various modalities, fully exploiting the multi-modality information within a deep convolutional neural network to enhance the performance of brain tumor image segmentation. We evaluate the effectiveness of our proposed method using the BraTS2021 dataset and demonstrate that deep residual learning with multi-modality image feature fusion significantly improves segmentation accuracy. Our method achieves competitive segmentation results, with Dice values of 83.3, 89.07, and 91.44 for enhanced tumor, tumor core, and whole tumor, respectively. These findings highlight the potential of our method in improving brain tumor diagnosis and treatment through accurate segmentation of pathological regions in brain MRIs.
Collapse
Affiliation(s)
- Guying Zhang
- School of Mathematics, Physics and Information, Shaoxing University, Shaoxing, Zhejiang, 312000, China
| | - Jia Zhou
- Cancer Center, Gamma Knife Treatment Center, Zhejiang Provincial People's Hospital, Affiliated People's Hospital, Hangzhou Medical College, Hangzhou, Zhejiang, 310014, China
| | - Guanghua He
- School of Mathematics, Physics and Information, Shaoxing University, Shaoxing, Zhejiang, 312000, China
- Institute of Artificial Intelligence, Shaoxing University, Shaoxing, Zhejiang, 312000, China
| | - Hancan Zhu
- School of Mathematics, Physics and Information, Shaoxing University, Shaoxing, Zhejiang, 312000, China
- Institute of Artificial Intelligence, Shaoxing University, Shaoxing, Zhejiang, 312000, China
| |
Collapse
|
19
|
Jiang Y, Zhang S, Chi J. Multi-Modal Brain Tumor Data Completion Based on Reconstruction Consistency Loss. J Digit Imaging 2023; 36:1794-1807. [PMID: 36856903 PMCID: PMC10406787 DOI: 10.1007/s10278-022-00697-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 07/12/2022] [Accepted: 07/24/2022] [Indexed: 03/02/2023] Open
Abstract
Multi-modal brain magnetic resonance imaging (MRI) data has been widely applied in vision-based brain tumor segmentation methods due to its complementary diagnostic information from different modalities. Since multi-modal image data is likely to be corrupted by noise or artifacts during the practical scanning process, making it difficult to build a universal model for the subsequent segmentation and diagnosis with incomplete input data, image completion has become one of the most attractive fields in medical image pre-processing. It can not only assist clinicians in observing the patient's lesion area more intuitively and comprehensively, but also save costs for patients and reduce their psychological pressure during tedious pathological examinations. Recently, many deep learning-based methods have been proposed to complete the multi-modal image data and have provided good performance. However, current methods cannot fully reflect the continuous semantic information between adjacent slices and the structural information of the intra-slice features, resulting in limited completion effects and efficiency. To solve these problems, in this work we propose a novel generative adversarial network (GAN) framework, named the random generative adversarial network (RAGAN), to complete the missing T1, T1ce, and FLAIR data from the given T2 modal data in real brain MRI, which consists of the following parts: (1) For the generator, we use T2 modal images and multi-modal classification labels from the same sample for cyclically supervised training of image generation, so as to realize the restoration of arbitrary modal images. (2) For the discriminator, a multi-branch network is proposed in which the primary branch is designed to judge whether a certain generated modal image is similar to the target modal image, while the auxiliary branch judges whether its essential visual features are similar to those of the target modal image. We conduct qualitative and quantitative experimental validations on the BraTS2018 dataset, generating 10,686 MRI volumes in each missing modality. Real brain tumor morphology images were compared with synthetic brain tumor morphology images using PSNR and SSIM as evaluation metrics. The experiments demonstrate that the brightness, resolution, location, and morphology of brain tissue under different modalities are well reconstructed. Meanwhile, we also use a segmentation network as a further validation experiment, blending synthetic and real images into the segmentation network. With the classic UNet as the segmentation network, the segmentation result is 77.58%. To prove the value of our proposed method, we use the better segmentation network RES_UNet with deep supervision as the segmentation model, and the segmentation accuracy rate is 88.76%. Although our method does not significantly outperform other algorithms, its Dice value is 2% higher than that of the current state-of-the-art data completion algorithm TC-MGAN.
Collapse
Affiliation(s)
- Yang Jiang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China
| | - Shuang Zhang
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China
| | - Jianning Chi
- Faculty of Robot Science and Engineering, Northeastern University, Shenyang, 110167, China.
- Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, 110167, China.
| |
Collapse
|
20
|
Jia Z, Zhu H, Zhu J, Ma P. Two-Branch network for brain tumor segmentation using attention mechanism and super-resolution reconstruction. Comput Biol Med 2023; 157:106751. [PMID: 36934534 DOI: 10.1016/j.compbiomed.2023.106751] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Revised: 02/12/2023] [Accepted: 03/06/2023] [Indexed: 03/17/2023]
Abstract
Accurate segmentation of brain tumors plays an important role in MRI diagnosis and treatment monitoring of brain tumors. However, the degree of lesions in each patient's brain tumor region is usually inconsistent, with large structural differences, and brain tumor MR images are characterized by low contrast and blur, so current deep learning algorithms often cannot achieve accurate segmentation. To address this problem, we propose a novel end-to-end brain tumor segmentation algorithm that integrates an improved 3D U-Net network and super-resolution image reconstruction into one framework. In addition, a coordinate attention module is embedded before the upsampling operation of the backbone network, which enhances the capture of local texture feature information and global location feature information. To demonstrate the segmentation results of the proposed algorithm on different brain tumor MR images, we trained and evaluated the proposed algorithm on BraTS datasets and compared it with other deep learning algorithms by Dice similarity scores. On the BraTS2021 dataset, the proposed algorithm achieves Dice similarity scores of 89.61%, 88.30%, and 91.05%, and Hausdorff distances (95%) of 1.414 mm, 7.810 mm, and 4.583 mm for the enhancing tumors, tumor cores, and whole tumors, respectively. The experimental results demonstrate that our method outperforms the baseline 3D U-Net method and yields good performance on different datasets, indicating that it is robust when segmenting brain tumor MR images whose structures vary considerably.
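Coordinate attention, in contrast to plain channel attention, pools along each spatial axis separately so the resulting gates retain positional information along the other axis. The sketch below is a deliberately simplified numpy illustration of that pooling-and-gating idea; the published module also includes shared learned 1x1 convolutions between pooling and gating, which are omitted here.

```python
import numpy as np

def coordinate_attention(x):
    """Toy coordinate attention for one (C, H, W) feature map: pool
    along each spatial axis separately, turn the pooled vectors into
    sigmoid gates, and reweight the map so both row-wise and
    column-wise location information is preserved."""
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    g_h = sigmoid(x.mean(axis=2, keepdims=True))   # (C, H, 1) row gates
    g_w = sigmoid(x.mean(axis=1, keepdims=True))   # (C, 1, W) column gates
    return x * g_h * g_w                           # broadcast reweighting

x = np.random.default_rng(1).normal(size=(2, 4, 5))
y = coordinate_attention(x)
```

Because the two gate tensors are indexed by row and column respectively, a strong response at a particular (row, column) location survives the reweighting, which is the "global location information" the abstract refers to.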
Affiliation(s)
- Zhaohong Jia
- School of Internet, Anhui University, Hefei 230039, China
- Hongxin Zhu
- School of Internet, Anhui University, Hefei 230039, China
- Junan Zhu
- School of Internet, Anhui University, Hefei 230039, China
- Ping Ma
- School of Internet, Anhui University, Hefei 230039, China
21
Liu Z, Wei J, Li R, Zhou J. Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning. Comput Biol Med 2023; 159:106927. [PMID: 37105113] [DOI: 10.1016/j.compbiomed.2023.106927]
Abstract
Since the brain is the human body's primary command and control center, brain cancer is one of the most dangerous cancers. Automatic segmentation of brain tumors from multi-modal images is important in diagnosis and treatment. Due to the difficulties in obtaining multi-modal paired images in clinical practice, recent studies segment brain tumors solely relying on unpaired images and discarding the available paired images. Although these models solve the dependence on paired images, they cannot fully exploit the complementary information from different modalities, resulting in low unimodal segmentation accuracy. Hence, this work studies the unimodal segmentation with privileged semi-paired images, i.e., limited paired images are introduced to the training phase. Specifically, we present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework. The modality-specific style codes describe the attenuation of tissue features and image contrast, and modality-invariant content codes contain anatomical and functional information extracted from the input images. Besides, we address the problem of unthorough decoupling by introducing constraints on the style and content spaces. Experiments on the BraTS2020 dataset highlight that our model outperforms the competing models on unimodal segmentation, achieving average dice scores of 82.91%, 72.62%, and 54.80% for WT (the whole tumor), TC (the tumor core), and ET (the enhancing tumor), respectively. Finally, we further evaluate our model's variable multi-modal brain tumor segmentation performance by introducing a fusion block (TFusion). The experimental results reveal that our model achieves the best WT segmentation performance for all 15 possible modality combinations with 87.31% average accuracy. In summary, we propose a curriculum disentanglement learning framework for unimodal segmentation with privileged semi-paired images. 
Moreover, the benefits of the improved unimodal segmentation extend to variable multi-modal segmentation, demonstrating that improving the unimodal segmentation performance is significant for brain tumor segmentation with missing modalities. Our code is available at https://github.com/scut-cszcl/SpBTS.
Affiliation(s)
- Zecheng Liu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Jia Wei
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Rui Li
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY, USA
- Jianlong Zhou
- Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
22
Cardone D, Trevisi G, Perpetuini D, Filippini C, Merla A, Mangiola A. Intraoperative thermal infrared imaging in neurosurgery: machine learning approaches for advanced segmentation of tumors. Phys Eng Sci Med 2023; 46:325-337. [PMID: 36715852] [PMCID: PMC10030394] [DOI: 10.1007/s13246-023-01222-x]
Abstract
Surgical resection is one of the most relevant practices in neurosurgery. Determining the correct surgical extent of the tumor is a key question, and several techniques have so far been employed to assist the neurosurgeon in preserving the maximum amount of healthy tissue. Some of these methods are invasive for the patient and do not always allow high precision in delineating the tumor area. The aim of this study is to overcome these limitations by developing machine learning models that rely on features obtained from a contactless and non-invasive technique: thermal infrared (IR) imaging. The thermal IR videos of thirteen patients with heterogeneous tumors were recorded in the intraoperative context. Time-domain (TD) and frequency-domain (FD) features were extracted and fed to different machine learning models. Models relying on FD features proved to be the best solutions for detecting the tumor area (Average Accuracy = 90.45%; Average Sensitivity = 84.64%; Average Specificity = 93.74%). These results highlight the possibility of accurately detecting the tumor lesion boundary with a completely non-invasive, contactless and portable technology, revealing thermal IR imaging as a very promising tool for the neurosurgeon.
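The abstract does not list the exact frequency-domain features used. Purely as a hedged illustration, FD descriptors for a single pixel's temperature time series could be band-energy ratios of its power spectrum; the band edges below are illustrative assumptions, not the study's choices:

```python
import numpy as np

def fd_features(signal, fs):
    """Band-energy ratios of one pixel's temperature time series.
    The band edges are illustrative assumptions, not the study's choices."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal - signal.mean())) ** 2   # power spectrum
    total = psd.sum() or 1.0                                  # guard constant signals
    bands = [(0.0, 0.1), (0.1, 0.5), (0.5, 2.0)]              # Hz, illustrative
    return [psd[(freqs >= lo) & (freqs < hi)].sum() / total for lo, hi in bands]
```

Computing such a feature vector per pixel over the intraoperative video yields the per-pixel inputs a classifier could use to delineate tumor versus healthy tissue.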
Affiliation(s)
- Daniela Cardone
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy
- Gianluca Trevisi
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- David Perpetuini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Chiara Filippini
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
- Arcangelo Merla
- Department of Engineering and Geology, University G. d'Annunzio Chieti-Pescara, Pescara, Italy
- Annunziato Mangiola
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio Chieti-Pescara, Chieti, Italy
23
Zhou T, Ruan S, Hu H. A literature survey of MR-based brain tumor segmentation with missing modalities. Comput Med Imaging Graph 2023; 104:102167. [PMID: 36584536] [DOI: 10.1016/j.compmedimag.2022.102167]
Abstract
Multimodal MR brain tumor segmentation is one of the most active topics in medical image processing. However, acquiring the complete set of MR modalities is not always possible in clinical practice, due to acquisition protocols, image corruption, scanner availability, scanning cost or allergies to certain contrast materials. The missing information imposes constraints on brain tumor diagnosis, monitoring, treatment planning and prognosis. Thus, it is highly desirable to develop brain tumor segmentation methods that address the missing-modality problem. Based on recent advancements, this review provides a detailed analysis of the missing-modality issue in MR-based brain tumor segmentation. First, we briefly introduce the biomedical background concerning brain tumors, MR imaging techniques, and the current challenges in brain tumor segmentation. Then, we provide a taxonomy of state-of-the-art methods with five categories, namely image synthesis-based, latent feature space-based, multi-source correlation-based, knowledge distillation-based, and domain adaptation-based methods, elaborating the principles, architectures, benefits and limitations of each. Following that, the corresponding datasets and widely used evaluation metrics are described. Finally, we analyze the current challenges and provide a prospect for future development trends. This review aims to give readers a thorough knowledge of recent contributions to brain tumor segmentation with missing modalities and to suggest potential future directions.
Affiliation(s)
- Tongxue Zhou
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou 311121, China
- Su Ruan
- Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France
- Haigen Hu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China; Key Laboratory of Visual Media Intelligent Processing Technology of Zhejiang Province, Hangzhou 310023, China
24
Wang Y, Cao Y, Li J, Wu H, Wang S, Dong X, Yu H. A lightweight hierarchical convolution network for brain tumor segmentation. BMC Bioinformatics 2022; 22:636. [PMID: 36513986] [PMCID: PMC9749147] [DOI: 10.1186/s12859-022-05039-5]
Abstract
BACKGROUND Brain tumor segmentation plays a significant role in clinical treatment and surgical planning. Recently, several deep convolutional networks have been proposed for brain tumor segmentation and have achieved impressive performance. However, most state-of-the-art models use 3D convolutional networks, which require high computational costs, making it difficult to deploy them on medical equipment in the future. Additionally, due to the large diversity of brain tumors and the uncertain boundaries between sub-regions, some models cannot segment multiple tumors in the brain well at the same time. RESULTS In this paper, we propose a lightweight hierarchical convolution network, called LHC-Net. Our network uses a multi-scale strategy in which the common 3D convolution is replaced by hierarchical convolution with residual-like connections. This improves multi-scale feature extraction and greatly reduces parameters and computational resources. On the BraTS2020 dataset, LHC-Net achieves Dice scores of 76.38%, 90.01% and 83.32% for ET, WT and TC, respectively, which is better than 3D U-Net with 73.50%, 89.42% and 81.92%. Especially on the multi-tumor set, our model shows significant performance improvement. In addition, LHC-Net has 1.65M parameters and 35.58G FLOPs, roughly half the parameters and a third of the computation of 3D U-Net. CONCLUSION Our proposed method achieves automatic segmentation of tumor sub-regions from four-modality brain MRI images. LHC-Net achieves competitive segmentation performance with fewer parameters and less computation than state-of-the-art models, which means it can be applied under limited medical computing resources. By using the multi-scale strategy on channels, LHC-Net can segment multiple tumors in the patient's brain well. It has great potential for application to other multi-scale segmentation tasks.
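The "hierarchical convolution with residual-like connections" described above resembles a Res2Net-style split: channel groups are convolved in a cascade, each group also receiving the previous group's output, so the effective receptive field grows without large kernels. A minimal 2D sketch (one feature map per group, plain NumPy; the real LHC-Net operates on 3D multi-channel tensors):

```python
import numpy as np

def conv3x3(x, w):
    """'Same'-padded 3x3 convolution of a single 2D map (naive helper)."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = (xp[i:i + 3, j:j + 3] * w).sum()
    return out

def hierarchical_conv(groups, kernels):
    """Convolve channel groups in a cascade: group k sees its own input plus
    the output of group k-1 (residual-like connection)."""
    outs, prev = [], None
    for g, k in zip(groups, kernels):
        inp = g if prev is None else g + prev
        prev = conv3x3(inp, k)
        outs.append(prev)
    return outs  # would be concatenated along channels in the real network
```

Because each small convolution reuses the previous group's result, later groups see progressively larger receptive fields at the cost of only small per-group kernels, which is where the parameter savings come from.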
Affiliation(s)
- Yuhu Wang
- Tianjin International Engineering Institute, Tianjin University, Tianjin, China
- Yuzhen Cao
- Department of Biomedical Engineering, Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
- Jinqiu Li
- Tianjin International Engineering Institute, Tianjin University, Tianjin, China
- Hongtao Wu
- Department of Biomedical Engineering, Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
- Shuo Wang
- Department of Biomedical Engineering, Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
- Xinming Dong
- Tianjin Rehabilitation Convalescent Center, Tianjin, China
- Hui Yu
- Department of Biomedical Engineering, Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China; Tianjin International Engineering Institute, Tianjin University, Tianjin, China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
25
Li H, Nan Y, Del Ser J, Yang G. Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation. Neural Comput Appl 2022; 35:22071-22085. [PMID: 37724130] [PMCID: PMC10505106] [DOI: 10.1007/s00521-022-08016-4]
Abstract
Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. The current uncertainty estimation methods based on quantile regression, Bayesian neural network, ensemble, and Monte Carlo dropout are limited by their high computational cost and inconsistency. In order to overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work but primarily for natural image classification and showed inferior segmentation results. In this paper, we proposed a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results, which is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application.
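In the EDL formulation sketched above, non-negative per-class evidence e_k parameterizes a Dirichlet distribution with alpha_k = e_k + 1, and belief masses plus a single uncertainty mass follow from subjective logic. A minimal sketch of that mapping (not the paper's network, region-based extension, or loss):

```python
import numpy as np

def edl_opinion(evidence):
    """Map non-negative per-class evidence to a Dirichlet distribution and a
    subjective-logic opinion (per-class belief plus one uncertainty mass)."""
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0             # Dirichlet parameters: alpha_k = e_k + 1
    S = alpha.sum()                    # Dirichlet strength
    belief = evidence / S              # belief mass per class
    uncertainty = len(evidence) / S    # u = K / S; shrinks as evidence grows
    prob = alpha / S                   # expected class probabilities
    return belief, uncertainty, prob
```

With zero evidence the opinion is maximally uncertain (u = 1, uniform probabilities); as evidence for one class accumulates, its belief grows and u shrinks, which is what makes the per-voxel uncertainty map essentially free to compute.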
Affiliation(s)
- Hao Li
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK
- Department of Bioengineering, Faculty of Engineering, Imperial College London, London, UK
- Yang Nan
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK
- Javier Del Ser
- TECNALIA, Basque Research and Technology Alliance (BRTA), Derio, Spain
- University of the Basque Country (UPV/EHU), Bilbao, Spain
- Guang Yang
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK
- Royal Brompton Hospital, London, UK
26
Ramprasad MVS, Rahman MZU, Bayleyegn MD. A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation. IEEE Open J Eng Med Biol 2022; 3:178-188. [PMID: 36712319] [PMCID: PMC9870266] [DOI: 10.1109/ojemb.2022.3217186]
Abstract
Goal: Implementation of an artificial intelligence-based medical diagnosis tool for brain tumor classification, called BTFSC-Net. Methods: Medical images are preprocessed using a hybrid probabilistic Wiener filter (HPWF). A deep learning convolutional neural network (DLCNN) is utilized to fuse MRI and CT images with robust edge analysis (REA) properties, which identify the slopes and edges of the source images. Then, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is used to segment the disease-affected region from the fused image. Further, hybrid features such as texture, colour and low-level features are extracted from the fused image using grey-level co-occurrence matrix (GLCM) and redundant discrete wavelet transform (RDWT) descriptors. Finally, a deep learning-based probabilistic neural network (DLPNN) is used to classify malignant and benign tumors. The BTFSC-Net attained a segmentation accuracy of 99.21% and a classification accuracy of 99.46%. Conclusions: The simulations showed that BTFSC-Net outperforms existing methods.
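The abstract does not detail how fuzzy c-means and k-means are hybridized in HFCMIK. Purely as an illustration of the fuzzy half of such a scheme, the fuzzy c-means membership update (here applied to fixed, k-means-style centers; the fuzzifier m = 2 is a hypothetical choice) looks like:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update: u_ik proportional to 1/d_ik^(2/(m-1)),
    normalized over clusters. X is (n, features), centers is (c, features)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    u = 1.0 / d ** (2.0 / (m - 1.0))
    return u / u.sum(axis=1, keepdims=True)
```

In a full FCM loop, centers are then recomputed as membership-weighted means and the two steps alternate until convergence; a hybrid scheme could, for example, seed the centers with k-means.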
Affiliation(s)
- M V S Ramprasad
- Koneru Lakshmaiah Education Foundation (K L University), Guntur 522302, India
- GITAM (Deemed to be University), Visakhapatnam, AP 522502, India
- Md Zia Ur Rahman
- Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation (K L University), Vaddeswaram, Guntur 522502, India
27
Ma Q, Zhou S, Li C, Liu F, Liu Y, Hou M, Zhang Y. DGRUnit: Dual graph reasoning unit for brain tumor segmentation. Comput Biol Med 2022; 149:106079. [PMID: 36108413] [DOI: 10.1016/j.compbiomed.2022.106079]
Abstract
Many fully automatic segmentation models have been created to solve the difficulty of brain tumor segmentation, thanks to the rapid growth of deep learning. However, few approaches focus on the long-range relationships and contextual interdependence in multimodal Magnetic Resonance (MR) images. In this paper, we propose a novel approach for brain tumor segmentation called the dual graph reasoning unit (DGRUnit). Two parallel graph reasoning modules are included in our proposed method: a spatial reasoning module and a channel reasoning module. The spatial reasoning module models the long-range spatial dependencies between distinct regions in an image using a graph convolutional network (GCN). The channel reasoning module uses a graph attention network (GAT) to model the rich contextual interdependencies between different channels with similar semantic representations. Our experimental results clearly demonstrate the superior performance of the proposed DGRUnit. The ablation study shows the flexibility and generalizability of our model, which can be easily integrated into a wide range of neural networks and further improve them. When compared to several state-of-the-art methods, experimental results show that the proposed approach significantly improves both visual inspection and quantitative metrics for brain tumor segmentation tasks.
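The core operation of the spatial reasoning module, graph convolution over region nodes, can be sketched generically with the Kipf-Welling propagation rule; the actual DGRUnit projection/reprojection between pixel space and graph space is omitted, and the shapes here are illustrative:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    A: (n, n) adjacency, X: (n, f_in) node features, W: (f_in, f_out)."""
    A_hat = A + np.eye(len(A))                            # add self-loops
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)       # symmetric normalization
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```

Each node's output mixes its own features with those of its neighbors, which is how such a layer captures long-range dependencies between distinct image regions once pixels have been projected onto graph nodes.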
28
Chen W, Zhou W, Zhu L, Cao Y, Gu H, Yu B. MTDCNet: A 3D multi-threading dilated convolutional network for brain tumor automatic segmentation. J Biomed Inform 2022; 133:104173. [PMID: 35998815] [DOI: 10.1016/j.jbi.2022.104173]
Abstract
Glioma is one of the most threatening tumors, and the survival rate of affected patients is low. Automatic segmentation of tumors by reliable algorithms can reduce diagnosis time. In this paper, a novel 3D multi-threading dilated convolutional network (MTDC-Net) is proposed for automatic brain tumor segmentation. First, a multi-threading dilated convolution (MTDC) strategy is introduced in the encoder so that low-dimensional structural features can be extracted and integrated better. At the same time, a pyramid matrix fusion (PMF) algorithm is used to better integrate structural information. Second, to make better use of contextual semantic information, a spatial pyramid convolution (SPC) operation is proposed: by using convolutions with different kernel sizes, the model can aggregate more semantic information. Finally, a multi-threading adaptive pooling up-sampling (MTAU) strategy is used to increase the weight of semantic information and improve the recognition ability of the model, and a pixel-based post-processing method is used to suppress erroneous predictions. On the brain tumor segmentation challenge 2018 (BraTS2018) public validation dataset, the Dice scores of MTDC-Net are 0.832, 0.892 and 0.809 for the tumor core, whole tumor and enhancing tumor, respectively. On the BraTS2020 public validation dataset, the Dice scores are 0.833, 0.896 and 0.797 for the tumor core, whole tumor and enhancing tumor, respectively. Extensive numerical experiments show that MTDC-Net is a state-of-the-art network for automatic brain tumor segmentation.
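Dilated convolution, the building block named above, spaces kernel taps `rate` samples apart to enlarge the receptive field without adding parameters. A 1D toy version for intuition (the paper's network uses 3D dilated convolutions over feature volumes):

```python
import numpy as np

def dilated_conv1d(x, w, rate):
    """Valid 1D dilated convolution: kernel taps are spaced `rate` samples
    apart, enlarging the receptive field with no extra parameters."""
    x, w = np.asarray(x, dtype=float), np.asarray(w, dtype=float)
    span = (len(w) - 1) * rate + 1                   # receptive-field length
    return np.array([sum(w[t] * x[i + t * rate] for t in range(len(w)))
                     for i in range(len(x) - span + 1)])
```

With a length-k kernel and dilation rate r, the receptive field grows to (k-1)r + 1 samples while the parameter count stays at k, which is why stacking several dilation rates (as in the MTDC strategy) captures multi-scale context cheaply.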
Affiliation(s)
- Wankun Chen
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Weifeng Zhou
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Ling Zhu
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Yuan Cao
- College of Information Science and Technology, School of Data Science, Qingdao University of Science and Technology, Qingdao 266061, China
- Haiming Gu
- College of Mathematics and Physics, Qingdao University of Science and Technology, Qingdao 266061, China
- Bin Yu
- College of Information Science and Technology, School of Data Science, Qingdao University of Science and Technology, Qingdao 266061, China; School of Data Science, University of Science and Technology of China, Hefei 230027, China
29
Li X, Jiang Y, Li M, Zhang J, Yin S, Luo H. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation. Med Phys 2022; 50:2249-2262. [PMID: 35962724] [DOI: 10.1002/mp.15933]
Abstract
BACKGROUND Accurate and automated brain tumor segmentation from multi-modality MR images plays a significant role in tumor treatment. However, existing approaches mainly focus on the fusion of multiple modalities while ignoring the correlation between a single modality and particular tumor sub-components. For example, T2-weighted images show edema well, and T1-contrast images have good contrast between the enhancing tumor core and necrosis. In the clinical process, professional physicians also label tumors according to these characteristics. We design a method for brain tumor segmentation that utilizes both multi-modality fusion and single-modality characteristics. METHODS A multi-modality and single-modality feature recalibration network (MSFR-Net) is proposed for brain tumor segmentation from MR images. Specifically, multi-modality information and single-modality information are assigned to independent pathways. The multi-modality network explicitly learns the relationship between all modalities and all tumor sub-components. The single-modality network learns the relationship between a single modality and its highly correlated tumor sub-components. Then, a dual recalibration module (DRM) is designed to connect the parallel single-modality and multi-modality networks at multiple stages; its function is to unify the two types of features into the same feature space. RESULTS Experiments on the BraTS 2015 and BraTS 2018 datasets show that the proposed method is competitive with, and superior to, other state-of-the-art methods. The proposed method achieved a Dice coefficient of 0.86 and a Hausdorff distance of 4.82 on the BraTS 2018 dataset, and a Dice coefficient of 0.80, a positive predictive value of 0.76 and a sensitivity of 0.78 on the BraTS 2015 dataset. CONCLUSIONS This work mirrors the manual labeling process of doctors and introduces the correlation between single modalities and tumor sub-components into the segmentation network. The method improves the segmentation performance of brain tumors and can be applied in clinical practice. The code of the proposed method is available at: https://github.com/xiangQAQ/MSFR-Net.
Affiliation(s)
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
30
Cai J, He Z, Zheng Z, Xu Q, Hu C, Huo M. Learning global dependencies based on hierarchical full connection for brain tumor segmentation. Comput Methods Programs Biomed 2022; 221:106925. [PMID: 35688765] [DOI: 10.1016/j.cmpb.2022.106925]
Abstract
BACKGROUND AND OBJECTIVE Because the appearance, shape and location of brain tumors vary greatly among patients, brain tumor segmentation (BTS) is extremely challenging. Recently, many studies have used attention mechanisms to address this problem; these can be roughly divided into two categories: spatial attention based on convolution (with or without channel attention) and self-attention. Due to the limitations of convolution operations, convolution-based spatial attention cannot learn global dependencies well, resulting in poor performance in BTS. A simple improvement would be to substitute self-attention directly, which has an excellent ability to learn global dependencies. However, since self-attention is not friendly to GPU memory, this simple substitution cannot be applied to high-resolution low-level feature maps, which contain considerable geometric information that is also important for improving the performance of attention mechanisms in BTS. METHOD In this paper, we propose a hierarchical fully connected module, named H-FC, to learn global dependencies. H-FC learns local dependencies at different feature-map scales hierarchically through fully connected layers, and then combines these local dependencies as approximations of the global dependencies. H-FC requires very little GPU memory and can easily replace convolution-based spatial attention modules, such as Attention Gate and SAM (in CBAM), to improve the performance of attention mechanisms in BTS. RESULTS Comparative experiments illustrate that H-FC performs better in BTS than Attention Gate and SAM (in CBAM), which lack the ability to learn global dependencies, with improvements in most metrics and a larger improvement in Hausdorff distance. Comparing the computation and parameter counts of the model before and after adding H-FC proves that H-FC is lightweight.
CONCLUSION In this paper, we propose a novel H-FC module to learn global dependencies and illustrate its effectiveness through experiments on the BraTS2020 dataset. We mainly explore the influence of region size and the number of steps on the performance of H-FC, and confirm that the global dependencies of low-level feature maps are also important for BTS. A time and space complexity analysis and the experimental results show that H-FC is lightweight.
Affiliation(s)
- Jianping Cai
- School of Computer and Computational Science, Zhejiang University City College, Hangzhou, 310011, China
- Zhe He
- School of Computer Science and Technology, Zhejiang University, Hangzhou, 310013, China
- Zengwei Zheng
- School of Computer and Computational Science, Zhejiang University City College, Hangzhou, 310011, China
- Qingsheng Xu
- Department of Neurosurgery, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, 310013, China
- Chi Hu
- Department of Neurosurgery, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, 310013, China
- Meimei Huo
- School of Computer and Computational Science, Zhejiang University City College, Hangzhou, 310011, China
31
Liang J, Yang C, Zeng M, Wang X. TransConver: transformer and convolution parallel network for developing automatic brain tumor segmentation in MRI images. Quant Imaging Med Surg 2022; 12:2397-2415. [PMID: 35371952] [PMCID: PMC8923874] [DOI: 10.21037/qims-21-919]
Abstract
BACKGROUND Medical image segmentation plays a vital role in computer-aided diagnosis (CAD) systems. Both convolutional neural networks (CNNs), with strong local information extraction capacities, and transformers, with excellent global representation capacities, have achieved remarkable performance in medical image segmentation. However, because of the semantic differences between local and global features, combining convolution and transformers effectively is an important challenge in medical image segmentation. METHODS In this paper, we proposed TransConver, a U-shaped segmentation network based on convolution and transformers for automatic and accurate brain tumor segmentation in MRI images. Unlike recently proposed transformer- and convolution-based models, we proposed a parallel module named transformer-convolution inception (TC-Inception), which extracts local and global information via convolution blocks and transformer blocks, respectively, and integrates them through a cross-attention fusion with global and local feature (CAFGL) mechanism. Meanwhile, an improved skip connection structure, skip connection with cross-attention fusion (SCCAF), can alleviate the semantic differences between encoder and decoder features for better feature fusion. In addition, we designed 2D-TransConver and 3D-TransConver for 2D and 3D brain tumor segmentation tasks, respectively, and verified the performance and advantages of our model on brain tumor datasets. RESULTS We trained our model on 335 cases from the training dataset of MICCAI BraTS2019 and evaluated its performance on 66 cases from MICCAI BraTS2018 and 125 cases from MICCAI BraTS2019. TransConver achieved the best average Dice scores of 83.72% and 86.32% on BraTS2019 and BraTS2018, respectively. CONCLUSIONS We proposed a transformer and convolution parallel network named TransConver for brain tumor segmentation. The TC-Inception module effectively extracts global information while retaining local details. The experimental results demonstrate that good segmentation requires a model to extract local fine-grained details and global semantic information simultaneously, and TransConver effectively improves the accuracy of brain tumor segmentation.
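The cross-attention fusion idea described above can be sketched in a few lines: queries come from one branch (say, the convolutional features) and keys/values from the other (the transformer features). This is a generic scaled dot-product sketch, not the paper's CAFGL module; identity projections stand in for the learned Q/K/V weight matrices.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention between two feature branches.
    queries: (N, d) from one branch; keys_values: (M, d) from the other.
    Learned Q/K/V projections are replaced by identities for brevity."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (N, M) affinity matrix
    return softmax(scores, axis=-1) @ keys_values   # attended features, (N, d)
```

Each query row is replaced by a convex combination of the other branch's features, which is how one branch can borrow context from the other.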
Collapse
|
32
|
Alqazzaz S, Sun X, Nokes LD, Yang H, Yang Y, Xu R, Zhang Y, Yang X. Combined Features in Region of Interest for Brain Tumor Segmentation. J Digit Imaging 2022; 35:938-946. [PMID: 35293605 PMCID: PMC9485383 DOI: 10.1007/s10278-022-00602-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2021] [Revised: 01/27/2022] [Accepted: 02/03/2022] [Indexed: 11/03/2022] Open
Abstract
Diagnosis of brain tumor gliomas is a challenging task in medical image analysis because of their complexity, the irregularity of tumor structures, and the diversity of tissue textures and shapes. Semantic segmentation approaches using deep learning have consistently outperformed previous methods on this challenging task. However, deep learning alone is insufficient to provide the local features related to tissue texture changes due to tumor growth. This paper designs a hybrid method arising from this need, which combines machine-learned and hand-crafted features. A semantic segmentation network (SegNet) generates the machine-learned features, while grey-level co-occurrence matrix (GLCM)-based texture features constitute the hand-crafted features. In addition, the proposed approach takes only the region of interest (ROI), which represents the extent of the complete tumor structure, as input, and suppresses the intensity of other, irrelevant areas. A decision tree (DT) classifies the pixels of ROI MRI images into different parts of the tumor, i.e. edema, necrosis, and enhancing tumor. The method was evaluated on the BraTS 2017 dataset. The results demonstrate that the proposed model provides promising segmentation of brain tumor structure. The F-measures for automatic brain tumor segmentation against ground truth are 0.98, 0.75, and 0.69 for the whole tumor, core, and enhancing tumor, respectively.
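For readers unfamiliar with GLCM texture features, a minimal NumPy sketch follows: a co-occurrence matrix for one pixel offset (symmetric, normalized) and the contrast statistic computed from it. This is an illustrative toy, not the paper's pipeline; a library implementation such as scikit-image's would normally be used.

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset,
    made symmetric and normalized to a joint probability table."""
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r, c], image[r2, c2]] += 1
    m += m.T                       # count each pair in both directions
    return m / m.sum()

def glcm_contrast(p):
    """Contrast statistic: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())
```

A flat image has zero contrast, while alternating vertical stripes maximize it for a horizontal offset, which is exactly the kind of local texture change the hybrid method feeds to the decision tree.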
Collapse
Affiliation(s)
- Salma Alqazzaz
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK.,Department of Physics College of Science for Women, Baghdad University, Baghdad, Iraq
| | - Xianfang Sun
- School of Computer Science and Informatics, Cardiff University, CF24 3AA, Cardiff, UK
| | - Len Dm Nokes
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK
| | - Hong Yang
- Department of Radiology, The Second People's Hospital of Guangxi Zhuang Autonomous Region, Guilin, 541002, PR China
| | - Yingxia Yang
- Department of Radiology, The People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530021, PR China
| | - Ronghua Xu
- Centre of Information and Network Management, The People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530021, PR China
| | - Yanqiang Zhang
- State Information Center of China, Beijing, 100045, PR China
| | - Xin Yang
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK.
| |
Collapse
|
33
|
Zhu L, He Q, Huang Y, Zhang Z, Zeng J, Lu L, Kong W, Zhou F. DualMMP-GAN: Dual-scale multi-modality perceptual generative adversarial network for medical image segmentation. Comput Biol Med 2022; 144:105387. [PMID: 35305502 DOI: 10.1016/j.compbiomed.2022.105387] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 03/04/2022] [Accepted: 03/04/2022] [Indexed: 01/22/2023]
Abstract
Multi-modality magnetic resonance imaging (MRI) can reveal distinct tissue patterns in the human body and is crucial to clinical diagnosis, but obtaining diverse and plausible multi-modality MR images remains a challenge due to expense, noise, and artifacts. For the same lesion, different MRI modalities differ greatly in contextual information, coarse location, and fine structure. To achieve better generation and segmentation performance, a dual-scale multi-modality perceptual generative adversarial network (DualMMP-GAN) is proposed based on cycle-consistent generative adversarial networks (CycleGAN). Dilated residual blocks are introduced to increase the receptive field, preserving the structure and context information of images. A dual-scale discriminator is constructed, and the generator is optimized by discriminating patches so as to represent lesions of different sizes. A perceptual consistency loss is introduced to learn the mapping between the generated and target modality at different semantic levels. Moreover, generative multi-modality segmentation (GMMS), which combines given modalities with generated modalities, is proposed for brain tumor segmentation. Experimental results show that DualMMP-GAN outperforms CycleGAN and several state-of-the-art methods in terms of PSNR, SSIM, and RMSE on most tasks. In addition, the Dice, sensitivity, specificity, and Hausdorff95 scores obtained from segmentation by GMMS are all better than those from a single modality. The objective indices obtained by the proposed methods are close to the upper bounds obtained from real multiple modalities, indicating that GMMS can achieve effects similar to true multi-modality input. Overall, the proposed methods can serve as an effective tool in clinical brain tumor diagnosis with promising application potential.
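The cycle-consistency idea that DualMMP-GAN builds on can be stated compactly: translating an image to the other modality and back should reproduce the input. Below is a sketch of the L1 cycle loss with plain functions standing in for the two generators; the paper's dual-scale discriminator and perceptual terms are omitted.

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle loss ||g_ba(g_ab(x)) - x||_1 from CycleGAN-style training:
    mapping modality A -> B -> A should recover the original image."""
    return float(np.abs(g_ba(g_ab(x)) - x).mean())
```

During training this term is summed for both directions (A→B→A and B→A→B) and added to the adversarial losses.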
Collapse
Affiliation(s)
- Li Zhu
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
| | - Qiong He
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
| | - Yue Huang
- School of Informatics, Xiamen University, Xiamen, 361005, China.
| | - Zihe Zhang
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
| | - Jiaming Zeng
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
| | - Ling Lu
- School of Information Engineering, Nanchang University, Nanchang, 330031, China.
| | - Weiming Kong
- Hospital of the Joint Logistics Support Force of the Chinese People's Liberation Army, No.908, Nanchang, 330002, China.
| | - Fuqing Zhou
- Department of Radiology, The First Affiliated Hospital, Nanchang University, Nanchang, 330006, China.
| |
Collapse
|
34
|
Qin C, Wu Y, Liao W, Zeng J, Liang S, Zhang X. Improved U-Net3+ with stage residual for brain tumor segmentation. BMC Med Imaging 2022; 22:14. [PMID: 35086482 PMCID: PMC8793173 DOI: 10.1186/s12880-022-00738-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 01/17/2022] [Indexed: 11/14/2022] Open
Abstract
Background In the encoding part of U-Net3+, the ability to extract brain tumor features is insufficient; as a result, features cannot be fused well during up-sampling, and segmentation accuracy is reduced. Methods In this study, we put forward an improved U-Net3+ segmentation network based on stage residuals. In the encoder part, an encoder based on the stage residual structure is used to solve the vanishing gradient problem caused by the increase in network depth, and it enhances the feature extraction ability of the encoder, which is instrumental to full feature fusion during up-sampling. Moreover, we replaced the batch normalization (BN) layers with filter response normalization (FRN) layers to eliminate the impact of batch size on the network. Based on the improved two-dimensional (2D) U-Net3+ model with stage residuals, a three-dimensional (3D) IResUnet3+ model is constructed, with appropriate methods for handling 3D data that achieve accurate segmentation with the 3D network. Results The experimental results showed that the sensitivity of WT, TC, and ET increased by 1.34%, 4.6%, and 8.44%, respectively, and the Dice coefficients of ET and WT increased by 3.43% and 1.03%, respectively. To facilitate further research, the source code is available at: https://github.com/YuOnlyLookOne/IResUnet3Plus. Conclusion The improved network achieves a significant improvement on the brain tumor segmentation task of the BraTS2018 dataset; compared with the classical networks U-Net, V-Net, ResUNet, and U-Net3+, the proposed network has fewer parameters and significantly improved accuracy.
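Filter response normalization, which this entry swaps in for batch normalization, normalizes each channel by its mean squared activation over the spatial dimensions only, so the result does not depend on batch statistics. A one-channel NumPy sketch (with the thresholded linear unit that usually accompanies FRN; parameters would be learned in practice):

```python
import numpy as np

def frn(x, gamma=1.0, beta=0.0, tau=0.0, eps=1e-6):
    """Filter Response Normalization + TLU for one (H, W) channel:
    divide by the root mean squared activation, then scale, shift,
    and apply a learned threshold. No batch statistics are used."""
    nu2 = np.mean(x ** 2)                       # mean squared activation
    y = gamma * x / np.sqrt(nu2 + eps) + beta   # normalize, scale, shift
    return np.maximum(y, tau)                   # thresholded linear unit
```

Because the statistic is per-image and per-channel, the output is invariant to rescaling the input, which is why FRN sidesteps the small-batch problems of BN.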
Collapse
Affiliation(s)
- Chuanbo Qin
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
| | - Yujie Wu
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
| | - Wenbin Liao
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China.,School of Computer and Software, Shenzhen University, Shenzhen, 518000, China
| | - Junying Zeng
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China.
| | - Shufen Liang
- Faculty of Intelligent Manufacturing, Wuyi University, Jiangmen, 529020, China
| | - Xiaozhi Zhang
- School of Electrical Engineering, University of South China, Hengyang, 421001, China
| |
Collapse
|
35
|
Abstract
In this paper, we present a deep convolutional neural network (CNN) for fully automatic brain tumor segmentation of both high- and low-grade gliomas in MRI images. Unlike normal tissues or organs, which usually have a fixed location or shape, brain tumors of different grades show great variation in location, size, structure, and morphological appearance. Moreover, severe data imbalance exists not only between brain tumor and non-tumor tissues, but also among the different sub-regions inside the tumor (e.g., enhancing tumor, necrosis, edema, and non-enhancing tumor). Therefore, we introduce a hybrid model to address the challenges of the multi-modality multi-class brain tumor segmentation task. First, we propose a dynamic focal Dice loss function that focuses more on the smaller tumor sub-regions with more complex structures during training; the learning capacity of the model is dynamically distributed to each class independently based on its training performance in different training stages. In addition, to better recognize the overall structure of the brain tumor and the morphological relationship among different tumor sub-regions, we relax the boundary constraints for the inner tumor regions in a coarse-to-fine fashion. A symmetric attention branch is also proposed to highlight the possible location of the brain tumor from the asymmetric features caused by the growth and expansion of abnormal tissues in the brain. Overall, to balance the model's learning capacity between spatial details and high-level morphological features, the proposed model relaxes the constraints on inner boundaries and complex details and enforces more attention on tumor shape, location, and the harder classes of tumor sub-regions. The proposed model is validated on the publicly available brain tumor dataset from real patients, BraTS 2019. The experimental results reveal that our model improves overall segmentation performance in comparison with state-of-the-art methods, with major progress in the recognition of tumor shape, the structural relationship of tumor sub-regions, and the segmentation of the more challenging sub-regions, e.g., the tumor core and enhancing tumor.
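The dynamic focal Dice loss described above extends the standard soft Dice loss, a minimal single-class form of which is sketched below. The `smooth` term is a common stabilizer for empty masks, not necessarily the paper's exact formulation, and the dynamic per-class focusing is not shown.

```python
import numpy as np

def soft_dice_loss(pred, target, smooth=1.0):
    """1 - Dice overlap between a soft prediction in [0, 1] and a binary
    mask; overlap-based, so it is less sensitive to class imbalance than
    plain cross-entropy."""
    inter = (pred * target).sum()
    return 1.0 - (2 * inter + smooth) / (pred.sum() + target.sum() + smooth)
```

A perfect prediction gives loss 0; a fully disjoint one approaches 1, with the gradient driven by the overlap term rather than per-pixel counts.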
Collapse
Affiliation(s)
- Pei Wang
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong.
| | - Albert C S Chung
- Lo Kwee-Seong Medical Image Analysis Laboratory, Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
| |
Collapse
|
36
|
Wang YL, Zhao ZJ, Hu SY, Chang FL. CLCU-Net: Cross-level connected U-shaped network with selective feature aggregation attention module for brain tumor segmentation. Comput Methods Programs Biomed 2021; 207:106154. [PMID: 34034031 DOI: 10.1016/j.cmpb.2021.106154] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 04/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Brain tumors are among the deadliest cancers worldwide. Thanks to the development of deep convolutional neural networks, many brain tumor segmentation methods now help clinicians diagnose and operate. However, most of these methods use multi-scale features insufficiently, reducing their ability to extract brain tumor features and details. To assist clinicians with accurate automatic segmentation of brain tumors, we built a new deep learning network that makes full use of multi-scale features. METHODS We propose a novel cross-level connected U-shaped network (CLCU-Net) that connects features of different scales to fully utilize multi-scale information. In addition, we propose a generic attention module (Segmented Attention Module, SAM) on the connections between different-scale features for selectively aggregating them, providing a more efficient connection across scales. Moreover, we employ deep supervision and spatial pyramid pooling (SPP) to further improve performance. RESULTS We evaluated our method on the BRATS 2018 dataset using five indexes and achieved excellent performance, with a Dice score of 88.5%, a precision of 91.98%, a recall of 85.62%, 36.34M parameters, and an inference time of 8.89 ms for the whole tumor, outperforming six state-of-the-art methods. Moreover, an analysis of different attention modules' heatmaps showed that the attention module proposed in this study is more suitable for segmentation tasks than other popular attention modules. CONCLUSION Both the qualitative and quantitative experimental results indicate that our cross-level connected U-shaped network with a selective feature aggregation attention module achieves accurate brain tumor segmentation and holds promise for implementation in clinical practice.
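Spatial pyramid pooling, one of the components named above, pools a feature map at several grid resolutions and concatenates the results into a fixed-length vector regardless of input size. A max-pooling sketch for a single 2D channel (generic SPP, not CLCU-Net's exact configuration):

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a 2D feature map into n-by-n grids for each pyramid level
    and concatenate the cell maxima into one fixed-length descriptor."""
    h, w = fmap.shape
    out = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, (i + 1) * h // n
                c0, c1 = j * w // n, (j + 1) * w // n
                out.append(fmap[r0:r1, c0:c1].max())
    return np.array(out)
```

With levels (1, 2, 4) the descriptor always has 1 + 4 + 16 = 21 entries, whether the input is 8x8 or 12x12, which is what lets SPP feed fixed-size layers from variable-size inputs.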
Collapse
Affiliation(s)
- Y L Wang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
| | - Z J Zhao
- School of Control Science and Engineering, Shandong University, Jinan 250061, China.
| | - S Y Hu
- the Department of General surgery, First Affiliated Hospital of Shandong First Medical University, Jinan 250012, China
| | - F L Chang
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
| |
Collapse
|
37
|
Huang D, Wang M, Zhang L, Li H, Ye M, Li A. Learning rich features with hybrid loss for brain tumor segmentation. BMC Med Inform Decis Mak 2021; 21:63. [PMID: 34330265 PMCID: PMC8323198 DOI: 10.1186/s12911-021-01431-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 02/09/2021] [Indexed: 11/10/2022] Open
Abstract
Background Accurately segmenting the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in the clinic, and there is a strong need for an automatic and objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature-fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) that extracts brain tumor features at different levels, and (2) a Multi-scale Feature Fusing Network (MSFFN) that merges features of all scales in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. Results We validated our method on BRATS 2015, achieving Dice scores of 0.86, 0.73, and 0.61 for the three tumor regions (complete, core, and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance on the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions alleviate the class imbalance issue and guide the training process. The proposed method can also be applied to other medical segmentation tasks.
Collapse
Affiliation(s)
- Daobin Huang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China.,School of Medical Information, Wannan Medical College, Wuhu, 241002, China.,Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
| | - Minghui Wang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
| | - Ling Zhang
- Department of Biochemistry, Wannan Medical College, Wuhu, 241002, China
| | - Haichun Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
| | - Minquan Ye
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China. .,Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China.
| | - Ao Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China.
| |
Collapse
|
38
|
Liew A, Lee CC, Lan BL, Tan M. CASPIANET++: A multidimensional Channel-Spatial Asymmetric attention network with Noisy Student Curriculum Learning paradigm for brain tumor segmentation. Comput Biol Med 2021; 136:104690. [PMID: 34352452 DOI: 10.1016/j.compbiomed.2021.104690] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 07/20/2021] [Accepted: 07/24/2021] [Indexed: 11/16/2022]
Abstract
Convolutional neural networks (CNNs) have been used quite successfully for semantic segmentation of brain tumors. However, current CNNs and attention mechanisms are stochastic in nature and neglect the morphological indicators used by radiologists to manually annotate regions of interest. In this paper, we introduce a channel- and spatial-wise asymmetric attention (CASPIAN) mechanism that leverages the inherent structure of tumors to detect regions of saliency. To demonstrate the efficacy of the proposed layer, we integrate it into a well-established CNN architecture and achieve higher Dice scores with fewer GPU resources. We also investigate the inclusion of auxiliary multiscale and multiplanar attention branches to increase the spatial context crucial to semantic segmentation tasks. The resulting architecture is the new CASPIANET++, which achieves Dice scores of 91.19%, 87.6%, and 81.03% for whole tumor, tumor core, and enhancing tumor, respectively. Furthermore, motivated by the scarcity of brain tumor data, we investigate the Noisy Student method for segmentation tasks. Our new Noisy Student Curriculum Learning paradigm, which infuses noise incrementally to increase the complexity of the training images exposed to the network, further boosts the enhancing tumor Dice score to 81.53%. Additional validation on the BraTS2020 data shows that the Noisy Student Curriculum Learning method works well without any additional training or fine-tuning.
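Channel attention of the squeeze-and-excitation flavor is a common building block behind channel-spatial modules like the one described here. The sketch below replaces the learned two-layer MLP in the excitation step with a bare sigmoid for brevity, so it is illustrative rather than the CASPIAN layer itself:

```python
import numpy as np

def channel_attention(feats):
    """Squeeze-and-excitation style channel gating for feats of shape
    (C, H, W): global-average-pool each channel ('squeeze'), map the
    descriptor through a sigmoid gate ('excitation'), and rescale."""
    z = feats.mean(axis=(1, 2))          # per-channel descriptor, shape (C,)
    gate = 1.0 / (1.0 + np.exp(-z))      # learned MLP omitted in this sketch
    return feats * gate[:, None, None]   # reweight channels
```

Channels whose pooled response is strongly positive pass through nearly unchanged, while weakly or negatively responding channels are suppressed.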
Collapse
Affiliation(s)
- Andrea Liew
- Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, 47500, Malaysia
| | - Chun Cheng Lee
- Radiology Department, Sunway Medical Centre, Bandar Sunway, 47500, Malaysia
| | - Boon Leong Lan
- Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, 47500, Malaysia; Advanced Engineering Platform, School of Engineering, Monash University Malaysia, Bandar Sunway, 47500, Malaysia
| | - Maxine Tan
- Electrical and Computer Systems Engineering Discipline, School of Engineering, Monash University Malaysia, Bandar Sunway, 47500, Malaysia; School of Electrical and Computer Engineering, The University of Oklahoma, Norman, OK, 73019, USA.
| |
Collapse
|
39
|
Bal A, Banerjee M, Chaki R, Sharma P. An efficient brain tumor image classifier by combining multi-pathway cascaded deep neural network and handcrafted features in MR images. Med Biol Eng Comput 2021; 59:1495-527. [PMID: 34184181 DOI: 10.1007/s11517-021-02370-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2020] [Accepted: 04/27/2021] [Indexed: 10/21/2022]
Abstract
Accurate segmentation and delineation of the sub-tumor regions are very challenging tasks due to the nature of the tumor. Convolutional neural networks (CNNs) have achieved the most promising performance for brain tumor segmentation; however, handcrafted features remain very important for accurately identifying tumor boundary regions. The present work proposes a robust deep learning-based model with three different CNN architectures, along with pre-defined handcrafted features, for brain tumor segmentation, mainly to find more prominent boundaries of the core and enhancing tumor regions. Typically, a CNN does not use pre-defined handcrafted features, because it extracts features automatically. In this work, several pre-defined handcrafted features are computed from four MRI modalities (T2, FLAIR, T1c, and T1) with the help of additional handcrafted masks chosen according to user interest, and are combined with the convolutional (automatic) features to improve the overall segmentation performance of the proposed CNN model. Multi-pathway CNNs are explored alongside single-pathway CNNs, extracting local and global features simultaneously to identify the accurate sub-regions of the tumor with the help of the handcrafted features. The present work uses a cascaded CNN architecture, in which the output of one CNN is provided as additional input to subsequent CNNs. To extract the handcrafted features, a convolutional operation was applied to the four MRI modalities with several pre-defined masks to produce a pre-defined set of handcrafted features. The present work also investigates the usefulness of intensity normalization and data augmentation in the pre-processing stage to handle difficulties related to the imbalance of tumor labels. The proposed method was evaluated on the BraTS 2018 dataset and achieved more promising results than existing published methods with respect to different metrics, such as specificity, sensitivity, and the Dice similarity coefficient (DSC), for the complete, core, and enhancing tumor regions. Quantitatively, a notable gain is achieved around the boundaries of the sub-tumor regions using the proposed two-pathway CNN together with the handcrafted features.
Collapse
|
40
|
Bangalore Yogananda CG, Shah BR, Vejdani-Jahromi M, Nalawade SS, Murugesan GK, Yu FF, Pinho MC, Wagner BC, Emblem KE, Bjørnerud A, Fei B, Madhuranthakam AJ, Maldjian JA. A Fully Automated Deep Learning Network for Brain Tumor Segmentation. ACTA ACUST UNITED AC 2021; 6:186-193. [PMID: 32548295 PMCID: PMC7289260 DOI: 10.18383/j.tom.2019.00026] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
We developed a fully automated method for brain tumor segmentation using deep learning; 285 brain tumor cases with multiparametric magnetic resonance images from the BraTS2018 data set were used. We designed 3 separate 3D-Dense-UNets to simplify the complex multiclass segmentation problem into individual binary-segmentation problems for each subcomponent. We implemented a 3-fold cross-validation to generalize the network's performance. The mean cross-validation Dice-scores for whole tumor (WT), tumor core (TC), and enhancing tumor (ET) segmentations were 0.92, 0.84, and 0.80, respectively. We then retrained the individual binary-segmentation networks using 265 of the 285 cases, with 20 cases held-out for testing. We also tested the network on 46 cases from the BraTS2017 validation data set, 66 cases from the BraTS2018 validation data set, and 52 cases from an independent clinical data set. The average Dice-scores for WT, TC, and ET were 0.90, 0.84, and 0.80, respectively, on the 20 held-out testing cases. The average Dice-scores for WT, TC, and ET on the BraTS2017 validation data set, the BraTS2018 validation data set, and the clinical data set were as follows: 0.90, 0.80, and 0.78; 0.90, 0.82, and 0.80; and 0.85, 0.80, and 0.77, respectively. A fully automated deep learning method was developed to segment brain tumors into their subcomponents, which achieved high prediction accuracy on the BraTS data set and on the independent clinical data set. This method is promising for implementation into a clinical workflow.
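The 3-fold cross-validation used above to generalize the network's performance partitions the 285 cases into three train/validation splits. A minimal index-level sketch (deterministic, no shuffling or stratification, which a real study would likely add):

```python
import numpy as np

def kfold_indices(n, k=3):
    """Return k (train_idx, val_idx) pairs covering all n samples,
    with each sample appearing in exactly one validation fold."""
    folds = np.array_split(np.arange(n), k)
    return [
        (np.concatenate([f for j, f in enumerate(folds) if j != i]), folds[i])
        for i in range(k)
    ]
```

For 285 cases and k=3, each model in the cross-validation trains on 190 cases and validates on the held-out 95.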
Collapse
Affiliation(s)
- Chandan Ganesh Bangalore Yogananda
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Bhavya R Shah
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Maryam Vejdani-Jahromi
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Sahil S Nalawade
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Gowtham K Murugesan
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Frank F Yu
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Marco C Pinho
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Benjamin C Wagner
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Kyrre E Emblem
- Department of Diagnostic Physics, Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
| | - Atle Bjørnerud
- Computational Radiology and Artificial Intelligence (CRAI), Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; and
| | - Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
| | - Ananth J Madhuranthakam
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| | - Joseph A Maldjian
- Department of Radiology, Advanced Neuroscience Imaging Research Lab (ANSIR Lab), University of Texas Southwestern Medical Center, Dallas, TX
| |
Collapse
|
41
|
Saleem H, Shahid AR, Raza B. Visual interpretability in 3D brain tumor segmentation network. Comput Biol Med 2021; 133:104410. [PMID: 33894501 DOI: 10.1016/j.compbiomed.2021.104410] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 04/15/2021] [Accepted: 04/15/2021] [Indexed: 11/23/2022]
Abstract
Medical image segmentation is a complex yet essential task for diagnostic procedures such as brain tumor detection. Several 3D convolutional neural network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, due to the black-box nature of CNNs, integrating such models into decisions about diagnosis and treatment is high-risk in the healthcare domain: it is difficult to explain the rationale behind a model's predictions because of the lack of interpretability. Hence, the successful deployment of deep learning models in the medical domain requires accurate as well as transparent predictions. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input magnetic resonance imaging (MRI) images and investigate the prediction strategy of the model. We also evaluate the interpretability methodology quantitatively for medical image segmentation tasks, and validate the extended methodology quantitatively to ensure that our visual explanations do not represent false information. We find that the information captured by the model is coherent with the domain knowledge of human experts, making it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate visual explanations.
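A gradient-free interpretability approach of the kind this entry contrasts with gradient-based ones can be illustrated with occlusion sensitivity: mask image patches one at a time and record how much the model's score drops. This is a generic 2D sketch with a stand-in `score_fn`, not the paper's specific extended method:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Gradient-free saliency: zero out each patch in turn and record the
    drop in score_fn(image). Large drops mark regions the model relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros_like(image, dtype=float)
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = 0
            heat[r:r + patch, c:c + patch] = base - score_fn(occluded)
    return heat
```

Because only forward passes are needed, the same probe works on models whose gradients are unavailable or unreliable.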
Collapse
|
42
|
Chen B, Zhang L, Chen H, Liang K, Chen X. A novel extended Kalman filter with support vector machine based method for the automatic diagnosis and segmentation of brain tumors. Comput Methods Programs Biomed 2021; 200:105797. [PMID: 33317871 DOI: 10.1016/j.cmpb.2020.105797] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 10/10/2020] [Indexed: 06/12/2023]
Abstract
BACKGROUND Brain tumors are life-threatening, and their early detection is crucial for improving survival rates. Conventionally, brain tumors are detected by radiologists based on their clinical experience, but this process is inefficient. This paper proposes a machine learning-based method to 1) determine the presence of a tumor, 2) automatically segment the tumor, and 3) classify it as benign or malignant. METHODS We implemented an Extended Kalman Filter with Support Vector Machine (EKF-SVM), an image analysis platform based on an SVM for automated brain tumor detection. A development dataset of 120 patients provided by Tiantan Hospital was used for algorithm training. Our machine learning algorithm has five components. First, image standardization is applied to all the images, followed by noise removal with a non-local means filter and contrast enhancement with improved dynamic histogram equalization. Second, a gray-level co-occurrence matrix is used to extract image features. Third, the extracted features are fed into an SVM for initial classification of the MRI, and an EKF is used to classify brain tumors in the brain MRIs. Fourth, cross-validation is used to verify the accuracy of the classifier. Finally, an automatic segmentation method combining k-means clustering and region growing is used for detecting brain tumors. RESULTS With regard to diagnostic performance, the EKF-SVM had a 96.05% accuracy for automatically classifying brain tumors. Segmentation based on k-means clustering was capable of identifying the tumor boundaries and extracting the whole tumor. CONCLUSION The proposed EKF-SVM-based method has better classification performance for positive brain tumor images, mainly because of the dearth of negative examples in our dataset. Therefore, future work should obtain more negative examples and investigate the performance of deep learning algorithms such as convolutional neural networks for automatic diagnosis and segmentation of brain tumors.
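As a concrete illustration of the GLCM feature-extraction step described above, the following NumPy sketch computes a single-offset co-occurrence matrix and one Haralick texture feature (contrast). This is a minimal hypothetical example, not the authors' implementation, which pairs such features with an SVM and EKF:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()  # joint probabilities of gray-level pairs

def contrast(g):
    """Haralick contrast: sum_ij (i - j)^2 * p(i, j)."""
    i, j = np.indices(g.shape)
    return ((i - j) ** 2 * g).sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
G = glcm(img)
```

In practice such features, computed over several offsets and angles, would form the feature vector fed to the classifier.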
Affiliation(s)
- Baoshi Chen
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Lingling Zhang
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Hongyan Chen
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Kewei Liang
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Xuzhu Chen
- Department of Neuroradiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
43
Khosravanian A, Rahmanimanesh M, Keshavarzi P, Mozaffari S. A level set method based on domain transformation and bias correction for MRI brain tumor segmentation. J Neurosci Methods 2021; 352:109091. [PMID: 33515604 DOI: 10.1016/j.jneumeth.2021.109091] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Revised: 01/18/2021] [Accepted: 01/21/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Intensity inhomogeneity is one of the common artifacts in image processing. This artifact makes image segmentation more challenging and adversely affects the performance of intensity-based image processing algorithms. NEW METHOD In this paper, a novel region-based level set method is proposed for segmenting images with intensity inhomogeneity, with application to brain tumor segmentation in magnetic resonance imaging (MRI) scans. For this purpose, the inhomogeneous regions are first modeled as Gaussian distributions with different means and variances and then transferred into a new domain, which preserves the Gaussian intensity distribution of each region but with better separation. Moreover, our method can perform bias field correction. To this end, the bias field is represented by a linear combination of smooth basis functions, which enables better intensity inhomogeneity modeling. The fundamental level set formulation and the bias field are therefore modified in the proposed approach. RESULTS To assess the performance of the proposed method, different inhomogeneous images, including synthetic images as well as real brain magnetic resonance images from the BraTS 2017 dataset, are segmented. Evaluated by the Dice, Jaccard, Sensitivity, and Specificity metrics, the results show that the proposed method suppresses the side effect of over-smoothing object boundaries and has good accuracy in segmenting images with extreme intensity non-uniformity. The mean values of these metrics in brain tumor segmentation are 0.86 ± 0.03, 0.77 ± 0.05, 0.94 ± 0.04, and 0.99 ± 0.003, respectively. COMPARISON WITH EXISTING METHOD(S) Our method was compared with six state-of-the-art image segmentation methods: the Chan-Vese (CV), Local Intensity Clustering (LIC), Local iNtensity Clustering (LINC), Global inhomogeneous intensity clustering (GINC), Multiplicative Intrinsic Component Optimization (MICO), and Local Statistical Active Contour Model (LSACM) models. We used qualitative and quantitative comparisons on synthetic and real images. Experiments indicate that our proposed method is robust to noise and intensity non-uniformity and outperforms the other state-of-the-art segmentation methods in terms of bias field correction, noise resistance, and segmentation accuracy. CONCLUSIONS Experimental results show that the proposed model is capable of accurate segmentation and bias field estimation simultaneously; it suppresses the side effect of over-smoothing object boundaries and has good accuracy in segmenting images with extreme intensity non-uniformity.
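The bias-field idea above, representing the field as a linear combination of smooth basis functions, can be sketched with a least-squares fit over low-order polynomial bases. This is only an illustration of the representation, not the paper's level set formulation, and the polynomial basis choice is an assumption:

```python
import numpy as np

def estimate_bias(img, degree=2):
    """Fit a smooth bias field b(x, y) = sum_k w_k * B_k(x, y) by least squares,
    using low-order monomials as the smooth basis functions."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xn, yn = xx / (w - 1), yy / (h - 1)      # normalize coordinates to [0, 1]
    basis = [xn**i * yn**j for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.stack([b.ravel() for b in basis], axis=1)
    wts, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return (A @ wts).reshape(h, w)

# synthetic example: uniform tissue (intensity 100) under a linear bias ramp
h, w = 32, 32
ramp = 1.0 + 0.5 * np.mgrid[0:h, 0:w][1] / (w - 1)
img = 100.0 * ramp
est = estimate_bias(img)
corrected = img / (est / est.mean())   # divide out the estimated bias
```

Because the synthetic bias is itself polynomial, the fit here is exact; on real MRI data the basis only approximates the field.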
44
Khosravanian A, Rahmanimanesh M, Keshavarzi P, Mozaffari S. Fast level set method for glioma brain tumor segmentation based on Superpixel fuzzy clustering and lattice Boltzmann method. Comput Methods Programs Biomed 2021; 198:105809. [PMID: 33130495 DOI: 10.1016/j.cmpb.2020.105809] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 10/12/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Brain tumor segmentation is a challenging issue due to noise, artifacts, and intensity non-uniformity in magnetic resonance images (MRI). Manual MRI segmentation is a very tedious, time-consuming, and user-dependent task. This paper aims to present a novel level set method that addresses the aforementioned challenges for reliable and automatic brain tumor segmentation. METHODS In the proposed method, a new functional, based on the level set method, is presented for medical image segmentation. First, we define a superpixel fuzzy clustering objective function; to create the superpixel regions, a multiscale morphological gradient reconstruction (MMGR) operation is used. Second, a novel fuzzy energy functional is defined based on the superpixel segmentation and histogram computation. The level set equations are then obtained using the gradient descent method. Finally, we solve the level set equations using the lattice Boltzmann method (LBM). To evaluate the performance of the proposed method, both a synthetic image dataset and real glioma brain tumor images from the BraTS 2017 dataset are used. RESULTS Experiments indicate that our proposed method is robust to noise, initialization, and intensity non-uniformity. Moreover, it is faster and more accurate than other state-of-the-art segmentation methods: the average running time is 3.25 seconds, and the average Dice and Jaccard coefficients for automatic tumor segmentation against ground truth are 0.93 and 0.87, respectively. The mean values of the Hausdorff distance, Mean Absolute Distance (MAD), accuracy, sensitivity, and specificity are 2.70, 0.005, 0.9940, 0.9183, and 0.9972, respectively. CONCLUSIONS Our proposed method shows satisfactory results for glioma brain tumor segmentation owing to the accurate segmentation produced by superpixel fuzzy clustering. Moreover, our method is fast and robust to noise, initialization, and intensity non-uniformity. Since most medical images suffer from these problems, the proposed method can be more effective for complicated medical image segmentation.
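The Dice and Jaccard overlap metrics reported in the results above are straightforward to compute from binary masks; a minimal NumPy sketch on a toy prediction/ground-truth pair:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union)."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True   # 16 pixels
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True       # 16 pixels, 9 shared
```

On these masks Dice is 2·9/32 = 0.5625 and Jaccard is 9/23, illustrating that Dice is always at least as large as Jaccard for the same overlap.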
Affiliation(s)
- Asieh Khosravanian
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran
- Parviz Keshavarzi
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran
- Saeed Mozaffari
- Faculty of Electrical and Computer Engineering, Semnan University, Semnan, Iran
45
Zhou T, Canu S, Ruan S. Fusion based on attention mechanism and context constraint for multi-modal brain tumor segmentation. Comput Med Imaging Graph 2020; 86:101811. [PMID: 33232843 DOI: 10.1016/j.compmedimag.2020.101811] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2020] [Revised: 10/06/2020] [Accepted: 10/23/2020] [Indexed: 11/18/2022]
Abstract
This paper presents a 3D brain tumor segmentation network for multi-sequence MRI datasets based on deep learning. We propose a three-stage network: generating constraints, fusion under constraints, and final segmentation. In the first stage, an initial 3D U-Net segmentation network is introduced to produce an additional context constraint for each tumor region. Under the obtained constraints, the multi-sequence MRI data are then fused using an attention mechanism to achieve three single-tumor-region segmentations. Considering the location relationship of the tumor regions, a new loss function is introduced to deal with the multi-class segmentation problem. Finally, a second 3D U-Net is applied to combine and refine the three single prediction results. In each stage, only 8 initial filters are used, which significantly decreases the number of parameters to be estimated. We evaluated our method on the BraTS 2017 dataset. The results are promising in terms of Dice score, Hausdorff distance, and the amount of memory required for training.
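The modality-fusion step can be caricatured as voxel-wise softmax attention over the MRI sequences. In the paper the attention weights are learned; the sketch below, which derives the weights from the activations themselves, is only a hypothetical stand-in for that mechanism:

```python
import numpy as np

def attention_fuse(feats):
    """Fuse per-modality feature maps with voxel-wise softmax attention.
    feats: array of shape (M, H, W), one map per MRI sequence."""
    w = np.exp(feats - feats.max(axis=0))   # numerically stable softmax
    w = w / w.sum(axis=0)                   # weights sum to 1 over modalities
    return (w * feats).sum(axis=0), w

# two toy "modalities": one weakly, one strongly activated
feats = np.stack([np.full((2, 2), 1.0), np.full((2, 2), 3.0)])
fused, w = attention_fuse(feats)
```

The fused map is pulled toward the strongly activated modality at every voxel, which is the qualitative behavior attention-based fusion is meant to provide.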
Affiliation(s)
- Tongxue Zhou
- Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France; INSA de Rouen, LITIS - Apprentissage, Rouen 76800, France; Normandie Univ, INSA Rouen, UNIROUEN, UNIHAVRE, LITIS, France
- Stéphane Canu
- INSA de Rouen, LITIS - Apprentissage, Rouen 76800, France; Normandie Univ, INSA Rouen, UNIROUEN, UNIHAVRE, LITIS, France
- Su Ruan
- Université de Rouen Normandie, LITIS - QuantIF, Rouen 76183, France; Normandie Univ, INSA Rouen, UNIROUEN, UNIHAVRE, LITIS, France
46
Lyu C, Shu H. A Two-Stage Cascade Model with Variational Autoencoders and Attention Gates for MRI Brain Tumor Segmentation. Brainlesion 2020; 2020:435-447. [PMID: 36037049 DOI: 10.1007/978-3-030-72084-1_39] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Automatic MRI brain tumor segmentation is of vital importance for disease diagnosis, monitoring, and treatment planning. In this paper, we propose a two-stage encoder-decoder-based model for brain tumor subregional segmentation. Variational autoencoder regularization is utilized in both stages to prevent overfitting. The second-stage network adopts attention gates and is additionally trained on an expanded dataset formed from the first-stage outputs. On the BraTS 2020 validation dataset, the proposed method achieves mean Dice scores of 0.9041, 0.8350, and 0.7958, and Hausdorff distances (95%) of 4.953, 6.299, and 23.608 for the whole tumor, tumor core, and enhancing tumor, respectively. The corresponding results on the BraTS 2020 testing dataset are 0.8729, 0.8357, and 0.8205 for the Dice score, and 11.4288, 19.9690, and 15.6711 for the Hausdorff distance. The code is publicly available at https://github.com/shu-hai/two-stage-VAE-Attention-gate-BraTS2020.
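The variational-autoencoder regularization mentioned above adds a KL-divergence penalty between the learned latent posterior and a standard normal prior; for a diagonal Gaussian this penalty has a simple closed form, sketched below (the loss weighting used in the paper is not reproduced here):

```python
import numpy as np

def kl_gauss(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    This is the closed-form VAE regularizer added to the training loss."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

zero_penalty = kl_gauss(np.zeros(4), np.zeros(4))   # posterior equals prior
shifted = kl_gauss(np.ones(1), np.zeros(1))         # mean shifted to 1
```

The penalty vanishes exactly when the posterior matches the prior and grows as the encoder drifts away from it, which is what discourages overfitting.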
Affiliation(s)
- Chenggang Lyu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY 10003, USA
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY 10003, USA
47
Wijethilake N, Islam M, Ren H. Radiogenomics model for overall survival prediction of glioblastoma. Med Biol Eng Comput 2020; 58:1767-1777. [PMID: 32488372 DOI: 10.1007/s11517-020-02179-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Accepted: 04/15/2020] [Indexed: 10/24/2022]
Abstract
Glioblastoma multiforme (GBM) is a very aggressive and infiltrative brain tumor with a high mortality rate. Radiomic models with handcrafted features exist to estimate glioblastoma prognosis. In this work, we evaluate to what extent combining genomic with radiomic features affects the prognosis of overall survival (OS) in patients with GBM. We apply a hypercolumn-based convolutional network to segment tumor regions from magnetic resonance images (MRI), extract radiomic features (geometric, shape, and histogram), and fuse them with gene expression profiling data to predict the survival rate for each patient. Several state-of-the-art regression models, such as linear regression, support vector machines, and neural networks, are exploited to conduct the prognosis analysis. The Cancer Genome Atlas (TCGA) dataset of MRI and gene expression profiling is used in the study to observe the model performance on radiomic, genomic, and radiogenomic features. The results demonstrate that genomic data are correlated with GBM OS prediction, and the radiogenomic model outperforms both the radiomic and genomic models. We further illustrate the most significant genes, such as IL1B, KLHL4, ATP1A2, IQGAP2, and TMSL8, which contribute highly to the prognosis analysis. Graphical Abstract: an overview of our proposed fully automated radiogenomic approach for survival prediction, which fuses geometric, intensity, volumetric, genomic, and clinical information to predict OS.
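At its simplest, the radiogenomic fusion amounts to concatenating radiomic and genomic feature vectors before regression. A toy ordinary-least-squares sketch on synthetic data follows; the feature names, dimensions, and coefficients are invented for illustration and are not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
radiomic = rng.normal(size=(n, 3))   # stand-ins for volume/shape/histogram stats
genomic = rng.normal(size=(n, 2))    # stand-ins for two gene expression levels
X = np.hstack([radiomic, genomic, np.ones((n, 1))])   # fused features + bias term

# noiseless synthetic survival targets generated from known coefficients
true_w = np.array([2.0, 0.0, 1.0, -1.5, 0.5, 300.0])
y = X @ true_w
w_fit, *_ = np.linalg.lstsq(X, y, rcond=None)         # ordinary least squares
```

With noiseless targets the fit recovers the generating coefficients exactly; the point is only that fused radiomic plus genomic predictors enter a regression model as one design matrix.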
Affiliation(s)
- Navodini Wijethilake
- Department of Biomedical Engineering, National University of Singapore, Singapore; Department of Electronics and Telecommunications, University of Moratuwa, Moratuwa, Sri Lanka
- Mobarakol Islam
- Department of Biomedical Engineering, National University of Singapore, Singapore; NUS Graduate School for Integrative Sciences and Engineering (NGS), National University of Singapore, Singapore
- Hongliang Ren
- Department of Biomedical Engineering, National University of Singapore, Singapore; Chinese University of Hong Kong, Hong Kong
48
Ben Naceur M, Akil M, Saouli R, Kachouri R. Fully automatic brain tumor segmentation with deep learning-based selective attention using overlapping patches and multi-class weighted cross-entropy. Med Image Anal 2020; 63:101692. [PMID: 32417714 DOI: 10.1016/j.media.2020.101692] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 03/18/2020] [Accepted: 03/19/2020] [Indexed: 02/08/2023]
Abstract
In this paper, we present a new deep convolutional neural network (CNN) dedicated to fully automatic segmentation of high- and low-grade glioblastoma brain tumors. The proposed CNN model is inspired by the occipito-temporal pathway, which has a special function called selective attention that uses different receptive field sizes in successive layers to pick out the crucial objects in a scene. Using the selective attention technique to develop the CNN model thus helps to maximize the extraction of relevant features from MRI images. We have also addressed two further issues: class imbalance and the spatial relationship among image patches. To address the first issue, we propose two steps: equal sampling of image patches and an experimental analysis of the effect of a weighted cross-entropy loss function on the segmentation results. To overcome the second issue, we studied the effect of overlapping patches against adjacent patches; overlapping patches show better segmentation results because they introduce the global context as well as the local features of the image patches, compared with conventional adjacent patches. Our experimental results are reported on the BRATS-2018 dataset, where our end-to-end deep learning model achieved state-of-the-art performance. The median Dice scores of our fully automatic segmentation model are 0.90, 0.83, and 0.83 for the whole tumor, tumor core, and enhancing tumor, respectively, compared with radiologists' Dice scores, which are in the range of 74%-85%. Moreover, our proposed CNN model is not only computationally efficient at inference time, but can also segment a whole brain in 12 seconds on average. Finally, the proposed deep learning model provides an accurate and reliable segmentation result, which makes it suitable for adoption in research and as a part of different clinical settings.
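The class-weighted cross-entropy used to counter class imbalance can be sketched directly. The weight values below are illustrative, not those tuned in the paper, and the weight-sum normalization is one common convention:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Multi-class weighted cross-entropy over a batch of voxels.
    probs: (N, C) softmax outputs; labels: (N,) class indices;
    class_weights: (C,) per-class weights (rare classes get larger weights)."""
    n = np.arange(len(labels))
    w = class_weights[labels]               # per-voxel weight from its true class
    return -(w * np.log(probs[n, labels])).sum() / w.sum()

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
labels = np.array([0, 1, 1])
cw = np.array([1.0, 4.0])   # up-weight the (rarer) tumor class
loss = weighted_cross_entropy(probs, labels, cw)
```

Up-weighting the rare class makes its misclassified voxels dominate the loss, so the network cannot minimize the objective by predicting only the background class.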
Affiliation(s)
- Mostefa Ben Naceur
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France; Smart Computer Sciences Laboratory, Computer Sciences Department, Exact.Sc, and SNL, University of Biskra, Algeria
- Mohamed Akil
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France
- Rachida Saouli
- Smart Computer Sciences Laboratory, Computer Sciences Department, Exact.Sc, and SNL, University of Biskra, Algeria
- Rostom Kachouri
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France
49
Zhou Z, He Z, Shi M, Du J, Chen D. 3D dense connectivity network with atrous convolutional feature pyramid for brain tumor segmentation in magnetic resonance imaging of human heads. Comput Biol Med 2020; 121:103766. [PMID: 32568669 DOI: 10.1016/j.compbiomed.2020.103766] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2020] [Revised: 04/15/2020] [Accepted: 04/15/2020] [Indexed: 10/24/2022]
Abstract
Existing deep convolutional neural network (DCNN) based methods have achieved significant progress in automatic glioma segmentation in magnetic resonance imaging (MRI) data. However, two main problems affect the performance of traditional DCNNs constructed by simply stacking convolutional layers: exploding/vanishing gradients and limitations on the feature computations. To address these challenges, we propose a novel framework to automatically segment brain tumors. First, a three-dimensional (3D) dense connectivity architecture is used to build the backbone for feature reuse. Second, we design a new feature pyramid module using 3D atrous convolutional layers and add it to the end of the backbone to fuse multiscale contexts. Finally, the network is equipped with a 3D deep supervision mechanism to promote training. On the multimodal brain tumor image segmentation benchmark (BRATS) datasets, our method achieves Dice similarity coefficient values of 0.87, 0.72, and 0.70 on the BRATS 2013 Challenge; 0.84, 0.70, and 0.61 on the BRATS 2013 LeaderBoard; 0.83, 0.70, and 0.62 on the BRATS 2015 Testing set; and 0.8642, 0.7738, and 0.7525 on the BRATS 2018 Validation set for whole tumors, tumor cores, and enhancing cores, respectively. Compared with published state-of-the-art methods, the proposed method achieves promising accuracy and fast processing, demonstrating good potential for clinical medicine.
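The atrous (dilated) convolutions used in the feature pyramid enlarge the receptive field by spacing kernel taps `rate` samples apart, without adding parameters. A 1D NumPy sketch of the idea (the paper uses learned 3D kernels; this toy example only shows how dilation stretches the receptive field):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1D atrous (dilated) convolution: kernel taps are spaced `rate`
    samples apart, enlarging the receptive field at no parameter cost."""
    k = len(kernel)
    span = (k - 1) * rate + 1                # effective receptive field
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * rate] for j in range(k))
    return out

x = np.arange(8, dtype=float)
k3 = np.array([1.0, 1.0, 1.0])
dense = atrous_conv1d(x, k3, rate=1)    # receptive field 3
dilated = atrous_conv1d(x, k3, rate=2)  # receptive field 5, same 3 weights
```

Stacking layers with increasing rates is what lets a pyramid module fuse context at several scales while keeping the parameter count fixed.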
Affiliation(s)
- Zexun Zhou
- College of Computer Science, Chongqing University, Chongqing, 400044, China
- Zhongshi He
- College of Computer Science, Chongqing University, Chongqing, 400044, China
- Meifeng Shi
- College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, 400054, China
- Jinglong Du
- College of Computer Science, Chongqing University, Chongqing, 400044, China
- Dingding Chen
- College of Computer Science, Chongqing University, Chongqing, 400044, China
50
Hu X, Luo W, Hu J, Guo S, Huang W, Scott MR, Wiest R, Dahlweid M, Reyes M. Brain SegNet: 3D local refinement network for brain lesion segmentation. BMC Med Imaging 2020; 20:17. [PMID: 32046685 PMCID: PMC7014943 DOI: 10.1186/s12880-020-0409-2] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Accepted: 01/03/2020] [Indexed: 11/20/2022] Open
Abstract
Accurate segmentation of brain lesions in MR images (MRIs) is important for improving cancer diagnosis, surgical planning, and outcome prediction. However, manual and accurate segmentation of brain lesions from 3D MRIs is highly expensive, time-consuming, and prone to user biases. We present an efficient yet conceptually simple brain segmentation network (referred to as Brain SegNet), a 3D residual framework for automatic voxel-wise segmentation of brain lesions. Our model directly predicts dense voxel segmentations of brain tumor or ischemic stroke regions in 3D brain MRIs. The proposed 3D segmentation network runs at about 0.5 s per MRI, about 50 times faster than previous approaches (Med Image Anal 43:98-111, 2018; Med Image Anal 36:61-78, 2017). Our model is evaluated on the BRATS 2015 benchmark for brain tumor segmentation, where it obtains state-of-the-art results, surpassing the recently published results reported in Med Image Anal 43:98-111, 2018 and Med Image Anal 36:61-78, 2017. We further applied the proposed Brain SegNet to ischemic stroke lesion outcome prediction, with impressive results achieved on the Ischemic Stroke Lesion Segmentation (ISLES) 2017 database.
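The 3D residual framework builds on residual units of the form out = x + F(x), whose identity path lets gradients bypass the transformation and eases the training of very deep networks. A minimal non-convolutional sketch of such a unit (the weights here are toy values, not the network's):

```python
import numpy as np

def residual_block(x, w1, w2):
    """Residual unit out = x + W2·relu(W1·x): the identity path carries
    the input forward unchanged, so the block only learns a correction."""
    relu = lambda z: np.maximum(z, 0.0)
    return x + w2 @ relu(w1 @ x)

x = np.array([1.0, -1.0])
w1 = np.eye(2)
w2 = 0.5 * np.eye(2)
y = residual_block(x, w1, w2)
```

Note that even where the ReLU zeroes the transformation (the negative component), the input still passes through unchanged, which is exactly the gradient-preserving property residual networks rely on.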
Affiliation(s)
- Xiaojun Hu
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Weijian Luo
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China
- Jiliang Hu
- Department of Neurosurgery, Second Clinical Medical College of Jinan University (Shenzhen People's Hospital), Shenzhen, China
- Sheng Guo
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Weilin Huang
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Matthew R Scott
- Malong Technologies, Shenzhen, China; Shenzhen Malong Artificial Intelligence Research Center, Shenzhen, China
- Roland Wiest
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
- Michael Dahlweid
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland
- Mauricio Reyes
- Imaging A.I. Lab, Insel Data Science Center, Bern University Hospital, Bern, Switzerland