1
Chuang CH, Chang KY, Huang CS, Bessas AM. Augmenting brain-computer interfaces with ART: An artifact removal transformer for reconstructing multichannel EEG signals. Neuroimage 2025; 310:121123. PMID: 40057290. DOI: 10.1016/j.neuroimage.2025.121123.
Abstract
Artifact removal in electroencephalography (EEG) is a longstanding challenge that significantly impacts neuroscientific analysis and brain-computer interface (BCI) performance. Tackling this problem demands advanced algorithms, extensive noisy-clean training data, and thorough evaluation strategies. This study presents the Artifact Removal Transformer (ART), an innovative EEG denoising model employing transformer architecture to adeptly capture the transient millisecond-scale dynamics characteristic of EEG signals. Our approach offers a holistic, end-to-end denoising solution that simultaneously addresses multiple artifact types in multichannel EEG data. We enhanced the generation of noisy-clean EEG data pairs using an independent component analysis, thus fortifying the training scenarios critical for effective supervised learning. We performed comprehensive validations using a wide range of open datasets from various BCI applications, employing metrics like mean squared error and signal-to-noise ratio, as well as sophisticated techniques such as source localization and EEG component classification. Our evaluations confirm that ART surpasses other deep-learning-based artifact removal methods, setting a new benchmark in EEG signal processing. This advancement not only boosts the accuracy and reliability of artifact removal but also promises to catalyze further innovations in the field, facilitating the study of brain dynamics in naturalistic environments.
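The paper's headline metrics are simple to state in code. Below is a minimal sketch, not taken from the cited work, of how mean squared error and signal-to-noise ratio might be computed between a denoised multichannel EEG segment and its clean reference; the array shapes and toy data are assumptions.

```python
# Hedged sketch (not the authors' code): MSE and SNR (in dB) between a clean
# reference EEG segment and a denoised reconstruction.
import numpy as np

def mse(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Mean squared error over all channels and time points."""
    return float(np.mean((clean - denoised) ** 2))

def snr_db(clean: np.ndarray, denoised: np.ndarray) -> float:
    """Signal-to-noise ratio in dB: signal power over residual-error power."""
    residual = clean - denoised
    return float(10.0 * np.log10(np.sum(clean ** 2) / np.sum(residual ** 2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.standard_normal((30, 1024))            # 30 channels x 1024 samples (assumed)
    denoised = clean + 0.1 * rng.standard_normal(clean.shape)
    print(f"MSE = {mse(clean, denoised):.4f}, SNR = {snr_db(clean, denoised):.1f} dB")
```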
Affiliation(s)
- Chun-Hsiang Chuang: Research Center for Education and Mind Sciences, College of Education, National Tsing Hua University, Hsinchu, Taiwan; Institute of Information Systems and Applications, College of Electrical Engineering and Computer Science, National Tsing Hua University, Hsinchu, Taiwan.
- Kong-Yi Chang: Research Center for Education and Mind Sciences, College of Education, National Tsing Hua University, Hsinchu, Taiwan; Institute of Information Systems and Applications, College of Electrical Engineering and Computer Science, National Tsing Hua University, Hsinchu, Taiwan.
- Chih-Sheng Huang: Department of Artificial Intelligence Research and Development, Elan Microelectronics Corporation, Hsinchu, Taiwan; College of Artificial Intelligence and Green Energy, National Yang Ming Chiao Tung University, Hsinchu, Taiwan; College of Electrical Engineering and Computer Science, National Taipei University of Technology, Taipei, Taiwan.
- Anne-Mei Bessas: Research Center for Education and Mind Sciences, College of Education, National Tsing Hua University, Hsinchu, Taiwan.
2
Li LL, Cao GZ, Zhang YP, Li WC, Cui F. MACNet: A Multidimensional Attention-Based Convolutional Neural Network for Lower-Limb Motor Imagery Classification. Sensors (Basel) 2024; 24:7611. PMID: 39686148. DOI: 10.3390/s24237611.
Abstract
Decoding lower-limb motor imagery (MI) is highly important in brain-computer interfaces (BCIs) and rehabilitation engineering. However, classifying lower-limb MI from electroencephalogram (EEG) signals is challenging, because the cortical representations of lower-limb motions (LLMs), including MI, lie very close together in the human brain and generate low-quality EEG signals. To address this challenge, this paper proposes a multidimensional attention-based convolutional neural network (CNN), termed MACNet, which is specifically designed for lower-limb MI classification. MACNet integrates a temporal refining module and an attention-enhanced convolutional module by leveraging the local and global feature representation abilities of CNNs and attention mechanisms. The temporal refining module adaptively extracts critical information from each electrode channel to refine EEG signals along the temporal dimension. The attention-enhanced convolutional module extracts temporal and spatial features while refining the feature maps across the channel and spatial dimensions. Owing to the scarcity of public datasets for lower-limb MI, a dedicated lower-limb MI dataset involving four routine LLMs is built, consisting of 10 subjects recorded over 20 sessions. Comparison experiments and ablation studies are conducted on this dataset and the public BCI Competition IV 2a EEG dataset. The experimental results show that MACNet achieves state-of-the-art performance and outperforms alternative models in the subject-specific mode. Visualization analysis reveals the excellent feature learning capabilities of MACNet and the potential relationship between lower-limb MI and brain activity. The effectiveness and generalizability of MACNet are verified.
Affiliation(s)
- Ling-Long Li: Guangdong Key Laboratory of Electromagnetic Control and Intelligent Robots, College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, China.
- Guang-Zhong Cao: Guangdong Key Laboratory of Electromagnetic Control and Intelligent Robots, College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, China.
- Yue-Peng Zhang: Shenzhen Institute of Information Technology, Shenzhen 518172, China.
- Wan-Chen Li: School of Psychology, Shenzhen University, Shenzhen 518060, China.
- Fang Cui: School of Psychology, Shenzhen University, Shenzhen 518060, China.
3
Ma J, Ma W, Zhang J, Li Y, Yang B, Shan C. Partial prior transfer learning based on self-attention CNN for EEG decoding in stroke patients. Sci Rep 2024; 14:28170. PMID: 39548177. PMCID: PMC11568294. DOI: 10.1038/s41598-024-79202-8.
Abstract
Motor imagery-based brain-computer interfaces (MI-BCI) have been shown to help stroke patients activate motor regions of the brain. In particular, multi-task MI of the unilateral upper limb activates more extensive brain regions, which is more beneficial for rehabilitation but also increases the difficulty of decoding. In this paper, a self-attention convolutional neural network with partial prior transfer learning (SACNN-PPTL) is proposed to improve the classification performance of patients' multi-task MI. The backbone network of the algorithm is SACNN, which accords with the inherent characteristics of the electroencephalogram (EEG) and contains a temporal feature module, a spatial feature module and a feature generalization module. In addition, PPTL is introduced to transfer part of the model to the target domain, preserving the generalization of the base model while improving its specificity to the target domain. In the experiments, five backbone networks and three training modes were selected as comparison algorithms. The experimental results show that SACNN-PPTL achieved a classification accuracy of 55.4%±0.17 across four types of MI tasks in 22 patients, which is significantly higher than that of the comparison algorithms (P < 0.05). SACNN-PPTL effectively improves the decoding performance of MI tasks and promotes the development of BCI-based rehabilitation for the unilateral upper limb.
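As a rough illustration of the partial-transfer idea described above (not the authors' SACNN-PPTL implementation), the sketch below defines a generic EEG CNN, freezes its early, more generic layers, and fine-tunes only the remaining layers on target-subject data. The module names, layer sizes, and the commented-out checkpoint path are assumptions.

```python
# Hedged sketch: freeze part of a pre-trained backbone, fine-tune the rest.
import torch
import torch.nn as nn

class EEGBackbone(nn.Module):
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.temporal = nn.Sequential(nn.Conv2d(1, 16, (1, 25), padding=(0, 12)),
                                      nn.BatchNorm2d(16), nn.ELU())
        self.spatial = nn.Sequential(nn.Conv2d(16, 32, (n_channels, 1)),
                                     nn.BatchNorm2d(32), nn.ELU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten(),
                                  nn.Linear(32 * 8, n_classes))

    def forward(self, x):                       # x: (batch, 1, channels, time)
        return self.head(self.spatial(self.temporal(x)))

model = EEGBackbone()
# model.load_state_dict(torch.load("pretrained_source_subjects.pt"))  # hypothetical checkpoint
for p in model.temporal.parameters():           # keep the generic temporal filters from the source model
    p.requires_grad = False
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
```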
Affiliation(s)
- Jun Ma: Department of Rehabilitation Medicine, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China.
- Wanlu Ma: China-Japan Friendship Hospital, Beijing 100029, China.
- Jingjing Zhang: Department of Rehabilitation Medicine, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China.
- Yongcong Li: School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China.
- Banghua Yang: School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China.
- Chunlei Shan: Department of Rehabilitation Medicine, Tong Ren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200336, China; Institute of Rehabilitation, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China.
4
Pang R, Sang H, Yi L, Gao C, Xu H, Wei Y, Zhang L, Sun J. Working memory load recognition with deep learning time series classification. Biomed Opt Express 2024; 15:2780-2797. PMID: 38855665. PMCID: PMC11161351. DOI: 10.1364/boe.516063.
Abstract
Working memory load (WML) is one of the most widely used signals in human-machine interaction, and its precise evaluation is crucial for such applications. This study proposes a deep learning (DL) time series classification (TSC) model for inter-subject WML decoding. We used fNIRS to record the hemodynamic signals of 27 participants during visual working memory tasks. Traditional machine learning and deep time series classification algorithms were used for intra-subject and inter-subject WML decoding, respectively, from the collected blood oxygen signals. The intra-subject classification accuracies of LDA and SVM were 94.6% and 79.1%. Our proposed TAResnet-BiLSTM model had the highest inter-subject WML decoding accuracy, reaching 92.4%. This study provides a new idea and method for brain-computer interface applications of fNIRS in real-time WML detection.
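For the intra-subject baselines mentioned above, a conventional pipeline would score LDA and an SVM with cross-validation on per-trial features. The sketch below is a stand-in built on assumptions (random toy features, guessed dimensions), not the study's code.

```python
# Hedged sketch: cross-validated LDA and SVM baselines on flattened fNIRS features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 40))        # 120 trials x 40 features (assumed)
y = rng.integers(0, 3, size=120)          # three WML levels (assumed)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf")))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```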
Affiliation(s)
- Richong Pang: Barco Technology Limited, Zhuhai 519031, China; Joint Laboratory of Brain-Verse Digital Convergence, Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China.
- Haojun Sang: Chinese Institute for Brain Research, Beijing 102206, China.
- Li Yi: School of Mechatronic Engineering and Automation, Foshan University, Foshan 528000, China.
- Chenyang Gao: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300110, China.
- Hongkai Xu: Barco Technology Limited, Zhuhai 519031, China; Joint Laboratory of Brain-Verse Digital Convergence, Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China.
- Yanzhao Wei: Barco Technology Limited, Zhuhai 519031, China; Joint Laboratory of Brain-Verse Digital Convergence, Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China.
- Lei Zhang: Chinese Institute for Brain Research, Beijing 102206, China.
- Jinyan Sun: School of Medicine, Foshan University, Foshan 528000, China.
5
Tao W, Wang Z, Wong CM, Jia Z, Li C, Chen X, Chen CLP, Wan F. ADFCNN: Attention-Based Dual-Scale Fusion Convolutional Neural Network for Motor Imagery Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2024; 32:154-165. PMID: 38090841. DOI: 10.1109/tnsre.2023.3342331.
Abstract
Convolutional neural networks (CNNs) have been successfully applied to the motor imagery (MI)-based brain-computer interface (BCI). Nevertheless, single-scale CNNs fail to extract abundant information over a wide spectrum from EEG signals, while typical multi-scale CNNs cannot effectively fuse information from different scales with concatenation-based methods. To overcome these challenges, we propose a new scheme equipped with an attention-based dual-scale fusion convolutional neural network (ADFCNN), which jointly extracts and fuses EEG spectral and spatial information at different scales. This scheme also provides novel insight through self-attention for effective information fusion across scales. Specifically, temporal convolutions with two different kernel sizes identify EEG μ and β rhythms, spatial convolutions at two different scales generate global and detailed spatial information, and the self-attention mechanism performs feature fusion based on the internal similarity of the concatenated features extracted by the dual-scale CNN. The proposed scheme achieves superior performance compared with state-of-the-art methods in subject-specific motor imagery recognition on BCI Competition IV datasets 2a and 2b and the OpenBMI dataset, with cross-session average classification accuracies of 79.39% (a significant improvement of 9.14%) on BCI-IV2a, 87.81% (7.66%) on BCI-IV2b, and 65.26% (7.2%) on OpenBMI, and within-session average classification accuracies of 86.87% (a significant improvement of 10.89%) on BCI-IV2a, 87.26% (8.07%) on BCI-IV2b, and 84.29% (5.17%) on OpenBMI. Moreover, ablation experiments are conducted to investigate the mechanism and demonstrate the effectiveness of the dual-scale joint temporal-spatial CNN and self-attention modules. Visualization is also used to reveal the learning process and feature distribution of the model.
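The dual-scale-plus-self-attention idea can be sketched compactly. The following is a hedged illustration, not the published ADFCNN implementation: two temporal convolutions with different kernel lengths, a shared spatial convolution, concatenation, and multihead self-attention over the concatenated tokens. Kernel sizes, channel counts, and tensor shapes are assumptions.

```python
# Hedged sketch: dual-scale temporal/spatial convolution with self-attention fusion.
import torch
import torch.nn as nn

class DualScaleFusion(nn.Module):
    def __init__(self, n_channels=22, d_model=32, n_heads=4, n_classes=4):
        super().__init__()
        self.branch_a = nn.Conv2d(1, d_model, (1, 15), padding=(0, 7))    # shorter kernel: finer temporal detail
        self.branch_b = nn.Conv2d(1, d_model, (1, 65), padding=(0, 32))   # longer kernel: slower rhythms
        self.spatial = nn.Conv2d(d_model, d_model, (n_channels, 1))       # spatial filtering across electrodes
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classify = nn.Linear(d_model, n_classes)

    def forward(self, x):                               # x: (batch, 1, channels, time)
        a = self.spatial(self.branch_a(x)).squeeze(2)   # (batch, d_model, time)
        b = self.spatial(self.branch_b(x)).squeeze(2)
        tokens = torch.cat([a, b], dim=2).transpose(1, 2)   # (batch, 2*time, d_model)
        fused, _ = self.attn(tokens, tokens, tokens)        # self-attention fuses both scales
        return self.classify(fused.mean(dim=1))

logits = DualScaleFusion()(torch.randn(8, 1, 22, 256))      # -> (8, 4)
```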
6
Wang H, Jiang J, Gan JQ, Wang H. Motor Imagery EEG Classification Based on a Weighted Multi-Branch Structure Suitable for Multisubject Data. IEEE Trans Biomed Eng 2023; 70:3040-3051. PMID: 37186527. DOI: 10.1109/tbme.2023.3274231.
Abstract
OBJECTIVE: Electroencephalogram (EEG) signal recognition based on deep learning technology requires the support of sufficient data. However, training data scarcity usually occurs in subject-specific motor imagery tasks unless multisubject data can be used to enlarge the training set. Unfortunately, because of the large discrepancies between data distributions from different subjects, model performance can be improved only marginally, or even worsened, by simply training on multisubject data. METHOD: This article proposes a novel weighted multi-branch (WMB) structure for handling multisubject data to solve this problem, in which each branch is responsible for fitting a pair of source-target subject data and adaptive weights are used to integrate all branches, or to select the branches with the largest weights, to make the final decision. The proposed WMB structure was applied to six well-known deep learning models (EEGNet, Shallow ConvNet, Deep ConvNet, ResNet, MSFBCNN, and EEG_TCNet), and comprehensive experiments were conducted on the EEG datasets BCICIV-2a, BCICIV-2b, the high gamma dataset (HGD) and two supplementary datasets. RESULT: Superior results against state-of-the-art models demonstrate the efficacy of the proposed method in subject-specific motor imagery EEG classification. For example, the proposed WMB_EEGNet achieved classification accuracies of 84.14%, 90.23%, and 97.81% on BCICIV-2a, BCICIV-2b and HGD, respectively. CONCLUSION: The proposed WMB structure is capable of making good use of multisubject data with large distribution discrepancies for subject-specific EEG classification.
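One plausible reading of the weighted multi-branch decision rule, shown below as a sketch rather than the paper's code, is a set of branch classifiers whose logits are combined through learnable, softmax-normalized weights; selecting only the largest-weight branches would be a variant of the same mechanism. The toy branches and shapes are assumptions.

```python
# Hedged sketch: adaptive weighted fusion of several branch classifiers.
import torch
import torch.nn as nn

class WeightedBranches(nn.Module):
    def __init__(self, branches):
        super().__init__()
        self.branches = nn.ModuleList(branches)
        self.branch_weights = nn.Parameter(torch.zeros(len(branches)))   # learned during training

    def forward(self, x):
        logits = torch.stack([b(x) for b in self.branches], dim=0)       # (n_branches, batch, classes)
        w = torch.softmax(self.branch_weights, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)                                   # weighted decision fusion

branches = [nn.Sequential(nn.Flatten(), nn.Linear(22 * 256, 4)) for _ in range(3)]
out = WeightedBranches(branches)(torch.randn(8, 22, 256))                # -> (8, 4)
```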
7
Xie Y, Wang K, Meng J, Yue J, Meng L, Yi W, Jung TP, Xu M, Ming D. Cross-dataset transfer learning for motor imagery signal classification via multi-task learning and pre-training. J Neural Eng 2023; 20:056037. PMID: 37774694. DOI: 10.1088/1741-2552/acfe9c.
Abstract
Objective. Deep learning (DL) models have been proven effective in decoding motor imagery (MI) signals in electroencephalogram (EEG) data. However, the success of DL models relies heavily on large amounts of training data, whereas EEG data collection is laborious and time-consuming. Recently, cross-dataset transfer learning has emerged as a promising approach to meet the data requirements of DL models. Nevertheless, transferring knowledge across datasets involving different MI tasks remains a significant challenge, limiting the full utilization of valuable data resources. Approach. This study proposes a pre-training-based cross-dataset transfer learning method inspired by hard parameter sharing in multi-task learning. Different datasets with distinct MI paradigms are treated as different tasks and classified with shared feature extraction layers and individual task-specific layers, allowing cross-dataset classification with one unified model. Pre-training and fine-tuning are then employed to transfer knowledge across datasets. We also designed four fine-tuning schemes and conducted extensive experiments on them. Main results. The results showed that, compared with models without pre-training, models with pre-training achieved a maximum increase in accuracy of 7.76%. Moreover, when limited training data were available, the pre-training method improved the DL models' accuracy by up to 27.34%. The experiments also revealed that pre-trained models exhibit faster convergence and remarkable robustness. The training time per subject could be reduced by up to 102.83 s, and the variance of classification accuracy decreased by as much as 75.22%. Significance. This study represents the first comprehensive investigation of cross-dataset transfer learning between two datasets with different MI tasks. The proposed pre-training method requires only minimal fine-tuning data when applying DL models to new MI paradigms, making MI-based brain-computer interfaces more practical and user-friendly.
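Hard parameter sharing across datasets is straightforward to express: one shared feature extractor plus one task-specific head per dataset. The sketch below illustrates that structure only; it is not the authors' code, and the layer sizes, dataset keys, and class counts are assumptions.

```python
# Hedged sketch: shared extractor with per-dataset classification heads.
import torch
import torch.nn as nn

class SharedExtractorMultiHead(nn.Module):
    def __init__(self, classes_per_dataset, n_channels=22, feat_dim=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(1, 16, (1, 25), padding=(0, 12)), nn.ELU(),
            nn.Conv2d(16, 32, (n_channels, 1)), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 2)), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.heads = nn.ModuleDict({k: nn.Linear(feat_dim, c) for k, c in classes_per_dataset.items()})

    def forward(self, x, dataset_key):            # x: (batch, 1, channels, time)
        return self.heads[dataset_key](self.shared(x))

model = SharedExtractorMultiHead({"A": 4, "B": 2})          # two datasets with different MI tasks (assumed)
pretrain_logits = model(torch.randn(8, 1, 22, 256), "A")    # pre-training pass on dataset A
finetune_logits = model(torch.randn(8, 1, 22, 256), "B")    # fine-tuning pass on dataset B
```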
Affiliation(s)
- Yuting Xie: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China.
- Kun Wang: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China.
- Jiayuan Meng: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China.
- Jin Yue: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China.
- Lin Meng: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China.
- Weibo Yi: Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China; Beijing Institute of Mechanical Equipment, Beijing, People's Republic of China.
- Tzyy-Ping Jung: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China.
- Minpeng Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China.
- Dong Ming: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China.
8
Luo J, Wang Y, Xia S, Lu N, Ren X, Shi Z, Hei X. A shallow mirror transformer for subject-independent motor imagery BCI. Comput Biol Med 2023; 164:107254. PMID: 37499295. DOI: 10.1016/j.compbiomed.2023.107254.
Abstract
OBJECTIVE: Motor imagery BCI plays an increasingly important role in motor disorder rehabilitation. However, the position and duration of the discriminative segment in an EEG trial vary from subject to subject and even from trial to trial, which leads to poor performance of subject-independent motor imagery classification. Thus, determining how to detect and utilize the discriminative signal segments is crucial for improving the performance of subject-independent motor imagery BCI. APPROACH: In this paper, a shallow mirror transformer is proposed for subject-independent motor imagery EEG classification. Specifically, a multihead self-attention layer with a global receptive field is employed to detect and utilize the discriminative segment from the entire input EEG trial. Furthermore, the mirror EEG signal and the mirror network structure are constructed to improve the classification precision based on ensemble learning. Finally, a subject-independent setup was used to evaluate the shallow mirror transformer on motor imagery EEG signals from subjects in the training set and from new subjects. MAIN RESULTS: The experimental results on BCI Competition IV datasets 2a and 2b and the OpenBMI dataset demonstrated the promising effectiveness of the proposed shallow mirror transformer, which obtained average accuracies of 74.48% and 76.1% for new subjects and existing subjects, respectively, the highest among the compared state-of-the-art methods. In addition, visualization of the attention scores showed the ability to detect discriminative EEG segments. This paper demonstrates that multihead self-attention is effective in capturing global EEG signal information in motor imagery classification. SIGNIFICANCE: This study provides an effective model based on a multihead self-attention layer for subject-independent motor imagery-based BCIs. To the best of our knowledge, this is the shallowest transformer model available, in which the small number of parameters promotes performance in motor imagery EEG classification for such a small-sample problem.
Affiliation(s)
- Jing Luo: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China.
- Yaojie Wang: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China.
- Shuxiang Xia: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China.
- Na Lu: State Key Laboratory for Manufacturing Systems Engineering, Systems Engineering Institute, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
- Xiaoyong Ren: Department of Otolaryngology Head and Neck Surgery & Center of Sleep Medicine, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China.
- Zhenghao Shi: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China.
- Xinhong Hei: Shaanxi Key Laboratory for Network Computing and Security Technology and Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, China.
9
Luo J, Li J, Mao Q, Shi Z, Liu H, Ren X, Hei X. Overlapping filter bank convolutional neural network for multisubject multicategory motor imagery brain-computer interface. BioData Min 2023; 16:19. PMID: 37434221. DOI: 10.1186/s13040-023-00336-y.
Abstract
BACKGROUND: The motor imagery brain-computer interface (BCI) is a classic and promising BCI technology for achieving brain-computer integration. In motor imagery BCI, the operational frequency band of the EEG greatly affects the performance of the motor imagery EEG recognition model. However, because most algorithms use a broad frequency band, the discriminative information in multiple sub-bands is not fully utilized. Thus, using convolutional neural networks (CNNs) to extract discriminative features from EEG signals of different frequency components is a promising method for multisubject EEG recognition. METHODS: This paper presents a novel overlapping filter bank CNN to incorporate discriminative information from multiple frequency components in multisubject motor imagery recognition. Specifically, two overlapping filter banks, with a fixed low-cut frequency or a sliding low-cut frequency, are employed to obtain multiple frequency-component representations of the EEG signals. Multiple CNN models are then trained separately. Finally, the output probabilities of the CNN models are integrated to determine the predicted EEG label. RESULTS: Experiments were conducted with four popular CNN backbone models on three public datasets. The results showed that the overlapping filter bank CNN is efficient and universal in improving multisubject motor imagery BCI performance. Specifically, compared with the original backbone models, the proposed method improved the average accuracy by 3.69 percentage points, the F1 score by 0.04, and the AUC by 0.03. In addition, the proposed method performed best in the comparison with state-of-the-art methods. CONCLUSION: The proposed overlapping filter bank CNN framework with a fixed low-cut frequency is an efficient and universal method for improving the performance of multisubject motor imagery BCI.
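The filter-bank front end can be sketched independently of the CNN backbones. Below is a hedged example, not the paper's implementation, that builds overlapping band-passed copies of a trial with a fixed low-cut frequency and increasing high cuts using zero-phase Butterworth filters; the band edges, filter order, and sampling rate are assumptions. One CNN would then be trained per band and their output probabilities averaged.

```python
# Hedged sketch: overlapping band-pass representations of one EEG trial.
import numpy as np
from scipy.signal import butter, filtfilt

def overlapping_bands(eeg: np.ndarray, fs: float, low: float = 4.0,
                      highs=(14.0, 22.0, 30.0, 38.0)) -> np.ndarray:
    """Return one band-passed copy of `eeg` (channels x time) per [low, high] band."""
    out = []
    for high in highs:                          # fixed low cut, sliding high cut -> overlapping bands
        b, a = butter(4, [low, high], btype="bandpass", fs=fs)
        out.append(filtfilt(b, a, eeg, axis=-1))
    return np.stack(out)                        # (n_bands, channels, time)

bands = overlapping_bands(np.random.randn(22, 1000), fs=250.0)
print(bands.shape)                              # (4, 22, 1000)
```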
Affiliation(s)
- Jing Luo: Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China; Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China.
- Jundong Li: Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China; Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China.
- Qi Mao: Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China; Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China.
- Zhenghao Shi: Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China; Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China.
- Haiqin Liu: Department of Otolaryngology Head and Neck Surgery & Center of Sleep Medicine, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, People's Republic of China.
- Xiaoyong Ren: Department of Otolaryngology Head and Neck Surgery & Center of Sleep Medicine, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, People's Republic of China.
- Xinhong Hei: Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China; Human-Machine Integration Intelligent Robot Shaanxi University Engineering Research Center, Xi'an University of Technology, Xi'an, Shaanxi, People's Republic of China.
10
Tang X, Yang C, Sun X, Zou M, Wang H. Motor Imagery EEG Decoding Based on Multi-Scale Hybrid Networks and Feature Enhancement. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1208-1218. PMID: 37022411. DOI: 10.1109/tnsre.2023.3242280.
Abstract
Motor imagery (MI) based on electroencephalography (EEG), a typical brain-computer interface (BCI) paradigm, can communicate with external devices according to the brain's intentions. Convolutional neural networks (CNNs) are increasingly used for EEG classification tasks and have achieved satisfactory performance. However, most CNN-based methods employ a single convolution mode and a single kernel size, which cannot efficiently extract multi-scale advanced temporal and spatial features, hindering further improvement of the classification accuracy of MI-EEG signals. This paper proposes a novel multi-scale hybrid convolutional neural network (MSHCNN) for MI-EEG signal decoding to improve classification performance. Two-dimensional convolution is used to extract temporal and spatial features of the EEG signals, and one-dimensional convolution is used to extract advanced temporal features. In addition, a channel coding method is proposed to improve the expression capacity of the spatiotemporal characteristics of EEG signals. We evaluate the performance of the proposed method on a dataset collected in the laboratory and on BCI Competition IV 2b and 2a, obtaining average accuracies of 96.87%, 85.25%, and 84.86%, respectively. Compared with other advanced methods, our proposed method achieves higher classification accuracy. We then use the proposed method in an online experiment and design an intelligent artificial limb control system. The proposed method effectively extracts the EEG signals' advanced temporal and spatial features. Additionally, we design an online recognition system, which contributes to the further development of BCI systems.
11
She Q, Chen T, Fang F, Zhang J, Gao Y, Zhang Y. Improved Domain Adaptation Network Based on Wasserstein Distance for Motor Imagery EEG Classification. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1137-1148. PMID: 37022366. DOI: 10.1109/tnsre.2023.3241846.
Abstract
The motor imagery (MI) paradigm is critical in neural rehabilitation and gaming. Advances in brain-computer interface (BCI) technology have facilitated the detection of MI from the electroencephalogram (EEG). Previous studies have proposed various EEG-based classification algorithms to identify MI; however, the performance of prior models was limited by the cross-subject heterogeneity of EEG data and the shortage of EEG data for training. Therefore, inspired by the generative adversarial network (GAN), this study proposes an improved domain adaptation network based on the Wasserstein distance, which utilizes existing labeled data from multiple subjects (source domain) to improve the performance of MI classification on a single subject (target domain). Specifically, the proposed framework consists of three components: a feature extractor, a domain discriminator, and a classifier. The feature extractor employs an attention mechanism and a variance layer to improve the discrimination of features extracted from different MI classes. The domain discriminator then adopts the Wasserstein distance to measure the discrepancy between the source and target domains, and aligns their data distributions via an adversarial learning strategy. Finally, the classifier uses the knowledge acquired from the source domain to predict the labels in the target domain. The proposed EEG-based MI classification framework was evaluated on two open-source datasets, BCI Competition IV Datasets 2a and 2b. Our results demonstrate that the proposed framework can enhance the performance of EEG-based MI detection, achieving better classification results than several state-of-the-art algorithms. In conclusion, this study is promising for supporting the neural rehabilitation of different neuropsychiatric diseases.
12
Convolutional Neural Network with a Topographic Representation Module for EEG-Based Brain-Computer Interfaces. Brain Sci 2023; 13:268. PMID: 36831811. PMCID: PMC9954538. DOI: 10.3390/brainsci13020268.
Abstract
Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) due to their ability to directly process raw electroencephalogram (EEG) signals without artificial feature extraction. Some CNNs have achieved better classification accuracy than traditional methods. Raw EEG signals are usually represented as a two-dimensional (2-D) matrix composed of channels and time points, which ignores the spatial topological information of the electrodes. Our goal is to enable a CNN that takes raw EEG signals as input to learn spatial topological features and improve its classification performance while basically maintaining its original structure. We propose an EEG topographic representation module (TRM). This module consists of (1) a mapping block from the raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRMs, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs are improved on both datasets after using the TRMs. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet and ShallowConvNet are improved by 6.54%, 1.72% and 2.07% on the EBDSDD and by 6.05%, 3.02% and 5.14% on the HGD, respectively; with TRM-(3,3), they are improved by 7.76%, 1.71% and 2.17% on the EBDSDD and by 7.61%, 5.06% and 6.28% on the HGD, respectively. We improve the classification performance of three CNNs on both datasets through the use of TRMs, indicating that TRMs give CNNs the capability to mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as its input, CNNs that take raw EEG signals as input can use this module without changing their original structures.
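The "mapping block" of a topographic representation can be illustrated as placing each electrode's time series at its row/column position on a 2-D scalp grid. The sketch below is an assumption-heavy illustration, not the TRM code; the electrode-to-grid coordinates and grid size are invented for the example.

```python
# Hedged sketch: map a (channels x time) EEG trial onto a 2-D scalp grid per time step.
import torch

def to_topographic(eeg: torch.Tensor, grid_pos: dict, grid_size=(5, 5)) -> torch.Tensor:
    """eeg: (channels, time) -> (time, rows, cols) topographic map sequence."""
    n_ch, n_t = eeg.shape
    topo = torch.zeros(n_t, *grid_size)           # unused grid cells stay zero
    for ch, (r, c) in grid_pos.items():
        topo[:, r, c] = eeg[ch]
    return topo

grid_pos = {0: (0, 2), 1: (2, 1), 2: (2, 3), 3: (4, 2)}   # e.g. Fz, C3, C4, Pz (assumed layout)
topo = to_topographic(torch.randn(4, 256), grid_pos)
print(topo.shape)                                          # torch.Size([256, 5, 5])
```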
13
Miao M, Zheng L, Xu B, Yang Z, Hu W. A multiple frequency bands parallel spatial–temporal 3D deep residual learning framework for EEG-based emotion recognition. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104141.
14
Yang L, Shi T, Lv J, Liu Y, Dai Y, Zou L. A multi-feature fusion decoding study for unilateral upper-limb fine motor imagery. Math Biosci Eng 2023; 20:2482-2500. PMID: 36899543. DOI: 10.3934/mbe.2023116.
Abstract
The classical motor imagery paradigm has shown little effect on upper-limb rehabilitation training in post-stroke patients, and the corresponding feature extraction algorithms are limited to a single domain. To address this, this paper describes the design of a unilateral upper-limb fine motor imagery paradigm and the collection of data from 20 healthy people. It presents a feature extraction algorithm for multi-domain fusion and compares the common spatial pattern (CSP), improved multiscale permutation entropy (IMPE) and multi-domain fusion features of all participants in terms of classification precision, using decision tree, linear discriminant analysis, naive Bayes, support vector machine, k-nearest neighbor and ensemble classifiers. For the same subject and the same classifier, the average classification accuracy of the multi-domain features improved by 1.52% relative to the CSP features and by 32.87% relative to the IMPE features. This study's unilateral fine motor imagery paradigm and multi-domain feature fusion algorithm provide new ideas for upper-limb rehabilitation after stroke.
Affiliation(s)
- Liangyu Yang: The School of Microelectronics and Control Engineering, Changzhou University, Changzhou, Jiangsu 213164, China.
- Tianyu Shi: The School of Microelectronics and Control Engineering, Changzhou University, Changzhou, Jiangsu 213164, China.
- Jidong Lv: The School of Microelectronics and Control Engineering, Changzhou University, Changzhou, Jiangsu 213164, China.
- Yan Liu: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Department of Medical Image, Suzhou 215163, China; Suzhou Guokekangcheng Medical Technique Co., Ltd., Suzhou 215163, China.
- Yakang Dai: Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Science, Department of Medical Image, Suzhou 215163, China; Suzhou Guokekangcheng Medical Technique Co., Ltd., Suzhou 215163, China.
- Ling Zou: The School of Microelectronics and Control Engineering, Changzhou University, Changzhou, Jiangsu 213164, China; Key Laboratory of Brain Machine Collaborative Intelligence Foundation of Zhejiang Province, Hangzhou, Zhejiang 310018, China.
15
Wen Y, He W, Zhang Y. A new attention-based 3D densely connected cross-stage-partial network for motor imagery classification in BCI. J Neural Eng 2022; 19. PMID: 36130589. DOI: 10.1088/1741-2552/ac93b4.
Abstract
OBJECTIVE The challenge for motor imagery (MI) in brain-computer interface (BCI) systems is finding a reliable classification model that has high classification accuracy and excellent robustness. Currently, one of the main problems leading to degraded classification performance is the inaccuracy caused by nonstationarities and low signal-to-noise ratio in electroencephalogram (EEG) signals. APPROACH This study proposes a novel attention-based 3D densely connected cross-stage-partial network (DCSPNet) model to achieve efficient EEG-based MI classification. This is an end-to-end classification model framework based on the convolutional neural network (CNN) architecture. In this framework, to fully utilize the complementary features in each dimension, the optimal features are extracted adaptively from the EEG signals through the spatial-spectral-temporal (SST) attention mechanism. The 3D DCSPNet is introduced to reduce the gradient loss by segmenting the extracted feature maps to strengthen the network learning capability. Additionally, the design of the densely connected structure increases the robustness of the network. MAIN RESULTS The performance of the proposed method was evaluated using the BCI competition IV 2a and the high gamma dataset, achieving an average accuracy of 84.45% and 97.88%, respectively. Our method outperformed most state-of-the-art classification algorithms, demonstrating its effectiveness and strong generalization ability. SIGNIFICANCE The experimental results show that our method is promising for improving the performance of MI-BCI. As a general framework based on time-series classification, it can be applied to BCI-related fields.
Affiliation(s)
- Yintang Wen: Yanshan University, Qinhuangdao, Hebei 066004, China.
- Wenjing He: Yanshan University, Qinhuangdao, Hebei 066004, China.
- Yuyan Zhang: Yanshan University, Qinhuangdao, Hebei 066004, China.
16
Zhu H, Forenzo D, He B. On the Deep Learning Models for EEG-Based Brain-Computer Interface Using Motor Imagery. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2283-2291. PMID: 35951573. PMCID: PMC9420068. DOI: 10.1109/tnsre.2022.3198041.
Abstract
Motor imagery (MI) based brain-computer interface (BCI) is an important BCI paradigm which requires powerful classifiers. Recent development of deep learning technology has prompted considerable interest in using deep learning for classification and resulted in multiple models. Finding the best performing models among them would be beneficial for designing better BCI systems and classifiers going forward. However, it is difficult to directly compare performance of various models through the original publications, since the datasets used to test the models are different from each other, too small, or even not publicly available. In this work, we selected five MI-EEG deep classification models proposed recently: EEGNet, Shallow & Deep ConvNet, MB3D and ParaAtt, and tested them on two large, publicly available, databases with 42 and 62 human subjects. Our results show that the models performed similarly on one dataset while EEGNet performed the best on the second with a relatively small training cost using the parameters that we evaluated.
17
Altuwaijri GA, Muhammad G. Electroencephalogram-Based Motor Imagery Signals Classification Using a Multi-Branch Convolutional Neural Network Model with Attention Blocks. Bioengineering (Basel) 2022; 9:323. PMID: 35877374. PMCID: PMC9311604. DOI: 10.3390/bioengineering9070323.
Abstract
Brain signals can be captured via electroencephalogram (EEG) and be used in various brain-computer interface (BCI) applications. Classifying motor imagery (MI) using EEG signals is one of the important applications that can help a stroke patient to rehabilitate or perform certain tasks. Dealing with EEG-MI signals is challenging because the signals are weak, may contain artefacts, are dependent on the patient's mood and posture, and have low signal-to-noise ratio. This paper proposes a multi-branch convolutional neural network model called the Multi-Branch EEGNet with Convolutional Block Attention Module (MBEEGCBAM) using attention mechanism and fusion techniques to classify EEG-MI signals. The attention mechanism is applied both channel-wise and spatial-wise. The proposed model is a lightweight model that has fewer parameters and higher accuracy compared to other state-of-the-art models. The accuracy of the proposed model is 82.85% and 95.45% using the BCI-IV2a motor imagery dataset and the high gamma dataset, respectively. Additionally, when using the fusion approach (FMBEEGCBAM), it achieves 83.68% and 95.74% accuracy, respectively.
Affiliation(s)
- Ghulam Muhammad: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia.
18
Ou Y, Sun S, Gan H, Zhou R, Yang Z. An improved self-supervised learning for EEG classification. Math Biosci Eng 2022; 19:6907-6922. PMID: 35730288. DOI: 10.3934/mbe.2022325.
Abstract
Motor imagery EEG (MI-EEG) classification plays an important role in different brain-computer interface (BCI) systems. Recently, deep learning has been widely used in MI-EEG classification tasks; however, this technology requires a large number of labeled training samples, which are difficult to obtain, and insufficient labeled training samples will degrade classification performance. To address this degradation problem, we investigate a self-supervised learning (SSL) based MI-EEG classification method to reduce the dependence on large amounts of labeled training data. The proposed method includes a pretext task and a downstream classification task. In the pretext task, each MI-EEG trial is rearranged according to its temporal characteristic, and a network is pre-trained using the original and rearranged MI-EEGs. In the downstream task, an MI-EEG classification network is first initialized with the network learned in the pretext task and then trained using a small number of labeled training samples. A series of experiments were conducted on Datasets 1 and 2b of BCI Competition IV and Dataset IVa of BCI Competition III. With one third of the labeled training samples, the proposed method obtains a clear improvement over the baseline network without SSL. In experiments under different percentages of labeled training samples, the results show that the designed SSL strategy is effective and beneficial to improving classification performance.
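One way to realize a temporal-rearrangement pretext task, offered here as an assumption rather than the authors' exact rule, is to split each unlabeled trial into segments and shuffle their order, so a network can be pre-trained to distinguish original from rearranged trials before fine-tuning on the small labeled set.

```python
# Hedged sketch: generate (original, rearranged) trial pairs for a pretext task.
import numpy as np

def make_pretext_pair(trial: np.ndarray, n_segments: int = 4, rng=None):
    """trial: (channels, time). Returns (original, rearranged) views for the pretext task."""
    if rng is None:
        rng = np.random.default_rng()
    segments = np.array_split(trial, n_segments, axis=-1)       # split along the time axis
    order = rng.permutation(n_segments)
    rearranged = np.concatenate([segments[i] for i in order], axis=-1)
    return trial, rearranged

x, x_shuffled = make_pretext_pair(np.random.randn(22, 1000))
print(x.shape, x_shuffled.shape)    # (22, 1000) (22, 1000)
```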
Affiliation(s)
- Yanghan Ou: School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
- Siqin Sun: Wuhan Third Hospital (Tongren Hospital of Wuhan University), Wuhan 430074, China.
- Haitao Gan: School of Computer Science, Hubei University of Technology, Wuhan 430068, China; Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou 310018, China.
- Ran Zhou: School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
- Zhi Yang: School of Computer Science, Hubei University of Technology, Wuhan 430068, China.
19
Altuwaijri GA, Muhammad G, Altaheri H, Alsulaiman M. A Multi-Branch Convolutional Neural Network with Squeeze-and-Excitation Attention Blocks for EEG-Based Motor Imagery Signals Classification. Diagnostics (Basel) 2022; 12:995. PMID: 35454043. PMCID: PMC9032940. DOI: 10.3390/diagnostics12040995.
Abstract
Electroencephalography-based motor imagery (EEG-MI) classification is a critical component of the brain-computer interface (BCI), which enables people with physical limitations to communicate with the outside world via assistive technology. Regrettably, EEG decoding is challenging because of the complexity, dynamic nature, and low signal-to-noise ratio of the EEG signal. Developing an end-to-end architecture capable of correctly extracting EEG data’s high-level features remains a difficulty. This study introduces a new model for decoding MI known as a Multi-Branch EEGNet with squeeze-and-excitation blocks (MBEEGSE). By clearly specifying channel interdependencies, a multi-branch CNN model with attention blocks is employed to adaptively change channel-wise feature responses. When compared to existing state-of-the-art EEG motor imagery classification models, the suggested model achieves good accuracy (82.87%) with reduced parameters in the BCI-IV2a motor imagery dataset and (96.15%) in the high gamma dataset.
Affiliation(s)
- Ghadir Ali Altuwaijri: Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia.
- Ghulam Muhammad (corresponding author): Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia; Centre of Smart Robotics Research (CS2R), King Saud University, Riyadh 11543, Saudi Arabia.
- Hamdi Altaheri: Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia; Centre of Smart Robotics Research (CS2R), King Saud University, Riyadh 11543, Saudi Arabia.
- Mansour Alsulaiman: Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia; Centre of Smart Robotics Research (CS2R), King Saud University, Riyadh 11543, Saudi Arabia.
20
Altuwaijri GA, Muhammad G. A Multibranch of Convolutional Neural Network Models for Electroencephalogram-Based Motor Imagery Classification. Biosensors (Basel) 2022; 12:22. PMID: 35049650. PMCID: PMC8773854. DOI: 10.3390/bios12010022.
Abstract
Automatic high-level feature extraction has become possible with the advancement of deep learning, and it has been used to optimize efficiency. Recently, classification methods for convolutional neural network (CNN)-based electroencephalography (EEG) motor imagery have been proposed and have achieved reasonably high classification accuracy. These approaches, however, use a single CNN convolution scale, whereas the best convolution scale varies from subject to subject, which limits classification precision. This paper proposes multibranch CNN models to address this issue by effectively extracting spatial and temporal features from raw EEG data, where the branches correspond to different filter kernel sizes. The proposed method's promising performance is demonstrated by experimental results on two public datasets, the BCI Competition IV 2a dataset and the High Gamma Dataset (HGD). The results show a 9.61% improvement in classification accuracy for the multibranch EEGNet (MBEEGNet) over the fixed one-branch EEGNet model, and 2.95% over the variable EEGNet model. In addition, the multibranch ShallowConvNet (MBShallowConvNet) improved the accuracy of the single-scale network by 6.84%. The proposed models outperform other state-of-the-art EEG motor imagery classification methods.
Affiliation(s)
- Ghadir Ali Altuwaijri: Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia; Computer Sciences and Information Technology College, Majmaah University, Al Majma’ah 11952, Saudi Arabia.
- Ghulam Muhammad: Department of Computer Engineering, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia; Centre of Smart Robotics Research (CS2R), King Saud University, Riyadh 11543, Saudi Arabia.
21
Imaginary Finger Movements Decoding Using Empirical Mode Decomposition and a Stacked BiLSTM Architecture. Mathematics 2021; 9:3297. DOI: 10.3390/math9243297.
Abstract
Motor imagery electroencephalogram (MI-EEG) signals are widely used in brain-computer interfaces (BCI). MI-EEG signals of large-limb movements have been explored in recent research because they deliver useful classification rates for BCI systems. However, the smaller, noisier signals corresponding to imagined hand-finger movements are used less frequently because they are difficult to classify. This study proposes a method for decoding imagined finger movements of the right hand. For this purpose, MI-EEG signals from the C3, Cz, P3, and Pz sensors were carefully selected for processing in the proposed framework. A method based on empirical mode decomposition (EMD) is used to tackle the problem of noisy signals, while the sequence classification is performed by a stacked bidirectional long short-term memory (BiLSTM) network. The proposed method was evaluated using k-fold cross-validation on a public dataset, obtaining an accuracy of 82.26%.
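The sequence classifier used here is a stacked bidirectional LSTM. The sketch below shows that generic architecture in PyTorch, not the paper's model; the per-time-step feature dimension (e.g., EMD intrinsic mode amplitudes from C3, Cz, P3 and Pz), hidden size, and class count are assumptions.

```python
# Hedged sketch: a two-layer bidirectional LSTM sequence classifier.
import torch
import torch.nn as nn

class StackedBiLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_layers=2, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=n_layers,
                            bidirectional=True, batch_first=True)
        self.classify = nn.Linear(2 * hidden, n_classes)   # 2x for the two directions

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.classify(out[:, -1])  # use the final time step's representation

logits = StackedBiLSTM()(torch.randn(16, 200, 8))   # -> (16, 2)
```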
22
Zhang Y, Chen W, Lin CL, Pei Z, Chen J, Chen Z. Boosting-LDA algriothm with multi-domain feature fusion for motor imagery EEG decoding. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102983.
23
Altaheri H, Muhammad G, Alsulaiman M, Amin SU, Altuwaijri GA, Abdul W, Bencherif MA, Faisal M. Deep learning techniques for classification of electroencephalogram (EEG) motor imagery (MI) signals: a review. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06352-5.
24
Musallam YK, AlFassam NI, Muhammad G, Amin SU, Alsulaiman M, Abdul W, Altaheri H, Bencherif MA, Algabri M. Electroencephalography-based motor imagery classification using temporal convolutional network fusion. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102826.