1. Advancements in brain-computer interfaces for the rehabilitation of unilateral spatial neglect: a concise review. Front Neurosci 2024; 18:1373377. [PMID: 38784094] [PMCID: PMC11111994] [DOI: 10.3389/fnins.2024.1373377] [Received: 01/19/2024] [Accepted: 04/24/2024]
Abstract
This short review examines recent advancements in neurotechnologies within the context of managing unilateral spatial neglect (USN), a common condition following stroke. Despite the success of brain-computer interfaces (BCIs) in restoring motor function, there is a notable absence of effective BCI devices for treating cerebral visual impairments, a prevalent consequence of brain lesions that significantly hinders rehabilitation. This review analyzes current non-invasive BCIs and technological solutions dedicated to cognitive rehabilitation, with a focus on visuo-attentional disorders. We emphasize the need for further research into the use of BCIs for managing cognitive impairments and propose a new potential solution for USN rehabilitation, by combining the clinical subtleties of this syndrome with the technological advancements made in the field of neurotechnologies.
2. Signal alignment for cross-datasets in P300 brain-computer interfaces. J Neural Eng 2024; 21:036007. [PMID: 38657615] [DOI: 10.1088/1741-2552/ad430d] [Received: 09/07/2023] [Accepted: 04/24/2024]
Abstract
Objective. Transfer learning has become an important issue in the brain-computer interface (BCI) field, and studies on subject-to-subject transfer within the same dataset have been performed. However, few studies have addressed dataset-to-dataset transfer, including paradigm-to-paradigm transfer. In this study, we propose a signal alignment (SA) method for P300 event-related potential (ERP) signals that is intuitive, simple, computationally inexpensive, and applicable to cross-dataset transfer learning. Approach. We propose a linear SA that uses the P300's latency, amplitude scale, and reverse factor to transform signals. For evaluation, four datasets were introduced (two from conventional P300 Speller BCIs, one from a P300 Speller with face stimuli, and the last from a standard auditory oddball paradigm). Results. Whereas the standard approach without SA had an average precision (AP) score of 25.5%, the proposed approach achieved a 35.8% AP score, and the proportion of subjects showing improvement was 36.0% on average. In particular, we confirmed that the Speller dataset with face stimuli was more comparable with the other datasets. Significance. We propose a simple and intuitive way to align ERP signals that exploits the characteristic structure of ERPs. The results demonstrate the feasibility of cross-dataset transfer learning even between datasets with different paradigms.
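The linear transform described here (latency shift, amplitude scaling, optional reversal) can be sketched roughly as follows; the function name, the zero-padding policy for shifted-in samples, and the default sampling rate are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def align_erp(signal, src_latency, tgt_latency, amp_scale, reverse=False, fs=256):
    """Hypothetical linear ERP alignment: shift the waveform so its P300
    latency matches the target dataset's, rescale the amplitude, and
    optionally flip polarity (the "reverse factor")."""
    shift = int(round((tgt_latency - src_latency) * fs))
    aligned = np.roll(signal.astype(float), shift)
    if shift > 0:          # zero the samples wrapped in from the end
        aligned[:shift] = 0.0
    elif shift < 0:        # zero the samples wrapped in from the start
        aligned[shift:] = 0.0
    aligned *= amp_scale
    return -aligned if reverse else aligned
```

In practice the latency, scale, and reversal parameters would be estimated from the source and target datasets before applying the transform.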
3. Frequency Domain Channel-Wise Attack to CNN Classifiers in Motor Imagery Brain-Computer Interfaces. IEEE Trans Biomed Eng 2024; 71:1587-1598. [PMID: 38113159] [DOI: 10.1109/tbme.2023.3344295]
Abstract
OBJECTIVE The convolutional neural network (CNN), a classical deep learning architecture, is commonly deployed in motor imagery brain-computer interfaces (MI-BCIs). Many methods have been proposed to evaluate the vulnerability of such CNN models, primarily by attacking them with direct temporal perturbations. In this work, we instead propose a novel attack based on perturbations in the frequency domain. METHODS For a given natural MI trial in the frequency domain, the proposed approach, called the frequency domain channel-wise attack (FDCA), generates perturbations at each channel one after another to fool the CNN classifiers. The advantages of this strategy are twofold. First, perturbations are generated in the frequency domain, where discriminative patterns for motor imagery (MI) classification can be extracted, rather than in the temporal domain. Second, the perturbation is optimized with a differential evolution algorithm in a black-box scenario, where detailed model knowledge is not required. RESULTS Experimental results demonstrate the effectiveness of the proposed FDCA, which achieves a significantly higher success rate than baselines and existing methods when attacking three major CNN classifiers on four public MI benchmarks. CONCLUSION Perturbations generated in the frequency domain yield highly competitive results in attacking CNN-based MI-BCIs even in a black-box setting, where the model information is well protected. SIGNIFICANCE To the best of our knowledge, existing MI-BCI attack approaches are all gradient-based and require details of the victim model, e.g., its parameters and objective function. We provide a more flexible strategy that does not require model details yet still produces an effective attack.
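The black-box optimizer at the core of this kind of attack is standard differential evolution. A minimal rand/1/bin sketch follows, with a toy objective standing in for the classifier-fooling loss; all hyperparameter values and names are assumptions, not taken from the paper.

```python
import numpy as np

def differential_evolution(obj, bounds, pop=20, iters=100, F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution. In an FDCA-style attack
    the objective would score how badly a frequency-domain, channel-wise
    perturbation degrades the CNN's prediction; here `obj` is any scalar
    function to minimize, queried purely as a black box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, size=(pop, dim))
    fx = np.array([obj(x) for x in X])
    for _ in range(iters):
        for i in range(pop):
            # mutate three distinct population members other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            trial = np.where(rng.random(dim) < CR, mutant, X[i])
            f_trial = obj(trial)
            if f_trial < fx[i]:        # greedy selection
                X[i], fx[i] = trial, f_trial
    best = int(np.argmin(fx))
    return X[best], fx[best]
```

Because the optimizer only evaluates the objective, no gradients or model internals are needed, which is what makes the black-box setting possible.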
4. Unraveling motor imagery brain patterns using explainable artificial intelligence based on Shapley values. Comput Methods Programs Biomed 2024; 246:108048. [PMID: 38308997] [DOI: 10.1016/j.cmpb.2024.108048] [Received: 04/11/2023] [Revised: 01/22/2024] [Accepted: 01/23/2024]
Abstract
BACKGROUND AND OBJECTIVE Motor imagery (MI) based brain-computer interfaces (BCIs) are widely used in rehabilitation due to the close relationship between MI and motor execution (ME). However, the underlying brain mechanisms of MI remain poorly understood. Most MI-BCIs use the sensorimotor rhythms elicited in the primary motor cortex (M1) and somatosensory cortex (S1), which consist of an event-related desynchronization followed by an event-related synchronization. Consequently, systems have typically recorded signals only around M1 and S1. However, MI could involve a more complex network including sensory, association, and motor areas. In this study, we hypothesize that the superior accuracies achieved by new deep learning (DL) models applied to MI decoding rely on focusing on a broader MI activation of the brain. Parallel to the success of DL, the field of explainable artificial intelligence (XAI) has developed continuously to provide explanations for the success of DL networks. The goal of this study is to use XAI in combination with DL to extract information about MI brain activation patterns from non-invasive electroencephalography (EEG) signals. METHODS We applied an adaptation of Shapley additive explanations (SHAP) to EEGSym, a state-of-the-art DL network with exceptional transfer learning capabilities for inter-subject MI classification. We obtained the SHAP values from two public databases comprising 171 users generating left- and right-hand MI instances with and without real-time feedback. RESULTS We found that EEGSym based most of its prediction on the signal of the frontal electrodes, i.e., F7 and F8, and on the first 1500 ms of the analyzed imagination period. We also found that MI involves a broad network based not only on M1 and S1, but also on the prefrontal cortex (PFC) and the posterior parietal cortex (PPC). We further applied this knowledge to select an 8-electrode configuration that reached inter-subject accuracies of 86.5% ± 10.6% on the Physionet dataset and 88.7% ± 7.0% on the Carnegie Mellon University dataset. CONCLUSION Our results demonstrate the potential of combining DL and SHAP-based XAI to unravel the brain network involved in producing MI. Furthermore, SHAP values can optimize the requirements for out-of-laboratory BCI applications involving real users.
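SHAP approximates the classical Shapley value. For intuition, the exact (exponential-time) attribution over a handful of feature groups, such as EEG channels, can be written directly; the toy model and baseline below are illustrative and unrelated to EEGSym.

```python
import itertools
import math
import numpy as np

def shapley_values(model, x, baseline):
    """Exact Shapley attribution of model(x) over n features (e.g. EEG
    channels), relative to a baseline input. Enumerates every coalition,
    so it is only tractable for small n; SHAP approximates this sum."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                S = list(S)
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                without_i = baseline.copy()
                without_i[S] = x[S]          # coalition S takes values from x
                with_i = without_i.copy()
                with_i[i] = x[i]             # add feature i to the coalition
                phi[i] += w * (model(with_i) - model(without_i))
    return phi
```

For a linear model the attribution of each feature reduces to its weight times its deviation from the baseline, and the attributions always sum to model(x) minus model(baseline) (the efficiency property).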
5. Manifold Learning-Based Common Spatial Pattern for EEG Signal Classification. IEEE J Biomed Health Inform 2024; 28:1971-1981. [PMID: 38265900] [DOI: 10.1109/jbhi.2024.3357995]
Abstract
EEG signal classification using Riemannian manifolds has shown great potential. However, the high computational cost associated with Riemannian metrics poses challenges for applying Riemannian methods, particularly to high-dimensional feature data. To address these challenges, we propose an efficient ensemble method called MLCSP-TSE-MLP, which aims to reduce computational cost while achieving superior performance. The MLCSP component of the ensemble utilizes a Riemannian graph embedding strategy to learn intrinsic low-dimensional sub-manifolds, enhancing discrimination. The TSE component uses the Euclidean mean as the reference point for tangent space mapping, reducing computational cost. Finally, the ensemble incorporates an MLP classifier to improve classification performance. Classification results on three datasets demonstrate that MLCSP-TSE-MLP significantly outperforms various competing methods. Notably, the MLCSP-TSE module achieves a remarkable increase in training speed and exhibits a much lower test time than traditional Riemannian methods. Based on these results, we believe that the proposed MLCSP-TSE-MLP is a powerful tool for handling high-dimensional data and holds great potential for practical applications.
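The tangent-space step with a Euclidean-mean reference (instead of an iteratively computed Riemannian mean) can be sketched as below; the function name and the upper-triangle vectorization convention are assumptions for illustration.

```python
import numpy as np

def tangent_space_features(covs, ref):
    """Map SPD covariance matrices to the tangent space at `ref`:
    whiten by ref^(-1/2), take the matrix logarithm, and vectorize the
    upper triangle. Taking `ref` as the plain Euclidean mean of the
    covariances (as TSE does) avoids the iterative Riemannian mean."""
    vals, vecs = np.linalg.eigh(ref)
    ref_isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    feats = []
    for C in covs:
        W = ref_isqrt @ C @ ref_isqrt          # whitened SPD matrix
        ev, U = np.linalg.eigh(W)
        logW = U @ np.diag(np.log(ev)) @ U.T   # matrix log via eigendecomposition
        feats.append(logW[np.triu_indices_from(logW)])
    return np.array(feats)
```

The reference point itself maps to the origin of the tangent space, so distances of the embedded vectors from zero reflect how far each trial's covariance sits from the reference.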
6. Subject-independent meta-learning framework towards optimal training of EEG-based classifiers. Neural Netw 2024; 172:106108. [PMID: 38219680] [DOI: 10.1016/j.neunet.2024.106108] [Received: 05/04/2023] [Revised: 11/13/2023] [Accepted: 01/05/2024]
Abstract
Advances in deep learning have shown great promise for high-accuracy electroencephalography (EEG) signal classification in a variety of tasks. However, many EEG-based datasets are plagued by high inter-subject signal variability. Robust deep learning models are notoriously difficult to train under such scenarios, often leading to subpar or widely varying performance across subjects under the leave-one-subject-out paradigm. Recently, the model-agnostic meta-learning framework was introduced as a way to increase a model's ability to generalize to new tasks. While the original framework focused on task-based meta-learning, this research shows that the meta-learning methodology can be adapted to subject-based signal classification while maintaining the same task objectives, and can achieve state-of-the-art performance. Specifically, we propose a novel few/zero-shot subject-independent meta-learning framework for multi-class inner speech and binary-class motor imagery classification. Compared to current subject-adaptive methods, which utilize a large number of labels from the target subject, the proposed framework is effective for training zero-calibration and few-shot models for subject-independent EEG classification. The proposed mechanism performs well on both small and large datasets and achieves robust, generalized performance across subjects. The results show a significant improvement over the current state of the art, with binary-class motor imagery achieving 88.70% accuracy and multi-class inner speech achieving an average of 31.15%. Code will be made publicly available upon publication.
7. Dynamic decomposition graph convolutional neural network for SSVEP-based brain-computer interface. Neural Netw 2024; 172:106075. [PMID: 38278092] [DOI: 10.1016/j.neunet.2023.12.029] [Received: 09/18/2023] [Revised: 12/11/2023] [Accepted: 12/18/2023]
Abstract
The SSVEP-based paradigm is a prevalent approach in the realm of brain-computer interfaces (BCIs). However, processing multi-channel electroencephalogram (EEG) data introduces challenges due to its non-Euclidean characteristics, necessitating methodologies that account for inter-channel topological relations. In this paper, we introduce the Dynamic Decomposition Graph Convolutional Neural Network (DDGCNN) for the classification of SSVEP EEG signals. Our approach incorporates layerwise dynamic graphs to address the oversmoothing issue in Graph Convolutional Networks (GCNs), employing a dense connection mechanism to mitigate the vanishing gradient problem. Furthermore, we enhance the traditional linear transformation inherent in GCNs with graph dynamic fusion, thereby elevating feature extraction and adaptive aggregation capabilities. Our experimental results demonstrate the effectiveness of the proposed approach in learning and extracting features from the EEG topological structure. DDGCNN outperforms other state-of-the-art (SOTA) algorithms reported on two datasets (Dataset 1: 54 subjects, 4 targets, 2 sessions; Dataset 2: 35 subjects, 40 targets). Additionally, we showcase the implementation of DDGCNN in the context of synchronized BCI robotic fish control. This work represents a significant advancement in EEG signal processing for SSVEP-based BCIs. Our proposed method processes SSVEP time-domain signals directly as an end-to-end system, making it easy to deploy. The code is available at https://github.com/zshubin/DDGCNN.
8. Aggregating intrinsic information to enhance BCI performance through federated learning. Neural Netw 2024; 172:106100. [PMID: 38232427] [DOI: 10.1016/j.neunet.2024.106100] [Received: 08/14/2023] [Revised: 11/20/2023] [Accepted: 01/03/2024]
Abstract
Insufficient data is a long-standing challenge for brain-computer interfaces (BCIs) seeking to build high-performance deep learning models. Though numerous research groups and institutes collect EEG datasets for the same BCI task, sharing EEG data across sites remains challenging due to the heterogeneity of devices. The significance of this challenge cannot be overstated, given the critical role of data diversity in fostering model robustness. However, existing works rarely discuss this issue, predominantly centering on model training within a single dataset, often in inter-subject or inter-session settings. In this work, we propose a hierarchical personalized Federated Learning EEG decoding (FLEEG) framework to surmount this challenge. This framework heralds a new learning paradigm for BCI, enabling datasets with disparate data formats to collaborate in model training. Each client is assigned a specific dataset and trains a hierarchical personalized model to manage diverse data formats and facilitate information exchange. Meanwhile, the server coordinates the training procedure to harness knowledge gleaned from all datasets, thus elevating overall performance. The framework was evaluated on Motor Imagery (MI) classification with nine EEG datasets collected by different devices but implementing the same MI task. Results demonstrate that the proposed framework can boost classification performance by up to 8.4% by enabling knowledge sharing between multiple datasets, especially for smaller datasets. Visualization results also indicate that the framework enables the local models to maintain a stable focus on task-related areas, yielding better performance. To the best of our knowledge, this is the first end-to-end solution to address this important challenge.
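The server-side aggregation in federated frameworks of this kind is typically a sample-weighted parameter average (FedAvg). The sketch below shows only that generic aggregation step; FLEEG's per-dataset personalized layers, which would stay local and be excluded from the average, are omitted.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Aggregate per-client parameter lists into global parameters,
    weighting each client by its number of training samples (FedAvg).
    client_params: list of clients, each a list of parameter arrays."""
    total = float(sum(client_sizes))
    n_params = len(client_params[0])
    return [
        sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in range(n_params)
    ]
```

Weighting by client dataset size lets larger cohorts dominate the shared parameters while smaller datasets still benefit from the aggregated knowledge.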
9. Effects of the presentation order of stimulations in sequential ERP/SSVEP Hybrid Brain-Computer Interface. Biomed Phys Eng Express 2024; 10:035009. [PMID: 38430561] [DOI: 10.1088/2057-1976/ad2f58] [Received: 10/18/2023] [Accepted: 03/01/2024]
Abstract
A hybrid brain-computer interface (hBCI) combines multiple neurophysiological modalities or paradigms to speed up the output of a single command or produce multiple commands simultaneously. Concurrent hBCIs that employ endogenous and exogenous paradigms are limited by the reduced set of possible commands. Conversely, the fusion of different exogenous visual evoked potentials has demonstrated impressive performance but suffers from limited portability. Sequential hBCIs have not received much attention, mainly due to slower transfer rates and user fatigue during prolonged BCI use (Lorenz et al 2014 J. Neural Eng. 11 035007). Moreover, the crucial factors for optimizing the hybridization remain under-explored. In this paper, we test the feasibility of a sequential Event-Related Potential (ERP) and Steady-State Visual Evoked Potential (SSVEP) hBCI and study the effect of stimulus presentation order (ERP-SSVEP versus SSVEP-ERP) for the control of directions and speed of powered wheelchairs or mobile robots with 15 commands. Exploiting the fast single-trial face-stimulus ERP, SSVEP, and modern efficient convolutional neural networks, the configuration with SSVEP presented first achieved a significantly (p < 0.05) higher average accuracy, with 76.39% (± 7.30 standard deviation) hybrid command accuracy and an average Information Transfer Rate (ITR) of 25.05 (± 5.32 standard deviation) bits per minute (bpm). The results demonstrate the suitability of a sequential SSVEP-ERP hBCI with challenging dry electroencephalography (EEG) electrodes and low compute capacity. Although it presents a lower ITR than concurrent hBCIs, our system offers an alternative in small-screen settings when the conditions for concurrent hBCIs are difficult to satisfy.
10. Evaluation of temporal, spatial and spectral filtering in CSP-based methods for decoding pedaling-based motor tasks using EEG signals. Biomed Phys Eng Express 2024; 10:035003. [PMID: 38417162] [DOI: 10.1088/2057-1976/ad2e35] [Received: 04/20/2023] [Accepted: 02/28/2024]
Abstract
Stroke is a neurological syndrome that usually causes a loss of voluntary control of lower/upper body movements, making it difficult for affected individuals to perform Activities of Daily Living (ADLs). Brain-Computer Interfaces (BCIs) combined with robotic systems, such as Motorized Mini Exercise Bikes (MMEB), have enabled the rehabilitation of people with disabilities by decoding their actions and executing a motor task. However, electroencephalography (EEG)-based BCIs are affected by physiological and non-physiological artifacts. Thus, movement discrimination using EEG becomes challenging, even in pedaling tasks, which have not been well explored in the literature. In this study, Common Spatial Patterns (CSP)-based methods were proposed to classify pedaling motor tasks. Filter Bank Common Spatial Patterns (FBCSP) and Filter Bank Common Spatial-Spectral Patterns (FBCSSP) were implemented with different spatial filtering configurations, varying the time segment and the filter bank combination, to decode pedaling tasks. An in-house EEG dataset of pedaling tasks was recorded from 8 participants. The best configuration corresponds to a filter bank with two filters (8-19 Hz and 19-30 Hz) using a time window between 1.5 and 2.5 s after the cue and two spatial filters, providing an accuracy of approximately 0.81, false positive rates lower than 0.19, and a Kappa index of 0.61. This work shows that EEG oscillatory patterns during pedaling can be accurately classified using machine learning. Therefore, our method can be applied in rehabilitation contexts, such as MMEB-based BCIs, in the future.
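The spatial-filter computation shared by these methods is generic CSP, run once per filter-bank sub-band (e.g. 8-19 Hz and 19-30 Hz in this study). A minimal sketch, assuming trials are already band-pass filtered and not reflecting the authors' exact code:

```python
import numpy as np

def csp_filters(X1, X2, n_filters=2):
    """Common Spatial Patterns for two classes of band-pass filtered EEG
    trials, each of shape (trials, channels, samples). Returns 2*n_filters
    spatial filters: the first n maximize class-1 variance, the last n
    maximize class-2 variance."""
    def avg_cov(X):
        return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # whiten the composite covariance, then diagonalize class 1 in that space
    vals, vecs = np.linalg.eigh(C1 + C2)
    P = np.diag(vals ** -0.5) @ vecs.T
    ev, U = np.linalg.eigh(P @ C1 @ P.T)
    W = (U.T @ P)[np.argsort(ev)[::-1]]   # filters sorted by class-1 variance ratio
    return np.vstack([W[:n_filters], W[-n_filters:]])
```

Log-variances of the spatially filtered trials would then serve as features for a downstream classifier, per sub-band in the FBCSP setting.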
11. An open dataset for human SSVEPs in the frequency range of 1-60 Hz. Sci Data 2024; 11:196. [PMID: 38351064] [PMCID: PMC10864273] [DOI: 10.1038/s41597-024-03023-7] [Received: 07/24/2023] [Accepted: 01/30/2024]
Abstract
A steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) system relies on the photic driving response to effectively elicit characteristic electroencephalogram (EEG) signals. However, traditional visual stimuli mainly adopt high-contrast black-and-white flickering stimulations, which readily cause visual fatigue. This paper presents an SSVEP dataset acquired over a wide frequency range, from 1 to 60 Hz in 1 Hz steps, using flickering stimuli at two different modulation depths. The dataset contains 64-channel EEG data from 30 healthy subjects fixating on a single flickering stimulus. The stimulus was rendered on an LCD display with a refresh rate of 240 Hz. The dataset was first rigorously validated through comprehensive data analysis of SSVEP responses and user experience. Subsequently, BCI performance was evaluated through offline simulations of frequency-coded and phase-coded BCI paradigms. This dataset provides comprehensive, high-quality data for studying and developing SSVEP-based BCI systems.
12. Cross-Subject Motor Imagery Decoding by Transfer Learning of Tactile ERD. IEEE Trans Neural Syst Rehabil Eng 2024; 32:662-671. [PMID: 38271166] [DOI: 10.1109/tnsre.2024.3358491]
Abstract
For Brain-Computer Interface (BCI) based on motor imagery (MI), the MI task is abstract and spontaneous, presenting challenges in measurement and control and resulting in a lower signal-to-noise ratio. The quality of the collected MI data significantly impacts the cross-subject calibration results. To address this challenge, we introduce a novel cross-subject calibration method based on passive tactile afferent stimulation, in which data induced by tactile stimulation is utilized to calibrate transfer learning models for cross-subject decoding. During the experiments, tactile stimulation was applied to either the left or right hand, with subjects only required to sense tactile stimulation. Data from these tactile tasks were used to train or fine-tune models and subsequently applied to decode pure MI data. We evaluated BCI performance using both the classical Common Spatial Pattern (CSP) combined with the Linear Discriminant Analysis (LDA) algorithm and a state-of-the-art deep transfer learning model. The results demonstrate that the proposed calibration method achieved decoding performance at an equivalent level to traditional MI calibration, with the added benefit of outperforming traditional MI calibration with fewer trials. The simplicity and effectiveness of the proposed cross-subject tactile calibration method make it valuable for practical applications of BCI, especially in clinical settings.
13. Noise-Factorized Disentangled Representation Learning for Generalizable Motor Imagery EEG Classification. IEEE J Biomed Health Inform 2024; 28:765-776. [PMID: 38010934] [DOI: 10.1109/jbhi.2023.3337072]
Abstract
Motor Imagery (MI) Electroencephalography (EEG) is one of the most common Brain-Computer Interface (BCI) paradigms and has been widely used in neural rehabilitation and gaming. Although considerable research effort has been dedicated to developing MI EEG classification algorithms, most are limited in handling scenarios where the training and testing data do not come from the same subject or session. Such poor generalization capability significantly limits the realization of BCI in real-world applications. In this paper, we propose a novel framework to disentangle the representation of raw EEG data into three components, subject/session-specific features, MI-task-specific features, and random noise, so that disentangling the subject/session-specific component extends the generalization capability of the system. This is realized by a joint discriminative and generative framework, supported by a series of fundamental training losses and training strategies. We evaluated our framework on three public MI EEG datasets, and detailed experimental results show that our method achieves superior performance by a large margin compared to current state-of-the-art benchmark algorithms.
14. An EEG motor imagery dataset for brain computer interface in acute stroke patients. Sci Data 2024; 11:131. [PMID: 38272904] [PMCID: PMC10811218] [DOI: 10.1038/s41597-023-02787-8] [Received: 12/19/2022] [Accepted: 11/24/2023]
Abstract
The brain-computer interface (BCI) is a technology that involves direct communication with parts of the brain and has evolved rapidly in recent years; it has begun to be used in clinical practice, such as for patient rehabilitation. Patient electroencephalography (EEG) datasets are critical for algorithm optimization and clinical applications of BCIs but are rare at present. We collected data from 50 acute stroke patients with wireless portable saline EEG devices during the performance of two tasks: 1) imagining right-hand movements and 2) imagining left-hand movements. The dataset consists of four types of data: 1) the motor imagery instructions, 2) raw recording data, 3) pre-processed data after artefact removal and other manipulations, and 4) patient characteristics. This is the first open dataset to address left- and right-hand motor imagery in acute stroke patients. We believe that the dataset will be very helpful for analysing brain activation and designing decoding methods that are more applicable for acute stroke patients, which will greatly facilitate research in the field of motor imagery BCI.
15. Transferring a deep learning model from healthy subjects to stroke patients in a motor imagery brain-computer interface. J Neural Eng 2024; 21:016007. [PMID: 38091617] [DOI: 10.1088/1741-2552/ad152f] [Received: 05/19/2023] [Accepted: 12/13/2023]
Abstract
Objective. Motor imagery (MI) brain-computer interfaces (BCIs) based on electroencephalogram (EEG) have been developed primarily for stroke rehabilitation; however, due to limited stroke data, current deep learning methods for cross-subject classification rely on healthy data. This study aims to assess the feasibility of applying MI-BCI models pre-trained using data from healthy individuals to detect MI in stroke patients. Approach. We introduce a new transfer learning approach where features from two-class MI data of healthy individuals are used to detect MI in stroke patients. We compare the results of the proposed method with those obtained from analyses within stroke data. Experiments were conducted using Deep ConvNet and state-of-the-art subject-specific machine learning MI classifiers, evaluated on OpenBMI two-class MI-EEG data from healthy subjects and two-class MI versus rest data from stroke patients. Main results. Results of our study indicate that through domain adaptation of a model pre-trained using healthy subjects' data, an average MI detection accuracy of 71.15% (±12.46%) can be achieved across 71 stroke patients. We demonstrate that the accuracy of the pre-trained model increased by 18.15% after transfer learning (p<0.001). Additionally, the proposed transfer learning method outperforms the subject-specific results achieved by Deep ConvNet and FBCSP, with significant enhancements of 7.64% (p<0.001) and 5.55% (p<0.001) in performance, respectively. Notably, the healthy-to-stroke transfer learning approach achieved similar performance to stroke-to-stroke transfer learning, with no significant difference (p>0.05). Explainable AI analyses using transfer models determined channel relevance patterns indicating contributions from the bilateral motor, frontal, and parietal regions of the cortex towards MI detection in stroke patients. Significance. Transfer learning from healthy subjects to stroke patients can enhance the clinical use of BCI algorithms by overcoming the challenge of insufficient clinical data for optimal training.
16. Multi-frequency steady-state visual evoked potential dataset. Sci Data 2024; 11:26. [PMID: 38177151] [PMCID: PMC10766626] [DOI: 10.1038/s41597-023-02841-5] [Received: 04/06/2023] [Accepted: 12/11/2023]
Abstract
The Steady-State Visual Evoked Potential (SSVEP) is a widely used modality in Brain-Computer Interfaces (BCIs). Existing research has demonstrated the capabilities of SSVEP systems that use a single frequency per target in various applications requiring relatively small numbers of commands. Multi-frequency SSVEP has been developed to extend single-frequency SSVEP to tasks that involve large numbers of commands. However, the development of multi-frequency SSVEP methodologies lags behind, compared with the number of studies on single-frequency SSVEP. This dataset was constructed to promote research in multi-frequency SSVEP by making publicly available SSVEP signals collected with different frequency stimulation settings. In this dataset, SSVEPs were collected from 35 participants using single-, dual-, and tri-frequency stimulation and three different multi-frequency stimulation variants.
17. Filter bank second-order underdamped stochastic resonance analysis for implementing a short-term high-speed SSVEP detection. Neuroimage 2024; 285:120501. [PMID: 38101496] [DOI: 10.1016/j.neuroimage.2023.120501] [Received: 09/14/2023] [Revised: 12/10/2023] [Accepted: 12/12/2023]
Abstract
OBJECTIVE The progression of brain-computer interfaces (BCIs) has been propelled by breakthroughs in neuroscience, signal processing, and machine learning, marking it as a dynamic field of study over the past few decades. Nevertheless, the nonlinear and non-stationary characteristics of steady-state visual evoked potentials (SSVEPs), coupled with the mismatch between frequently employed linear techniques and these nonlinear signal attributes, result in subpar performance of mainstream non-training algorithms such as canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and filter bank CCA (FBCCA) in short-term SSVEP detection. METHODS To tackle this problem, novel fusions of common filter bank analysis, CCA dimensionality reduction methods, second-order underdamped stochastic resonance (USSR) models, and MSI recognition models are used for SSVEP signal recognition. RESULTS Unlike conventional linear techniques such as CCA, MSI, and FBCCA, the filter bank second-order underdamped stochastic resonance (FBUSSR) analysis demonstrates superior efficacy in the detection of short-term high-speed SSVEPs. CONCLUSION This research enlists 32 subjects and uses a public dataset to assess the proposed approach, and the experimental outcomes indicate that the non-training method can attain greater recognition precision and stability. Furthermore, under the conditions of the newly proposed fusion method and light stimulation, the USSR model exhibits the most optimal enhancement effect. SIGNIFICANCE The findings of this study underscore the expansive potential for the application of BCI systems in the realm of neuroscience and signal processing.
Collapse
|
18
|
Deep Unsupervised Representation Learning for Feature-Informed EEG Domain Extraction. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4882-4894. [PMID: 38048235 DOI: 10.1109/tnsre.2023.3339179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/06/2023]
Abstract
In electroencephalography (EEG) classification paradigms, data from a target subject is often difficult to obtain, leading to difficulties in training a robust deep learning network. Transfer learning and its variations are effective tools for improving models that suffer from a lack of data. However, many of the proposed variations and deep models rely on a single assumed distribution to represent the latent features, which may not scale well due to inter- and intra-subject variations in signals. This leads to significant instability in individual subjects' decoding performance. Non-trivial domain differences between different sets of training or transfer learning data cause poorer model generalization towards the target subject. However, detecting these domain differences is often difficult due to the ill-defined nature of EEG domain features. This study proposes a novel inference model, the Joint Embedding Variational Autoencoder, that offers a conditionally tighter approximation of the estimated spatiotemporal feature distribution through jointly optimised variational autoencoders, achieving optimizable data-dependent inputs as an additional variable for improved overall model optimisation and scaling without sacrificing model tightness. To learn the variational bound, we show that maximising the marginal log-likelihood of only the second embedding section is required to achieve conditionally tighter lower bounds. Furthermore, we show that this model provides state-of-the-art EEG data reconstruction and deep feature extraction. The extracted domains of the EEG signals across subjects reveal why adaptation efficacy differs between subjects.
Collapse
|
19
|
EEG decoding for datasets with heterogenous electrode configurations using transfer learning graph neural networks. J Neural Eng 2023; 20:066027. [PMID: 37931308 DOI: 10.1088/1741-2552/ad09ff] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Accepted: 11/06/2023] [Indexed: 11/08/2023]
Abstract
Objective. Brain-machine interfacing (BMI) has greatly benefited from adopting machine learning methods for feature learning that require extensive data for training, which are often unavailable from a single dataset. Yet, it is difficult to combine data across labs or even data within the same lab collected over the years due to the variation in recording equipment and electrode layouts resulting in shifts in data distribution, changes in data dimensionality, and altered identity of data dimensions. Our objective is to overcome this limitation and learn from many different and diverse datasets across labs with different experimental protocols.Approach. To tackle the domain adaptation problem, we developed a novel machine learning framework combining graph neural networks (GNNs) and transfer learning methodologies for non-invasive motor imagery (MI) EEG decoding, as an example of BMI. Empirically, we focus on the challenges of learning from EEG data with different electrode layouts and varying numbers of electrodes. We utilize three MI EEG databases collected using very different numbers of EEG sensors (from 22 channels to 64) and layouts (from custom layouts to 10-20).Main results. Our model achieved the highest accuracy with lower standard deviations on the testing datasets. This indicates that the GNN-based transfer learning framework can effectively aggregate knowledge from multiple datasets with different electrode layouts, leading to improved generalization in subject-independent MI EEG classification.Significance. The findings of this study have important implications for brain-computer-interface research, as they highlight a promising method for overcoming the limitations posed by non-unified experimental setups. By enabling the integration of diverse datasets with varying electrode layouts, our proposed approach can help advance the development and application of BMI technologies.
Collapse
|
20
|
Effects of Virtual Reality Cognitive Training on Neuroplasticity: A Quasi-Randomized Clinical Trial in Patients with Stroke. Biomedicines 2023; 11:3225. [PMID: 38137446 PMCID: PMC10740852 DOI: 10.3390/biomedicines11123225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 11/29/2023] [Accepted: 12/04/2023] [Indexed: 12/24/2023] Open
Abstract
Cognitive Rehabilitation (CR) is a therapeutic approach designed to improve cognitive functioning after a brain injury, including stroke. Two major categories of techniques, traditional and advanced (including virtual reality, VR), are widely used in CR for patients with various neurological disorders. More objective outcome measures are needed to better investigate cognitive recovery after a stroke. In the last ten years, the application of electroencephalography (EEG) as a non-invasive and portable neuroimaging method has been explored to extract the hallmarks of neuroplasticity induced by VR rehabilitation approaches, particularly within the chronic stroke population. The aim of this study is to investigate the neurophysiological effects of CR conducted in a virtual environment using the VRRS device. Thirty patients with moderate-to-severe ischemic stroke in the chronic phase (at least 6 months after the event) were enrolled and divided into an experimental group (mean age 58.13 ± 8.33 years), which received neurocognitive stimulation using VR, and a control group (mean age 57.33 ± 11.06 years), which received the same amount of conventional neurorehabilitation. To study neuroplasticity changes after the training, we focused on the power band spectra of theta, alpha, and beta EEG rhythms in both groups. We observed that when VR technology was employed to amplify the effects of treatments on cognitive recovery, significant EEG-related neural improvements were detected in the primary motor circuit in terms of power spectral density and time-frequency domains. Indeed, EEG analysis suggested that VR resulted in a significant increase in both the alpha band power in the occipital areas and the beta band power in the frontal areas, while no significant variations were observed in the theta band power. Our data suggest the potential effectiveness of a VR-based rehabilitation approach in promoting neuroplastic changes even in the chronic phase of ischemic stroke.
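Band-power analyses of this kind are commonly computed from the Welch power spectral density; a minimal sketch follows (the band edges are common conventions, not necessarily the study's exact settings):

```python
import numpy as np
from scipy.signal import welch

# Common EEG rhythm band edges in Hz (illustrative conventions)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Absolute band power (area under the Welch PSD) per EEG rhythm."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    df = freqs[1] - freqs[0]
    out = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        out[name] = np.sum(psd[mask]) * df  # rectangle-rule integration
    return out
```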
Collapse
|
21
|
Tensor-CSPNet: A Novel Geometric Deep Learning Framework for Motor Imagery Classification. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:10955-10969. [PMID: 35749326 DOI: 10.1109/tnnls.2022.3172108] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Deep learning (DL) has been widely investigated in a vast majority of applications in electroencephalography (EEG)-based brain-computer interfaces (BCIs), especially for motor imagery (MI) classification in the past five years. The mainstream DL methodology for MI-EEG classification exploits the temporospatial patterns of EEG signals using convolutional neural networks (CNNs), which have been particularly successful with visual images. However, since the statistical characteristics of visual images depart radically from those of EEG signals, a natural question arises as to whether an alternative network architecture exists apart from CNNs. To address this question, we propose a novel geometric DL (GDL) framework called Tensor-CSPNet, which characterizes spatial covariance matrices derived from EEG signals on symmetric positive definite (SPD) manifolds and fully captures the temporospatiofrequency patterns using existing deep neural networks on SPD manifolds, integrating experience from many successful MI-EEG classifiers to optimize the framework. In the experiments, Tensor-CSPNet matches or slightly exceeds the current state-of-the-art performance in the cross-validation and holdout scenarios on two commonly used MI-EEG datasets. Moreover, the visualization and interpretability analyses also exhibit the validity of Tensor-CSPNet for MI-EEG classification. To conclude, in this study, we provide a feasible answer to the question by generalizing DL methodologies to SPD manifolds, which marks the start of a specific GDL methodology for MI-EEG classification.
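The SPD-manifold inputs that such geometric approaches operate on are per-trial spatial covariance matrices; a minimal sketch of their construction (the shrinkage and trace normalization are illustrative choices for keeping the matrices well-conditioned, not necessarily the paper's exact preprocessing):

```python
import numpy as np

def spatial_covariances(trials, shrinkage=0.05):
    """Per-trial spatial covariance matrices, the SPD-manifold inputs used by
    geometric MI-EEG approaches.

    trials: array of shape (n_trials, n_channels, n_samples).
    A small shrinkage toward the identity keeps each matrix strictly positive
    definite; unit-trace normalization removes per-trial amplitude scaling.
    """
    n_trials, n_ch, n_s = trials.shape
    covs = np.empty((n_trials, n_ch, n_ch))
    for i, x in enumerate(trials):
        x = x - x.mean(axis=1, keepdims=True)  # remove channel-wise DC offset
        c = x @ x.T / (n_s - 1)
        c = (1 - shrinkage) * c + shrinkage * np.trace(c) / n_ch * np.eye(n_ch)
        covs[i] = c / np.trace(c)              # unit-trace normalization
    return covs
```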
Collapse
|
22
|
Explainable cross-task adaptive transfer learning for motor imagery EEG classification. J Neural Eng 2023; 20:066021. [PMID: 37963394 DOI: 10.1088/1741-2552/ad0c61] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 11/14/2023] [Indexed: 11/16/2023]
Abstract
Objective. In the field of motor imagery (MI) electroencephalography (EEG)-based brain-computer interfaces, deep transfer learning (TL) has proven to be an effective tool for solving the problem of limited availability of subject-specific data for training robust deep learning (DL) models. Although considerable progress has been made in the cross-subject/session and cross-device scenarios, the more challenging problem of cross-task deep TL remains largely unexplored.Approach. We propose a novel explainable cross-task adaptive TL method for MI EEG decoding. Firstly, similarity analysis and data alignment are performed for EEG data of motor execution (ME) and MI tasks. Afterwards, the MI EEG decoding model is obtained via pre-training with extensive ME EEG data and fine-tuning with partial MI EEG data. Finally, expected gradient-based post-hoc explainability analysis is conducted for the visualization of important temporal-spatial features.Main results. Extensive experiments are conducted on one large ME EEG High-Gamma dataset and two large MI EEG datasets (OpenBMI and GIST). The best average classification accuracy of our method reaches 80.00% and 72.73% for OpenBMI and GIST respectively, outperforming several state-of-the-art algorithms. In addition, the results of the explainability analysis further validate the correlation between ME and MI EEG data and the effectiveness of ME/MI cross-task adaptation.Significance. This paper confirms that the decoding of MI EEG can be well facilitated by pre-existing ME EEG data, which largely relaxes the constraint of training samples for MI EEG decoding and is important in a practical sense.
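The abstract's "data alignment" step is not specified here; one widely used option for aligning EEG distributions across tasks or subjects is Euclidean alignment, sketched below as an illustration of the general technique rather than the paper's exact procedure:

```python
import numpy as np

def euclidean_align(trials):
    """Euclidean alignment (EA): whiten each domain's trials by the inverse
    square root of that domain's mean spatial covariance, so the aligned mean
    covariance is the identity for every domain/task.

    trials: array of shape (n_trials, n_channels, n_samples) from one domain.
    """
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    r_mean = covs.mean(axis=0)
    # Inverse matrix square root via eigendecomposition (r_mean is SPD)
    w, v = np.linalg.eigh(r_mean)
    r_inv_sqrt = v @ np.diag(1.0 / np.sqrt(w)) @ v.T
    return np.array([r_inv_sqrt @ x for x in trials])
```

Applying this separately to the ME data and the MI data maps both onto a common reference (identity mean covariance) before pre-training and fine-tuning.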
Collapse
|
23
|
IMH-Net: a convolutional neural network for end-to-end EEG motor imagery classification. Comput Methods Biomech Biomed Engin 2023:1-14. [PMID: 37936533 DOI: 10.1080/10255842.2023.2275244] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Accepted: 10/17/2023] [Indexed: 11/09/2023]
Abstract
As the main component of brain-computer interface (BCI) technology, EEG-based classification algorithms have developed rapidly. Previous algorithms were often based on subject-dependent settings, so the BCI must be calibrated for each new user. In this work, we propose IMH-Net, an end-to-end subject-independent model. The model first uses Inception blocks to extract frequency-domain features from the data, then further compresses the feature vectors to extract spatial-domain features, and finally learns global information and performs classification through a Multi-Head Attention mechanism. On the OpenBMI dataset, IMH-Net obtained 73.90 ± 13.10% accuracy and a 73.09 ± 14.99% F1-score in the subject-independent setting, improving accuracy by 1.96% compared with the comparison model. On BCI Competition IV dataset 2a, this model also achieved the highest accuracy and F1-score in the subject-dependent setting. The proposed IMH-Net model can improve the accuracy of subject-independent Motor Imagery (MI) decoding, and its robustness gives it strong practical value in the BCI field.
Collapse
|
24
|
Enhanced Motor Imagery Decoding by Calibration Model-Assisted With Tactile ERD. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4295-4305. [PMID: 37883287 DOI: 10.1109/tnsre.2023.3327788] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Abstract
OBJECTIVE In this study, we propose a tactile-assisted calibration method for a motor imagery (MI) based Brain-Computer Interface (BCI) system. METHOD In the proposed calibration, tactile stimulation was applied to the hand wrist to assist the subjects in the MI task, named the SA-MI task. Classifier training in the SA-MI Calibration was then performed using the SA-MI data, while the Conventional Calibration employed the MI data. After the classifiers were trained, performance was evaluated on a common MI dataset. RESULTS Our study demonstrated that the SA-MI Calibration significantly improved performance compared with the Conventional Calibration, with a decoding accuracy of 78.3% vs. 71.3%. Moreover, the average calibration time could be reduced by 40%. This benefit of the SA-MI Calibration was further validated by an independent control group, which showed no improvement when tactile stimulation was not applied during the calibration phase. Further analysis showed that, compared with MI, SA-MI induced greater motor-related cortical activation and a higher R2 value in the alpha-beta frequency band. CONCLUSION The SA-MI Calibration could significantly improve performance and reduce calibration time compared with the Conventional Calibration. SIGNIFICANCE The proposed tactile stimulation-assisted MI Calibration method holds great potential for a faster and more accurate system setup at the beginning of BCI usage.
Collapse
|
25
|
Hierarchical Transformer for Motor Imagery-Based Brain Computer Interface. IEEE J Biomed Health Inform 2023; 27:5459-5470. [PMID: 37578918 DOI: 10.1109/jbhi.2023.3304646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
In this paper, we propose a novel transformer-based classification algorithm for the brain computer interface (BCI) using a motor imagery (MI) electroencephalogram (EEG) signal. To design the MI classification algorithm, we apply an up-to-date deep learning model, the transformer, which has revolutionized natural language processing (NLP) and has been successfully extended to many other domains, such as computer vision. Within a long MI trial spanning a few seconds, the classification algorithm should give more attention to the time periods during which the intended motor task is imagined by the subject without any artifact. To achieve this goal, we propose a hierarchical transformer architecture that consists of a high-level transformer (HLT) and a low-level transformer (LLT). We break down a long MI trial into a number of short-term intervals. The LLT extracts a feature from each short-term interval, and the HLT pays more attention to the features from more relevant short-term intervals by using the self-attention mechanism of the transformer. We have conducted extensive tests of the proposed scheme on four open MI datasets and shown that the proposed hierarchical transformer excels in both the subject-dependent and subject-independent tests.
Collapse
|
26
|
A Multi-Domain Convolutional Neural Network for EEG-Based Motor Imagery Decoding. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3988-3998. [PMID: 37815970 DOI: 10.1109/tnsre.2023.3323325] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/12/2023]
Abstract
Motor imagery (MI) decoding plays a crucial role in the advancement of electroencephalography (EEG)-based brain-computer interface (BCI) technology. Currently, most research focuses on complex deep learning structures for MI decoding. The growing complexity of networks may result in overfitting and lead to inaccurate decoding outcomes due to redundant information. To address this limitation and make full use of multi-domain EEG features, a multi-domain temporal-spatial-frequency convolutional neural network (TSFCNet) is proposed for MI decoding. The proposed network provides a novel mechanism that utilizes the spatial and temporal EEG features combined with frequency and time-frequency characteristics. This network enables powerful feature extraction without a complicated network structure. Specifically, the TSFCNet first employs the MixConv-Residual block to extract multiscale temporal features from multi-band filtered EEG data. Next, the temporal-spatial-frequency convolution block implements three shallow, parallel and independent convolutional operations in the spatial, frequency and time-frequency domains, and captures highly discriminative representations from these domains respectively. Finally, these features are effectively aggregated by average pooling layers and variance layers, and the network is trained with the joint supervision of the cross-entropy and the center loss. Our experimental results show that the TSFCNet outperforms the state-of-the-art models with superior classification accuracy and kappa values (82.72% and 0.7695 for dataset BCI competition IV 2a, 86.39% and 0.7324 for dataset BCI competition IV 2b). These competitive results demonstrate that the proposed network is promising for enhancing the decoding performance of MI BCIs.
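The "multi-band filtered EEG data" that feeds such multiscale feature extractors comes from a filter bank; a minimal SciPy sketch (the band edges and filter order here are illustrative defaults, not the paper's exact choices):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank(eeg, fs, bands=((4, 8), (8, 13), (13, 30), (30, 40))):
    """Decompose EEG into multiple frequency bands.

    eeg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_bands, n_channels, n_samples).
    """
    out = []
    for lo, hi in bands:
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        out.append(filtfilt(b, a, eeg, axis=-1))  # zero-phase filtering
    return np.stack(out)
```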
Collapse
|
27
|
Real-Time Classification of Motor Imagery Using Dynamic Window-Level Granger Causality Analysis of fMRI Data. Brain Sci 2023; 13:1406. [PMID: 37891775 PMCID: PMC10604978 DOI: 10.3390/brainsci13101406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Revised: 09/18/2023] [Accepted: 09/26/2023] [Indexed: 10/29/2023] Open
Abstract
This article presents a method for extracting neural signal features to identify the imagination of left- and right-hand grasping movements. A functional magnetic resonance imaging (fMRI) experiment is employed to identify four brain regions with significant activations during motor imagery (MI), and the effective connections between these regions of interest (ROIs) were calculated using Dynamic Window-level Granger Causality (DWGC). Then, a real-time fMRI (rt-fMRI) classification system for left- and right-hand MI is developed using the Open-NFT platform. We conducted data acquisition and processing on three subjects, all of whom were recruited from a local college. The maximum accuracy of a Support Vector Machine (SVM) classifier on real-time three-class classification (rest, left hand, and right hand) with effective connections is 69.3%, which is 3% higher on average than that of traditional multivoxel pattern classification analysis. Moreover, it significantly improves classification accuracy during the initial stage of MI tasks while reducing latency effects in real-time decoding. The study suggests that the effective connections obtained through the DWGC method serve as valuable features for real-time decoding of MI using fMRI and exhibit higher sensitivity to changes in brain states. This research offers theoretical support and technical guidance for extracting neural signal features in fMRI-based studies.
Collapse
|
28
|
SincMSNet: a Sinc filter convolutional neural network for EEG motor imagery classification. J Neural Eng 2023; 20:056024. [PMID: 37683664 DOI: 10.1088/1741-2552/acf7f4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 09/08/2023] [Indexed: 09/10/2023]
Abstract
Objective.Motor imagery (MI) is widely used in brain-computer interfaces (BCIs). However, decoding MI-EEG using convolutional neural networks (CNNs) remains a challenge due to individual variability.Approach.We propose a fully end-to-end CNN called SincMSNet to address this issue. SincMSNet employs the Sinc filter to extract subject-specific frequency band information and utilizes mixed-depth convolution to extract multi-scale temporal information for each band. It then applies a spatial convolutional block to extract spatial features and uses a temporal log-variance block to obtain classification features. SincMSNet is trained under the joint supervision of cross-entropy and center loss to achieve inter-class separable and intra-class compact representations of EEG signals.Main results.We evaluated the performance of SincMSNet on the BCIC-IV-2a (four-class) and OpenBMI (two-class) datasets. SincMSNet achieves impressive results, surpassing benchmark methods. In four-class and two-class inter-session analysis, it achieves average accuracies of 80.70% and 71.50% respectively. In four-class and two-class single-session analysis, it achieves average accuracies of 84.69% and 76.99% respectively. Additionally, visualizations of the band-pass filter bands learned by the Sinc filters demonstrate the network's ability to extract subject-specific frequency band information from EEG.Significance.This study highlights the potential of SincMSNet in improving the performance of MI-EEG decoding and designing more robust MI-BCIs. The source code for SincMSNet can be found at:https://github.com/Want2Vanish/SincMSNet.
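The Sinc-filter idea (as in SincNet-style layers) parameterizes each FIR band-pass kernel by just its two cutoff frequencies, with everything else fixed structure; a NumPy sketch of one such kernel (the kernel length and window are illustrative choices, not SincMSNet's exact configuration):

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, fs, kernel_size=129):
    """Windowed-sinc band-pass kernel of the kind a Sinc layer parameterizes:
    only the two cutoffs would be learnable; the rest is fixed structure.

    Returns a 1-D FIR kernel of odd length kernel_size.
    """
    assert kernel_size % 2 == 1, "use an odd length so the kernel is symmetric"
    n = np.arange(kernel_size) - kernel_size // 2

    def lowpass(fc):
        # Ideal low-pass impulse response truncated to kernel_size taps
        return 2 * fc / fs * np.sinc(2 * fc / fs * n)

    h = lowpass(f_high) - lowpass(f_low)  # band-pass = difference of low-passes
    h *= np.hamming(kernel_size)          # window to reduce spectral ripple
    return h

kernel = sinc_bandpass_kernel(8.0, 13.0, fs=250)
```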
Collapse
|
29
|
A large EEG database with users' profile information for motor imagery brain-computer interface research. Sci Data 2023; 10:580. [PMID: 37670009 PMCID: PMC10480224 DOI: 10.1038/s41597-023-02445-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Accepted: 08/04/2023] [Indexed: 09/07/2023] Open
Abstract
We present and share a large database containing electroencephalographic signals from 87 human participants, collected during a single day of brain-computer interface (BCI) experiments, organized into 3 datasets (A, B, and C) that were all recorded using the same protocol: right and left hand motor imagery (MI). Each session contains 240 trials (120 per class), which represents more than 20,800 trials in total, or approximately 70 hours of recording time. The database includes the performance of the associated BCI users, detailed information about their demographics, personality profiles and some cognitive traits, as well as the experimental instructions and codes (executed in the open-source platform OpenViBE). Such a database could prove useful for various studies, including but not limited to: (1) studying the relationships between BCI users' profiles and their BCI performances, (2) studying how EEG signal properties vary across users' profiles and MI tasks, (3) using the large number of participants to design cross-user BCI machine learning algorithms, or (4) incorporating users' profile information into the design of EEG signal classification algorithms.
Collapse
|
30
|
A shallow mirror transformer for subject-independent motor imagery BCI. Comput Biol Med 2023; 164:107254. [PMID: 37499295 DOI: 10.1016/j.compbiomed.2023.107254] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Revised: 06/28/2023] [Accepted: 07/07/2023] [Indexed: 07/29/2023]
Abstract
OBJECTIVE Motor imagery BCI plays an increasingly important role in motor disorders rehabilitation. However, the position and duration of the discriminative segment in an EEG trial vary from subject to subject and even from trial to trial, which leads to poor performance in subject-independent motor imagery classification. Thus, determining how to detect and utilize the discriminative signal segments is crucial for improving the performance of subject-independent motor imagery BCI. APPROACH In this paper, a shallow mirror transformer is proposed for subject-independent motor imagery EEG classification. Specifically, a multihead self-attention layer with a global receptive field is employed to detect and utilize the discriminative segment from the entire input EEG trial. Furthermore, the mirror EEG signal and the mirror network structure are constructed to improve classification precision through ensemble learning. Finally, the subject-independent setup was used to evaluate the shallow mirror transformer on motor imagery EEG signals from subjects in the training set and from new subjects. MAIN RESULTS Experiments on BCI Competition IV datasets 2a and 2b and the OpenBMI dataset demonstrated the promising effectiveness of the proposed shallow mirror transformer, which obtained average accuracies of 74.48% and 76.1% for new subjects and existing subjects, respectively, the highest among the compared state-of-the-art methods. In addition, visualization of the attention scores showed the model's ability to detect discriminative EEG segments. This paper demonstrates that multihead self-attention is effective in capturing global EEG signal information in motor imagery classification. SIGNIFICANCE This study provides an effective model based on a multihead self-attention layer for subject-independent motor imagery-based BCIs. To the best of our knowledge, this is the shallowest transformer model available, whose small number of parameters improves performance in motor imagery EEG classification for such small-sample problems.
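A "mirror" EEG signal is typically built by swapping left- and right-hemisphere channels, which flips the class of a left/right-hand MI trial and doubles the training data; the sketch below illustrates that symmetry idea under the assumption of paired channel indices (e.g. C3 ↔ C4), not the paper's exact construction:

```python
import numpy as np

def mirror_eeg(trial, left_idx, right_idx):
    """Create the mirror of an EEG trial by swapping paired left/right
    hemisphere channels; midline channels stay in place.

    trial: array of shape (n_channels, n_samples).
    left_idx, right_idx: lists of paired channel indices (e.g. C3 <-> C4).
    """
    mirrored = trial.copy()
    # RHS is evaluated from the original trial, so the swap is safe
    mirrored[left_idx], mirrored[right_idx] = trial[right_idx], trial[left_idx]
    return mirrored
```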
Collapse
|
31
|
Local and global convolutional transformer-based motor imagery EEG classification. Front Neurosci 2023; 17:1219988. [PMID: 37662099 PMCID: PMC10469791 DOI: 10.3389/fnins.2023.1219988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 08/07/2023] [Indexed: 09/05/2023] Open
Abstract
Transformer, a deep learning model with the self-attention mechanism, combined with the convolutional neural network (CNN), has been successfully applied for decoding electroencephalogram (EEG) signals in Motor Imagery (MI) Brain-Computer Interfaces (BCIs). However, the extremely non-linear, nonstationary characteristics of EEG signals limit the effectiveness and efficiency of deep learning methods. In addition, variability across subjects and experimental sessions impacts model adaptability. In this study, we propose a local and global convolutional transformer-based approach for MI-EEG classification. The local transformer encoder is combined to dynamically extract temporal features and make up for the shortcomings of the CNN model. The spatial features from all channels and the difference between hemispheres are obtained to improve the robustness of the model. To acquire adequate temporal-spatial feature representations, we combine the global transformer encoder and a Densely Connected Network to improve information flow and reuse. To validate the performance of the proposed model, three scenarios, within-session, cross-session and two-session, are designed. In the experiments, the proposed method achieves up to 1.46%, 7.49% and 7.46% accuracy improvement respectively in the three scenarios on the public Korean dataset compared with current state-of-the-art models. For the BCI Competition IV 2a dataset, the proposed model also achieves a 2.12% and 2.21% improvement for the cross-session and two-session scenarios, respectively. The results confirm that the proposed approach can effectively extract a much richer set of MI features from EEG signals and improve performance in BCI applications.
Collapse
|
32
|
MIRACLE: MInd ReAding CLassification Engine. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3212-3222. [PMID: 37535483 DOI: 10.1109/tnsre.2023.3301507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/05/2023]
Abstract
Brain-computer interfaces (BCIs) have revolutionized the way humans interact with machines, particularly for patients with severe motor impairments. EEG-based BCIs have limited functionality due to the restricted pool of stimuli that they can distinguish, while those exploiting event-related potentials have so far employed paradigms that require the patient to perceive the eliciting stimulus. In this work, we propose MIRACLE: a novel BCI system that combines functional data analysis and machine-learning techniques to decode patients' minds from the elicited potentials. MIRACLE relies on a hierarchical ensemble classifier that recognizes 10 different semantic categories of imagined stimuli. We validated MIRACLE on an extensive dataset collected from 20 volunteers, with both imagined and perceived stimuli, to compare the system's performance on the two. Furthermore, we quantify the importance of each EEG channel in the decision-making process of the classifier, which can help reduce the number of electrodes required for data acquisition, enhancing patients' comfort.
|
33
|
A Temporal Dependency Learning CNN With Attention Mechanism for MI-EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3188-3200. [PMID: 37498754 DOI: 10.1109/tnsre.2023.3299355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/29/2023]
Abstract
Deep learning methods have been widely explored in motor imagery (MI)-based brain-computer interface (BCI) systems to decode electroencephalography (EEG) signals. However, most studies fail to fully explore the temporal dependencies among MI-related patterns generated in different stages of MI tasks, resulting in limited MI-EEG decoding performance. Apart from feature extraction, learning temporal dependencies is equally important for developing a subject-specific MI-based BCI, because every subject has their own way of performing MI tasks. In this paper, a novel temporal dependency learning convolutional neural network (CNN) with an attention mechanism is proposed for MI-EEG decoding. The network first learns spatial and spectral information from multi-view EEG data via the spatial convolution block. Then, a series of non-overlapping time windows is employed to segment the output data, and discriminative features are extracted from each time window to capture MI-related patterns generated in different stages. Furthermore, to explore the temporal dependencies among discriminative features in different time windows, we design a temporal attention module that assigns different weights to features in the various windows and fuses them into more discriminative features. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and OpenBMI datasets show that our proposed network outperforms state-of-the-art algorithms, achieving an average accuracy of 79.48% on the BCIC-IV-2a dataset, an improvement of 2.30%. We demonstrate that learning temporal dependencies effectively improves MI-EEG decoding performance. The code is available at https://github.com/Ma-Xinzhi/LightConvNet.
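A temporal attention module of the kind described above can be sketched generically: score each window's feature vector, softmax the scores, and fuse by weighted sum. The scoring function and dimensions here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def temporal_attention(window_feats, w, u):
    """Fuse per-window features with learned attention weights.

    window_feats: (n_windows, d), one feature vector per time window.
    w: (d, h) projection; u: (h,) scoring vector (both placeholders).
    """
    scores = np.tanh(window_feats @ w) @ u           # (n_windows,) relevance scores
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                             # softmax over windows
    fused = alpha @ window_feats                     # weighted sum, shape (d,)
    return fused, alpha

rng = np.random.default_rng(1)
feats = rng.standard_normal((5, 16))                 # 5 non-overlapping windows
w = rng.standard_normal((16, 8))
u = rng.standard_normal(8)
fused, alpha = temporal_attention(feats, w, u)
print(fused.shape, round(float(alpha.sum()), 6))     # (16,) 1.0
```

The attention weights let the network emphasize the MI stages (windows) that carry the most class-discriminative information instead of averaging all stages equally.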
|
34
|
Subject-Independent EEG Classification of Motor Imagery Based on Dual-Branch Feature Fusion. Brain Sci 2023; 13:1109. [PMID: 37509039 PMCID: PMC10377689 DOI: 10.3390/brainsci13071109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Revised: 07/13/2023] [Accepted: 07/19/2023] [Indexed: 07/30/2023] Open
Abstract
A brain-computer interface (BCI) system helps people with motor dysfunction interact with the external environment. With the advancement of technology, BCI systems have been applied in practice, but their practicability and usability are still greatly challenged. A large amount of calibration time is often required before a BCI system can be used, which can consume the patient's energy and easily lead to anxiety. This paper proposes a novel motion-assisted method based on a dual-branch multiscale autoencoder network (MSAENet) to decode human motor imagery intentions, while introducing a central loss function to compensate for the shortcoming of traditional classifiers that they consider only inter-class differences and ignore intra-class coupling. The effectiveness of the method is validated on three datasets, namely BCIIV2a, SMR-BCI and OpenBMI, to achieve zero calibration of the MI-BCI system. The results show that our proposed network performs well on all three datasets. In the subject-independent case, the MSAENet outperformed the other four comparison methods on the BCIIV2a and SMR-BCI datasets, while achieving F1-score values as high as 69.34% on the OpenBMI dataset. Our method maintains better classification accuracy with a small number of parameters and short prediction times, and achieves zero calibration of the MI-BCI system.
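The intra-class penalty described above is commonly realized as a center loss: the mean squared distance between each feature vector and its class center. A minimal sketch with synthetic placeholder data (not the MSAENet code):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Center loss: penalizes intra-class scatter by pulling each feature
    vector toward the center of its own class.

    features: (n, d); labels: (n,) int class indices; centers: (n_classes, d).
    """
    diffs = features - centers[labels]               # distance to own class center
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

rng = np.random.default_rng(9)
centers = rng.standard_normal((4, 8))                # one center per MI class
labels = rng.integers(0, 4, size=32)
tight = centers[labels] + 0.1 * rng.standard_normal((32, 8))  # compact classes
loose = centers[labels] + 1.0 * rng.standard_normal((32, 8))  # scattered classes
print(center_loss(tight, labels, centers) < center_loss(loose, labels, centers))  # True
```

In training, such a term is typically added to the cross-entropy loss so the network learns features that are both separable between classes and compact within each class.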
|
35
|
Overlapping filter bank convolutional neural network for multisubject multicategory motor imagery brain-computer interface. BioData Min 2023; 16:19. [PMID: 37434221 DOI: 10.1186/s13040-023-00336-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 07/03/2023] [Indexed: 07/13/2023] Open
Abstract
BACKGROUND Motor imagery brain-computer interfaces (BCIs) are a classic and promising BCI technology for achieving brain-computer integration. In motor imagery BCIs, the operational frequency band of the EEG greatly affects the performance of the motor imagery EEG recognition model. However, as most algorithms use a broad frequency band, the discriminative information from multiple sub-bands is not fully utilized. Thus, using convolutional neural networks (CNNs) to extract discriminative features from EEG signals of different frequency components is a promising method for multisubject EEG recognition. METHODS This paper presents a novel overlapping filter bank CNN to incorporate discriminative information from multiple frequency components in multisubject motor imagery recognition. Specifically, two overlapping filter banks with a fixed low-cut frequency or a sliding low-cut frequency are employed to obtain multiple frequency component representations of the EEG signals. Then, multiple CNN models are trained separately. Finally, the output probabilities of the multiple CNN models are integrated to determine the predicted EEG label. RESULTS Experiments were conducted on four popular CNN backbone models and three public datasets, and the results showed that the overlapping filter bank CNN was efficient and universal in improving multisubject motor imagery BCI performance. Specifically, compared with the original backbone models, the proposed method improved the average accuracy by 3.69 percentage points, the F1 score by 0.04, and the AUC by 0.03. In addition, the proposed method performed best in the comparison with state-of-the-art methods. CONCLUSION The proposed overlapping filter bank CNN framework with a fixed low-cut frequency is an efficient and universal method to improve the performance of multisubject motor imagery BCIs.
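The two overlapping bank layouts and the probability-level integration can be sketched as follows. The specific band edges are assumptions for illustration; the abstract does not give them:

```python
import numpy as np

# Two overlapping filter banks, as described: one with a fixed low-cut
# frequency and one with a sliding low-cut (band edges in Hz are assumptions).
fixed_lowcut   = [(4, hi) for hi in range(8, 41, 4)]    # 4-8, 4-12, ..., 4-40 Hz
sliding_lowcut = [(lo, 40) for lo in range(4, 33, 4)]   # 4-40, 8-40, ..., 32-40 Hz

def ensemble_predict(prob_per_band):
    """Integrate per-band CNN output probabilities into one prediction.

    prob_per_band: (n_bands, n_classes) softmax outputs of the band-wise CNNs.
    """
    mean_prob = prob_per_band.mean(axis=0)   # average over sub-band models
    return int(mean_prob.argmax()), mean_prob

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(4), size=len(fixed_lowcut))  # mock 4-class outputs
label, p = ensemble_predict(probs)
print(label, p.round(3))
```

Each band-limited copy of the signal feeds its own CNN; averaging the softmax outputs is one simple way to integrate them, though other fusion rules (e.g. weighted voting) are possible.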
|
36
|
Dataset Evaluation Method and Application for Performance Testing of SSVEP-BCI Decoding Algorithm. SENSORS (BASEL, SWITZERLAND) 2023; 23:6310. [PMID: 37514603 PMCID: PMC10385518 DOI: 10.3390/s23146310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 06/24/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023]
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems have been extensively researched over the past two decades, and multiple sets of standard datasets have been published and widely used. However, there are differences in sample distribution and collection equipment across datasets, and a unified evaluation method is lacking. Most new SSVEP decoding algorithms are tested on self-collected data or verified offline using one or two previous datasets, which can lead to performance differences in actual application scenarios. To address these issues, this paper proposes an SSVEP dataset evaluation method and analyzes six datasets with frequency- and phase-modulation paradigms to form an SSVEP algorithm evaluation dataset system. Finally, based on these datasets, performance tests were carried out on four existing SSVEP decoding algorithms. The findings reveal that the performance of the same algorithm varies significantly when tested on different datasets, and substantial variations were observed between the best- and worst-performing subjects. These results demonstrate that the SSVEP dataset evaluation method can integrate the six datasets into an SSVEP algorithm performance testing dataset system. This system can test and verify SSVEP decoding algorithms from different perspectives, such as different subjects, environments, and equipment, which is helpful for the research of new SSVEP decoding algorithms and has significant reference value for other BCI application fields.
|
37
|
Mental Tasks Modulate Motor-Units Above 10 Hz and are a Potential Control Signal for Movement Augmentation: a Preliminary Study. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083291 DOI: 10.1109/embc40787.2023.10340378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Spinal motor neurons receive a wide range of input frequencies. However, only frequencies below ca. 10 Hz are directly translated into motor output. Frequency components above 10 Hz are filtered out by neural pathways and muscle dynamics. These higher frequency components may have an indirect effect on motor output, or may simply represent movement-independent oscillations that leak down from supraspinal areas such as the motor cortex. If movement-independent oscillations leak down from supraspinal areas, they could provide a potential control signal in movement augmentation applications. We analysed high-density electromyography (HD-EMG) signals from the tibialis anterior muscle while human subjects performed various mental tasks. The subjects performed an isometric dorsiflexion of the right foot at a low level of force while simultaneously (1) imagining a movement of the right foot, (2) imagining a movement of both hands, (3) performing a mathematical task, or (4) performing no additional task. We classified the channel-averaged HD-EMG signals and the cumulative spike train (CST) of motor-units using a filter bank and a linear classifier. We found that in some subjects, the mental task can be classified from the channel-averaged HD-EMG signals and the CST in oscillations above 10 Hz. Furthermore, we found that these oscillation modulations are incompatible with a systematic and task-dependent change in force level. Our preliminary findings from a limited number of subjects suggest that some mental task-induced oscillations from supraspinal areas leak down to spinal motor neurons and are discriminable via EMG or CST signals at the innervated muscle.
|
38
|
EEG motor imagery classification using deep learning approaches in naïve BCI users. Biomed Phys Eng Express 2023; 9:045029. [PMID: 37321179 DOI: 10.1088/2057-1976/acde82] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 06/15/2023] [Indexed: 06/17/2023]
Abstract
Motor imagery (MI) brain-computer interface (BCI) illiteracy refers to the fact that not all subjects can achieve good performance in MI-BCI systems, due to factors such as fatigue, substance consumption, concentration, and experience with the system. To reduce the effects of a lack of experience in the use of BCI systems (naïve users), this paper presents the implementation of three Deep Learning (DL) methods, with the hypothesis that the performance of BCI systems can be improved relative to baseline methods when evaluating naïve BCI users. The methods proposed here are based on a Convolutional Neural Network (CNN), a Long Short-Term Memory (LSTM)/Bidirectional Long Short-Term Memory (BiLSTM) network, and a combination of CNN and LSTM, used for upper-limb MI signal discrimination on a dataset of 25 naïve BCI users. The results were compared with three widely used baseline methods based on the Common Spatial Pattern (CSP), Filter Bank Common Spatial Pattern (FBCSP), and Filter Bank Common Spatial-Spectral Pattern (FBCSSP), in different temporal window configurations. The LSTM-BiLSTM-based approach presented the best performance according to the evaluation metrics of accuracy, F-score, recall, specificity, precision, and ITR, with a mean performance of 80% (maximum 95%) and an ITR of 10 bits/min using a temporal window of 1.5 s. The DL methods represent a significant increase of 32% compared with the baseline methods (p < 0.05). Thus, the outcomes of this study are expected to increase the controllability, usability, and reliability of robotic devices for naïve BCI users.
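The CSP baseline mentioned above computes spatial filters that maximize the variance ratio between two classes, with log-variance of the filtered signals as features. A generic sketch on mock data (not the authors' pipeline):

```python
import numpy as np

def csp_filters(x1, x2, n_pairs=2):
    """Common Spatial Pattern filters from two classes of EEG trials.

    x1, x2: (n_trials, n_channels, n_samples) band-passed EEG per class.
    Returns (2 * n_pairs, n_channels) spatial filters.
    """
    cov = lambda x: np.mean([t @ t.T / np.trace(t @ t.T) for t in x], axis=0)
    c1, c2 = cov(x1), cov(x2)
    d, u = np.linalg.eigh(c1 + c2)                  # whiten composite covariance
    p = np.diag(d ** -0.5) @ u.T
    w_vals, b = np.linalg.eigh(p @ c1 @ p.T)        # eigenvalues sorted ascending
    w = b.T @ p                                     # full spatial filter matrix
    keep = np.r_[:n_pairs, -n_pairs:0]              # most discriminative pairs
    return w[keep]

def csp_features(trial, w):
    y = w @ trial                                   # project onto CSP filters
    var = y.var(axis=1)
    return np.log(var / var.sum())                  # normalized log-variance

rng = np.random.default_rng(3)
x1 = rng.standard_normal((20, 8, 250))              # mock: 20 trials, 8 ch, 1 s
x2 = 1.5 * rng.standard_normal((20, 8, 250))
w = csp_filters(x1, x2)
print(csp_features(x1[0], w).shape)  # (4,)
```

These low-dimensional log-variance features are what LDA or an SVM classifies in the classic CSP/FBCSP pipelines that the deep models are compared against.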
|
39
|
The effects of layer-wise relevance propagation-based feature selection for EEG classification: a comparative study on multiple datasets. Front Hum Neurosci 2023; 17:1205881. [PMID: 37342822 PMCID: PMC10277566 DOI: 10.3389/fnhum.2023.1205881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 05/17/2023] [Indexed: 06/23/2023] Open
Abstract
Introduction The brain-computer interface (BCI) allows individuals to control external devices using their neural signals. One popular BCI paradigm is motor imagery (MI), which involves imagining movements to induce neural signals that can be decoded to control devices according to the user's intention. Electroencephalography (EEG) is frequently used for acquiring neural signals in MI-BCIs due to its non-invasiveness and high temporal resolution. However, EEG signals can be affected by noise and artifacts, and patterns of EEG signals vary across subjects. Therefore, selecting the most informative features is one of the essential steps for enhancing classification performance in MI-BCIs. Methods In this study, we design a layer-wise relevance propagation (LRP)-based feature selection method that can be easily integrated into deep learning (DL)-based models. We assess its effectiveness for reliable class-discriminative EEG feature selection on two publicly available EEG datasets with various DL-based backbone models in the subject-dependent scenario. Results and discussion The results show that LRP-based feature selection enhances MI classification performance on both datasets for all DL-based backbone models. Based on our analysis, we believe the method can broaden its applicability to different research domains.
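LRP redistributes a prediction's relevance backward through the network in proportion to each input's contribution. A sketch of the epsilon rule for a single dense layer (weights and relevances below are synthetic placeholders, not the paper's models):

```python
import numpy as np

def lrp_dense(x, w, b, relevance_out, eps=1e-6):
    """Epsilon-rule LRP for one dense layer: redistribute the output
    relevance to the inputs in proportion to their contributions z_ij.

    x: (n_in,), w: (n_in, n_out), b: (n_out,), relevance_out: (n_out,).
    """
    z = x[:, None] * w                               # per-input contributions
    pre_act = z.sum(axis=0) + b                      # layer pre-activations
    denom = pre_act + eps * np.sign(pre_act)         # stabilized denominator
    return (z / denom) @ relevance_out               # relevance per input

rng = np.random.default_rng(4)
x = rng.standard_normal(6)                           # e.g. 6 EEG-derived features
w = rng.standard_normal((6, 3))
b = rng.standard_normal(3)
r_out = np.array([0.7, 0.2, 0.1])                    # relevance at the output
r_in = lrp_dense(x, w, b, r_out)
print(r_in.round(3))
```

For feature selection, relevance is propagated layer by layer back to the inputs, and features that consistently receive small |relevance| across trials become candidates for removal.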
|
40
|
Improving the performance of SSVEP-BCI contaminated by physiological noise via adversarial training. MEDICINE IN NOVEL TECHNOLOGY AND DEVICES 2023. [DOI: 10.1016/j.medntd.2023.100213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/29/2023] Open
|
41
|
Bridging the BCI illiteracy gap: a subject-to-subject semantic style transfer for EEG-based motor imagery classification. Front Hum Neurosci 2023; 17:1194751. [PMID: 37256201 PMCID: PMC10225603 DOI: 10.3389/fnhum.2023.1194751] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 04/25/2023] [Indexed: 06/01/2023] Open
Abstract
Introduction Brain-computer interfaces (BCIs) facilitate direct interaction between the human brain and computers, enabling individuals to control external devices through cognitive processes. Despite this potential, the problem of BCI illiteracy remains one of the major challenges due to inter-subject EEG variability, which hinders many users from effectively utilizing BCI systems. In this study, we propose a subject-to-subject semantic style transfer network (SSSTN) at the feature level to address the BCI illiteracy problem in electroencephalogram (EEG)-based motor imagery (MI) classification tasks. Methods Our approach uses the continuous wavelet transform to convert high-dimensional EEG data into images as input data. The SSSTN 1) trains a classifier for each subject; 2) transfers the distribution of class-discrimination styles from the source subject (the best-performing subject for the classifier, i.e., a BCI expert) to each subject in the target domain (the remaining subjects, specifically BCI illiterates) through the proposed style loss, applying a modified content loss to preserve the class-relevant semantic information of the target domain; and 3) finally merges the classifier predictions of the source and target subjects using an ensemble technique. Results and discussion We evaluate the proposed method on the BCI Competition IV-2a and IV-2b datasets and demonstrate improved classification performance over existing methods, especially for BCI illiterate users. The ablation experiments and t-SNE visualizations further highlight the effectiveness of the proposed method in achieving meaningful feature-level semantic style transfer.
|
42
|
Can vibrotactile stimulation and tDCS help inefficient BCI users? J Neuroeng Rehabil 2023; 20:60. [PMID: 37143057 PMCID: PMC10157902 DOI: 10.1186/s12984-023-01181-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 04/19/2023] [Indexed: 05/06/2023] Open
Abstract
Brain-computer interfaces (BCIs) have helped people by allowing them to control a computer or machine through brain activity without actual body movement. Despite this advantage, BCIs cannot be used widely because some people cannot achieve controllable performance. To solve this problem, researchers have proposed stimulation methods that modulate relevant brain activity to improve BCI performance. However, multiple studies have reported mixed results following stimulation, and comparative studies of different stimulation modalities have been overlooked. Accordingly, this study was designed to compare the effects of vibrotactile stimulation and transcranial direct current stimulation (tDCS) on brain activity modulation and motor imagery BCI performance among inefficient BCI users. We recruited 44 subjects, divided them into sham, vibrotactile stimulation, and tDCS groups, and selected low performers from each stimulation group. We found that the low performers' BCI performance in the vibrotactile stimulation group increased significantly, by 9.13% (p < 0.01), while the tDCS group's performance increased by 5.13%, which was not significant. In contrast, sham group subjects showed no increased performance. In addition to BCI performance, the pre-stimulus alpha band power and the phase locking values (PLVs) averaged over sensorimotor areas showed significant increases in low performers following stimulation in the vibrotactile stimulation and tDCS groups, while sham group subjects and high performers showed no significant stimulation effects in any group. Our findings suggest that stimulation effects may differ depending on BCI efficiency, and that inefficient BCI users have greater plasticity than efficient BCI users.
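The PLV reported above measures the consistency of the phase difference between two signals over time. A minimal sketch on synthetic phase series (real analyses would first extract instantaneous phase from EEG, e.g. via the Hilbert transform):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase locking value between two instantaneous phase series.

    PLV = |mean(exp(i * (phase_a - phase_b)))|, ranging from 0
    (no consistent phase relation) to 1 (perfectly locked).
    """
    return float(np.abs(np.exp(1j * (phase_a - phase_b)).mean()))

t = np.linspace(0, 1, 500)
phase1 = 2 * np.pi * 10 * t                      # 10 Hz oscillation phase
phase2 = phase1 + 0.5                            # same rhythm, constant lag
rng = np.random.default_rng(5)
phase3 = rng.uniform(0, 2 * np.pi, t.size)       # unrelated random phases
print(round(plv(phase1, phase2), 3))             # 1.0 (constant phase difference)
print(plv(phase1, phase3) < 0.3)                 # near zero for unrelated phases
```

A constant phase lag still yields PLV = 1; only the *variability* of the phase difference reduces the value, which is why PLV is used as a connectivity-like index over sensorimotor channels.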
|
43
|
Recognizable Rehabilitation Movements of Multiple Unilateral Upper Limb: an fMRI Study of Motor Execution and Motor Imagery. J Neurosci Methods 2023; 392:109861. [PMID: 37075914 DOI: 10.1016/j.jneumeth.2023.109861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 02/18/2023] [Accepted: 04/15/2023] [Indexed: 04/21/2023]
Abstract
BACKGROUND This paper presents a study investigating the recognizability of multiple unilateral upper limb movements in stroke rehabilitation. METHODS A functional magnetic resonance imaging (fMRI) experiment is employed to study motor execution (ME) and motor imagery (MI) of four movements of the unilateral upper limb: hand-grasping, hand-handling, arm-reaching, and wrist-twisting. The fMRI images of the ME and MI tasks are statistically analyzed to delineate the regions of interest (ROIs). Parameter estimates associated with the ROIs for each ME and MI task are then evaluated, and differences in ROIs across movements are compared using analysis of covariance (ANCOVA). RESULTS All movements in the ME and MI tasks activate motor areas of the brain, and there are significant differences (p < 0.05) in the ROIs evoked by the different movements. The activation area is larger when executing the hand-grasping task than the others. CONCLUSION The four movements we propose can be adopted as MI tasks, especially for stroke rehabilitation, since they are highly recognizable and capable of activating more brain areas during MI and ME.
|
44
|
Online adaptive classification system for brain–computer interface based on error-related potentials and neurofeedback. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
45
|
Review of public motor imagery and execution datasets in brain-computer interfaces. Front Hum Neurosci 2023; 17:1134869. [PMID: 37063105 PMCID: PMC10101208 DOI: 10.3389/fnhum.2023.1134869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Accepted: 03/10/2023] [Indexed: 04/18/2023] Open
Abstract
The demand for public datasets has increased as data-driven methodologies have been introduced in the field of brain-computer interfaces (BCIs). Indeed, many BCI datasets are available in various platforms and repositories on the web, and the number of studies employing these datasets appears to be increasing. Motor imagery is one of the significant control paradigms in the BCI field, and many datasets related to motor tasks are already open to the public. However, to the best of our knowledge, these datasets have yet to be systematically investigated and evaluated, although data quality is essential for reliable results and for the design of subject- or system-independent BCIs. In this study, we conducted a thorough investigation of motor imagery/execution EEG datasets recorded from healthy participants and published over the past 13 years. The 25 datasets were collected from six repositories and subjected to a meta-analysis. In particular, we reviewed the specifications of the recording settings and experimental designs, and evaluated data quality, measured by the classification accuracy of standard algorithms such as Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA), for comparison and compatibility across the datasets. We found that various stimulation types, such as text, figures, or arrows, were used to instruct subjects what to imagine, and that trial length also differed, ranging from 2.5 to 29 s with a mean of 9.8 s. Typically, each trial consisted of multiple sections: pre-rest (2.38 s), imagination ready (1.64 s), imagination (4.26 s, ranging from 1 to 10 s), and post-rest (3.38 s). In a meta-analysis of a total of 861 sessions from all datasets, the mean classification accuracy for the two-class (left-hand vs. right-hand motor imagery) problem was 66.53%, and the population of BCI poor performers, those who are unable to reach proficiency in using a BCI system, was 36.27% according to the estimated accuracy distribution.
Further, we analyzed the CSP features and found that each dataset forms a cluster and that some datasets overlap in the feature space, indicating greater similarity among them. Finally, we checked the minimal essential information (continuous signals, event type/latency, and channel information) that should be included in a dataset for convenient use, and found that only 71% of the datasets met those criteria. Our attempt to evaluate and compare the public datasets is timely, and these results will contribute to understanding dataset quality and recording settings, as well as to the use of public datasets in future work on BCIs.
|
46
|
A hybrid P300-SSVEP brain-computer interface speller with a frequency enhanced row and column paradigm. Front Neurosci 2023; 17:1133933. [PMID: 37008204 PMCID: PMC10050351 DOI: 10.3389/fnins.2023.1133933] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 02/27/2023] [Indexed: 03/17/2023] Open
Abstract
Objective.This study proposes a new hybrid brain-computer interface (BCI) system to improve spelling accuracy and speed by simultaneously eliciting the P300 and the steady-state visually evoked potential (SSVEP) in electroencephalography (EEG) signals.Methods.A frequency enhanced row and column (FERC) paradigm is proposed that incorporates frequency coding into the row and column (RC) paradigm so that P300 and SSVEP signals can be evoked simultaneously. A flicker (white-black) with a specific frequency from 6.0 to 11.5 Hz, at intervals of 0.5 Hz, is assigned to each row or column of a 6 × 6 layout, and the row/column flashes are carried out in a pseudorandom sequence. A wavelet and support vector machine (SVM) combination is adopted for P300 detection, an ensemble task-related component analysis (TRCA) method is used for SSVEP detection, and the two detection probabilities are fused using a weight control approach.Results.The implemented BCI speller achieved an accuracy of 94.29% and an information transfer rate (ITR) of 28.64 bit/min averaged across 10 subjects during the online tests. An accuracy of 96.86% was obtained during the offline calibration tests, higher than that obtained using only P300 (75.29%) or only SSVEP (89.13%). The SVM for P300 outperformed the previous linear discriminant classifier and its variants (61.90–72.22%), and the ensemble TRCA for SSVEP outperformed the canonical correlation analysis method (73.33%).Conclusion.The proposed hybrid FERC stimulus paradigm can improve the performance of the speller compared with the classical single stimulus paradigm. The implemented speller can achieve accuracy and ITR comparable to its state-of-the-art counterparts with advanced detection algorithms.
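The probability-level fusion described above can be sketched as a convex combination of the two detectors' per-target scores. The weight value and mock scores below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def fuse_scores(p300_prob, ssvep_prob, w=0.5):
    """Weighted fusion of the two detector outputs for one 6x6 layout.

    p300_prob, ssvep_prob: (36,) normalized per-target scores from the
    P300 SVM and the SSVEP ensemble-TRCA detector, respectively.
    w: weight controlling the relative trust in the P300 branch.
    """
    fused = w * p300_prob + (1 - w) * ssvep_prob
    return int(fused.argmax()), fused

rng = np.random.default_rng(6)
p300 = rng.dirichlet(np.ones(36))                    # mock per-target scores
ssvep = rng.dirichlet(np.ones(36))
target, fused = fuse_scores(p300, ssvep, w=0.6)
print(target, round(float(fused.sum()), 6))          # fused scores still sum to 1
```

Because the two responses fail in different ways (P300 depends on attention to rare flashes, SSVEP on gaze and flicker frequency), a weighted combination can be more robust than either detector alone.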
|
47
|
An Analysis of Deep Learning Models in SSVEP-Based BCI: A Survey. Brain Sci 2023; 13:brainsci13030483. [PMID: 36979293 PMCID: PMC10046535 DOI: 10.3390/brainsci13030483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 03/04/2023] [Accepted: 03/10/2023] [Indexed: 03/15/2023] Open
Abstract
The brain–computer interface (BCI), which provides a new way for humans to directly communicate with robots without the involvement of the peripheral nervous system, has recently attracted much attention. Among all the BCI paradigms, BCIs based on steady-state visual evoked potentials (SSVEPs) have the highest information transfer rate (ITR) and the shortest training time. Meanwhile, deep learning has provided an effective and feasible solution for solving complex classification problems in many fields, and many researchers have started to apply deep learning to classify SSVEP signals. However, the designs of deep learning models vary drastically. There are many hyper-parameters that influence the performance of the model in an unpredictable way. This study surveyed 31 deep learning models (2011–2023) that were used to classify SSVEP signals and analyzed their design aspects including model input, model structure, performance measure, etc. Most of the studies that were surveyed in this paper were published in 2021 and 2022. This survey is an up-to-date design guide for researchers who are interested in using deep learning models to classify SSVEP signals.
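The ITR cited above is conventionally the Wolpaw rate, which combines the number of targets, the accuracy, and the time per selection. A small sketch; the target count, accuracy, and selection time used below are illustrative, not values from any surveyed model:

```python
import math

def itr_bits_per_min(n_classes, accuracy, t_select):
    """Wolpaw information transfer rate for an N-class BCI.

    n_classes: number of selectable targets; accuracy: P(correct);
    t_select: seconds per selection (including gaze-shift/rest time).
    """
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0 < p < 1:                                   # entropy correction for errors
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / t_select

# Illustration: a 40-target SSVEP speller at 90% accuracy, 2 s per selection.
print(round(itr_bits_per_min(40, 0.90, 2.0), 1))    # 129.7 bits/min
```

The formula shows why SSVEP spellers top the ITR charts: many simultaneously distinguishable targets and short selection times both enter the rate directly.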
|
48
|
Filter bank sinc-convolutional network with channel self-attention for high performance motor imagery decoding. J Neural Eng 2023; 20. [PMID: 36763992 DOI: 10.1088/1741-2552/acbb2c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Accepted: 02/10/2023] [Indexed: 02/12/2023]
Abstract
Objective.Motor Imagery Brain-Computer Interface (MI-BCI) is an active BCI paradigm focusing on the identification of motor intention, and one of the most important non-invasive BCI paradigms. In MI-BCI studies, deep learning-based methods (especially lightweight networks) have attracted much attention in recent years, but decoding performance still needs further improvement.Approach.To address this problem, we designed a filter bank structure with sinc-convolutional layers for spatio-temporal feature extraction from motor imagery electroencephalography (MI-EEG) in four motor rhythms. A Channel Self-Attention method was introduced for feature selection based on both global and local information, so as to build a model called the Filter Bank Sinc-convolutional Network with Channel Self-Attention for high-performance MI decoding. We also propose a data augmentation method based on multivariate empirical mode decomposition to improve the generalization capability of the model.Main results.We performed an intra-subject evaluation experiment on unseen data from three open MI datasets. The proposed method achieved mean accuracies of 78.20% (4-class scenario) on BCI Competition IV IIa, 87.34% (2-class scenario) on BCI Competition IV IIb, and 72.03% (2-class scenario) on the Open Brain Machine Interface (OpenBMI) dataset, which are significantly higher than those of the compared deep learning-based methods by at least 3.05% (p = 0.0469), 3.18% (p = 0.0371), and 2.27% (p = 0.0024), respectively.Significance.This work provides a new option for deep learning-based MI decoding that can be employed in building BCI systems for motor rehabilitation.
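A sinc-convolutional layer (in the SincNet style) parametrizes each kernel as the difference of two windowed sinc low-pass filters, so only the band edges are learned rather than every filter tap. A sketch of the kernel construction; the kernel length, sampling rate, and band below are illustrative assumptions:

```python
import numpy as np

def sinc_bandpass_kernel(f_low, f_high, fs, length=65):
    """Parametrized band-pass FIR kernel: the difference of two windowed
    sinc low-pass filters, with only f_low/f_high as free parameters.
    """
    t = (np.arange(length) - (length - 1) / 2) / fs   # centered time axis (s)
    lp = lambda fc: 2 * fc / fs * np.sinc(2 * fc * t) # ideal low-pass at fc
    kernel = (lp(f_high) - lp(f_low)) * np.hamming(length)  # window the taps
    return kernel / np.abs(kernel).sum()              # normalize the kernel

# One kernel per motor rhythm, e.g. the mu band (8-13 Hz) at 250 Hz sampling.
k = sinc_bandpass_kernel(8.0, 13.0, fs=250)
print(k.shape)  # (65,)
```

Constraining the first layer to interpretable band-pass filters both reduces parameters and makes the learned motor rhythms directly readable from the fitted band edges.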
|
49
|
Discrepancy between inter- and intra-subject variability in EEG-based motor imagery brain-computer interface: Evidence from multiple perspectives. Front Neurosci 2023; 17:1122661. [PMID: 36860620 PMCID: PMC9968845 DOI: 10.3389/fnins.2023.1122661] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Accepted: 01/26/2023] [Indexed: 02/17/2023] Open
Abstract
Introduction Inter- and intra-subject variability are caused by variability in psychological and neurophysiological factors over time and across subjects. In brain-computer interface (BCI) applications, inter- and intra-subject variability seriously reduces the generalization ability of machine learning models, which further limits the use of BCIs in real life. Although many transfer learning methods can compensate for inter- and intra-subject variability to some extent, there is still a lack of clear understanding of how feature distributions change between cross-subject and cross-session electroencephalography (EEG) signals. Methods To investigate this issue, an online platform for motor imagery BCI decoding was built in this work. The EEG signals from both a multi-subject experiment (Exp1) and a multi-session experiment (Exp2) were analyzed from multiple perspectives. Results First, we found that, despite similar variability in classification results, the within-subject time-frequency response of the EEG signal in Exp2 was more consistent than the cross-subject response in Exp1. Second, the standard deviation of the common spatial pattern (CSP) features differed significantly between Exp1 and Exp2. Third, for model training, different strategies for training sample selection should be applied to cross-subject and cross-session tasks. Discussion All these findings deepen the understanding of inter- and intra-subject variability and can guide practice in developing new transfer learning methods for EEG-based BCIs. In addition, these results show that BCI inefficiency is not caused by the subject being unable to generate the event-related desynchronization/synchronization (ERD/ERS) signal during motor imagery.
Collapse
|
50
|
Mutual Information-Driven Subject-Invariant and Class-Relevant Deep Representation Learning in BCI. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2023; 34:739-749. [PMID: 34357871 DOI: 10.1109/tnnls.2021.3100583] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
In recent years, deep learning-based feature representation methods have shown a promising impact on electroencephalography (EEG)-based brain-computer interfaces (BCIs). Nonetheless, owing to high intra- and inter-subject variability, many studies on decoding EEG were designed in a subject-specific manner using calibration samples, with little concern for practical use, which is hampered by time-consuming calibration steps and large data requirements. To this end, recent studies adopted a transfer learning strategy, especially domain adaptation techniques, and among these we have witnessed the potential of adversarial learning-based transfer learning in BCIs. However, adversarial learning-based domain adaptation methods are known to be prone to negative transfer, which disrupts the learning of generalized feature representations applicable to diverse domains, for example, subjects or sessions in BCIs. In this article, we propose a novel framework that learns class-relevant and subject-invariant feature representations in an information-theoretic manner, without using adversarial learning. Specifically, we devise two operational components in a deep network that explicitly estimate mutual information between feature representations: 1) to decompose features in an intermediate layer into class-relevant and class-irrelevant ones and 2) to enrich the class-discriminative feature representation. On two large EEG datasets, we validated the effectiveness of our proposed framework by comparing its performance with several competing methods. Furthermore, we conducted rigorous analyses by performing an ablation study of the components of our network, explaining our model's decisions on input EEG signals via layer-wise relevance propagation, and visualizing the distribution of learned features via t-SNE.
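The idea of splitting intermediate features into class-relevant and class-irrelevant ones by mutual information can be illustrated with a minimal sketch. Note that the paper estimates mutual information with components inside a deep network, whereas this illustration substitutes a simple histogram-based estimate of MI between each feature column and the class labels; the function names and the `threshold` value are hypothetical choices for this sketch.

```python
import numpy as np

def mutual_information(x, y, n_bins=16):
    """Histogram estimate of I(X; Y) in nats for a continuous
    feature x and integer class labels y in {0, ..., k-1}."""
    edges = np.histogram_bin_edges(x, bins=n_bins)
    xb = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    joint = np.zeros((n_bins, int(y.max()) + 1))
    for xi, yi in zip(xb, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal over feature bins
    py = joint.sum(axis=0, keepdims=True)  # marginal over classes
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def split_features(F, y, threshold=0.05):
    """Partition the columns of feature matrix F (n_samples, n_features)
    into class-relevant and class-irrelevant index sets by their
    estimated MI with the labels."""
    mi = np.array([mutual_information(F[:, j], y) for j in range(F.shape[1])])
    relevant = np.where(mi >= threshold)[0]
    irrelevant = np.setdiff1d(np.arange(F.shape[1]), relevant)
    return relevant, irrelevant, mi
```

A feature column carrying label information yields an MI estimate near the label entropy, while a noise column stays near zero (up to the estimator's small positive bias), which is what makes a threshold-based split workable in this toy setting.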
Collapse
|