1. Pan L, Wang K, Huang Y, Sun X, Meng J, Yi W, Xu M, Jung TP, Ming D. Enhancing motor imagery EEG classification with a Riemannian geometry-based spatial filtering (RSF) method. Neural Netw 2025; 188:107511. PMID: 40294568. DOI: 10.1016/j.neunet.2025.107511.
Abstract
Motor imagery (MI) refers to the mental simulation of movements without physical execution, and it can be captured using electroencephalography (EEG). This area has garnered significant research interest due to its substantial potential in brain-computer interface (BCI) applications, especially for individuals with physical disabilities. However, accurate classification of MI EEG signals remains a major challenge due to their non-stationary nature, low signal-to-noise ratio, and sensitivity to both external and physiological noise. Traditional classification methods, such as common spatial pattern (CSP), often assume that the data is stationary and Gaussian, which limits their applicability in real-world scenarios where these assumptions do not hold. These challenges highlight the need for more robust methods to improve classification accuracy in MI-BCI systems. To address these issues, this study introduces a Riemannian geometry-based spatial filtering (RSF) method that projects EEG signals into a lower-dimensional subspace, maximizing the Riemannian distance between covariance matrices from different classes. By leveraging the inherent geometric properties of EEG data, RSF enhances the discriminative power of the features while maintaining robustness against noise. The performance of RSF was evaluated in combination with ten commonly used MI decoding algorithms, including CSP with linear discriminant analysis (CSP-LDA), Filter Bank CSP (FBCSP), Minimum Distance to Riemannian Mean (MDM), Tangent Space Mapping (TSM), EEGNet, ShallowConvNet (sCNN), DeepConvNet (dCNN), FBCNet, Graph-CSPNet, and LMDA-Net, using six publicly available MI-BCI datasets. The results demonstrate that RSF significantly improves classification accuracy and reduces computational time, particularly for deep learning models with high computational complexity. These findings underscore the potential of RSF as an effective spatial filtering approach for MI EEG classification, providing new insights and opportunities for the development of robust MI-BCI systems. The code for this research is available at https://github.com/PLC-TJU/RSF.
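As a rough illustration of the idea summarized above (projecting EEG so that the Riemannian distance between class covariance matrices is maximized), the sketch below selects spatial filters from the generalized eigen-decomposition of the two class-mean covariances, since the affine-invariant Riemannian distance is determined by the logarithms of those generalized eigenvalues. This is a simplified stand-in, not the authors' implementation (which is available at the linked repository); the function name, shapes, and the greedy selection rule are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rsf_filters_sketch(X1, X2, n_filters=6):
    """Simplified Riemannian spatial filtering sketch.

    X1, X2 : arrays of shape (trials, channels, samples), one per class
             (assumed band-passed, zero-mean).
    Returns W of shape (channels, n_filters); a projected trial is W.T @ x.

    The affine-invariant Riemannian distance between the class-mean
    covariances C1, C2 equals sqrt(sum(log(lambda_i)**2)), where lambda_i
    are the generalized eigenvalues of the pencil (C1, C2).  Keeping the
    eigenvectors with the largest |log(lambda_i)| therefore keeps the
    directions contributing most to that distance (a greedy stand-in for
    the paper's optimization).
    """
    C1 = np.mean([x @ x.T / x.shape[1] for x in X1], axis=0)
    C2 = np.mean([x @ x.T / x.shape[1] for x in X2], axis=0)
    evals, evecs = eigh(C1, C2)                  # generalized eigen-decomposition
    order = np.argsort(-np.abs(np.log(evals)))   # largest distance contribution first
    return evecs[:, order[:n_filters]]

# toy usage with random data standing in for band-passed MI epochs
rng = np.random.default_rng(0)
X1 = rng.standard_normal((20, 22, 500))
X2 = rng.standard_normal((20, 22, 500))
W = rsf_filters_sketch(X1, X2)
print(W.shape)  # (22, 6)
```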
Affiliation(s)
- Lincong Pan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, PR China.
- Kun Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Yongzhi Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Xinwei Sun
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, PR China
- Jiayuan Meng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Weibo Yi
- Beijing Machine and Equipment Institute, Beijing 100192, PR China
- Minpeng Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Tzyy-Ping Jung
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Swartz Center for Computational Neuroscience, University of California, San Diego, CA 92093, USA.
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
2. Deng X, Huo H, Ai L, Xu D, Li C. A Novel 3D Approach with a CNN and Swin Transformer for Decoding EEG-Based Motor Imagery Classification. Sensors (Basel) 2025; 25:2922. PMID: 40363359. PMCID: PMC12074355. DOI: 10.3390/s25092922.
Abstract
Motor imagery (MI) is a crucial research field within the brain-computer interface (BCI) domain. It enables patients with muscle or neural damage to control external devices and achieve movement functions by simply imagining bodily motions. Despite the significant clinical and application value of MI-BCI technology, accurately decoding high-dimensional and low signal-to-noise ratio (SNR) electroencephalography (EEG) signals remains challenging. Moreover, traditional deep learning approaches exhibit limitations in processing EEG signals, particularly in capturing the intrinsic correlations between electrode channels and long-distance temporal dependencies. To address these challenges, this research introduces a novel end-to-end decoding network that integrates convolutional neural networks (CNNs) and a Swin Transformer, aiming at enhancing the classification accuracy of the MI paradigm in EEG signals. This approach transforms EEG signals into a three-dimensional data structure, utilizing one-dimensional convolutions along the temporal dimension and two-dimensional convolutions across the EEG electrode distribution for initial spatio-temporal feature extraction, followed by deep feature exploration using a 3D Swin Transformer module. Experimental results show that on the BCI Competition IV-2a dataset, the proposed method achieves 83.99% classification accuracy, which is significantly better than the existing deep learning methods. This finding underscores the efficacy of combining a CNN and Swin Transformer in a 3D data space for processing high-dimensional, low-SNR EEG signals, offering a new perspective for the future development of MI-BCI. Future research could further explore the applicability of this method across various BCI tasks and its potential clinical implementations.
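The 3D data construction described above (mapping electrode channels onto a 2D scalp grid so that convolutions and windowed attention can operate over a time x height x width volume) can be sketched as follows. The grid positions and resolution are hypothetical placeholders; the paper's actual montage, grid size, and preprocessing may differ.

```python
import numpy as np

# Hypothetical 2D scalp grid for a few 10-20 electrodes (row, col);
# the paper's actual montage and grid resolution may differ.
GRID = {"C3": (2, 1), "Cz": (2, 2), "C4": (2, 3),
        "FC3": (1, 1), "FCz": (1, 2), "FC4": (1, 3),
        "CP3": (3, 1), "CPz": (3, 2), "CP4": (3, 3)}

def to_3d(epoch, ch_names, shape=(5, 5)):
    """Map a (channels, time) epoch to a (time, height, width) volume.

    Channels are placed at their scalp grid positions; unused grid cells
    stay zero.  Such a volume can then be fed to 2D/3D convolutions or a
    Swin-style windowed-attention block.
    """
    n_ch, n_t = epoch.shape
    vol = np.zeros((n_t,) + shape, dtype=epoch.dtype)
    for i, name in enumerate(ch_names):
        r, c = GRID[name]
        vol[:, r, c] = epoch[i]
    return vol

epoch = np.random.randn(len(GRID), 1000)   # 9 channels x 1000 samples
vol = to_3d(epoch, list(GRID.keys()))
print(vol.shape)                            # (1000, 5, 5)
```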
Affiliation(s)
- Xin Deng
- Chongqing Key Laboratory of Germplasm Innovation and Utilization of Native Plants, Chongqing 401329, China
- The Key Laboratory of Data Engineering and Visual Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Huaxiang Huo
- The Key Laboratory of Data Engineering and Visual Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Lijiao Ai
- Chongqing Key Laboratory of Germplasm Innovation and Utilization of Native Plants, Chongqing 401329, China
- Daijiang Xu
- The Key Laboratory of Data Engineering and Visual Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Chenhui Li
- The Key Laboratory of Data Engineering and Visual Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
3. Meng J, Li S, Li G, Luo R, Sheng X, Zhu X. Improving Reliability of Life Applications Using Model-Based Brain Switches via SSVEP. IEEE Trans Biomed Eng 2025; 72:1636-1644. PMID: 40030518. DOI: 10.1109/tbme.2024.3516733.
Abstract
The brain switch improves the reliability of asynchronous brain-computer interface (aBCI) systems by switching the control state of the BCI system. Traditional brain switch research focuses on extracting advanced electroencephalography (EEG) features. However, the low signal-to-noise ratio (SNR) of EEG signals limits the available feature information and the performance of brain switches. Here, we design a virtual physical system to build the brain switch, allowing users to trigger the system through periodic brainwave modulation, fully integrating the limited feature information and improving reliability. Furthermore, we designed multiple experiments to validate the effectiveness of the proposed brain switch based on steady-state visual evoked potentials (SSVEP). The results verified the performance of SSVEP brain switches based on virtual physical systems, improving their reliability to 0.1 FP/h or better with acceptable triggering times, and without calibration for most subjects. This demonstrates that the proposed virtual physical model-based brain switch can exploit SSVEP features and output the reliable commands required to control external devices, promoting real-world BCI applications.
4. Pang Y, Wang X, Zhao Z, Han C, Gao N. Multi-view collaborative ensemble classification for EEG signals based on 3D second-order difference plot and CSP. Phys Med Biol 2025; 70:085018. PMID: 40203859. DOI: 10.1088/1361-6560/adcafa.
Abstract
Objective. EEG signal analysis methods based on the electrical source imaging (ESI) technique have significantly improved classification accuracy and response time. However, for the refined and informative source signals, current studies have not fully considered their dynamic variability in feature extraction and lack an effective integration of this dynamic variability with spatial characteristics. Additionally, the adaptability and complementarity of classifiers have not been considered comprehensively. These two aspects lead to insufficient decoding of source signals, which still limits the application of brain-computer interfaces (BCIs). To address these challenges, this paper proposes a multi-view collaborative ensemble classification method for EEG signals based on the three-dimensional second-order difference plot (3D SODP) and common spatial pattern. Approach. First, EEG signals are mapped to the source domain using the ESI technique, and the source signals in the region of interest are obtained. Next, features from three viewpoints of the source signals are extracted: 3D SODP features, spatial features, and a weighted fusion of both. Finally, the extracted multi-view features are integrated with a subject-specific sub-classifier combination, and a voting mechanism determines the final classification. Main results. The results show that the proposed method achieves classification accuracies of 81.3% and 82.6% in the two sessions of the OpenBMI dataset, nearly 5% higher than the state-of-the-art method, while maintaining the analysis response time required for online BCI. Significance. This paper employs multi-view feature extraction to fully capture the characteristics of the source signals and enhances feature utilization through collaborative ensemble classification. The results demonstrate high accuracy and robust performance, providing a novel approach for online BCI.
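One common construction of a second-order difference plot extended to three dimensions, together with a couple of crude dispersion features, is sketched below for a single source waveform. It is only a hedged illustration of the "3D SODP" ingredient named in the abstract; the paper's exact formulation and feature set may differ.

```python
import numpy as np

def sodp_3d(x):
    """3D second-order difference plot (one common construction).

    For a 1-D signal x, plot the successive first differences against each
    other: (x[n+1]-x[n], x[n+2]-x[n+1], x[n+3]-x[n+2]).
    Returns an (N-3, 3) point cloud.
    """
    d = np.diff(x)
    return np.column_stack((d[:-2], d[1:-1], d[2:]))

def sodp_features(points):
    """Simple dispersion features of the point cloud (illustrative only):
    per-axis standard deviations and the box-like volume they span."""
    std = points.std(axis=0)
    return np.append(std, std.prod())

x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * np.random.randn(1000)
pts = sodp_3d(x)
print(pts.shape, sodp_features(pts))
```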
Affiliation(s)
- Yu Pang
- Department of Information & Electrical Engineering, Shandong Jianzhu University, Jinan, People's Republic of China
- Xiaoling Wang
- Department of Information & Electrical Engineering, Shandong Jianzhu University, Jinan, People's Republic of China
- Ze Zhao
- Department of Information & Electrical Engineering, Shandong Jianzhu University, Jinan, People's Republic of China
- Changqing Han
- Department of Information & Electrical Engineering, Shandong Jianzhu University, Jinan, People's Republic of China
- Nuo Gao
- Department of Information & Electrical Engineering, Shandong Jianzhu University, Jinan, People's Republic of China
5. Yan W, Luo Q, Du C. Channel component correlation analysis for multi-channel EEG feature component extraction. Front Neurosci 2025; 19:1522964. PMID: 40242456. PMCID: PMC12000010. DOI: 10.3389/fnins.2025.1522964.
Abstract
Introduction. Electroencephalogram (EEG) analysis has shown significant research value for brain disease diagnosis, neuromodulation and brain-computer interface (BCI) applications. The analysis and processing of EEG signals is complex because EEG signals are nonstationary, nonlinear, and often contaminated by intense background noise. Principal component analysis (PCA) and independent component analysis (ICA), the commonly used methods for multi-dimensional signal feature component extraction, still have some limitations in terms of performance and computation. Methods. In this study, a channel component correlation analysis (CCCA) method was proposed to extract feature components of multi-channel EEG. First, the empirical wavelet transform (EWT) decomposed each channel signal into different frequency bands, which were reconstructed into a multi-dimensional signal. Then, an objective optimization function was constructed by maximizing the covariance between the multi-dimensional signals. Finally, the feature components of the multi-channel EEG were extracted using the calculated weight coefficients. Results. The results showed that the CCCA method could find the most relevant frequency band across multi-channel EEG. Compared with the PCA and ICA methods, CCCA could extract the common components of multi-channel EEG more effectively, which is of great significance for the accurate analysis of EEG. Discussion. The CCCA method proposed in this study showed excellent performance in the feature component extraction of multi-channel EEG and could be considered for practical engineering applications.
Affiliation(s)
- Wenqiang Yan
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Qi Luo
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
- Chenghang Du
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, China
6. Russo JS, Shiels TA, Lin CHS, John SE, Grayden DB. Feasibility of source-level motor imagery classification for people with multiple sclerosis. J Neural Eng 2025; 22:026020. PMID: 40064095. DOI: 10.1088/1741-2552/adbec1.
Abstract
Objective. There is limited work investigating brain-computer interface (BCI) technology in people with multiple sclerosis (pwMS), a neurodegenerative disorder of the central nervous system. Present work is limited to recordings at the scalp, which may be significantly altered by changes within the cortex due to volume conduction. The recordings obtained from the sensors therefore combine disease-related alterations, task-relevant neural signals, and signals from other regions of the brain that are not relevant. The current study aims to unmix signals affected by multiple sclerosis (MS) progression and BCI task-relevant signals using estimated source activity to improve classification accuracy. Approach. Data were collected from eight participants with a range of MS severity and ten neurotypical participants. This dataset was used to report the classification accuracy of imagined movements of the hands and feet at the sensor level and the source level. K-means clustering of equivalent current dipoles was conducted to unmix temporally independent signals. The locations of these dipoles were compared between the MS and control groups and used for classification of imagined movement. Linear discriminant analysis classification was performed at each time-frequency point to highlight differences in frequency band delay. Main results. Source-level signal acquisition significantly improved decoding accuracy of imagined movement vs rest and movement vs movement classification in pwMS and controls. There was no significant difference in alpha (7-13 Hz) and beta (13-30 Hz) band classification delay between the neurotypical control and MS groups, including imagery of limbs with weakness or paralysis. Significance. This study is the first to demonstrate the advantages of source-level analysis for BCI applications in pwMS. The results highlight the potential for enhanced clinical outcomes and emphasize the need for longitudinal studies to assess the impact of MS progression on BCI performance, which is crucial for effective clinical translation of BCI technology.
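The per-time-frequency-point LDA classification mentioned in the approach can be illustrated with a small sketch: compute a spectrogram per trial and cross-validate an LDA classifier independently at every (frequency, time) bin. The window lengths, sampling rate, and single-waveform input below are illustrative assumptions, not the study's settings.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def tf_lda_map(epochs, labels, fs=250):
    """Cross-validated LDA accuracy at every time-frequency point.

    epochs : (trials, samples) waveforms of one source/ROI or channel
    labels : (trials,) class labels
    Returns frequencies, times, and an accuracy map of shape (freqs, times).
    """
    f, t, S = spectrogram(epochs, fs=fs, nperseg=64, noverlap=32, axis=-1)
    # S has shape (trials, freqs, times); classify each (freq, time) bin alone
    acc = np.zeros((len(f), len(t)))
    for i in range(len(f)):
        for j in range(len(t)):
            X = S[:, i, j].reshape(-1, 1)
            acc[i, j] = cross_val_score(LinearDiscriminantAnalysis(),
                                        X, labels, cv=5).mean()
    return f, t, acc

rng = np.random.default_rng(1)
epochs = rng.standard_normal((40, 500))      # toy stand-in for source waveforms
labels = np.repeat([0, 1], 20)
f, t, acc = tf_lda_map(epochs, labels)
print(acc.shape)
```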
Affiliation(s)
- John S Russo
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Thomas A Shiels
- Department of Medicine, Northern Health, Melbourne, Australia
- Chin-Hsuan Sophie Lin
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, Australia
- Sam E John
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute, The University of Melbourne, Melbourne, Australia
- David B Grayden
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute, The University of Melbourne, Melbourne, Australia
7. Wei Y, Meng J, Luo R, Mai X, Li S, Xia Y, Zhu X. Action Observation With Rhythm Imagery (AORI): A Novel Paradigm to Activate Motor-Related Pattern for High-Performance Motor Decoding. IEEE Trans Biomed Eng 2025; 72:1085-1096. PMID: 39466862. DOI: 10.1109/tbme.2024.3487133.
Abstract
OBJECTIVE The Motor Imagery (MI) paradigm has been widely used in brain-computer interface (BCI) for device control and motor rehabilitation. However, the MI paradigm faces challenges such as comprehension difficulty and limited decoding accuracy. Therefore, we propose the Action Observation with Rhythm Imagery (AORI) as a natural paradigm to provide distinct features for high-performance decoding. METHODS Twenty subjects were recruited in the current study to perform the AORI task. Spectral-spatial, temporal and time-frequency analyses were conducted to investigate the AORI-activated brain pattern. Task-discriminant component analysis (TDCA) was utilized to perform multiclass motor decoding. RESULTS The results demonstrated distinct lateralized ERD in the alpha and beta bands, and clear lateralized steady-state movement-related rhythm (SSMRR) at the movement frequencies and their first harmonics. The activated brain areas included frontal, sensorimotor, posterior parietal, and occipital regions. Notably, the decoding accuracy reached 92.16% ± 7.61% in the four-class scenario. CONCLUSION AND SIGNIFICANCE We proposed the AORI paradigm, revealed the activated motor-related pattern and proved its efficacy for high-performance motor decoding. These findings provide new possibilities for designing a natural and robust BCI for motor control and motor rehabilitation.
8. Feng Z, Guan C, Zheng R, Sun Y. STARTS: A Self-Adapted Spatio-Temporal Framework for Automatic E/MEG Source Imaging. IEEE Trans Med Imaging 2025; 44:1230-1242. PMID: 39423081. DOI: 10.1109/tmi.2024.3483292.
Abstract
To obtain accurate brain source activities, the highly ill-posed source imaging of electro- and magneto-encephalography (E/MEG) requires the skillful incorporation of biophysiological constraints and signal-processing techniques. Here, we propose a spatio-temporally constrained E/MEG source imaging framework, STARTS, that can reconstruct the source in a fully automatic way. Specifically, a block-diagonal covariance was adopted to reconstruct the source extents while maintaining spatial homogeneity. Temporal basis functions (TBFs) of both sources and noise were estimated and updated in a data-driven fashion to alleviate the influence of noise and further improve source localization accuracy. The performance of the proposed STARTS was quantitatively assessed through a series of simulation experiments, in which superior results were obtained in comparison with benchmark ESI algorithms (including LORETA, EBI-Convex, BESTIES and SI-STBF). Additional validations on epileptic and resting-state EEG data further indicate that STARTS can produce neurophysiologically plausible results. Moreover, a computationally efficient version of STARTS, smooth STARTS, was also introduced with an elementary spatial constraint; it exhibited comparable performance at a reduced execution cost. In sum, the proposed STARTS, with its advanced spatio-temporal constraints and self-adapted update operation, provides an effective and efficient approach for E/MEG source imaging.
9. Zhang L, Zhang H, Yan S, Li R, Yao D, Hu Y, Zhang R. Improving pre-movement patterns detection with multi-dimensional EEG features for readiness potential decrease. J Neural Eng 2025; 22:016034. PMID: 39870046. DOI: 10.1088/1741-2552/adaef2.
Abstract
Objective. The readiness potential (RP) is an important neural characteristic in motor preparation-based brain-computer interfaces. In our previous research, we observed a significant decrease of the RP amplitude in some cases, which severely affects pre-movement pattern detection. In this paper, we aimed to improve the accuracy (Acc) of pre-movement pattern detection under the condition of RP decrease. Approach. We analyzed multi-dimensional EEG features in terms of time-frequency content, brain networks, and cross-frequency coupling (CFC), and proposed a multi-dimensional EEG feature combination (MEFC) algorithm. The features used include: (1) waveforms of the RP; (2) energy in the alpha and beta bands; (3) brain networks in the alpha and beta bands; and (4) the CFC value between 2 and 10 Hz. Main results. Using support vector machines, the MEFC method achieved average recognition rates of 88.9% and 85.5% under the normal and RP-decrease conditions, respectively. Compared to the classical algorithm, the average Acc for the two tasks improved by 7.8% and 8.8%, respectively. Significance. This method can effectively improve the Acc of pre-movement pattern decoding under the condition of RP decrease.
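A minimal sketch of the feature-combination-plus-SVM idea described above, using only band-power terms and a crude low-frequency amplitude proxy for the RP (the paper additionally uses brain-network and cross-frequency-coupling features). All parameters, shapes and the toy data are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def band_power(epochs, fs, band):
    """Mean PSD per channel in a band; epochs: (trials, channels, samples)."""
    f, pxx = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    idx = (f >= band[0]) & (f <= band[1])
    return pxx[..., idx].mean(axis=-1)            # (trials, channels)

def mefc_like_features(epochs, fs=250):
    """Concatenate several feature views: a crude early low-frequency
    amplitude (RP proxy) plus alpha- and beta-band power per channel."""
    rp = epochs[..., :int(0.5 * fs)].mean(axis=-1)
    alpha = band_power(epochs, fs, (8, 13))
    beta = band_power(epochs, fs, (13, 30))
    return np.concatenate([rp, alpha, beta], axis=1)

rng = np.random.default_rng(2)
epochs = rng.standard_normal((60, 32, 500))       # toy pre-movement epochs
labels = np.repeat([0, 1], 30)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print(cross_val_score(clf, mefc_like_features(epochs), labels, cv=5).mean())
```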
Affiliation(s)
- Lipeng Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
- Hongyu Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Shaoting Yan
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Ruiqi Li
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Dezhong Yao
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Key Laboratory for NeuroInformation, University of Electronic Science and Technology, Chengdu, People's Republic of China
- Yuxia Hu
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
- Rui Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
10. Ghosh S, Yadav RK, Soni S, Giri S, Muthukrishnan SP, Kumar L, Bhasin S, Roy S. Decoding the brain-machine interaction for upper limb assistive technologies: advances and challenges. Front Hum Neurosci 2025; 19:1532783. PMID: 39981127. PMCID: PMC11839673. DOI: 10.3389/fnhum.2025.1532783.
Abstract
Understanding how the brain encodes upper limb movements is crucial for developing control mechanisms in assistive technologies. Advances in assistive technologies, particularly Brain-machine Interfaces (BMIs), highlight the importance of decoding motor intentions and kinematics for effective control. EEG-based BMI systems show promise due to their non-invasive nature and potential for inducing neural plasticity, enhancing motor rehabilitation outcomes. While EEG-based BMIs show potential for decoding motor intention and kinematics, studies indicate inconsistent correlations with actual or planned movements, posing challenges for achieving precise and reliable prosthesis control. Further, the variability in predictive EEG patterns across individuals necessitates personalized tuning to improve BMI efficiency. Integrating multiple physiological signals could enhance BMI precision and reliability, paving the way for more effective motor rehabilitation strategies. Studies have shown that brain activity adapts to gravitational and inertial constraints during movement, highlighting the critical role of neural adaptation to biomechanical changes in creating control systems for assistive devices. This review aims to provide a comprehensive overview of recent progress in deciphering neural activity patterns associated with both physiological and assisted upper limb movements, highlighting avenues for future exploration in neurorehabilitation and brain-machine interface development.
Affiliation(s)
- Sutirtha Ghosh
- Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
- Rohit Kumar Yadav
- Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
- Sunaina Soni
- Department of Physiology, All India Institute of Medical Sciences, New Delhi, India
- Shivangi Giri
- Department of Biomedical Engineering, National Institute of Technology, Raipur, India
- Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi, India
- Lalan Kumar
- Department of Electrical Engineering, Bharti School of Telecommunication, New Delhi, India
- Yardi School of Artificial Intelligence, Indian Institute of Technology Delhi, New Delhi, India
- Shubhendu Bhasin
- Department of Electrical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Sitikantha Roy
- Department of Applied Mechanics, Indian Institute of Technology Delhi, New Delhi, India
11. Edelman BJ, Zhang S, Schalk G, Brunner P, Muller-Putz G, Guan C, He B. Non-Invasive Brain-Computer Interfaces: State of the Art and Trends. IEEE Rev Biomed Eng 2025; 18:26-49. PMID: 39186407. PMCID: PMC11861396. DOI: 10.1109/rbme.2024.3449790.
Abstract
Brain-computer interface (BCI) is a rapidly evolving technology that has the potential to widely influence research, clinical and recreational use. Non-invasive BCI approaches are particularly common as they can impact a large number of participants safely and at a relatively low cost. Where traditional non-invasive BCIs were used for simple computer cursor tasks, it is now increasingly common for these systems to control robotic devices for complex tasks that may be useful in daily life. In this review, we provide an overview of the general BCI framework as well as the various methods that can be used to record neural activity, extract signals of interest, and decode brain states. In this context, we summarize the current state-of-the-art of non-invasive BCI research, focusing on trends in both the application of BCIs for controlling external devices and algorithm development to optimize their use. We also discuss various open-source BCI toolboxes and software, and describe their impact on the field at large.
12. Cai C, Qi X, Long Y, Zhang Z, Yan J, Kang H, Wu W, Nagarajan SS. Robust interpolation of EEG/MEG sensor time-series via electromagnetic source imaging. J Neural Eng 2025; 22. PMID: 39719120. PMCID: PMC11925353. DOI: 10.1088/1741-2552/ada309.
Abstract
Objective. Electroencephalography (EEG) and magnetoencephalography (MEG) are widely used non-invasive techniques in clinical and cognitive neuroscience. However, low spatial resolution measurements, partial brain coverage by some sensor arrays, and noisy sensors can distort sensor topographies, resulting in inaccurate reconstructions of the underlying brain dynamics. Solving these problems has been a challenging task. This paper proposes a robust framework based on electromagnetic source imaging for interpolation of unknown or poor-quality EEG/MEG measurements. Approach. The framework consists of two steps: (1) estimating brain source activity using a robust inverse algorithm along with the leadfield matrix of the available good sensors, and (2) interpolating unknown or poor-quality EEG/MEG measurements from the reconstructed brain sources using the leadfield matrices of the unknown or poor-quality sensors. We evaluate the proposed framework through simulations and several real datasets, comparing its performance to two popular benchmarks: neighborhood interpolation and spherical spline interpolation. Results. In both simulations and real EEG/MEG measurements, the framework demonstrates several advantages over the benchmarks: it is robust to highly correlated brain activity and to low signal-to-noise ratio data, and it accurately estimates cortical dynamics. Significance. These results demonstrate a rigorous platform to enhance the spatial resolution of EEG and MEG, to overcome the limitations of partial coverage of EEG/MEG sensor arrays (particularly relevant to low-channel-count optically pumped magnetometer arrays), and to estimate activity in poor or noisy sensors, to a certain extent, from the available measurements of the remaining good sensors. Implementation of this framework will enhance the quality of EEG and MEG, thereby expanding the potential applications of these modalities.
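The two-step framework (inverse solution from the good sensors, forward projection through the bad sensors' leadfields) admits a compact sketch if a simple Tikhonov minimum-norm inverse is substituted for the paper's robust inverse algorithm. The leadfields, regularization and toy data below are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def interpolate_bad_sensors(x_good, L_good, L_bad, lam=1e-2):
    """Estimate missing/bad sensor time-series from good sensors via a source model.

    x_good : (n_good, n_times) measurements from usable sensors
    L_good : (n_good, n_sources) leadfield of usable sensors
    L_bad  : (n_bad, n_sources)  leadfield of the sensors to reconstruct
    lam    : Tikhonov regularization (stand-in for the paper's robust inverse)
    """
    # Step 1: minimum-norm source estimate from the good sensors
    LLt = L_good @ L_good.T
    G = LLt + lam * np.trace(LLt) / L_good.shape[0] * np.eye(L_good.shape[0])
    s_hat = L_good.T @ np.linalg.solve(G, x_good)
    # Step 2: forward-project the sources through the bad-sensor leadfield
    return L_bad @ s_hat

rng = np.random.default_rng(3)
n_src, n_good, n_bad, n_t = 200, 56, 8, 300
L = rng.standard_normal((n_good + n_bad, n_src))
s = rng.standard_normal((n_src, n_t)) * (rng.random((n_src, 1)) < 0.05)  # sparse sources
x = L @ s + 0.01 * rng.standard_normal((n_good + n_bad, n_t))
x_bad_hat = interpolate_bad_sensors(x[:n_good], L[:n_good], L[n_good:])
# correlation between interpolated and true held-out sensor data
print(np.corrcoef(x_bad_hat.ravel(), x[n_good:].ravel())[0, 1])
```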
Affiliation(s)
- Chang Cai
- The National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China
- Xinbao Qi
- The National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China
- Yuanshun Long
- The National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China
- Zheyuan Zhang
- The National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China
- Jing Yan
- Hubei Meteorological Information and Technology Support Center, Hubei Meteorological Service, Wuhan, 430074, China
- Huicong Kang
- Department of Neurology, Tongji Hospital affiliated to Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430040, China
- Wei Wu
- Alto Neuroscience Inc., Los Altos, CA 94022
- Srikantan S. Nagarajan
- Biomagnetic Imaging Laboratory, University of California, San Francisco, 513 Parnassus Avenue, S362, San Francisco, CA 94143, USA
13. Zhang Y, Zhang C, Jiang R, Qiu S, He H. A Distribution Adaptive Feedback Training Method to Improve Human Motor Imagery Ability. IEEE Trans Neural Syst Rehabil Eng 2025; PP:380-390. PMID: 40030957. DOI: 10.1109/tnsre.2025.3527629.
Abstract
A brain-computer interface (BCI) based on motor imagery (MI) can translate a user's subjective movement-related mental state without external stimuli, and has been successfully used for replacing and repairing motor function. In contrast with studies on decoding methods, less work has been reported on training users to improve the performance of MI-BCIs. This study aimed to develop a novel MI feedback training method to enhance the ability of humans to use the MI-BCI system. An adaptive MI feedback training method was proposed to improve the effectiveness of the training process. The method updates the feedback model during the training process and assigns different weights to the samples to better adapt to changes in the distribution of the electroencephalogram (EEG) signals. An online feedback training system was established. Each of ten subjects participated in a three-day experiment involving three different feedback methods: no feedback algorithm update, feedback algorithm update, and feedback algorithm update using the proposed adaptive method. Comparison experiments were conducted across the three feedback methods. The experimental results showed that the feedback algorithm using the proposed method improves MI classification accuracy most quickly and yields the largest increase in accuracy. This indicates that the proposed method can enhance the effectiveness of feedback training and improve the practicality of MI-BCI systems.
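One plausible reading of "assigning different weights to the samples to adapt to distribution changes" is recency weighting when the feedback decoder is refit; the sketch below implements that reading with a logistic-regression decoder and exponentially decaying trial weights. It is a hedged stand-in, not the authors' exact scheme, and the half-life and feature dimensions are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def refit_with_recency_weights(features, labels, half_life=20):
    """Refit the feedback decoder, weighting recent trials more heavily.

    features : (n_trials, n_features) in chronological order
    labels   : (n_trials,)
    half_life: number of trials after which a trial's weight halves, so the
               model can track slow drifts in the EEG feature distribution.
    """
    n = len(labels)
    age = np.arange(n)[::-1]                 # 0 for the newest trial
    weights = 0.5 ** (age / half_life)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels, sample_weight=weights)
    return clf

rng = np.random.default_rng(4)
X = rng.standard_normal((120, 10))           # toy feature trajectory of one user
y = rng.integers(0, 2, 120)
decoder = refit_with_recency_weights(X, y)
print(decoder.predict(X[-5:]))
```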
14. Tantawanich P, Phunruangsakao C, Izumi SI, Hayashibe M. A Systematic Review of Bimanual Motor Coordination in Brain-Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2024; PP:266-285. PMID: 40030619. DOI: 10.1109/tnsre.2024.3522168.
Abstract
Advancements in neuroscience and artificial intelligence are propelling rapid progress in brain-computer interfaces (BCIs). These developments hold significant potential for decoding motion intentions from brain signals, enabling direct control commands without reliance on conventional neural pathways. Growing interest exists in decoding bimanual motor tasks, crucial for activities of daily living. This stems from the need to restore motor function, especially in individuals with deficits. This review aims to summarize neurological advancements in bimanual BCIs, encompassing neuroimaging techniques, experimental paradigms, and analysis algorithms. Thirty-six articles were reviewed, adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The literature search result revealed diverse experimental paradigms, protocols, and research directions, including enhancing the decoding accuracy, advancing versatile prosthesis robots, and enabling real-time applications. Notably, within BCI studies on bimanual movement coordination, a shared objective is to achieve naturalistic movement and practical applications with neurorehabilitation potential.
15. Wang H, Qi Y, Yao L, Wang Y, Farina D, Pan G. A Human-Machine Joint Learning Framework to Boost Endogenous BCI Training. IEEE Trans Neural Netw Learn Syst 2024; 35:17534-17548. PMID: 37647178. DOI: 10.1109/tnnls.2023.3305621.
Abstract
Brain-computer interfaces (BCIs) provide a direct pathway from the brain to external devices and have demonstrated great potential for assistive and rehabilitation technologies. Endogenous BCIs based on electroencephalogram (EEG) signals, such as motor imagery (MI) BCIs, can provide some level of control. However, mastering spontaneous BCI control requires the users to generate discriminative and stable brain signal patterns by imagery, which is challenging and is usually achieved over a very long training time (weeks/months). Here, we propose a human-machine joint learning framework to boost the learning process in endogenous BCIs, by guiding the user to generate brain signals toward an optimal distribution estimated by the decoder, given the historical brain signals of the user. To this end, we first model the human-machine joint learning process in a uniform formulation. Then a human-machine joint learning framework is proposed: 1) for the human side, we model the learning process in a sequential trial-and-error scenario and propose a novel "copy/new" feedback paradigm to help shape the signal generation of the subject toward the optimal distribution and 2) for the machine side, we propose a novel adaptive learning algorithm to learn an optimal signal distribution along with the subject's learning process. Specifically, the decoder reweighs the brain signals generated by the subject to focus more on "good" samples to cope with the learning process of the subject. Online and pseudo-online BCI experiments with 18 healthy subjects demonstrated the advantages of the proposed joint learning process over coadaptive approaches in both learning efficiency and effectiveness.
16. Wang Z, Liu Y, Huang S, Qiu S, Zhang Y, Huang H, An X, Ming D. EEG Characteristic Comparison of Motor Imagery Between Supernumerary and Inherent Limb: Sixth-Finger MI Enhances the ERD Pattern and Classification Performance. IEEE J Biomed Health Inform 2024; 28:7078-7089. PMID: 39222461. DOI: 10.1109/jbhi.2024.3452701.
Abstract
Adding supernumerary robotic limbs (SRLs) to humans and controlling them directly through the brain are major goals for movement augmentation. However, it remains uncertain whether neural patterns distinct from those of traditional inherent-limb motor imagery (MI) can be extracted, which is essential for high-dimensional control of external devices. In this work, we established an MI neo-framework consisting of a novel supernumerary robotic sixth-finger MI (SRF-MI) paradigm and a traditional right-hand MI (RH-MI) paradigm, and validated the distinctness of the EEG response patterns between the two MI tasks for the first time. Twenty-four subjects were recruited for this experiment involving three mental tasks. Event-related spectral perturbation was adopted to characterize event-related desynchronization (ERD). The activation region, intensity and response time (RT) of ERD were compared between the SRF-MI and RH-MI tasks. Three classical classification algorithms were used to verify the separability of the different mental tasks, and a genetic algorithm was applied to select the optimal channel combination for the neo-framework. Bilateral sensorimotor and prefrontal modulation was found during the SRF-MI task, whereas RH-MI exhibited only contralateral sensorimotor modulation. The novel SRF-MI paradigm enhanced ERD intensity by up to 117% in the prefrontal area and 188% in the ipsilateral somatosensory association cortex. In addition, a global decrease in RT was observed during SRF-MI tasks compared to RH-MI. The classification results indicate good separability among the different mental tasks (up to 88.1% for 2-class and 88.2% for 3-class). This work demonstrated the difference between the SRF-MI and RH-MI paradigms, widening the control bandwidth of the BCI system.
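The ERD quantification underlying the spectral analyses above can be sketched as baseline-normalized instantaneous band power (percent change relative to a pre-cue window). The band limits, baseline window and sampling rate below are illustrative, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def erd_percent(epochs, fs, band=(8, 13), baseline=(0.0, 1.0)):
    """Event-related desynchronization as percent power change from baseline.

    epochs   : (trials, channels, samples), time-locked to the task cue
    baseline : baseline window in seconds from the start of the epoch
    Returns ERD(t) of shape (channels, samples); negative values = ERD.
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(filtered, axis=-1)) ** 2   # instantaneous band power
    mean_power = power.mean(axis=0)                   # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = mean_power[:, i0:i1].mean(axis=-1, keepdims=True)
    return 100.0 * (mean_power - ref) / ref

rng = np.random.default_rng(5)
epochs = rng.standard_normal((30, 16, 1000))          # toy 4 s epochs at 250 Hz
erd = erd_percent(epochs, fs=250)
print(erd.shape)                                      # (16, 1000)
```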
17. Liu K, Yang T, Yu Z, Yi W, Yu H, Wang G, Wu W. MSVTNet: Multi-Scale Vision Transformer Neural Network for EEG-Based Motor Imagery Decoding. IEEE J Biomed Health Inform 2024; 28:7126-7137. PMID: 39190517. DOI: 10.1109/jbhi.2024.3450753.
Abstract
OBJECTIVE Transformer-based neural networks have been applied to electroencephalography (EEG) decoding for motor imagery (MI). However, most networks focus on applying the self-attention mechanism to extract global temporal information, while the cross-frequency coupling features between different frequencies have been neglected. Additionally, effectively integrating different neural networks poses challenges for the advanced design of decoding algorithms. METHODS This study proposes a novel end-to-end Multi-Scale Vision Transformer Neural Network (MSVTNet) for MI-EEG classification. MSVTNet first extracts local spatio-temporal features at different filtered scales through convolutional neural networks (CNNs). These features are then concatenated along the feature dimension to form local multi-scale spatio-temporal feature tokens. Finally, Transformers are utilized to capture cross-scale interaction information and global temporal correlations, providing more distinguishable feature embeddings for classification. Moreover, an auxiliary branch loss is leveraged for intermediate supervision to ensure the effective integration of CNNs and Transformers. RESULTS The performance of MSVTNet was assessed through subject-dependent (session-dependent and session-independent) and subject-independent experiments on three MI datasets, i.e., the BCI Competition IV 2a, 2b and OpenBMI datasets. The experimental results demonstrate that MSVTNet achieves state-of-the-art performance in all analyses. CONCLUSION MSVTNet shows superiority and robustness in enhancing MI decoding performance.
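A compact PyTorch sketch of the architecture pattern described in the methods (parallel CNN branches at several temporal scales, concatenated into tokens and fed to a Transformer encoder) is given below. It omits the auxiliary branch losses and most design details of MSVTNet; the layer sizes, kernel scales and pooling are illustrative assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn

class MultiScaleTransformerSketch(nn.Module):
    def __init__(self, n_channels=22, n_classes=4,
                 scales=(15, 31, 63), n_filters=8, n_heads=4, n_layers=2):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in scales:
            self.branches.append(nn.Sequential(
                nn.Conv2d(1, n_filters, (1, k), padding=(0, k // 2), bias=False),  # temporal conv
                nn.Conv2d(n_filters, n_filters, (n_channels, 1), bias=False),      # spatial conv
                nn.BatchNorm2d(n_filters),
                nn.ELU(),
                nn.AvgPool2d((1, 25)),                                              # time -> tokens
            ))
        d_model = n_filters * len(scales)
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               dim_feedforward=2 * d_model,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                            # x: (batch, 1, channels, time)
        feats = [b(x) for b in self.branches]        # each: (batch, F, 1, T')
        z = torch.cat(feats, dim=1).squeeze(2)       # (batch, F*scales, T')
        z = z.permute(0, 2, 1)                       # tokens: (batch, T', F*scales)
        z = self.encoder(z)
        return self.head(z.mean(dim=1))              # average-pool tokens, classify

x = torch.randn(8, 1, 22, 1000)                      # toy batch of MI epochs
print(MultiScaleTransformerSketch()(x).shape)        # torch.Size([8, 4])
```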
18. Forenzo D, Zhu H, He B. A continuous pursuit dataset for online deep learning-based EEG brain-computer interface. Sci Data 2024; 11:1256. PMID: 39567538. PMCID: PMC11579365. DOI: 10.1038/s41597-024-04090-6.
Abstract
This dataset is from an EEG brain-computer interface (BCI) study investigating the use of deep learning (DL) for online continuous pursuit (CP) BCI. In this task, subjects use Motor Imagery (MI) to control a cursor to follow a randomly moving target, instead of a single stationary target used in other traditional BCI tasks. DL methods have recently achieved promising performance in traditional BCI tasks, but most studies investigate offline data analysis using DL algorithms. This dataset consists of ~168 hours of EEG recordings from complex CP BCI experiments, collected from 28 unique human subjects over multiple sessions each, with an online DL-based decoder. The large amount of subject specific data from multiple sessions may be useful for developing new BCI decoders, especially DL methods that require large amounts of training data. By providing this dataset to the public, we hope to help facilitate the development of new or improved BCI decoding algorithms for the complex CP paradigm for continuous object control, bringing EEG-based BCIs closer to real-world applications.
Affiliation(s)
- Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
- Hao Zhu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
- Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA.
19. Wang X, Yang W, Qi W, Wang Y, Ma X, Wang W. STaRNet: A spatio-temporal and Riemannian network for high-performance motor imagery decoding. Neural Netw 2024; 178:106471. PMID: 38945115. DOI: 10.1016/j.neunet.2024.106471.
Abstract
Brain-computer interfaces (BCIs), representing a transformative form of human-computer interaction, empower users to interact directly with external environments through brain signals. In response to the demands for high accuracy, robustness, and end-to-end capabilities within BCIs based on motor imagery (MI), this paper introduces STaRNet, a novel model that integrates multi-scale spatio-temporal convolutional neural networks (CNNs) with Riemannian geometry. Initially, STaRNet integrates a multi-scale spatio-temporal feature extraction module that captures both global and local features, facilitating the construction of Riemannian manifolds from these comprehensive spatio-temporal features. Subsequently, a matrix logarithm operation transforms the manifold-based features into the tangent space, followed by a dense layer for classification. Without preprocessing, STaRNet surpasses state-of-the-art (SOTA) models by achieving an average decoding accuracy of 83.29% and a kappa value of 0.777 on the BCI Competition IV 2a dataset, and 95.45% accuracy with a kappa value of 0.939 on the High Gamma Dataset. Additionally, a comparative analysis between STaRNet and several SOTA models, focusing on the most challenging subjects from both datasets, highlights exceptional robustness of STaRNet. Finally, the visualizations of learned frequency bands demonstrate that temporal convolutions have learned MI-related frequency bands, and the t-SNE analyses of features across multiple layers of STaRNet exhibit strong feature extraction capabilities. We believe that the accurate, robust, and end-to-end capabilities of the STaRNet will facilitate the advancement of BCIs.
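The matrix-logarithm / tangent-space step mentioned above can be illustrated on raw covariance matrices: whiten each trial covariance by a reference matrix, take the matrix logarithm, and vectorize the upper triangle. The sketch uses the arithmetic mean covariance as the reference (a simplification; Riemannian means are standard) and operates on raw epochs rather than the CNN feature maps STaRNet actually uses.

```python
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def tangent_space_features(epochs, C_ref=None):
    """Map trial covariance matrices to tangent-space feature vectors.

    epochs : (trials, channels, samples), assumed zero-mean
    Each covariance C is whitened by a reference matrix C_ref and projected
    with the matrix logarithm; the upper triangle is vectorized as features
    for a downstream dense layer or linear classifier.
    """
    covs = np.array([x @ x.T / x.shape[1] for x in epochs])
    if C_ref is None:
        C_ref = covs.mean(axis=0)            # arithmetic mean as reference point
    W = fractional_matrix_power(C_ref, -0.5)
    iu = np.triu_indices(covs.shape[1])
    feats = [np.real(logm(W @ C @ W))[iu] for C in covs]
    return np.array(feats)

rng = np.random.default_rng(6)
epochs = rng.standard_normal((20, 8, 250))
print(tangent_space_features(epochs).shape)   # (20, 36)
```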
Affiliation(s)
- Xingfu Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wenjie Yang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wenxia Qi
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yu Wang
- National Engineering and Technology Research Center for ASIC Design, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Xiaojun Ma
- National Engineering and Technology Research Center for ASIC Design, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wei Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
20. Wang L, Li M, Xu D, Yang Y. Cortical ROI Importance Improves MI Decoding From EEG Using Fused Light Neural Network. IEEE Trans Neural Syst Rehabil Eng 2024; 32:3636-3646. PMID: 39283802. DOI: 10.1109/tnsre.2024.3461339.
Abstract
Decoding motor imagery (MI) from EEG at the cortical level using deep learning has potential for brain-computer interface based intelligent rehabilitation. However, the large number of dipoles makes it inconvenient to extract personalized features and requires a more complex neural network. Considering the structural and functional similarity of the neurons in a neuroanatomical region, i.e., a region of interest (ROI), we propose that the comprehensive behavior of each ROI may be reflected by a specific representative dipole (RD), and the time-frequency spectrums of all RDs are applied simultaneously to a Random Forest algorithm to give a quantitative metric of each ROI's importance (RI). Then, the finer sub-band spectral powers are reinforced by RI and interpolated onto a 2-dimensional (2D) plane transformed from the 3D space of all RDs, yielding an ensemble representation of RD feature image sequences (ERDFIS). Furthermore, a lightweight network, including 2D separable convolution and a gated recurrent unit (2DSCG), is developed to extract and classify the frequency-spatial and temporal features from ERDFIS, forming a novel MI decoding method at the cortical level (called ERDFIS-2DSCG). Based on two public datasets, the decoding accuracies of ten-fold cross-validation are 89.89% and 94.35%, respectively. The results suggest that an RD can embody the overall property of its ROI in the time-frequency-space domains, and that ROI importance helps highlight the subject-specific characteristics of MI-EEG. Meanwhile, 2DSCG matches well with ERDFIS, jointly improving the decoding performance.
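The ROI-importance step (a Random Forest fit on representative-dipole time-frequency features, with importances aggregated per ROI) can be sketched as below; the ROI grouping and feature matrix are placeholders rather than the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def roi_importance(features, labels, roi_slices):
    """Estimate each ROI's importance from Random Forest feature importances.

    features   : (trials, n_features) with features grouped per ROI
    roi_slices : dict mapping ROI name -> slice of its feature columns
    Returns normalized ROI importances, which could then be used to
    reweight sub-band spectral powers as described in the abstract.
    """
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(features, labels)
    imp = rf.feature_importances_
    scores = {roi: imp[sl].sum() for roi, sl in roi_slices.items()}
    total = sum(scores.values())
    return {roi: s / total for roi, s in scores.items()}

rng = np.random.default_rng(7)
X = rng.standard_normal((80, 30))                  # e.g. 10 ROIs x 3 TF features each
y = rng.integers(0, 2, 80)
rois = {f"ROI{i}": slice(3 * i, 3 * i + 3) for i in range(10)}
print(roi_importance(X, y, rois))
```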
21. Frosolone M, Prevete R, Ognibeni L, Giugliano S, Apicella A, Pezzulo G, Donnarumma F. Enhancing EEG-Based MI-BCIs with Class-Specific and Subject-Specific Features Detected by Neural Manifold Analysis. Sensors (Basel) 2024; 24:6110. PMID: 39338854. PMCID: PMC11435739. DOI: 10.3390/s24186110.
Abstract
This paper presents an innovative approach leveraging Neuronal Manifold Analysis of EEG data to identify specific time intervals for feature extraction, effectively capturing both class-specific and subject-specific characteristics. Different pipelines were constructed and employed to extract distinctive features within these intervals, specifically for motor imagery (MI) tasks. The methodology was validated using the Graz Competition IV datasets 2A (four-class) and 2B (two-class) motor imagery classification, demonstrating an improvement in classification accuracy that surpasses state-of-the-art algorithms designed for MI tasks. A multi-dimensional feature space, constructed using NMA, was built to detect intervals that capture these critical characteristics, which led to significantly enhanced classification accuracy, especially for individuals with initially poor classification performance. These findings highlight the robustness of this method and its potential to improve classification performance in EEG-based MI-BCI systems.
Affiliation(s)
- Mirco Frosolone
- Institute of Cognitive Sciences and Technologies, National Research Council, Via Gian Domenico Romagnosi, 00196 Rome, Italy
- Roberto Prevete
- Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, 80125 Naples, Italy
- Lorenzo Ognibeni
- Institute of Cognitive Sciences and Technologies, National Research Council, Via Gian Domenico Romagnosi, 00196 Rome, Italy
- Department of Computer, Control and Management Engineering 'Antonio Ruberti' (DIAG), Sapienza University of Rome, 00185 Rome, Italy
- Salvatore Giugliano
- Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, 80125 Naples, Italy
- Andrea Apicella
- Department of Electrical Engineering and Information Technology (DIETI), University of Naples Federico II, 80125 Naples, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via Gian Domenico Romagnosi, 00196 Rome, Italy
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Via Gian Domenico Romagnosi, 00196 Rome, Italy
22. Della Vedova G, Proverbio AM. Neural signatures of imaginary motivational states: desire for music, movement and social play. Brain Topogr 2024; 37:806-825. PMID: 38625520. PMCID: PMC11393278. DOI: 10.1007/s10548-024-01047-1.
Abstract
The literature has demonstrated the potential for detecting accurate electrical signals that correspond to the will or intention to move, as well as decoding the thoughts of individuals who imagine houses, faces or objects. This investigation examines the presence of precise neural markers of imagined motivational states through the combining of electrophysiological and neuroimaging methods. 20 participants were instructed to vividly imagine the desire to move, listen to music or engage in social activities. Their EEG was recorded from 128 scalp sites and analysed using individual standardized Low-Resolution Brain Electromagnetic Tomographies (LORETAs) in the N400 time window (400-600 ms). The activation of 1056 voxels was examined in relation to the 3 motivational states. The most active dipoles were grouped in eight regions of interest (ROI), including Occipital, Temporal, Fusiform, Premotor, Frontal, OBF/IF, Parietal, and Limbic areas. The statistical analysis revealed that all motivational imaginary states engaged the right hemisphere more than the left hemisphere. Distinct markers were identified for the three motivational states. Specifically, the right temporal area was more relevant for "Social Play", the orbitofrontal/inferior frontal cortex for listening to music, and the left premotor cortex for the "Movement" desire. This outcome is encouraging in terms of the potential use of neural indicators in the realm of brain-computer interface, for interpreting the thoughts and desires of individuals with locked-in syndrome.
Collapse
Affiliation(s)
- Giada Della Vedova
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano, Bicocca, Italy
| | - Alice Mado Proverbio
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano, Bicocca, Italy.
- NeuroMI, Milan Center for Neuroscience, Milan, Italy.
- Department of Psychology of University of Milano-Bicocca, Piazza dell'Ateneo nuovo 1, Milan, 20162, Italy.
| |
Collapse
|
23
|
Park H, Jun SC. Connectivity study on resting-state EEG between motor imagery BCI-literate and BCI-illiterate groups. J Neural Eng 2024; 21:046042. [PMID: 38986469 DOI: 10.1088/1741-2552/ad6187] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Accepted: 07/10/2024] [Indexed: 07/12/2024]
Abstract
Objective. Although the motor imagery-based brain-computer interface (MI-BCI) holds significant potential, its practical application faces challenges such as BCI-illiteracy. To mitigate this issue, researchers have attempted to predict BCI-illiteracy from the resting state, as this was found to be associated with BCI performance. As the significance of connectivity in neuroscience has grown, BCI researchers have applied connectivity analysis to this problem. However, several issues in connectivity analysis have not been considered fully. First, although various connectivity metrics exist, only some have been used to predict BCI-illiteracy. This is problematic because each metric rests on a distinct hypothesis and perspective for estimating connectivity, so outcomes differ according to the metric. Second, the frequency range affects the connectivity estimation, and it is still unknown whether each metric has its own optimal frequency range. Third, how connectivity estimation may vary across datasets has not been investigated. Meanwhile, we still do not know a great deal about how the resting-state electroencephalography (EEG) network differs between BCI-literate and BCI-illiterate users. Approach. To address the issues above, we analyzed three large public EEG datasets using three functional connectivity and three effective connectivity metrics, employing diverse graph-theory measures. Our analysis revealed that the appropriate frequency range for predicting BCI-illiteracy varies depending upon the metric: the alpha range was suitable for the frequency-domain metrics, while alpha + theta was appropriate for multivariate Granger causality. The difference in network efficiency between the BCI-literate and BCI-illiterate groups was constant regardless of the metrics and datasets used. Although the BCI-literate group showed stronger connectivity, no other significant structural differences were found. Significance. Based upon our findings, we predicted MI-BCI performance for the entire dataset. We discovered that combining several graph features could improve the prediction's accuracy.
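As context for the graph-theory measures compared in this study, the sketch below shows one generic way to turn a functional connectivity matrix into a graph and compute network efficiency and clustering with NetworkX; the channel count, thresholding rule, and random data are illustrative assumptions and do not reproduce the authors' pipeline.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy "functional connectivity" matrix for 16 channels (symmetric, zero diagonal).
n_channels = 16
conn = rng.random((n_channels, n_channels))
conn = (conn + conn.T) / 2
np.fill_diagonal(conn, 0.0)

# Keep only the strongest 20% of edges, then build an unweighted graph.
upper = conn[np.triu_indices(n_channels, k=1)]
threshold = np.quantile(upper, 0.8)
adjacency = (conn >= threshold).astype(int)
graph = nx.from_numpy_array(adjacency)

# Global efficiency and clustering are typical graph measures compared across groups.
print("global efficiency:", nx.global_efficiency(graph))
print("mean clustering:", nx.average_clustering(graph))
```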
Collapse
Affiliation(s)
- Hanjin Park
- AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
| | - Sung Chan Jun
- AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
| |
Collapse
|
24
|
Meng J, Li S, Li G, Luo R, Sheng X, Zhu X. A model-based brain switch via periodic motor imagery modulation for asynchronous brain-computer interfaces. J Neural Eng 2024; 21:046035. [PMID: 39029496 DOI: 10.1088/1741-2552/ad6595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Accepted: 07/19/2024] [Indexed: 07/21/2024]
Abstract
Objective. Brain switches provide a tangible solution for asynchronous brain-computer interfaces, which decode user intention without a pre-programmed trial structure. However, most brain switches based on electroencephalography signals have high false positive rates (FPRs), limiting their practicality. This research aims to improve the operating mode and usability of the brain switch. Approach. Here, we propose a novel virtual physical model-based brain switch that leverages periodic active modulation. An optimization problem of minimizing the triggering time subject to a required FPR is formulated, and numerical and analytical approximate solutions are obtained from the model. Main results. Our motor imagery (MI)-based brain switch can reach an FPR of 0.8 FPs/h with a median triggering time of 58 s. We evaluated the proposed brain switch during online device control, and its average FPR substantially outperformed conventional brain switches in the literature. We further improved the proposed brain switch with the common spatial pattern (CSP) and the optimization method; an average FPR of 0.3 FPs/h was obtained for the MI-CSP-based brain switch, and the average triggering time improved to 21.6 s. Significance. This study provides a new approach that reduces the brain switch's FPR to less than 1 FP/h, less than 10% of the FPR of other endogenous methods (a reduction of more than an order of magnitude), while the reaction time is comparable to state-of-the-art approaches. This represents a significant advancement over current non-invasive asynchronous BCIs and opens widespread avenues for translating BCI towards clinical applications.
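To illustrate the general trade-off the abstract describes between false positive rate and responsiveness, here is a minimal, generic sketch of choosing a detection threshold from an idle-period score distribution so that a target FPs/h budget is met; the score distributions, 1 s window length, and budget are assumptions for illustration and this is not the authors' model-based formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy detector scores: many "idle" (non-control) windows and some intentional-control windows.
idle_scores = rng.normal(0.0, 1.0, size=100_000)   # null distribution during rest
control_scores = rng.normal(2.5, 1.0, size=1_000)  # distribution while the user modulates MI

# Suppose each window lasts 1 s; a budget of 1 false positive per hour means
# at most 1 false alarm per 3600 idle windows.
windows_per_hour = 3600
target_fpr_per_hour = 1.0
allowed_fraction = target_fpr_per_hour / windows_per_hour

# Pick the score threshold as the corresponding upper quantile of the idle distribution.
threshold = np.quantile(idle_scores, 1.0 - allowed_fraction)
achieved_fpr = (idle_scores > threshold).mean() * windows_per_hour
hit_rate = (control_scores > threshold).mean()
print(f"threshold={threshold:.2f}, ~{achieved_fpr:.2f} FPs/h, per-window hit rate={hit_rate:.2%}")
```

A stricter FPs/h budget pushes the threshold higher, so fewer control windows exceed it and the expected triggering time grows, which is the tension the paper's optimization addresses.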
Collapse
Affiliation(s)
- Jianjun Meng
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Songwei Li
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Guangye Li
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Ruijie Luo
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Xinjun Sheng
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| | - Xiangyang Zhu
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, People's Republic of China
| |
Collapse
|
25
|
Mishra AR, Kumar R, Gupta V, Prabhu S, Upadhyay R, Chhipa PC, Rakesh S, Mokayed H, Das Chakladar D, De K, Liwicki M, Simistira Liwicki F, Saini R. SignEEG v1.0: Multimodal Dataset with Electroencephalography and Hand-written Signature for Biometric Systems. Sci Data 2024; 11:718. [PMID: 38956046 PMCID: PMC11220021 DOI: 10.1038/s41597-024-03546-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Accepted: 06/18/2024] [Indexed: 07/04/2024] Open
Abstract
Handwritten signatures in biometric authentication leverage unique individual characteristics for identification, offering high specificity through dynamic and static properties. However, this modality faces significant challenges from sophisticated forgery attempts, underscoring the need for enhanced security measures in common applications. To address forgery in signature-based biometric systems, integrating a forgery-resistant modality, namely noninvasive electroencephalography (EEG), which captures unique brain activity patterns, can significantly enhance system robustness. By combining EEG, a physiological modality, with handwritten signatures, a behavioral modality, our approach capitalizes on the strengths of both and significantly fortifies the robustness of biometric systems through multimodal integration. In addition, EEG's resistance to replication offers a high level of security, making it a robust addition to user identification and verification. This study presents a new multimodal dataset, SignEEG v1.0, based on EEG and hand-drawn signatures from 70 subjects. EEG signals and hand-drawn signatures were collected with Emotiv Insight and Wacom One sensors, respectively. The multimodal data consist of three paradigms based on mental imagery, motor imagery, and physical execution: (i) thinking of the signature's image, (ii) drawing the signature mentally, and (iii) drawing the signature physically. Extensive experiments have been conducted to establish a baseline with machine learning classifiers. The results demonstrate that multimodality in biometric systems significantly enhances robustness, achieving high reliability even with limited sample sizes. We release the raw and pre-processed data along with easy-to-follow implementation details.
Collapse
Affiliation(s)
- Ashish Ranjan Mishra
- Department of Computer Science and Engineering, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India.
| | - Rakesh Kumar
- Department of Computer Science and Engineering, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India
| | - Vibha Gupta
- Department of Molecular and Clinical Medicine, University of Gothenburg, Gothenburg, Sweden
| | - Sameer Prabhu
- Operation, Maintenance and Acoustics, Luleå University of Technology, Luleå, Sweden
| | - Richa Upadhyay
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Prakash Chandra Chhipa
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Sumit Rakesh
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Hamam Mokayed
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Debashis Das Chakladar
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Kanjar De
- Department of Video Communication and Applications, Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute, Berlin, Germany
| | - Marcus Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Foteini Simistira Liwicki
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| | - Rajkumar Saini
- Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden
| |
Collapse
|
26
|
Li D, Shin HB, Yin K, Lee SW. Domain-Incremental Learning Framework for Continual Motor Imagery EEG Classification Task. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-5. [PMID: 40040208 DOI: 10.1109/embc53108.2024.10781886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
Due to inter-subject variability in electroencephalogram (EEG) signals, the generalization ability of many existing brain-computer interface (BCI) models is significantly limited. Although transfer learning (TL) offers a temporary solution, in scenarios requiring sustained knowledge transfer the performance of TL-based models gradually declines as the number of transfers increases, a phenomenon known as catastrophic forgetting. To address this issue, we introduce a novel domain-incremental learning framework for continual motor imagery (MI) EEG classification. Specifically, to learn and retain common features between subjects, we separate latent representations into subject-invariant and subject-specific features through adversarial training, and we propose an extensible architecture to preserve features that are easily forgotten. Additionally, we incorporate a memory replay mechanism to reinforce previously acquired knowledge. Through extensive experiments, we demonstrate our framework's effectiveness in mitigating forgetting within the continual MI-EEG classification task.
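For readers unfamiliar with the memory replay mechanism mentioned in this abstract, the following is a minimal, generic sketch of a reservoir-sampled rehearsal buffer that mixes stored examples from earlier subjects into new training batches; the capacity, data format, and sampling scheme are illustrative assumptions rather than the framework's actual implementation.

```python
import random

class ReplayMemory:
    """Reservoir-style memory of past-subject examples for rehearsal during new-subject training."""
    def __init__(self, capacity=500):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Reservoir sampling keeps a uniform sample over everything seen so far.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

memory = ReplayMemory(capacity=200)
for trial_id in range(1000):            # pretend these are (EEG epoch, label) pairs from earlier subjects
    memory.add(("epoch", trial_id % 4))
rehearsal_batch = memory.sample(32)     # mixed into each new-subject training batch
print(len(memory.buffer), len(rehearsal_batch))
```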
Collapse
|
27
|
Kosnoff J, Yu K, Liu C, He B. Transcranial focused ultrasound to V5 enhances human visual motion brain-computer interface by modulating feature-based attention. Nat Commun 2024; 15:4382. [PMID: 38862476 PMCID: PMC11167030 DOI: 10.1038/s41467-024-48576-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Accepted: 05/02/2024] [Indexed: 06/13/2024] Open
Abstract
A brain-computer interface (BCI) enables users to control devices with their minds. Despite advancements, non-invasive BCIs still exhibit high error rates, prompting investigation into whether these errors can be reduced through concurrent targeted neuromodulation. Transcranial focused ultrasound (tFUS) is an emerging non-invasive neuromodulation technology with high spatiotemporal precision. This study examines whether tFUS neuromodulation can improve BCI outcomes and explores the underlying mechanism of action using high-density electroencephalography (EEG) source imaging (ESI). V5-targeted tFUS significantly reduced the error rate in a BCI speller task. Source analyses revealed a significant increase in theta and alpha activity in the tFUS condition, both at V5 and downstream in the dorsal visual processing pathway. Correlation analysis indicated that connectivity within the dorsal processing pathway was preserved during tFUS stimulation, while the ventral connection was weakened. These findings suggest that V5-targeted tFUS enhances feature-based attention to visual motion.
Collapse
Affiliation(s)
- Joshua Kosnoff
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15237, USA
| | - Kai Yu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15237, USA
| | - Chang Liu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15237, USA
- Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
| | - Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15237, USA.
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15237, USA.
| |
Collapse
|
28
|
Pan H, Ding P, Wang F, Li T, Zhao L, Nan W, Fu Y, Gong A. Comprehensive evaluation methods for translating BCI into practical applications: usability, user satisfaction and usage of online BCI systems. Front Hum Neurosci 2024; 18:1429130. [PMID: 38903409 PMCID: PMC11188342 DOI: 10.3389/fnhum.2024.1429130] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2024] [Accepted: 05/20/2024] [Indexed: 06/22/2024] Open
Abstract
Although the brain-computer interface (BCI) is considered a revolutionary advancement in human-computer interaction and has achieved significant progress, a considerable gap remains between current technological capabilities and practical applications. To promote the translation of BCI into practical applications, a gold standard for the online evaluation of BCI classification algorithms has been proposed in some studies. However, few studies have proposed a more comprehensive evaluation method for the entire online BCI system, and this has not yet received sufficient attention from the BCI research and development community. This article therefore elaborates the qualitative leap from analyzing and modeling offline BCI data to constructing online BCI systems and optimizing their performance, emphasizes a user-centred perspective, and then details and reviews comprehensive evaluation methods for translating BCI into practical applications, including the evaluation of the usability (covering the effectiveness and efficiency of systems), the user satisfaction (covering BCI-related aspects, among others), and the usage (covering the match between the system and the user, among others) of online BCI systems. Finally, the article discusses the challenges faced in evaluating the usability and user satisfaction of online BCI systems, the efficacy of online BCI systems, and the integration of BCI with artificial intelligence (AI) and/or virtual reality (VR) and other technologies to enhance the intelligence and user experience of these systems. It is expected that the evaluation methods for online BCI systems elaborated in this review will promote the translation of BCI into practical applications.
Collapse
Affiliation(s)
- He Pan
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
| | - Peng Ding
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
| | - Fan Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
| | - Tianwen Li
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Faculty of Science, Kunming University of Science and Technology, Kunming, China
| | - Lei Zhao
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Faculty of Science, Kunming University of Science and Technology, Kunming, China
| | - Wenya Nan
- Department of Psychology, School of Education, Shanghai Normal University, Shanghai, China
| | - Yunfa Fu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
| | - Anmin Gong
- School of Information Engineering, Chinese People's Armed Police Force Engineering University, Xi’an, China
| |
Collapse
|
29
|
Ma X, Chen W, Pei Z, Zhang Y, Chen J. Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding. Comput Biol Med 2024; 175:108504. [PMID: 38701593 DOI: 10.1016/j.compbiomed.2024.108504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Revised: 04/15/2024] [Accepted: 04/21/2024] [Indexed: 05/05/2024]
Abstract
Convolutional neural networks (CNNs) have been widely applied in motor imagery (MI)-based brain-computer interfaces (BCIs) to decode electroencephalography (EEG) signals. However, due to the limited receptive field of convolutional kernels, CNNs extract features only from local regions without considering long-term dependencies in EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines a CNN with a self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms the state-of-the-art methods and achieves a four-class average accuracy of 85.03% on the BCIC-IV-2a dataset. These results demonstrate the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provide a new perspective on MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
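As a rough illustration of pooling temporal features from both average and variance perspectives and refining them with a shared self-attention module, here is a toy PyTorch sketch; the window size, dimensions, and module layout are assumptions and do not reproduce the paper's network.

```python
import torch
import torch.nn as nn

class SharedAttentionPooling(nn.Module):
    """Toy sketch: mean/variance temporal pooling refined by one shared self-attention block."""
    def __init__(self, n_features, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(n_features, n_heads, batch_first=True)

    def forward(self, x):                        # x: (batch, time, features) from a conv backbone
        b, t, f = x.shape
        windows = x.reshape(b, t // 10, 10, f)   # assumes time divisible by the window length (10)
        avg_seq = windows.mean(dim=2)            # (batch, windows, features)
        var_seq = windows.var(dim=2)
        # The same attention module is reused for both statistical views of the signal.
        avg_out, _ = self.attn(avg_seq, avg_seq, avg_seq)
        var_out, _ = self.attn(var_seq, var_seq, var_seq)
        return torch.cat([avg_out, var_out], dim=-1)   # fused feature sequence

feats = torch.randn(8, 100, 32)                  # e.g. 8 trials, 100 time steps, 32 conv channels
fused = SharedAttentionPooling(32)(feats)
print(fused.shape)                               # torch.Size([8, 10, 64])
```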
Collapse
Affiliation(s)
- Xinzhi Ma
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
| | - Weihai Chen
- School of Electrical Engineering and Automation, Anhui University, Hefei, China.
| | - Zhongcai Pei
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
| | - Yue Zhang
- Hangzhou Innovation Institute, Beihang University, Hangzhou, China
| | - Jianer Chen
- Department of Geriatric Rehabilitation, Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China
| |
Collapse
|
30
|
Huang D, Wang Y, Fan L, Yu Y, Zhao Z, Zeng P, Wang K, Li N, Shen H. Decoding Subject-Driven Cognitive States from EEG Signals for Cognitive Brain-Computer Interface. Brain Sci 2024; 14:498. [PMID: 38790476 PMCID: PMC11120245 DOI: 10.3390/brainsci14050498] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2024] [Revised: 05/06/2024] [Accepted: 05/14/2024] [Indexed: 05/26/2024] Open
Abstract
In this study, we investigated the feasibility of using electroencephalogram (EEG) signals to differentiate between four distinct subject-driven cognitive states: resting state, narrative memory, music, and subtraction tasks. EEG data were collected from seven healthy male participants while performing these cognitive tasks, and the raw EEG signals were transformed into time-frequency maps using continuous wavelet transform. Based on these time-frequency maps, we developed a convolutional neural network model (TF-CNN-CFA) with a channel and frequency attention mechanism to automatically distinguish between these cognitive states. The experimental results demonstrated that the model achieved an average classification accuracy of 76.14% in identifying these four cognitive states, significantly outperforming traditional EEG signal processing methods and other classical image classification algorithms. Furthermore, we investigated the impact of varying lengths of EEG signals on classification performance and found that TF-CNN-CFA demonstrates consistent performance across different window lengths, indicating its strong generalization capability. This study validates the ability of EEG to differentiate higher cognitive states, which could potentially offer a novel BCI paradigm.
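The continuous wavelet transform step described above can be sketched generically with the PyWavelets library as follows; the sampling rate, frequency range, Morlet wavelet choice, and synthetic single-channel signal are assumptions for illustration only, not the study's settings.

```python
import numpy as np
import pywt

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Toy single-channel EEG-like signal: a 10 Hz burst in the second half plus noise.
signal = np.sin(2 * np.pi * 10 * t) * (t > 1.0) + 0.5 * np.random.randn(t.size)

# Choose scales so the Morlet wavelet covers roughly 4-40 Hz.
freqs_of_interest = np.linspace(4, 40, 64)
scales = pywt.central_frequency("morl") * fs / freqs_of_interest

coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
tf_map = np.abs(coeffs)                       # (64 frequencies x samples) time-frequency image
print(tf_map.shape, freqs[:3])                # an image like this would be fed to the CNN
```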
Collapse
Affiliation(s)
- Dingyong Huang
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Yingjie Wang
- College of Physical Education and Health, Hebei Normal University of Science & Technology, Qinhuangdao 066004, China;
| | - Liangwei Fan
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Yang Yu
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Ziyu Zhao
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Pu Zeng
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Kunqing Wang
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| | - Na Li
- Radiology Department, Xiangya 3rd Hospital, Central South University, Changsha 410013, China;
| | - Hui Shen
- College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China; (D.H.); (L.F.); (Y.Y.); (Z.Z.); (P.Z.); (K.W.)
| |
Collapse
|
31
|
Kim H, Won K, Ahn M, Jun SC. Comparison of recognition methods for an asynchronous (un-cued) BCI system: an investigation with 40-class SSVEP dataset. Biomed Eng Lett 2024; 14:617-630. [PMID: 38645586 PMCID: PMC11026332 DOI: 10.1007/s13534-024-00357-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 01/16/2024] [Accepted: 01/24/2024] [Indexed: 04/23/2024] Open
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have demonstrated the potential to manage multiple command targets and achieve high-speed communication. Recent studies on multi-class SSVEP-based BCI have focused on synchronous systems, which rely on predefined time and task indicators; such passive approaches may be less suitable for practical applications. Asynchronous systems recognize the user's intention (whether or not the user intends to use the system) from brain activity; once this willingness is recognized, they switch swiftly into operation for real-time control. Consequently, various methodologies have been proposed to capture the user's intention. However, in-depth investigation of recognition methods in asynchronous BCI systems is lacking. Thus, in this work, three recognition methods widely used in asynchronous SSVEP BCI systems (power spectral density analysis, canonical correlation analysis (CCA), and support vector machine (SVM)) were explored to compare their performance. Further, we categorized asynchronous systems into two approaches (1-stage and 2-stage) based upon the design of the recognition process, and compared their performance. To do so, a 40-class SSVEP dataset collected from 40 subjects was introduced. Finally, we found that the CCA-based method in the 2-stage approach demonstrated statistically significantly higher performance, with a sensitivity of 97.62 ± 2.06%, specificity of 76.50 ± 23.50%, and accuracy of 75.59 ± 10.09%. Thus, the 2-stage approach, together with CCA-based recognition and FB-CCA classification, has good potential to be implemented in practical asynchronous SSVEP BCI systems.
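For context on the CCA-based recognition compared in this work, below is a minimal, generic sketch of standard CCA target identification for SSVEP using sinusoidal reference signals; the stimulus frequencies, harmonic count, and synthetic data are assumptions and this is not the authors' 2-stage system.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

fs, duration = 250, 2.0
t = np.arange(0, duration, 1 / fs)
stim_freqs = [8.0, 10.0, 12.0, 15.0]                 # candidate SSVEP target frequencies

def reference_signals(freq, n_harmonics=2):
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)                     # (samples, 2 * n_harmonics)

# Toy 8-channel EEG epoch containing a 10 Hz response plus noise.
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + rng.normal(size=(t.size, 8))

def cca_score(eeg_epoch, refs):
    cca = CCA(n_components=1)
    x_c, y_c = cca.fit_transform(eeg_epoch, refs)
    return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

scores = {f: cca_score(eeg, reference_signals(f)) for f in stim_freqs}
print(scores, "->", max(scores, key=scores.get))     # the highest correlation picks the target
```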
Collapse
Affiliation(s)
- Heegyu Kim
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
| | - Kyungho Won
- Hybrid Team, Inria, Univ Rennes, IRISA, CNRS, F35000 Rennes, France
| | - Minkyu Ahn
- School of Computer Science and Electrical Engineering, Handong Global University, Bukgu, Pohang, 37554 Korea
| | - Sung Chan Jun
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
- School of Artificial Intelligence, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
| |
Collapse
|
32
|
Forenzo D, Zhu H, Shanahan J, Lim J, He B. Continuous Tracking using Deep Learning-based Decoding for Non-invasive Brain-Computer Interface. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.10.12.562084. [PMID: 37905046 PMCID: PMC10614823 DOI: 10.1101/2023.10.12.562084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2023]
Abstract
Brain-computer interfaces (BCI) using electroencephalography (EEG) provide a non-invasive method for users to interact with external devices without the need for muscle activation. While noninvasive BCIs have the potential to improve the quality of life of healthy and motor-impaired individuals, they currently have limited applications due to inconsistent performance and low degrees of freedom. In this study, we use deep learning (DL)-based decoders for online Continuous Pursuit (CP), a complex BCI task requiring the user to track an object in two-dimensional space. We developed a labeling system to use CP data for supervised learning, trained DL-based decoders based on two architectures, including a newly proposed adaptation of the PointNet architecture, and evaluated the performance over several online sessions. We rigorously evaluated the DL-based decoders in a total of 28 human participants, and found that the DL-based models improved throughout the sessions as more training data became available and significantly outperformed a traditional BCI decoder by the last session. We also performed additional experiments to test an implementation of transfer learning by pre-training models on data from other subjects, and mid-session training to reduce inter-session variability. The results from these experiments showed that pre-training did not significantly improve performance, but updating the models mid-session may have some benefit. Overall, these findings support the use of DL-based decoders for improving BCI performance in complex tasks like CP, which can expand the potential applications of BCI devices and help improve the quality of life of healthy and motor-impaired individuals. Significance Statement: Brain-computer interfaces (BCI) have the potential to replace or restore motor functions for patients and can benefit the general population by providing a direct link between the brain and robotics or other devices. In this work, we developed a paradigm using deep learning (DL)-based decoders for continuous control of a BCI system and demonstrated its capabilities through extensive online experiments. We also investigated how DL performance is affected by varying amounts of training data and collected more than 150 hours of BCI data that can be used to train new models. The results of this study provide valuable information for developing future DL-based BCI decoders, which can improve performance and help bring BCIs closer to practical applications and widespread use.
Collapse
|
33
|
Huang C, Shi N, Miao Y, Chen X, Wang Y, Gao X. Visual tracking brain-computer interface. iScience 2024; 27:109376. [PMID: 38510138 PMCID: PMC10951983 DOI: 10.1016/j.isci.2024.109376] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2023] [Revised: 01/25/2024] [Accepted: 02/27/2024] [Indexed: 03/22/2024] Open
Abstract
Brain-computer interfaces (BCIs) offer a way to interact with computers without relying on physical movements. Non-invasive electroencephalography-based visual BCIs, known for efficient speed and ease of calibration, face limitations in continuous tasks due to discrete stimulus design and decoding methods. To achieve continuous control, we implemented a novel spatial encoding stimulus paradigm and devised a corresponding projection method to enable continuous modulation of the decoded velocity. Subsequently, we conducted experiments involving 17 participants and achieved a Fitts' information transfer rate (ITR) of 0.55 bps for the fixed tracking task and 0.37 bps for the random tracking task. The proposed BCI, with its high Fitts' ITR, was then integrated into two applications, including painting and gaming. In conclusion, this study proposes a visual BCI-based control method that goes beyond discrete commands, allowing natural continuous control based on neural activity.
Collapse
Affiliation(s)
- Changxing Huang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| | - Nanlin Shi
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| | - Yining Miao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| | - Xiaogang Chen
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China
| | - Yijun Wang
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences Beijing, Beijing 100083, China
| | - Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| |
Collapse
|
34
|
Liang G, Cao D, Wang J, Zhang Z, Wu Y. EISATC-Fusion: Inception Self-Attention Temporal Convolutional Network Fusion for Motor Imagery EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1535-1545. [PMID: 38536681 DOI: 10.1109/tnsre.2024.3382226] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/13/2024]
Abstract
The motor imagery brain-computer interface (MI-BCI) based on electroencephalography (EEG) is a widely used human-machine interface paradigm. However, due to the non-stationarity of EEG signals and individual differences among subjects, decoding accuracy is limited, which hinders the application of MI-BCI. In this paper, we propose the EISATC-Fusion model for MI EEG decoding, consisting of an inception block, multi-head self-attention (MSA), a temporal convolutional network (TCN), and layer fusion. Specifically, we design a DS Inception block to extract multi-scale frequency-band information, and a new cnnCosMSA module based on CNN and cosine attention to resolve attention collapse and improve the interpretability of the model. The TCN module is improved with depthwise separable convolution to reduce the number of model parameters. Layer fusion consists of feature fusion and decision fusion, fully utilizing the features output by the model and enhancing its robustness. We improve the two-stage training strategy for model training: early stopping is used to prevent model overfitting, with the accuracy and loss of the validation set as the early-stopping indicators. The proposed model achieves within-subject classification accuracies of 84.57% and 87.58% on BCI Competition IV Datasets 2a and 2b, respectively. The model also achieves cross-subject classification accuracies of 67.42% and 71.23% (with transfer learning) when trained with two sessions and one session of Dataset 2a, respectively. The interpretability of the model is demonstrated through a weight visualization method.
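As a small illustration of the depthwise separable convolution used here to shrink the TCN module's parameter count, the following is a generic PyTorch sketch comparing it with a standard convolution; the channel counts and kernel size are arbitrary assumptions, not the EISATC-Fusion configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise + pointwise 1-D convolution, the parameter-saving trick mentioned for the TCN module."""
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size, padding="same", groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))

standard = nn.Conv1d(64, 64, kernel_size=15, padding="same")
separable = DepthwiseSeparableConv(64, 64, kernel_size=15)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))  # roughly 61k vs 5k parameters
x = torch.randn(2, 64, 1000)
print(separable(x).shape)                       # torch.Size([2, 64, 1000])
```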
Collapse
|
35
|
Yan L, Yu H, Liu Y, Xiang B, Cheng Y, Xu J, Wu Y, Yan F. Brain–Computer Interface Based on Motor Imagery With Visual Guidance and its Application in Control of Simulated Unmanned Aerial Vehicle. IEEE SENSORS JOURNAL 2024; 24:10779-10793. [DOI: 10.1109/jsen.2024.3363754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/22/2024]
Affiliation(s)
- Lirong Yan
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Hao Yu
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Yan Liu
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Biao Xiang
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Yu Cheng
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Jihong Xu
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| | - Yibo Wu
- Wuhan Leishen Special Equipment Company Ltd., Wuhan, China
| | - Fuwu Yan
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
| |
Collapse
|
36
|
Forenzo D, Zhu H, Shanahan J, Lim J, He B. Continuous tracking using deep learning-based decoding for noninvasive brain-computer interface. PNAS NEXUS 2024; 3:pgae145. [PMID: 38689706 PMCID: PMC11060102 DOI: 10.1093/pnasnexus/pgae145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2023] [Accepted: 03/28/2024] [Indexed: 05/02/2024]
Abstract
Brain-computer interfaces (BCI) using electroencephalography provide a noninvasive method for users to interact with external devices without the need for muscle activation. While noninvasive BCIs have the potential to improve the quality of life of healthy and motor-impaired individuals, they currently have limited applications due to inconsistent performance and low degrees of freedom. In this study, we use deep learning (DL)-based decoders for online continuous pursuit (CP), a complex BCI task requiring the user to track an object in 2D space. We developed a labeling system to use CP data for supervised learning, trained DL-based decoders based on two architectures, including a newly proposed adaptation of the PointNet architecture, and evaluated the performance over several online sessions. We rigorously evaluated the DL-based decoders in a total of 28 human participants, and found that the DL-based models improved throughout the sessions as more training data became available and significantly outperformed a traditional BCI decoder by the last session. We also performed additional experiments to test an implementation of transfer learning by pretraining models on data from other subjects, and midsession training to reduce intersession variability. The results from these experiments showed that pretraining did not significantly improve performance, but updating the models midsession may have some benefit. Overall, these findings support the use of DL-based decoders for improving BCI performance in complex tasks like CP, which can expand the potential applications of BCI devices and help to improve the quality of life of healthy and motor-impaired individuals.
Collapse
Affiliation(s)
- Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Hao Zhu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Jenn Shanahan
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Jaehyun Lim
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| | - Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
| |
Collapse
|
37
|
Welter M, Lotte F. Ecological decoding of visual aesthetic preference with oscillatory electroencephalogram features-A mini-review. FRONTIERS IN NEUROERGONOMICS 2024; 5:1341790. [PMID: 38450005 PMCID: PMC10914990 DOI: 10.3389/fnrgo.2024.1341790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Accepted: 01/19/2024] [Indexed: 03/08/2024]
Abstract
In today's digital information age, human exposure to visual artifacts has reached an unprecedented quasi-omnipresence. Some of these cultural artifacts are elevated to the status of artworks, which indicates a special appreciation of these objects. For many persons, the perception of such artworks coincides with aesthetic experiences (AE) that can positively affect health and wellbeing. AEs are composed of complex cognitive and affective mental and physiological states. A more profound scientific understanding of the neural dynamics behind AEs would allow the development of passive brain-computer interfaces (BCI) that offer personalized art presentation to improve AE without the necessity of explicit user feedback. However, previous empirical research in visual neuroaesthetics predominantly investigated functional magnetic resonance imaging and event-related potential correlates of AE in unnaturalistic laboratory conditions, which might not yield the best features for practical neuroaesthetic BCIs. Furthermore, AE has, until recently, largely been framed as the experience of beauty or pleasantness. Yet, these concepts do not encompass all types of AE. Thus, the scope of these concepts is too narrow to allow personalized and optimal art experience across individuals and cultures. This narrative mini-review summarizes the state of the art in oscillatory electroencephalography (EEG)-based visual neuroaesthetics and paints a road map toward the development of ecologically valid neuroaesthetic passive BCI systems that could optimize AEs, as well as their beneficial consequences. We detail reported oscillatory EEG correlates of AEs, as well as machine learning approaches to classify AE. We also highlight current limitations in neuroaesthetics and suggest future directions to improve EEG decoding of AE.
Collapse
Affiliation(s)
- Marc Welter
- Inria Center at the University of Bordeaux/LaBRI, Talence, France
| | | |
Collapse
|
38
|
Wang X, Wang Y, Qi W, Kong D, Wang W. BrainGridNet: A two-branch depthwise CNN for decoding EEG-based multi-class motor imagery. Neural Netw 2024; 170:312-324. [PMID: 38006734 DOI: 10.1016/j.neunet.2023.11.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Revised: 11/14/2023] [Accepted: 11/16/2023] [Indexed: 11/27/2023]
Abstract
Brain-computer interfaces (BCIs) based on motor imagery (MI) enable the disabled to interact with the world through brain signals. To meet the demands of real-time, stable, and diverse interactions, it is crucial to develop lightweight networks that can accurately and reliably decode multi-class MI tasks. In this paper, we introduce BrainGridNet, a convolutional neural network (CNN) framework that integrates two intersecting depthwise CNN branches with 3D electroencephalography (EEG) data to decode a five-class MI task. BrainGridNet attains competitive results in both the time and frequency domains, with superior performance in the frequency domain, achieving an accuracy of 80.26% and a kappa value of 0.753 and surpassing the state-of-the-art (SOTA) model. Additionally, BrainGridNet shows optimal computational efficiency, excels in decoding the most challenging subject, and maintains robust accuracy despite the random loss of 16 electrode signals. Finally, the visualizations demonstrate that BrainGridNet learns discriminative features and identifies critical brain regions and frequency bands corresponding to each MI class. The convergence of BrainGridNet's strong feature extraction capability, high decoding accuracy, steady decoding efficacy, and low computational cost renders it an appealing choice for facilitating the development of BCIs.
Collapse
Affiliation(s)
- Xingfu Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Yu Wang
- Neural Computation and Brain Computer Interaction (NeuBCI) Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Wenxia Qi
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Delin Kong
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China
| | - Wei Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.
| |
Collapse
|
39
|
Kumar S, Alawieh H, Racz FS, Fakhreddine R, Millán JDR. Transfer learning promotes acquisition of individual BCI skills. PNAS NEXUS 2024; 3:pgae076. [PMID: 38426121 PMCID: PMC10903645 DOI: 10.1093/pnasnexus/pgae076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Accepted: 02/05/2024] [Indexed: 03/02/2024]
Abstract
Subject training is crucial for acquiring brain-computer interface (BCI) control. Typically, this requires collecting user-specific calibration data due to high inter-subject neural variability that limits the usability of generic decoders. However, calibration is cumbersome and may produce inadequate data for building decoders, especially with naïve subjects. Here, we show that a decoder trained on the data of a single expert is readily transferrable to inexperienced users via domain adaptation techniques allowing calibration-free BCI training. We introduce two real-time frameworks, (i) Generic Recentering (GR) through unsupervised adaptation and (ii) Personally Assisted Recentering (PAR) that extends GR by employing supervised recalibration of the decoder parameters. We evaluated our frameworks on 18 healthy naïve subjects over five online sessions, who operated a customary synchronous bar task with continuous feedback and a more challenging car racing game with asynchronous control and discrete feedback. We show that along with improved task-oriented BCI performance in both tasks, our frameworks promoted subjects' ability to acquire individual BCI skills, as the initial neurophysiological control features of an expert subject evolved and became subject specific. Furthermore, those features were task-specific and were learned in parallel as participants practiced the two tasks in every session. Contrary to previous findings implying that supervised methods lead to improved online BCI control, we observed that longitudinal training coupled with unsupervised domain matching (GR) achieved similar performance to supervised recalibration (PAR). Therefore, our presented frameworks facilitate calibration-free BCIs and have immediate implications for broader populations-such as patients with neurological pathologies-who might struggle to provide suitable initial calibration data.
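To give a feel for the recentering idea behind the unsupervised domain matching described here, the sketch below whitens each session's trial covariance matrices by that session's mean covariance so that expert and new-user data share a common center; it uses a Euclidean mean and synthetic data as simplifying assumptions and is not the paper's Generic Recentering implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def session_covariances(offset, n_trials=50, n_ch=8, n_samples=200):
    """Toy per-trial spatial covariance matrices with a session-specific distortion."""
    mixing = np.eye(n_ch) + 0.1 * offset * rng.normal(size=(n_ch, n_ch))
    return np.stack([np.cov(mixing @ rng.normal(size=(n_ch, n_samples))) for _ in range(n_trials)])

def inv_sqrtm(mat):
    vals, vecs = np.linalg.eigh(mat)           # symmetric positive-definite matrix
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def recenter(covs):
    # Whiten every trial by the (Euclidean) mean covariance of its own session,
    # so both sessions end up centred on the identity matrix.
    w = inv_sqrtm(covs.mean(axis=0))
    return np.einsum("ij,njk,kl->nil", w, covs, w)

expert_covs = session_covariances(offset=0.0)   # data the decoder was trained on
new_covs = session_covariances(offset=1.0)      # new, shifted user/session

for name, covs in [("expert", expert_covs), ("new user", new_covs)]:
    centred = recenter(covs)
    print(name, np.linalg.norm(centred.mean(axis=0) - np.eye(8)))   # both close to 0
```

After recentering, a decoder trained on the expert's centred features can be applied to the new user without labeled calibration data, which is the spirit of the unsupervised adaptation the abstract describes.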
Collapse
Affiliation(s)
- Satyam Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
| | - Hussein Alawieh
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
| | - Frigyes Samuel Racz
- Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA
- Mulva Clinic for the Neurosciences, The University of Texas at Austin, Austin, TX 78712, USA
| | - Rawan Fakhreddine
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
| | - José del R Millán
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA
- Mulva Clinic for the Neurosciences, The University of Texas at Austin, Austin, TX 78712, USA
- Departement of Biomedical Engineering, The University of Texas at Austin, Austin, TX 78712, USA
| |
Collapse
|
40
|
Mei J, Luo R, Xu L, Zhao W, Wen S, Wang K, Xiao X, Meng J, Huang Y, Tang J, Cheng L, Xu M, Ming D. MetaBCI: An open-source platform for brain-computer interfaces. Comput Biol Med 2024; 168:107806. [PMID: 38081116 DOI: 10.1016/j.compbiomed.2023.107806] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 11/29/2023] [Accepted: 11/29/2023] [Indexed: 01/10/2024]
Abstract
BACKGROUND: Recently, brain-computer interfaces (BCIs) have attracted worldwide attention for their great potential in clinical and real-life applications. To implement a complete BCI system, one must set up several links to translate brain intent into computer commands. However, no open-source software platform covers all links of the BCI chain. METHOD: This study developed a one-stop open-source BCI software platform, MetaBCI, to facilitate the construction of BCI systems. MetaBCI is written in Python and provides stimulus presentation (Brainstim), data loading and processing (Brainda), and online information flow (Brainflow). This paper introduces MetaBCI in detail and presents four typical application cases. RESULTS: The results showed that MetaBCI is an extensible and feature-rich software platform for BCI research and application, which can effectively encode, decode, and feed back brain activities. CONCLUSIONS: MetaBCI can greatly lower the technical threshold for BCI beginners and save the time and cost of building a practical BCI system. The source code is available at https://github.com/TBC-TJU/MetaBCI, and new contributions from the BCI community are welcome.
Collapse
Affiliation(s)
- Jie Mei
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China.
| | - Ruixin Luo
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China.
| | - Lichao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China
| | - Wei Zhao
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China
| | - Shengfu Wen
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China
| | - Kun Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China
| | - Xiaolin Xiao
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China
| | - Jiayuan Meng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China
| | - Yongzhi Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China
| | - Jiabei Tang
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China; Tiankai Suishi (Tianjin) Intelligence Ltd., Tianjin, 300192, People's Republic of China
| | - Longlong Cheng
- China Electronics Cloud Brain (Tianjin) Technology Co., Ltd., Tianjin, 300392, People's Republic of China
| | - Minpeng Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China.
| | - Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, 300072, People's Republic of China; Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, 300392, People's Republic of China
| |
Collapse
|
41
|
Luo R, Mai X, Meng J. Effect of motion state variability on error-related potentials during continuous feedback paradigms and their consequences for classification. J Neurosci Methods 2024; 401:109982. [PMID: 37839711 DOI: 10.1016/j.jneumeth.2023.109982] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 09/11/2023] [Accepted: 10/11/2023] [Indexed: 10/17/2023]
Abstract
BACKGROUND: An erroneous motion elicits the error-related potential (ErrP) when humans monitor the behavior of external devices. This EEG modality has been widely applied to brain-computer interfaces in an active or passive manner with discrete visual feedback. However, the effect of a variable motion state on ErrP morphology and classification performance raises concerns when the interaction is conducted with continuous visual feedback. NEW METHOD: In the present study, we designed a cursor-control experiment in which participants monitored a continuously moving cursor reaching a target on one side of the screen. The motion state varied multiple times along two factors: (1) motion direction and (2) motion speed. The effects of these two factors on the morphological characteristics and classification performance of the ErrP were analyzed. Furthermore, an offline simulation was performed to evaluate the effectiveness of the proposed extended ErrP-decoder in resolving the interference caused by changes in motion direction. RESULTS: The statistical analyses revealed that motion direction and motion speed significantly influenced the amplitude of the feedback-ERN and frontal-Pe components, while only motion direction significantly affected classification performance. COMPARISON WITH EXISTING METHODS: Significant deviation was found in ErrP detection utilizing classical correct-versus-erroneous event training; however, this bias can be alleviated by 16% with the extended ErrP-decoder. CONCLUSION: The morphology and classification performance of the ErrP signal can be affected by motion state variability during continuous feedback paradigms. These results enhance the comprehension of ErrP morphological components and shed light on the detection of a BCI's erroneous behavior in practical continuous control.
Affiliation(s)
- Ruijie Luo: Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ximing Mai: Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Meng: Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, China
42
Forenzo D, Liu Y, Kim J, Ding Y, Yoon T, He B. Integrating Simultaneous Motor Imagery and Spatial Attention for EEG-BCI Control. IEEE Trans Biomed Eng 2024; 71:282-294. [PMID: 37494151 PMCID: PMC10803074 DOI: 10.1109/tbme.2023.3298957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/28/2023]
Abstract
OBJECTIVE EEG-based brain-computer interfaces (BCI) are non-invasive approaches for replacing or restoring motor functions in impaired patients, and for direct brain-to-device communication in the general population. Motor imagery (MI) is one of the most used BCI paradigms, but its performance varies across individuals, and certain users require substantial training to develop control. In this study, we propose to integrate an MI paradigm simultaneously with a recently proposed Overt Spatial Attention (OSA) paradigm to accomplish BCI control. METHODS We evaluated the ability of a cohort of 25 human subjects to control a virtual cursor in one and two dimensions over 5 BCI sessions. The subjects used 5 different BCI paradigms: MI alone, OSA alone, MI and OSA simultaneously towards the same target (MI+OSA), and MI for one axis while OSA controls the other (MI/OSA and OSA/MI). RESULTS Our results show that MI+OSA reached the highest average online performance in 2D tasks at 49% Percent Valid Correct (PVC) and statistically outperformed both MI alone (42%) and OSA alone (45%). MI+OSA performed similarly to each subject's best individual method between MI alone and OSA alone (50%), and 9 subjects reached their highest average BCI performance using MI+OSA. CONCLUSION Integrating MI and OSA leads to improved performance over both individual methods at the group level and is the best BCI paradigm option for some subjects. SIGNIFICANCE This work proposes a new BCI control paradigm that integrates two existing paradigms and demonstrates its value by showing that it can improve users' BCI performance.
Affiliation(s)
- Dylan Forenzo: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Yixuan Liu: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Jeehyun Kim: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Yidan Ding: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Taehyung Yoon: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Bin He: Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
43
Chan RW, Edelman BJ, Tsang SY, Gao K, Yu ACH. Opportunities for System Neuroscience. ADVANCES IN NEUROBIOLOGY 2024; 41:247-253. [PMID: 39589717 DOI: 10.1007/978-3-031-69188-1_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/27/2024]
Abstract
Systems neuroscience explores the intricate organization and dynamic function of neural circuits and networks within the brain. By elucidating how these complex networks integrate to execute mental operations, this field aims to deepen our understanding of the biological basis of cognition, behavior, and consciousness. In this chapter, we outline the promising future of systems neuroscience, highlighting the emerging opportunities afforded by powerful technological innovations and their applications. Cutting-edge tools such as awake functional MRI, ultrahigh field strength neuroimaging, functional ultrasound imaging, and optoacoustic techniques have revolutionized the field, enabling unprecedented observation and analysis of brain activity. The insights gleaned from these advanced methodologies have empowered the development of a suite of exciting applications across diverse domains. These include brain-machine interfaces (BMIs) for neural prosthetics, cognitive enhancement therapies, personalized mental health interventions, and precision medicine approaches. As our comprehension of neural systems continues to grow, it is envisioned that these and related applications will become increasingly refined and impactful in improving human health and well-being.
Affiliation(s)
- Russell W Chan: Hai Kang Life Corporation Ltd, Hong Kong, China; Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Hong Kong, China
- Bradley Jay Edelman: Brain-Wide Circuits for Behavior Research Group, Max Planck Institute of Biological Intelligence, Planegg, Germany; Emotion Research Department, Max Planck Institute of Psychiatry, Munich, Germany
- Kai Gao: Children's Medical Center, Peking University First Hospital, Beijing, China
- Albert Cheung-Hoi Yu: Hai Kang Life Corporation Ltd, Hong Kong, China; Neuroscience Research Institute, Peking University, Beijing, China
44
Lin S, Jiang J, Huang K, Li L, He X, Du P, Wu Y, Liu J, Li X, Huang Z, Zhou Z, Yu Y, Gao J, Lei M, Wu H. Advanced Electrode Technologies for Noninvasive Brain-Computer Interfaces. ACS NANO 2023; 17:24487-24513. [PMID: 38064282 DOI: 10.1021/acsnano.3c06781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2023]
Abstract
Brain-computer interfaces (BCIs) have garnered significant attention in recent years due to their potential applications in medical, assistive, and communication technologies. Among these, noninvasive BCIs stand out because they provide a safe and user-friendly method for interacting with the human brain. In this work, we provide a comprehensive overview of the latest developments and advancements in the materials, design, and applications of noninvasive BCI electrode technology. We also examine the challenges and limitations currently faced by noninvasive BCI electrode technology and sketch out a technological roadmap along three dimensions: materials and design; performance; and mode and function. We aim to unite research efforts within the field of noninvasive BCI electrode technology, focusing on the consolidation of shared goals and fostering integrated development strategies among a diverse array of multidisciplinary researchers.
Affiliation(s)
- Sen Lin: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Jingjing Jiang: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Kai Huang: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China; State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Lei Li: National Engineering Research Center of Electric Vehicles, Beijing Institute of Technology, Beijing 100081, China
- Xian He: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Peng Du: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Yufeng Wu: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
- Junchen Liu: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China; State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xilin Li: School of Physical Science and Technology, Guangxi University, Nanning 530004, China; Advanced Institute for Brain and Intelligence, Guangxi University, Nanning 530004, China
- Zhibao Huang: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Zenan Zhou: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Yuanhang Yu: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Jiaxin Gao: School of Physical Science and Technology, Guangxi University, Nanning 530004, China
- Ming Lei: State Key Laboratory of Information Photonics and Optical Communications and School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Hui Wu: State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084, China
45
Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:7469-7482. [PMID: 36251899 DOI: 10.1109/tcyb.2022.3211694] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been utilized to help patients regain motor function and have recently been validated for use in healthy people because of their ability to directly decipher human intentions. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication tool between humans and machines. In this study, neural languages based on speech imagery were decoded directly from the human mind using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural languages. We successfully demonstrated a BMI system that allows a variety of scenarios, such as essential activity, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.
46
Wang W, Qi F, Wipf DP, Cai C, Yu T, Li Y, Zhang Y, Yu Z, Wu W. Sparse Bayesian Learning for End-to-End EEG Decoding. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:15632-15649. [PMID: 37506000 DOI: 10.1109/tpami.2023.3299568] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/30/2023]
Abstract
Decoding brain activity from non-invasive electroencephalography (EEG) is crucial for brain-computer interfaces (BCIs) and the study of brain disorders. Notably, end-to-end EEG decoding has gained widespread popularity in recent years owing to the remarkable advances in deep learning research. However, many EEG studies suffer from limited sample sizes, making it difficult for existing deep learning models to effectively generalize to highly noisy EEG data. To address this fundamental limitation, this paper proposes a novel end-to-end EEG decoding algorithm that utilizes a low-rank weight matrix to encode both spatio-temporal filters and the classifier, all optimized under a principled sparse Bayesian learning (SBL) framework. Importantly, this SBL framework also enables us to learn hyperparameters that optimally penalize the model in a Bayesian fashion. The proposed decoding algorithm is systematically benchmarked on five motor imagery BCI EEG datasets (N = 192) and an emotion recognition EEG dataset (N = 45), in comparison with several contemporary algorithms, including end-to-end deep-learning-based EEG decoding algorithms. The classification results demonstrate that our algorithm significantly outperforms the competing algorithms while yielding neurophysiologically meaningful spatio-temporal patterns. Our algorithm therefore advances the state-of-the-art by providing a novel EEG-tailored machine learning tool for decoding brain activity.
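To make the low-rank idea summarized above more concrete, the short Python sketch below classifies trials with a weight matrix factored as W = A B^T, so that spatial filters (columns of A) and temporal filters (columns of B) are learned jointly with the classifier. It is only an illustration under assumed shapes and synthetic data; it uses plain gradient descent on a logistic loss rather than the authors' sparse Bayesian learning procedure, and every name in it is hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_samples, rank = 200, 22, 256, 4

    # Synthetic EEG-like trials and binary labels (placeholders for real data).
    X = rng.standard_normal((n_trials, n_channels, n_samples))
    y = rng.integers(0, 2, n_trials)

    # Low-rank weight matrix W = A @ B.T: A holds spatial filters, B temporal filters.
    A = 0.01 * rng.standard_normal((n_channels, rank))
    B = 0.01 * rng.standard_normal((n_samples, rank))
    bias = 0.0

    def score(trial):
        # <W, trial> with W = A @ B.T, i.e. the sum over ranks of a_k' @ trial @ b_k
        return float(np.sum((A.T @ trial) * B.T)) + bias

    # Plain gradient descent on the logistic loss (a stand-in for the paper's
    # sparse Bayesian optimization, which also learns the penalties).
    lr = 1e-3
    for epoch in range(30):
        gA, gB, gb = np.zeros_like(A), np.zeros_like(B), 0.0
        for trial, label in zip(X, y):
            err = 1.0 / (1.0 + np.exp(-score(trial))) - label
            gA += err * (trial @ B)      # d<W,X>/dA = X @ B
            gB += err * (trial.T @ A)    # d<W,X>/dB = X' @ A
            gb += err
        A -= lr * gA / n_trials
        B -= lr * gB / n_trials
        bias -= lr * gb / n_trials

    acc = np.mean([(score(t) > 0) == bool(l) for t, l in zip(X, y)])
    print("training accuracy:", acc)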
47
Vukelić M, Bui M, Vorreuther A, Lingelbach K. Combining brain-computer interfaces with deep reinforcement learning for robot training: a feasibility study in a simulation environment. FRONTIERS IN NEUROERGONOMICS 2023; 4:1274730. [PMID: 38234482 PMCID: PMC10790930 DOI: 10.3389/fnrgo.2023.1274730] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 10/31/2023] [Indexed: 01/19/2024]
Abstract
Deep reinforcement learning (RL) is used as a strategy to teach robot agents how to autonomously learn complex tasks. While sparsity is a natural way to define a reward in realistic robot scenarios, it provides poor learning signals for the agent, thus making the design of good reward functions challenging. To overcome this challenge, learning from human feedback through an implicit brain-computer interface (BCI) is used. We combined a BCI with deep RL for robot training in a physically realistic 3-D simulation environment. In a first study, we compared the feasibility of different electroencephalography (EEG) systems (wet vs. dry electrodes) and their application for the automatic classification of perceived errors during a robot task with different machine learning models. In a second study, we compared the performance of BCI-based deep RL training to feedback explicitly given by participants. Our findings from the first study indicate that a high-quality dry-electrode EEG system can provide a robust and fast method for automatically assessing robot behavior using a sophisticated convolutional neural network model. The results of our second study show that the implicit BCI-based deep RL version, in combination with the dry EEG system, can significantly accelerate the learning process in a realistic 3-D robot simulation environment. The performance of the BCI-trained deep RL model was even comparable to that achieved by the approach with explicit human feedback. Our findings support BCI-based deep RL methods as a valid alternative in human-robot applications where cognitively demanding explicit human feedback is not available.
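As a rough illustration of how implicit BCI feedback can replace an explicit reward during robot training, the sketch below maps the output of a pretrained ErrP classifier to a per-step reward inside a toy interaction loop. The epoch acquisition, classifier, threshold, and error rate are all hypothetical placeholders rather than the authors' pipeline, which uses a convolutional neural network and a full deep RL agent.

    import numpy as np

    rng = np.random.default_rng(1)

    def record_eeg_epoch(action_was_erroneous):
        """Placeholder for acquiring a post-action EEG epoch (channels x samples)."""
        epoch = rng.standard_normal((32, 128))
        if action_was_erroneous:
            epoch[8:12, 40:70] += 0.8   # crude stand-in for an ErrP deflection
        return epoch

    def errp_probability(epoch):
        """Placeholder for a pretrained ErrP classifier; returns P(error)."""
        return 1.0 / (1.0 + np.exp(-epoch[8:12, 40:70].mean() * 5.0))

    def implicit_reward(epoch, threshold=0.5):
        """Map the decoded ErrP probability to an RL reward: error -> -1, otherwise +1."""
        return -1.0 if errp_probability(epoch) > threshold else 1.0

    # Toy interaction loop: the agent acts, the user silently judges each action,
    # and the decoded judgment becomes the reward fed back to the learner.
    total = 0.0
    for step in range(20):
        action_was_erroneous = rng.random() < 0.3   # hypothetical task outcome
        epoch = record_eeg_epoch(action_was_erroneous)
        total += implicit_reward(epoch)
    print("accumulated implicit reward:", total)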
Affiliation(s)
- Mathias Vukelić: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Michael Bui: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
- Anna Vorreuther: Applied Neurocognitive Systems, Institute of Human Factors and Technology Management (IAT), University of Stuttgart, Stuttgart, Germany
- Katharina Lingelbach: Applied Neurocognitive Systems, Fraunhofer Institute for Industrial Engineering (IAO), Stuttgart, Germany
48
Lin C, Zhang C, Xu J, Liu R, Leng Y, Fu C. Neural Correlation of EEG and Eye Movement in Natural Grasping Intention Estimation. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4329-4337. [PMID: 37883284 DOI: 10.1109/tnsre.2023.3327907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2023]
Abstract
Decoding the user's natural grasp intent enhances the application of wearable robots, improving the daily lives of individuals with disabilities. Electroencephalogram (EEG) and eye movements are two natural representations produced when users generate grasp intent in their minds, and current studies decode human intent by fusing EEG and eye movement signals. However, the neural correlation between these two signals remains unclear. Thus, this paper aims to explore the consistency between EEG and eye movements in natural grasping intention estimation. Specifically, six grasp intent pairs are decoded by combining feature vectors and utilizing the optimal classifier. Extensive experimental results indicate that the coupling between the EEG and eye-movement intent patterns remains intact when the user generates a natural grasp intent and, concurrently, that the EEG pattern is consistent with the eye-movement pattern across the task pairs. Moreover, the findings reveal a solid connection between EEG and eye movements even when taking into account the cortical EEG (originating from the visual cortex or motor cortex) and the presence of a suboptimal classifier. Overall, this work uncovers the coupling correlation between EEG and eye movements and provides a reference for intention estimation.
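The feature-level fusion step described above (combining EEG and eye-movement feature vectors before classification) can be pictured with a brief sketch. The feature dimensions, synthetic data, and support-vector classifier below are assumptions for illustration, not the paper's actual features or its optimal classifier.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_trials = 120

    # Placeholder feature vectors; in practice these would come from, e.g.,
    # band-power or CSP features (EEG) and fixation/saccade statistics (eye tracking).
    eeg_features = rng.standard_normal((n_trials, 40))
    eye_features = rng.standard_normal((n_trials, 12))
    labels = rng.integers(0, 2, n_trials)            # one grasp-intent pair

    fused = np.hstack([eeg_features, eye_features])  # simple feature-level fusion

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    for name, feats in [("EEG only", eeg_features),
                        ("eye only", eye_features),
                        ("fused", fused)]:
        acc = cross_val_score(clf, feats, labels, cv=5).mean()
        print(f"{name}: {acc:.2f}")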
49
Meng L, Jiang X, Huang J, Li W, Luo H, Wu D. User Identity Protection in EEG-Based Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3576-3586. [PMID: 37651476 DOI: 10.1109/tnsre.2023.3310883] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
A brain-computer interface (BCI) establishes a direct communication pathway between the brain and an external device. Electroencephalogram (EEG) is the most popular input signal in BCIs, due to its convenience and low cost. Most research on EEG-based BCIs focuses on the accurate decoding of EEG signals; however, EEG signals also contain rich private information, e.g., user identity, emotion, and so on, which should be protected. This paper first exposes a serious privacy problem in EEG-based BCIs, i.e., the user identity in EEG data can be easily learned so that different sessions of EEG data from the same user can be associated together to more reliably mine private information. To address this issue, we further propose two approaches to convert the original EEG data into identity-unlearnable EEG data, i.e., removing the user identity information while maintaining the good performance on the primary BCI task. Experiments on seven EEG datasets from five different BCI paradigms showed that on average the generated identity-unlearnable EEG data can reduce the user identification accuracy from 70.01% to at most 21.36%, greatly facilitating user privacy protection in EEG-based BCIs.
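The identity-leakage problem exposed above can be demonstrated in a few lines: train a classifier to predict which user a trial came from and compare its accuracy with chance. The features, classifier, and synthetic per-user offsets below are stand-ins; the sketch shows the privacy risk only, not the paper's identity-unlearnable transformation.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_users, trials_per_user, n_features = 8, 60, 64

    # Placeholder per-user EEG feature vectors with a small user-specific offset,
    # mimicking the subject-specific signature that enables identification.
    features, user_ids = [], []
    for user in range(n_users):
        offset = 0.5 * rng.standard_normal(n_features)
        features.append(rng.standard_normal((trials_per_user, n_features)) + offset)
        user_ids.append(np.full(trials_per_user, user))
    X = np.vstack(features)
    y = np.concatenate(user_ids)

    # If this accuracy sits well above chance, user identity leaks from the features.
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"user-identification accuracy: {acc:.2f} (chance = {1 / n_users:.2f})")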
50
Kosnoff J, Yu K, Liu C, He B. Transcranial Focused Ultrasound to V5 Enhances Human Visual Motion Brain-Computer Interface by Modulating Feature-Based Attention. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.09.04.556252. [PMID: 37732253 PMCID: PMC10508752 DOI: 10.1101/2023.09.04.556252] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/22/2023]
Abstract
Paralysis affects roughly 1 in 50 Americans. While there is no cure for the condition, brain-computer interfaces (BCI) can allow users to control a device with their mind, bypassing the paralyzed region. Non-invasive BCIs still have high error rates, which are hypothesized to be reducible with concurrent targeted neuromodulation. This study examines whether transcranial focused ultrasound (tFUS) modulation can improve BCI outcomes and, through high-density electroencephalography (EEG)-based source imaging (ESI) analyses, what the underlying mechanism of action might be. V5-targeted tFUS significantly reduced the error rate for the BCI speller task. ESI analyses showed significantly increased theta activity in the tFUS condition both at V5 and downstream along the dorsal visual processing pathway. Correlation analysis indicates that the dorsal processing pathway connection was preserved during tFUS stimulation, whereas extraneous connections were severed. These results suggest that the mechanism of action of V5-targeted tFUS is to raise the brain's feature-based attention to visual motion.