1.
Pan L, Wang K, Huang Y, Sun X, Meng J, Yi W, Xu M, Jung TP, Ming D. Enhancing motor imagery EEG classification with a Riemannian geometry-based spatial filtering (RSF) method. Neural Netw 2025;188:107511. PMID: 40294568. DOI: 10.1016/j.neunet.2025.107511.
Abstract
Motor imagery (MI) refers to the mental simulation of movements without physical execution, and it can be captured using electroencephalography (EEG). This area has garnered significant research interest due to its substantial potential in brain-computer interface (BCI) applications, especially for individuals with physical disabilities. However, accurate classification of MI EEG signals remains a major challenge due to their non-stationary nature, low signal-to-noise ratio, and sensitivity to both external and physiological noise. Traditional classification methods, such as common spatial pattern (CSP), often assume that the data is stationary and Gaussian, which limits their applicability in real-world scenarios where these assumptions do not hold. These challenges highlight the need for more robust methods to improve classification accuracy in MI-BCI systems. To address these issues, this study introduces a Riemannian geometry-based spatial filtering (RSF) method that projects EEG signals into a lower-dimensional subspace, maximizing the Riemannian distance between covariance matrices from different classes. By leveraging the inherent geometric properties of EEG data, RSF enhances the discriminative power of the features while maintaining robustness against noise. The performance of RSF was evaluated in combination with ten commonly used MI decoding algorithms, including CSP with linear discriminant analysis (CSP-LDA), Filter Bank CSP (FBCSP), Minimum Distance to Riemannian Mean (MDM), Tangent Space Mapping (TSM), EEGNet, ShallowConvNet (sCNN), DeepConvNet (dCNN), FBCNet, Graph-CSPNet, and LMDA-Net, using six publicly available MI-BCI datasets. The results demonstrate that RSF significantly improves classification accuracy and reduces computational time, particularly for deep learning models with high computational complexity. 
These findings underscore the potential of RSF as an effective spatial filtering approach for MI EEG classification, providing new insights and opportunities for the development of robust MI-BCI systems. The code for this research is available at https://github.com/PLC-TJU/RSF.
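The core quantity RSF maximizes, the affine-invariant Riemannian distance between class covariance matrices, can be sketched as follows. This is an illustrative computation only, not the authors' released implementation (which is available at the GitHub link above); the regularization constant is an assumption added for numerical stability:

```python
import numpy as np
from scipy.linalg import eigh

def spatial_covariance(X, reg=1e-6):
    """Sample spatial covariance of one EEG trial X (channels x samples)."""
    C = X @ X.T / X.shape[1]
    # Small diagonal loading keeps C symmetric positive definite (SPD)
    return C + reg * np.eye(C.shape[0])

def riemannian_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B.

    Equals the square root of the sum of squared logs of the
    generalized eigenvalues of the pencil (B, A).
    """
    w = eigh(B, A, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))
```

A spatial filter W (channels x k) would then be chosen so that the distance between the filtered class-mean covariances, `riemannian_distance(W.T @ C1 @ W, W.T @ C2 @ W)`, is maximized, which is the objective the abstract describes.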
Affiliation(s)
- Lincong Pan
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, PR China.
- Kun Wang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Yongzhi Huang
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Xinwei Sun
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, PR China.
- Jiayuan Meng
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Weibo Yi
- Beijing Machine and Equipment Institute, Beijing 100192, PR China.
- Minpeng Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
- Tzyy-Ping Jung
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Swartz Center for Computational Neuroscience, University of California, San Diego, CA 92093, USA.
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, PR China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, PR China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, PR China.
2.
Larsen OFP, Tresselt WG, Lorenz EA, Holt T, Sandstrak G, Hansen TI, Su X, Holt A. A method for synchronized use of EEG and eye tracking in fully immersive VR. Front Hum Neurosci 2024;18:1347974. PMID: 38468815. PMCID: PMC10925625. DOI: 10.3389/fnhum.2024.1347974.
Abstract
This study explores the synchronization of multimodal physiological data streams, in particular the integration of electroencephalography (EEG) with a virtual reality (VR) headset featuring eye-tracking capabilities. A potential use case for the synchronized data streams is demonstrated by implementing a hybrid steady-state visually evoked potential (SSVEP)-based brain-computer interface (BCI) speller within a fully immersive VR environment. The hardware latency analysis reveals an average offset of 36 ms between the EEG and eye-tracking data streams and a mean jitter of 5.76 ms. The study further presents a proof-of-concept BCI speller in VR, showcasing its potential for real-world applications. The findings highlight the feasibility of combining commercial EEG and VR technologies for neuroscientific research and open new avenues for studying brain activity in ecologically valid VR environments. Future research could focus on refining the synchronization methods and exploring applications in various contexts, such as learning and social interactions.
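The offset and jitter figures reported above are summary statistics over paired event timestamps from the two streams. One common way to compute such statistics (a generic sketch under that assumption, not necessarily the authors' exact procedure) is:

```python
import numpy as np

def offset_and_jitter(t_eeg_ms, t_eye_ms):
    """Mean latency offset and jitter between two streams' timestamps (ms)
    for the same physical events; jitter here is the sample std of the
    per-event latency differences."""
    d = np.asarray(t_eye_ms, dtype=float) - np.asarray(t_eeg_ms, dtype=float)
    return d.mean(), d.std(ddof=1)
```

For example, event timestamps `[0, 100, 200]` ms on the EEG stream paired with `[36, 135, 237]` ms on the eye-tracking stream give an offset of 36 ms and a jitter of 1 ms.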
Affiliation(s)
- Olav F. P. Larsen
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- William G. Tresselt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Emanuel A. Lorenz
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tomas Holt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Grethe Sandstrak
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tor I. Hansen
- Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Acquired Brain Injury, St. Olav's University Hospital, Trondheim, Norway
- Xiaomeng Su
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Alexander Holt
- Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
3.
Zhu S, Yang J, Ding P, Wang F, Gong A, Fu Y. Optimization of SSVEP-BCI Virtual Reality Stereo Stimulation Parameters Based on Knowledge Graph. Brain Sci 2023;13:710. PMID: 37239182. DOI: 10.3390/brainsci13050710.
Abstract
The steady-state visually evoked potential (SSVEP) underlies an important class of BCIs with various potential applications, including in virtual environments built with virtual reality (VR). However, compared with VR research, most visual stimuli used in SSVEP-BCIs are plane stimulation targets (PSTs); only a few studies have used stereo stimulation targets (SSTs). To optimize the parameters of virtual SSTs for the SSVEP-BCI, this paper presents a parameter knowledge graph. First, an online VR stereoscopic-stimulation SSVEP-BCI system was built, and a parameter dictionary for the VR stereoscopic stimulation parameters (shape, color, and frequency) was established. Online experimental results from 10 subjects under different parameter combinations were collected, and a knowledge graph was constructed to optimize the SST parameters. The best classification performances for the shape, color, and frequency parameters were sphere (91.85%), blue (94.26%), and 13 Hz (95.93%). SSVEP-BCI performance varies across combinations of VR stereo stimulation parameters, and the knowledge graph of the stimulus parameters helps select appropriate SST parameters intuitively and effectively. The knowledge graph of stereo-target stimulation parameters presented in this work is expected to help translate SSVEP-BCI applications into VR.
Affiliation(s)
- Shixuan Zhu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
- Jingcheng Yang
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
- Peng Ding
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
- Fan Wang
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
- Anmin Gong
- College of Information Engineering, Engineering University of PAP, Xi'an 710018, China
- Yunfa Fu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650032, China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650032, China
4.
Andrews A. Mind Power: Thought-controlled Augmented Reality for Basic Science Education. Med Sci Educ 2022;32:1571-1573. PMID: 36532389. PMCID: PMC9755389. DOI: 10.1007/s40670-022-01659-x.
Abstract
The integration of augmented reality (AR) and brain-computer interface (BCI) technologies holds tremendous potential to improve learning, communication, and teamwork in basic science education. The current study presents a novel interface technology solution that enables AR-BCI interoperability and allows learners to control digital objects in AR using neural commands.
Affiliation(s)
- Anya Andrews
- University of Central Florida (UCF) College of Medicine, Orlando, FL, USA
5.
Shishkin SL. Active Brain-Computer Interfacing for Healthy Users. Front Neurosci 2022;16:859887. PMID: 35546879. PMCID: PMC9083451. DOI: 10.3389/fnins.2022.859887.
6.
Andrews A. Integration of Augmented Reality and Brain-Computer Interface Technologies for Health Care Applications: Exploratory and Prototyping Study. JMIR Form Res 2022;6:e18222. PMID: 35451963. PMCID: PMC9073621. DOI: 10.2196/18222.
Abstract
Background: Augmented reality (AR) and brain-computer interfaces (BCI) are promising technologies with tremendous potential to revolutionize health care. While interest in these technologies for medical applications has grown in recent years, the combined use of AR and BCI remains a fairly unexplored area that offers significant opportunities for improving health care professional education and clinical practice. This paper describes a recent study exploring the integration of AR and BCI technologies for health care applications.
Objective: The described effort aims to advance understanding of how AR and BCI technologies can work together effectively to transform modern health care practice by providing new mechanisms to improve patient and provider learning, communication, and shared decision-making.
Methods: The study methods included an environmental scan of AR and BCI technologies currently used in health care, a use-case analysis for a combined AR-BCI capability, and development of an integrated AR-BCI prototype solution for health care applications.
Results: The study produced a novel interface technology solution that enables interoperability between consumer-grade wearable AR and BCI devices and gives users the ability to control digital objects in augmented reality using neural commands. The article discusses this solution in the context of practical digital health use cases, developed during the study, where the combined AR and BCI technologies are anticipated to produce the most impact.
Conclusions: As one of the pioneering efforts in AR-BCI integration, the study presents a practical implementation pathway and provides directions for future research and innovation in this area.
Affiliation(s)
- Anya Andrews
- Department of Internal Medicine, College of Medicine, University of Central Florida, Orlando, FL, United States
7.
Ravi A, Lu J, Pearce S, Jiang N. Enhanced System Robustness of Asynchronous BCI in Augmented Reality using Steady-state Motion Visual Evoked Potential. IEEE Trans Neural Syst Rehabil Eng 2022;30:85-95. PMID: 34990366. DOI: 10.1109/tnsre.2022.3140772.
Abstract
This study evaluated the effect of a change in background on steady-state visually evoked potential (SSVEP) and steady-state motion visually evoked potential (SSMVEP) based brain-computer interfaces (BCI) in a small-profile augmented reality (AR) headset. A four-target SSVEP and SSMVEP BCI was implemented using the Cognixion AR headset prototype, and an active background (AB) and a non-active background (NB) were evaluated. The signal characteristics and classification performance of the two BCI paradigms were studied. Offline analysis was performed using canonical correlation analysis (CCA) and a complex-spectrum-based convolutional neural network (C-CNN). Finally, the asynchronous pseudo-online performance of the SSMVEP BCI was evaluated. Signal analysis revealed that the SSMVEP stimulus was more robust to changes in background than the SSVEP stimulus in AR. The decoding performance revealed that the C-CNN method outperformed CCA for both stimulus types and the NB background, in agreement with results in the literature. The average offline accuracies of C-CNN for W = 1 s were (NB vs. AB): SSVEP: 82% ± 15% vs. 60% ± 21%; SSMVEP: 71.4% ± 22% vs. 63.5% ± 18%. Additionally, for W = 2 s, the AR-SSMVEP BCI with the C-CNN method reached 83.3% ± 27% (NB) and 74.1% ± 22% (AB). The results suggest that with the C-CNN method, the AR-SSMVEP BCI is both robust to changes in background conditions and provides higher decoding accuracy than the AR-SSVEP BCI. This study presents novel results that highlight the robustness and practical applicability of SSMVEP BCIs developed with a low-cost AR headset.