1. Akhter J, Nazeer H, Naseer N, Naeem R, Kallu KD, Lee J, Ko SY. Improved performance of fNIRS-BCI by stacking of deep learning-derived frequency domain features. PLoS One 2025; 20:e0314447. PMID: 40245060; PMCID: PMC12005509; DOI: 10.1371/journal.pone.0314447.
Abstract
Functional near-infrared spectroscopy-based brain-computer interface (fNIRS-BCI) systems recognize patterns in brain signals and generate control commands, thereby enabling individuals with motor disabilities to regain autonomy. In this study, hand-gripping data were acquired with an fNIRS neuroimaging system, preprocessed in nirsLAB, and features were extracted with deep learning (DL) algorithms. Two methods, stack and FFT, are proposed for feature extraction and classification. Convolutional neural networks (CNN), long short-term memory (LSTM), and bidirectional long short-term memory (Bi-LSTM) networks are employed to extract features. The stack method classifies these features with a stacking model, while the FFT method first enhances the features by applying a fast Fourier transform and then classifies them with the same stacking model. The proposed methods are applied to fNIRS signals from twenty participants performing a two-class hand-gripping motor activity, and their classification performance is compared with conventional CNN, LSTM, and Bi-LSTM algorithms and with one another. The FFT and stack methods yield classification accuracies of 90.11% and 87.00%, respectively, significantly higher than those achieved by the conventional CNN (85.16%), LSTM (79.46%), and Bi-LSTM (81.88%) algorithms. The results show that the proposed stack and FFT methods can be used effectively for two- and three-class classification problems in fNIRS-BCI applications.
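As a minimal illustration of the FFT-and-stack pipeline described in the abstract, the sketch below applies an FFT magnitude transform to stand-in DL-derived feature vectors and classifies them with a stacking ensemble. All shapes, estimators, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): FFT-enhanced features followed by a
# stacking classifier, roughly mirroring the "FFT" method described above.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_dl = rng.normal(size=(200, 64))      # stand-in for DL-derived feature vectors (trials x features)
y = rng.integers(0, 2, size=200)       # two-class hand-gripping labels

# "FFT" step: augment each feature vector with its FFT magnitude spectrum
X_fft = np.abs(np.fft.rfft(X_dl, axis=1))
X = np.hstack([X_dl, X_fft])

# "stack" step: base learners whose predictions feed a meta-classifier
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(max_iter=1000),
)
print("CV accuracy:", cross_val_score(stack, X, y, cv=5).mean())
```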
Affiliation(s)
- Jamila Akhter: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Hammad Nazeer: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Noman Naseer: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Rehan Naeem: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Karam Dad Kallu: MeRIC-Lab (Medical Robotics & Intelligent Control Laboratory), School of Mechanical Engineering, Chonnam National University, Gwangju, South Korea
- Jiye Lee: MeRIC-Lab (Medical Robotics & Intelligent Control Laboratory), School of Mechanical Engineering, Chonnam National University, Gwangju, South Korea
- Seong Young Ko: MeRIC-Lab (Medical Robotics & Intelligent Control Laboratory), School of Mechanical Engineering, Chonnam National University, Gwangju, South Korea
2. Hernández-Gloria JJ, Jaramillo-Gonzalez A, Savić AM, Mrachacz-Kersting N. Toward brain-computer interface speller with movement-related cortical potentials as control signals. Front Hum Neurosci 2025; 19:1539081. PMID: 40241786; PMCID: PMC11999959; DOI: 10.3389/fnhum.2025.1539081.
Abstract
Brain-computer interface (BCI) spellers offer a promising alternative for individuals with Amyotrophic Lateral Sclerosis (ALS) by facilitating communication without relying on muscle activity. This study assessed the feasibility of using movement-related cortical potentials (MRCPs) as a control signal for a BCI speller in an offline setting. Unlike motor imagery-based BCIs, this study focused on executed movements. Fifteen healthy subjects performed three spelling tasks that involved choosing specific letters displayed on a computer screen by performing a ballistic dorsiflexion of the dominant foot. Electroencephalographic signals were recorded from 10 sites centered around Cz. Three conditions were tested to evaluate MRCP performance under varying task demands: a control condition using repeated selections of the letter "O" to isolate movement-related brain activity; a phrase-spelling condition with structured text ("HELLO IM FINE") to simulate a meaningful spelling task with moderate cognitive load; and a random condition using a randomized sequence of letters to introduce higher task complexity by removing linguistic or semantic context. The success rate, defined as the presence of an MRCP, was determined manually. It was approximately 69% for both the control and phrase conditions, with a slight decrease in the random condition, likely due to increased task complexity. Significant differences in MRCP features were observed between conditions with Laplacian filtering, whereas no significant differences were found in single-site Cz recordings. These results contribute to the development of MRCP-based BCI spellers by demonstrating their feasibility in a spelling task. However, further research is required to implement and validate real-time applications.
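A minimal sketch of the kind of small-Laplacian spatial filtering around Cz that the study contrasts with single-site recordings; the montage, sampling rate, and data below are assumed stand-ins, not the study's recordings.

```python
# Minimal sketch (assumed layout, not the study's code): a small surface
# Laplacian around Cz followed by epoch averaging to expose the MRCP.
import numpy as np

fs = 500                                       # assumed sampling rate (Hz)
n_epochs, n_samples = 30, 3 * fs               # 3-s epochs around movement onset
channels = ["Cz", "FCz", "CPz", "C1", "C2"]    # centre channel plus four neighbours (assumed montage)
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_epochs, len(channels), n_samples))  # stand-in EEG (epochs x channels x samples)

cz = channels.index("Cz")
neighbours = [channels.index(c) for c in ["FCz", "CPz", "C1", "C2"]]

# Surface Laplacian: centre channel minus the mean of its neighbours
laplacian = eeg[:, cz, :] - eeg[:, neighbours, :].mean(axis=1)

# Grand average over epochs; the MRCP appears as a slow negative deflection
mrcp = laplacian.mean(axis=0)
print(mrcp.shape)  # (n_samples,)
```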
Affiliation(s)
- José Jesús Hernández-Gloria: Laboratory for Biomedical Microtechnology, Department of Microsystems Engineering-IMTEK, University of Freiburg, Freiburg, Germany; Institute of Sport and Sport Science, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
- Andrej M. Savić: Science and Research Centre, University of Belgrade – School of Electrical Engineering, Belgrade, Serbia
- Natalie Mrachacz-Kersting: Institute of Sport and Sport Science, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany; BrainLinks-BrainTools Center, IMBIT, Albert-Ludwigs University of Freiburg, Freiburg, Germany
3. Zhong XC, Wang Q, Liu D, Chen Z, Liao JX, Sun J, Zhang Y, Fan FL. EEG-DG: A Multi-Source Domain Generalization Framework for Motor Imagery EEG Classification. IEEE J Biomed Health Inform 2025; 29:2484-2495. PMID: 39052465; DOI: 10.1109/jbhi.2024.3431230.
Abstract
Motor imagery EEG classification plays a crucial role in non-invasive Brain-Computer Interface (BCI) research. However, the performance of classification is affected by the non-stationarity and individual variations of EEG signals. Simply pooling EEG data with different statistical distributions to train a classification model can severely degrade the generalization performance. To address this issue, the existing methods primarily focus on domain adaptation, which requires access to the test data during training. This is unrealistic and impractical in many EEG application scenarios. In this paper, we propose a novel multi-source domain generalization framework called EEG-DG, which leverages multiple source domains with different statistical distributions to build generalizable models on unseen target EEG data. We optimize both the marginal and conditional distributions to ensure the stability of the joint distribution across source domains and extend it to a multi-source domain generalization framework to achieve domain-invariant feature representation, thereby alleviating calibration efforts. Systematic experiments conducted on a simulative dataset, BCI competition IV 2a, 2b, and OpenBMI datasets, demonstrate the superiority and competitive performance of our proposed framework over other state-of-the-art methods. Specifically, EEG-DG achieves average classification accuracies of 81.79% and 87.12% on datasets IV-2a and IV-2b, respectively, and 78.37% and 76.94% for inter-session and inter-subject evaluations on dataset OpenBMI, which even outperforms some domain adaptation methods.
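The marginal-distribution alignment that frameworks such as EEG-DG rely on is often expressed through a discrepancy measure between domains; the sketch below computes a generic RBF-kernel maximum mean discrepancy (MMD) between two source-domain feature batches. This is an illustrative choice, not necessarily EEG-DG's exact objective, and the feature data are simulated stand-ins.

```python
# Minimal sketch (a generic choice, not necessarily EEG-DG's exact loss):
# squared RBF-kernel MMD that a domain-generalization model could minimise
# between feature batches drawn from different source subjects.
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
feat_domain_a = rng.normal(0.0, 1.0, size=(64, 16))   # features from one source subject
feat_domain_b = rng.normal(0.5, 1.0, size=(64, 16))   # features from another source subject
print("MMD^2 between domains:", rbf_mmd2(feat_domain_a, feat_domain_b))
```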
4. Liu XY, Wang WL, Liu M, Chen MY, Pereira T, Doda DY, Ke YF, Wang SY, Wen D, Tong XG, Li WG, Yang Y, Han XD, Sun YL, Song X, Hao CY, Zhang ZH, Liu XY, Li CY, Peng R, Song XX, Yasi A, Pang MJ, Zhang K, He RN, Wu L, Chen SG, Chen WJ, Chao YG, Hu CG, Zhang H, Zhou M, Wang K, Liu PF, Chen C, Geng XY, Qin Y, Gao DR, Song EM, Cheng LL, Chen X, Ming D. Recent applications of EEG-based brain-computer-interface in the medical field. Mil Med Res 2025; 12:14. PMID: 40128831; PMCID: PMC11931852; DOI: 10.1186/s40779-025-00598-z.
Abstract
Brain-computer interfaces (BCIs) represent an emerging technology that facilitates direct communication between the brain and external devices. In recent years, numerous review articles have explored various aspects of BCIs, including their fundamental principles, technical advancements, and applications in specific domains. However, these reviews often focus on signal processing, hardware development, or limited applications such as motor rehabilitation or communication. This paper aims to offer a comprehensive review of recent electroencephalogram (EEG)-based BCI applications in the medical field across 8 critical areas, encompassing rehabilitation, daily communication, epilepsy, cerebral resuscitation, sleep, neurodegenerative diseases, anesthesiology, and emotion recognition. Moreover, the current challenges and future trends of BCIs were also discussed, including personal privacy and ethical concerns, network security vulnerabilities, safety issues, and biocompatibility.
Affiliation(s)
- Xiu-Yun Liu: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300380, China; School of Pharmaceutical Science and Technology, Tianjin University, Tianjin, 300072, China
- Wen-Long Wang: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Miao Liu: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Ming-Yi Chen: Department of Micro/Nano Electronics, Shanghai Jiaotong University, Shanghai, 200240, China
- Tânia Pereira: Institute for Systems and Computer Engineering, Technology and Science, 4099-002, Porto, Portugal
- Desta Yakob Doda: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Yu-Feng Ke: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Shou-Yan Wang: Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, 200433, China
- Dong Wen: School of Intelligence Science and Technology, University of Sciences and Technology Beijing, Beijing, 100083, China
- Wei-Guang Li: The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong SAR, 999077, China; State Key Laboratory for Quality Ensurance and Sustainable Use of Dao-Di Herbs, Artemisinin Research Center, and Institute of Chinese Materia Medica, China Academy of Chinese Medical Sciences, Beijing, 100700, China
- Yi Yang: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China; China National Clinical Research Center for Neurological Diseases, Beijing, 100070, China; Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, OX1 3TH, UK
- Xiao-Di Han: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Yu-Lin Sun: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Xin Song: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Cong-Ying Hao: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Zi-Hua Zhang: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Xin-Yang Liu: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Chun-Yang Li: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Rui Peng: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Xiao-Xin Song: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Abi Yasi: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Mei-Jun Pang: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Kuo Zhang: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Run-Nan He: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Le Wu: Department of Electric Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
- Shu-Geng Chen: Department of Rehabilitation, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Wen-Jin Chen: Xuanwu Hospital of Capital Medical University, Beijing, 100053, China
- Yan-Gong Chao: The First Hospital of Tsinghua University, Beijing, 100016, China
- Cheng-Gong Hu: Department of Critical Care Medicine, West China Hospital of Sichuan University, Chengdu, 610041, China
- Heng Zhang: Department of Neurosurgery, The First Hospital of China Medical University, Beijing, 110122, China
- Min Zhou: Department of Critical Care Medicine, Division of Life Sciences and Medicine, The First Affiliated Hospital of University of Science and Technology of China, University of Science and Technology of China, Hefei, 230031, China
- Kun Wang: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Peng-Fei Liu: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Chen Chen: School of Computer Science, Fudan University, Shanghai, 200438, China
- Xin-Yi Geng: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yun Qin: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Dong-Rui Gao: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China
- En-Ming Song: Shanghai Frontiers Science Research Base of Intelligent Optoelectronics and Perception, Institute of Optoelectronics, Fudan University, Shanghai, 200433, China
- Long-Long Cheng: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China
- Xun Chen: Department of Electric Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
- Dong Ming: State Key Laboratory of Advanced Medical Materials and Devices, Medical School, Tianjin University, Tianjin, 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300380, China
5. Edelman BJ, Zhang S, Schalk G, Brunner P, Muller-Putz G, Guan C, He B. Non-Invasive Brain-Computer Interfaces: State of the Art and Trends. IEEE Rev Biomed Eng 2025; 18:26-49. PMID: 39186407; PMCID: PMC11861396; DOI: 10.1109/rbme.2024.3449790.
Abstract
Brain-computer interface (BCI) is a rapidly evolving technology that has the potential to widely influence research, clinical and recreational use. Non-invasive BCI approaches are particularly common as they can impact a large number of participants safely and at a relatively low cost. Where traditional non-invasive BCIs were used for simple computer cursor tasks, it is now increasingly common for these systems to control robotic devices for complex tasks that may be useful in daily life. In this review, we provide an overview of the general BCI framework as well as the various methods that can be used to record neural activity, extract signals of interest, and decode brain states. In this context, we summarize the current state-of-the-art of non-invasive BCI research, focusing on trends in both the application of BCIs for controlling external devices and algorithm development to optimize their use. We also discuss various open-source BCI toolboxes and software, and describe their impact on the field at large.
6. Afrah R, Amini Z, Kafieh R. An Unsupervised Feature Extraction Method based on CLSTM-AE for Accurate P300 Classification in Brain-Computer Interface Systems. J Biomed Phys Eng 2024; 14:579-592. PMID: 39726882; PMCID: PMC11668936; DOI: 10.31661/jbpe.v0i0.2207-1521.
Abstract
Background: The P300 signal, an endogenous component of event-related potentials, is extracted from the electroencephalography signal and employed in brain-computer interface (BCI) devices. Objective: The current study aimed to address the challenges of extracting useful features from P300 components and detecting P300 in an unsupervised, hybrid manner based on a Convolutional Neural Network (CNN) and Long Short-term Memory (LSTM). Material and Methods: In this cross-sectional study, CNN, a useful method for the P300 classification task, emphasizes the spatial characteristics of the data; CNN and LSTM networks are therefore combined so that both spatial and temporal features are extracted. The CNN-LSTM network was then trained in an unsupervised manner as an autoencoder to improve the signal-to-noise ratio (SNR) by extracting the main components from the latent space. To deal with imbalanced data, the Adaptive Synthetic Sampling Approach (ADASYN) was used to augment the data without any duplication. Results: The trained model, tested on the BCI competition III dataset comprising two healthy subjects, achieved accuracies of 95% and 94% in P300 detection for subjects A and B, respectively. Conclusion: A CNN-LSTM was embedded into an autoencoder to extract spatial and temporal features simultaneously while managing the computational complexity of the method. Further, ADASYN was proposed as an augmentation method to deal with the imbalanced nature of the data, which not only maintained the feature space but also preserved the anatomical features of the P300. The high-quality results highlight the efficiency of the proposed method.
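A minimal sketch of a CNN-LSTM autoencoder of the kind described, trained with a reconstruction loss so that its latent space can later feed a P300 classifier; the layer sizes and epoch shape are assumptions rather than the paper's exact architecture. In the paper, ADASYN is then applied to balance target and non-target trials before classification.

```python
# Minimal sketch (architecture details are assumptions, not the paper's exact
# network): a CNN-LSTM autoencoder whose latent code serves as the extracted
# feature vector, trained unsupervised with a reconstruction objective.
import torch
import torch.nn as nn

class CLSTMAutoencoder(nn.Module):
    def __init__(self, n_ch=8, n_samples=128, latent=32):
        super().__init__()
        self.conv = nn.Conv1d(n_ch, 16, kernel_size=5, padding=2)    # spatial/temporal filtering
        self.lstm = nn.LSTM(16, latent, batch_first=True)            # temporal dependencies
        self.fc = nn.Linear(latent, 16 * n_samples)                  # expand latent code
        self.deconv = nn.Conv1d(16, n_ch, kernel_size=5, padding=2)  # reconstruct channels
        self.n_samples = n_samples

    def forward(self, x):                          # x: (batch, n_ch, n_samples)
        h = torch.relu(self.conv(x))               # (batch, 16, n_samples)
        _, (z, _) = self.lstm(h.transpose(1, 2))   # z: (1, batch, latent)
        z = z.squeeze(0)
        h = self.fc(z).view(-1, 16, self.n_samples)
        return self.deconv(h), z                   # reconstruction and latent features

model = CLSTMAutoencoder()
x = torch.randn(4, 8, 128)                         # stand-in P300 epochs
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)            # unsupervised reconstruction loss
loss.backward()
print(recon.shape, latent.shape)
```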
Affiliation(s)
- Ramin Afrah: School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Zahra Amini: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- Rahele Kafieh: Medical Image and Signal Processing Research Center, School of Advanced Technologies in Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
7. Bai G, Jin J, Xu R, Wang X, Cichocki A. A novel dual-step transfer framework based on domain selection and feature alignment for motor imagery decoding. Cogn Neurodyn 2024; 18:3549-3563. PMID: 39712143; PMCID: PMC11655754; DOI: 10.1007/s11571-023-10053-1.
Abstract
In brain-computer interfaces (BCIs) based on motor imagery (MI), reducing calibration time is gradually becoming an urgent issue in practical applications. Recently, transfer learning (TL) has demonstrated its effectiveness in reducing calibration time in MI-BCI. However, the differing data distributions across subjects greatly affect the application of TL in MI-BCI. Therefore, this paper combines data alignment, source domain selection, and feature alignment into MI-TL. We propose a novel dual-step transfer framework based on source domain selection and feature alignment. First, the source and target domains are aligned using a pre-calibration strategy (PS), and then a sequential reverse selection method is proposed to match the optimal source domain for each target domain with the designed dual model selection strategy. We use the filter bank regularized common spatial pattern (FBRCSP) to obtain more features and introduce manifold embedded distribution alignment (MEDA) to correct the prediction results of the support vector machine (SVM). The experimental results on two public competition datasets (BCI competition IV Dataset 1 and Dataset 2a) and our dataset show that the average classification accuracy of the proposed framework is higher than that of the baseline method (no domain selection and no feature alignment), reaching 84.12%, 79.91%, and 78.45%, respectively. The computational cost is also reduced by half compared with the baseline method.
Affiliation(s)
- Guanglian Bai: Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, 200237 China
- Jing Jin: Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, 200237 China; Shenzhen Research Institute of East China University of Science and Technology, Shenzhen, 518063 People’s Republic of China
- Ren Xu: Guger Technologies OG, Graz, Austria
- Xingyu Wang: Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, 200237 China
- Andrzej Cichocki: Systems Research Institute of Polish Academy of Science, Warsaw, Poland; Department of Informatics, Nicolaus Copernicus University, Torun, Poland
8. Dash D, Ferrari P, Wang J. Neural Decoding of Spontaneous Overt and Intended Speech. J Speech Lang Hear Res 2024; 67:4216-4225. PMID: 39106199; DOI: 10.1044/2024_jslhr-24-00046.
Abstract
PURPOSE The aim of this study was to decode intended and overt speech from neuromagnetic signals while the participants performed spontaneous overt speech tasks without cues or prompts (stimuli). METHOD Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to collect neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. The participants randomly spoke the words yes or no at a self-paced rate without cues. Two machine learning models, namely, linear discriminant analysis (LDA) and one-dimensional convolutional neural network (1D CNN), were employed to classify the two words from the recorded MEG signals. RESULTS LDA and 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, in decoding overt speech, significantly surpassing the chance level (50%). The accuracy for decoding intended speech was 67.19% using 1D CNN. CONCLUSIONS This study showcases the possibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe that these findings make a steady step toward the future spontaneous speech-based brain-computer interface.
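For the LDA baseline, the decoding setup amounts to a cross-validated two-class classifier on flattened sensor-by-time epochs, as in the hedged sketch below; the data shapes are simulated stand-ins, not the study's MEG recordings or pipeline.

```python
# Minimal sketch (assumed data shapes, not the study's code): cross-validated
# LDA on flattened MEG epochs for a two-word (yes/no) decoding problem.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_sensors, n_samples = 100, 30, 200        # stand-in MEG epochs
X = rng.normal(size=(n_trials, n_sensors, n_samples)).reshape(n_trials, -1)
y = rng.integers(0, 2, size=n_trials)                # 0 = "yes", 1 = "no"

clf = make_pipeline(StandardScaler(),
                    LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto"))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"Decoding accuracy: {acc:.2%} (chance = 50%)")
```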
Affiliation(s)
- Debadatta Dash: Department of Neurology, The University of Texas at Austin
- Paul Ferrari: Helen DeVos Children's Hospital, Corewell Health, Grand Rapids, MI
- Jun Wang: Department of Neurology, The University of Texas at Austin; Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin
9. Fodor MA, Herschel H, Cantürk A, Heisenberg G, Volosyak I. Evaluation of Different Visual Feedback Methods for Brain-Computer Interfaces (BCI) Based on Code-Modulated Visual Evoked Potentials (cVEP). Brain Sci 2024; 14:846. PMID: 39199537; PMCID: PMC11352856; DOI: 10.3390/brainsci14080846.
Abstract
Brain-computer interfaces (BCIs) enable direct communication between the brain and external devices using electroencephalography (EEG) signals. BCIs based on code-modulated visual evoked potentials (cVEPs) are based on visual stimuli, thus appropriate visual feedback on the interface is crucial for an effective BCI system. Many previous studies have demonstrated that implementing visual feedback can improve information transfer rate (ITR) and reduce fatigue. This research compares a dynamic interface, where target boxes change their sizes based on detection certainty, with a threshold bar interface in a three-step cVEP speller. In this study, we found that both interfaces perform well, with slight variations in accuracy, ITR, and output characters per minute (OCM). Notably, some participants showed significant performance improvements with the dynamic interface and found it less distracting compared to the threshold bars. These results suggest that while average performance metrics are similar, the dynamic interface can provide significant benefits for certain users. This study underscores the potential for personalized interface choices to enhance BCI user experience and performance. By improving user friendliness, performance, and reducing distraction, dynamic visual feedback could optimize BCI technology for a broader range of users.
Affiliation(s)
- Milán András Fodor: Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
- Hannah Herschel: Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
- Atilla Cantürk: Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
- Gernot Heisenberg: Institute of Information Science, Technical University of Applied Sciences Cologne, 50678 Cologne, Germany
- Ivan Volosyak: Faculty of Technology and Bionics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
10. Zhang J, Zhang Y, Zhang X, Xu B, Zhao H, Sun T, Wang J, Lu S, Shen X. A high-performance general computer cursor control scheme based on a hybrid BCI combining motor imagery and eye-tracking. iScience 2024; 27:110164. PMID: 38974471; PMCID: PMC11225862; DOI: 10.1016/j.isci.2024.110164.
Abstract
This study introduces a novel virtual cursor control system designed to empower individuals with neuromuscular disabilities in the digital world. By combining eye-tracking with motor imagery (MI) in a hybrid brain-computer interface (BCI), the system enhances cursor control accuracy and simplicity. Real-time classification accuracy reaches 87.92% (peak of 93.33%), with cursor stability in the gazing state at 96.1%. Integrated into common operating systems, it enables tasks like text entry, online chatting, email, web surfing, and picture dragging, with an average text input rate of 53.2 characters per minute (CPM). This technology facilitates fundamental computing tasks for patients, fostering their integration into the online community and paving the way for future developments in BCI systems.
Affiliation(s)
- Jiakai Zhang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Yuqi Zhang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Xinlong Zhang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Boyang Xu: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Huanqing Zhao: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Tinghui Sun: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Ju Wang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Shaojie Lu: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Xiaoyan Shen: School of Information Science and Technology, Nantong University, Nantong 226019, China; Nantong Research Institute for Advanced Communication Technologies, Nantong University, Nantong 226019, China
11. Larsen OFP, Tresselt WG, Lorenz EA, Holt T, Sandstrak G, Hansen TI, Su X, Holt A. A method for synchronized use of EEG and eye tracking in fully immersive VR. Front Hum Neurosci 2024; 18:1347974. PMID: 38468815; PMCID: PMC10925625; DOI: 10.3389/fnhum.2024.1347974.
Abstract
This study explores the synchronization of multimodal physiological data streams, in particular, the integration of electroencephalography (EEG) with a virtual reality (VR) headset featuring eye-tracking capabilities. A potential use case for the synchronized data streams is demonstrated by implementing a hybrid steady-state visually evoked potential (SSVEP) based brain-computer interface (BCI) speller within a fully immersive VR environment. The hardware latency analysis reveals an average offset of 36 ms between EEG and eye-tracking data streams and a mean jitter of 5.76 ms. The study further presents a proof of concept brain-computer interface (BCI) speller in VR, showcasing its potential for real-world applications. The findings highlight the feasibility of combining commercial EEG and VR technologies for neuroscientific research and open new avenues for studying brain activity in ecologically valid VR environments. Future research could focus on refining the synchronization methods and exploring applications in various contexts, such as learning and social interactions.
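The reported offset and jitter can be estimated directly from paired event timestamps on the two device clocks, as in the sketch below; the timestamps are simulated stand-ins, not the study's measurements.

```python
# Minimal sketch (hypothetical timestamps): estimating the constant offset and
# jitter between EEG and eye-tracking event markers.
import numpy as np

rng = np.random.default_rng(4)
eeg_ts = np.cumsum(rng.uniform(0.5, 1.5, size=200))          # event times on the EEG clock (s)
eye_ts = eeg_ts + 0.036 + rng.normal(0, 0.006, size=200)     # the same events on the eye-tracker clock

diff_ms = (eye_ts - eeg_ts) * 1000.0
offset_ms = diff_ms.mean()       # constant latency between the two streams
jitter_ms = diff_ms.std()        # trial-to-trial variability of that latency
print(f"offset = {offset_ms:.1f} ms, jitter = {jitter_ms:.2f} ms")
```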
Affiliation(s)
- Olav F. P. Larsen: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- William G. Tresselt: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Emanuel A. Lorenz: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tomas Holt: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Grethe Sandstrak: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Tor I. Hansen: Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Department of Acquired Brain Injury, St. Olav's University Hospital, Trondheim, Norway
- Xiaomeng Su: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Alexander Holt: Motion Capture and Visualization Laboratory, Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
12. Sun Y, Liang L, Li Y, Chen X, Gao X. Dual-Alpha: a large EEG study for dual-frequency SSVEP brain-computer interface. Gigascience 2024; 13:giae041. PMID: 39110623; PMCID: PMC11304967; DOI: 10.1093/gigascience/giae041.
Abstract
BACKGROUND The domain of brain-computer interface (BCI) technology has experienced significant expansion in recent years. However, the field continues to face a pivotal challenge due to the dearth of high-quality datasets. This lack of robust datasets serves as a bottleneck, constraining the progression of algorithmic innovations and, by extension, the maturation of the BCI field. FINDINGS This study details the acquisition and compilation of electroencephalogram data across 3 distinct dual-frequency steady-state visual evoked potential (SSVEP) paradigms, encompassing over 100 participants. Each experimental condition featured 40 individual targets with 5 repetitions per target, culminating in a comprehensive dataset consisting of 21,000 trials of dual-frequency SSVEP recordings. We performed an exhaustive validation of the dataset through signal-to-noise ratio analyses and task-related component analysis, thereby substantiating its reliability and effectiveness for classification tasks. CONCLUSIONS The extensive dataset presented is set to be a catalyst for the accelerated development of BCI technologies. Its significance extends beyond the BCI sphere and holds considerable promise for propelling research in psychology and neuroscience. The dataset is particularly invaluable for discerning the complex dynamics of binocular visual resource distribution.
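A minimal sketch of the narrow-band SNR analysis used to validate such SSVEP recordings, defining SNR as power at the stimulation frequency relative to neighbouring frequency bins; the signal and parameters are illustrative stand-ins rather than the dataset's values.

```python
# Minimal sketch (illustrative parameters): narrow-band SNR of an SSVEP response,
# defined here as power at the stimulation frequency divided by the mean power
# of neighbouring frequency bins.
import numpy as np
from scipy.signal import periodogram

fs, stim_freq, duration = 250, 10.0, 5.0
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(5)
eeg = np.sin(2 * np.pi * stim_freq * t) + rng.normal(0, 2.0, t.size)  # stand-in occipital channel

freqs, psd = periodogram(eeg, fs=fs)
target = np.argmin(np.abs(freqs - stim_freq))
neighbours = np.r_[target - 5:target, target + 1:target + 6]          # 5 bins on each side
snr_db = 10 * np.log10(psd[target] / psd[neighbours].mean())
print(f"SSVEP SNR at {stim_freq} Hz: {snr_db:.1f} dB")
```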
Affiliation(s)
- Yike Sun: The School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Liyan Liang: The China Academy of Information and Communications Technology, Beijing 100191, China
- Yuhan Li: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China; The School of Life Sciences, Tiangong University, Tianjin 300387, China
- Xiaogang Chen: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, China
- Xiaorong Gao: The School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
13. Azadi Moghadam M, Maleki A. Fatigue factors and fatigue indices in SSVEP-based brain-computer interfaces: a systematic review and meta-analysis. Front Hum Neurosci 2023; 17:1248474. PMID: 38053651; PMCID: PMC10694510; DOI: 10.3389/fnhum.2023.1248474.
Abstract
Background: Fatigue is a serious challenge when applying steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) in the real world. Many researchers have used quantitative indices to study the effect of visual stimuli on fatigue, yet across a wide range of fatigue studies there are contradictions and inconsistencies in the behavior of these indicators. New method: In this study, for the first time, a systematic review and meta-analysis were performed on fatigue indices and on the fatigue caused by the stimulation paradigm. Three scientific search engines were queried for studies published between 2000 and 2022; the inclusion criteria were papers investigating mental and visual fatigue from performing a visual task, assessed using electroencephalogram (EEG) signals. Results: Attractiveness and variation are the most effective ways to reduce BCI fatigue; accordingly, zoom motion, Newton's ring motion, and cue patterns reduce fatigue. While the color of the cue could effectively reduce fatigue, its shape and background had no effect. Questionnaires and quantitative indicators such as frequency indices, signal-to-noise ratio (SNR), SSVEP amplitude, and multiscale entropy were used to assess fatigue. The meta-analysis indicated that when a person is fatigued, the spectral amplitudes of alpha and theta and the (α + θ)/β ratio increase significantly, while SNR and SSVEP amplitude decrease significantly. Conclusion: The outcomes of this study can be used to design stimulation protocols that cause less fatigue. Moreover, the level of fatigue can be quantitatively assessed with these indicators without relying on participants' self-reports.
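A minimal sketch of one of the quantitative indices discussed, the (α + θ)/β band-power ratio; the band edges and data are common conventions and simulated stand-ins, not values taken from the review.

```python
# Minimal sketch (band edges are common conventions, not taken from the review):
# computing the (alpha + theta) / beta band-power ratio often used as an EEG
# fatigue index.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()                      # summed PSD bins (equal bin spacing)

fs = 250
rng = np.random.default_rng(6)
eeg = rng.normal(size=60 * fs)                  # stand-in 60-s occipital recording
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 13)
beta = band_power(freqs, psd, 13, 30)
fatigue_index = (alpha + theta) / beta          # reported to rise as fatigue increases
print(f"(alpha + theta) / beta = {fatigue_index:.2f}")
```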
Affiliation(s)
- Maedeh Azadi Moghadam: Department of Biotechnology, Faculty of New Sciences and Technologies, Semnan University, Semnan, Iran
- Ali Maleki: Department of Biomedical Engineering, Semnan University, Semnan, Iran
14. Velasco I, Sipols A, De Blas CS, Pastor L, Bayona S. Motor imagery EEG signal classification with a multivariate time series approach. Biomed Eng Online 2023; 22:29. PMID: 36959601; PMCID: PMC10035287; DOI: 10.1186/s12938-023-01079-x.
Abstract
BACKGROUND: Electroencephalogram (EEG) signals record electrical activity on the scalp. Measured signals, especially EEG motor imagery signals, are often inconsistent or distorted, which compromises their classification accuracy. Achieving a reliable classification of motor imagery EEG signals opens the door to possibilities such as the assessment of consciousness, brain-computer interfaces or diagnostic tools. We seek a method that works with a reduced number of variables, in order to avoid overfitting and to improve interpretability. This work aims to enhance EEG signal classification accuracy by using methods based on time series analysis. Previous work on this line usually took a univariate approach, thus losing the possibility to take advantage of the correlation information existing within the time series provided by the different electrodes. To overcome this problem, we propose a multivariate approach that can fully capture the relationships among the different time series included in the EEG data. To perform the multivariate time series analysis, we use a multi-resolution analysis approach based on the discrete wavelet transform, together with a stepwise discriminant that selects the most discriminant variables provided by the discrete wavelet transform analysis. RESULTS: Applying this methodology to EEG data to differentiate between the motor imagery tasks of moving either hands or feet has yielded very good classification results, achieving in some cases up to 100% accuracy for this 2-class pre-processed dataset. Besides, the fact that these results were achieved using a reduced number of variables (55 out of 22,176) can shed light on the relevance and impact of those variables. CONCLUSIONS: This work has a potentially large impact, as it enables classification of EEG data based on multivariate time series analysis in an interpretable way with high accuracy. The method allows a model with a reduced number of features, facilitating its interpretability and reducing overfitting. Future work will extend the application of this classification method to help in diagnosis procedures for detecting brain pathologies and for its use in brain-computer interfaces. In addition, the results presented here suggest that this method could be applied to other fields for the successful analysis of multivariate temporal data.
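A minimal sketch of the multivariate wavelet-feature pipeline described, with the stepwise discriminant selection approximated here by simple univariate selection for brevity; the wavelet, decomposition level, and data are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch (wavelet and level are assumptions): per-channel discrete
# wavelet features followed by variable selection and a discriminant classifier.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n_trials, n_ch, n_samples = 120, 22, 256
X_eeg = rng.normal(size=(n_trials, n_ch, n_samples))   # stand-in motor imagery epochs
y = rng.integers(0, 2, size=n_trials)                   # hands vs. feet

def dwt_features(epoch, wavelet="db4", level=4):
    feats = []
    for ch in epoch:                                     # multiresolution analysis per channel
        coeffs = pywt.wavedec(ch, wavelet, level=level)
        feats.extend(np.concatenate(coeffs))
    return np.asarray(feats)

X = np.array([dwt_features(e) for e in X_eeg])
# Keep 55 variables, echoing the reduced variable count reported in the paper
clf = make_pipeline(SelectKBest(f_classif, k=55), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```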
Affiliation(s)
- I Velasco: Department of Computer Science and Statistics, Rey Juan Carlos University, Madrid, Spain
- A Sipols: Department of Applied Mathematics, Science and Engineering of Materials and Electronic Technology, Rey Juan Carlos University, Madrid, Spain
- C Simon De Blas: Department of Computer Science and Statistics, Rey Juan Carlos University, Madrid, Spain
- L Pastor: Department of Computer Science and Statistics, Rey Juan Carlos University, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
- S Bayona: Department of Computer Science and Statistics, Rey Juan Carlos University, Madrid, Spain; Center for Computational Simulation, Universidad Politecnica de Madrid, Madrid, Spain
15. Bai X, Li M, Qi S, Ng ACM, Ng T, Qian W. A hybrid P300-SSVEP brain-computer interface speller with a frequency enhanced row and column paradigm. Front Neurosci 2023; 17:1133933. PMID: 37008204; PMCID: PMC10050351; DOI: 10.3389/fnins.2023.1133933.
Abstract
Objective: This study proposes a new hybrid brain-computer interface (BCI) system to improve spelling accuracy and speed by stimulating P300 and steady-state visually evoked potential (SSVEP) responses in electroencephalography (EEG) signals. Methods: A frequency enhanced row and column (FERC) paradigm is proposed to incorporate frequency coding into the row and column (RC) paradigm so that the P300 and SSVEP signals can be evoked simultaneously. A flicker (white-black) with a specific frequency from 6.0 to 11.5 Hz with an interval of 0.5 Hz is assigned to one row or column of a 6 × 6 layout, and the row/column flashes are carried out in a pseudorandom sequence. A wavelet and support vector machine (SVM) combination is adopted for P300 detection, an ensemble task-related component analysis (TRCA) method is used for SSVEP detection, and the two detection possibilities are fused using a weight control approach. Results: The implemented BCI speller achieved an accuracy of 94.29% and an information transfer rate (ITR) of 28.64 bit/min averaged across 10 subjects during the online tests. An accuracy of 96.86% was obtained during the offline calibration tests, higher than that of using only P300 (75.29%) or only SSVEP (89.13%). The SVM in P300 outperformed the previous linear discrimination classifier and its variants (61.90–72.22%), and the ensemble TRCA in SSVEP outperformed the canonical correlation analysis method (73.33%). Conclusion: The proposed hybrid FERC stimulus paradigm can improve the performance of the speller compared with the classical single stimulus paradigm. The implemented speller can achieve comparable accuracy and ITR to its state-of-the-art counterparts with advanced detection algorithms.
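A minimal sketch of the final fusion step, combining a P300 classifier probability with an SSVEP correlation score for each candidate row/column; the rescaling and weighting rule are assumptions, not the paper's exact weight-control approach, and the scores are simulated stand-ins.

```python
# Minimal sketch (the weighting rule is an assumption, not the paper's exact
# formula): fusing a P300 SVM probability with an SSVEP TRCA-style correlation
# into a single score per candidate row/column.
import numpy as np

rng = np.random.default_rng(8)
n_candidates = 12                                        # 6 rows + 6 columns in a 6 x 6 layout
p300_prob = rng.uniform(0, 1, size=n_candidates)         # SVM posterior that a flash contained a P300
ssvep_corr = rng.uniform(-0.2, 0.8, size=n_candidates)   # correlation with each frequency template

def fuse(p300, ssvep, w=0.6):
    """Weighted fusion after rescaling each score to [0, 1]."""
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-12)
    return w * norm(p300) + (1 - w) * norm(ssvep)

scores = fuse(p300_prob, ssvep_corr)
print("selected row/column index:", int(np.argmax(scores)))
```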
Affiliation(s)
- Xin Bai: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Minglun Li: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Shouliang Qi (corresponding author): College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Tit Ng: Shenzhen Jingmei Health Technology Co., Ltd., Shenzhen, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
16. Zhang Z, Li D, Zhao Y, Fan Z, Xiang J, Wang X, Cui X. A flexible speller based on time-space frequency conversion SSVEP stimulation paradigm under dry electrode. Front Comput Neurosci 2023; 17:1101726. PMID: 36817318; PMCID: PMC9929550; DOI: 10.3389/fncom.2023.1101726.
Abstract
Introduction Speller is the best way to express the performance of the brain-computer interface (BCI) paradigm. Due to its advantages of short analysis time and high accuracy, the SSVEP paradigm has been widely used in the BCI speller system based on the wet electrode. It is widely known that the wet electrode operation is cumbersome and that the subjects have a poor experience. In addition, in the asynchronous SSVEP system based on threshold analysis, the system flickers continuously from the beginning to the end of the experiment, which leads to visual fatigue. The dry electrode has a simple operation and provides a comfortable experience for subjects. The EOG signal can avoid the stimulation of SSVEP for a long time, thus reducing fatigue. Methods This study first designed the brain-controlled switch based on continuous blinking EOG signal and SSVEP signal to improve the flexibility of the BCI speller. Second, in order to increase the number of speller instructions, we designed the time-space frequency conversion (TSFC) SSVEP stimulus paradigm by constantly changing the time and space frequency of SSVEP sub-stimulus blocks, and designed a speller in a dry electrode environment. Results Seven subjects participated and completed the experiments. The results showed that the accuracy of the brain-controlled switch designed in this study was up to 94.64%, and all the subjects could use the speller flexibly. The designed 60-character speller based on the TSFC-SSVEP stimulus paradigm has an accuracy rate of 90.18% and an information transmission rate (ITR) of 117.05 bits/min. All subjects can output the specified characters in a short time. Discussion This study designed and implemented a multi-instruction SSVEP speller based on dry electrode. Through the combination of EOG and SSVEP signals, the speller can be flexibly controlled. The frequency of SSVEP stimulation sub-block is recoded in time and space by TSFC-SSVEP stimulation paradigm, which greatly improves the number of output instructions of BCI system in dry electrode environment. This work only uses FBCCA algorithm to test the stimulus paradigm, which requires a long stimulus time. In the future, we will use trained algorithms to study stimulus paradigm to improve its overall performance.
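A minimal sketch of filter bank canonical correlation analysis (FBCCA), the training-free detector mentioned in the discussion above, scoring one EEG segment against sinusoidal references for each candidate frequency; the sub-band edges, weights, and data are common defaults and simulated stand-ins, not the paper's settings.

```python
# Minimal sketch (common FBCCA defaults, not the paper's parameters): filter
# bank CCA scoring of one EEG trial against reference sinusoids per frequency.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.cross_decomposition import CCA

fs, duration, n_ch = 250, 2.0, 8
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(9)
eeg = rng.normal(size=(n_ch, t.size)) + 0.5 * np.sin(2 * np.pi * 10 * t)  # stand-in trial (10 Hz target)

def references(freq, n_harmonics=3):
    refs = [f(2 * np.pi * freq * h * t) for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)]
    return np.array(refs)

def fbcca_score(eeg, freq, bands=((6, 40), (14, 40), (22, 40))):
    score = 0.0
    for k, (lo, hi) in enumerate(bands):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        sub = filtfilt(b, a, eeg, axis=1)
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(sub.T, references(freq).T)
        r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        score += ((k + 1) ** -1.25 + 0.25) * r ** 2   # a standard FBCCA sub-band weighting
    return score

freqs = np.arange(8, 16, 0.5)
print("detected frequency:", freqs[int(np.argmax([fbcca_score(eeg, f) for f in freqs]))])
```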
17. Alharbi H. Identifying Thematics in a Brain-Computer Interface Research. Comput Intell Neurosci 2023; 2023:2793211. PMID: 36643889; PMCID: PMC9833923; DOI: 10.1155/2023/2793211.
Abstract
This umbrella review was motivated by the need to understand the shift in research themes on brain-computer interfacing (BCI), and it determined that a shift has occurred away from themes focused on medical advancement and system development toward applications that include education, marketing, gaming, safety, and security. The background of the review examined aspects of BCI categorisation, neuroimaging methods, brain control signal classification, applications, and ethics; the specific area of BCI software and hardware development was not examined. A search using One Search was undertaken and 92 BCI reviews were selected for inclusion. Publication demographics indicate that the average number of authors on the reviews considered was 4.2 ± 1.8. The results also indicate a rapid increase in the number of BCI reviews from 2003, with only three reviews before that period: two in 1972 and one in 1996. While BCI authors were predominantly Euro-American in early reviews, authorship later became more global and was dominated by China in 2020-2022. The review revealed six disciplines associated with BCI systems, grouped into two domains: the first comprised life sciences and biomedicine (n = 42), neurosciences and neurology (n = 35), and rehabilitation (n = 20); the second, centred on the theme of functionality, comprised computer science (n = 20), engineering (n = 28), and technology (n = 38). There was a thematic shift from understanding brain function and modes of interfacing BCI systems toward more applied research; novel areas of research identified surround artificial intelligence, including machine learning, pre-processing, and deep learning. As BCI systems become more invasive in the lives of "normal" individuals, it is expected that there will be a refocus and thematic shift towards increased research into ethical issues and the need for legal oversight in BCI applications.
Affiliation(s)
- Hadeel Alharbi: Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha'il, Ha'il 81481, Saudi Arabia
18. Neghabi M, Marateb HR, Mahnam A. Novel frequency-based approach for detection of steady-state visual evoked potentials for realization of practical brain computer interfaces. Brain-Computer Interfaces 2022. DOI: 10.1080/2326263x.2022.2050513.
Affiliation(s)
- Mehrnoosh Neghabi: Biomedical Engineering Department, Engineering Faculty, University of Isfahan, Isfahan, Iran
- Hamid Reza Marateb: Biomedical Engineering Department, Engineering Faculty, University of Isfahan, Isfahan, Iran; Biomedical Engineering Research Centre (CREB), Automatic Control Department (ESAII), Universitat Politècnica de Catalunya, Barcelona, Spain
- Amin Mahnam: Biomedical Engineering Department, Engineering Faculty, University of Isfahan, Isfahan, Iran