1
Forenzo D, Zhu H, He B. A continuous pursuit dataset for online deep learning-based EEG brain-computer interface. Sci Data 2024; 11:1256. PMID: 39567538; PMCID: PMC11579365; DOI: 10.1038/s41597-024-04090-6.
Abstract
This dataset is from an EEG brain-computer interface (BCI) study investigating the use of deep learning (DL) for online continuous pursuit (CP) BCI. In this task, subjects use motor imagery (MI) to control a cursor to follow a randomly moving target, instead of the single stationary targets used in traditional BCI tasks. DL methods have recently achieved promising performance in traditional BCI tasks, but most studies investigate offline data analysis using DL algorithms. This dataset consists of ~168 hours of EEG recordings from complex CP BCI experiments, collected from 28 unique human subjects over multiple sessions each, with an online DL-based decoder. The large amount of subject-specific data from multiple sessions may be useful for developing new BCI decoders, especially DL methods that require large amounts of training data. By providing this dataset to the public, we hope to facilitate the development of new or improved BCI decoding algorithms for the complex CP paradigm for continuous object control, bringing EEG-based BCIs closer to real-world applications.
Affiliation(s)
- Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
- Hao Zhu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
- Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, USA
2
Belwafi K, Ghaffari F. Thought-Controlled Computer Applications: A Brain-Computer Interface System for Severe Disability Support. Sensors (Basel) 2024; 24:6759. PMID: 39460240; PMCID: PMC11511559; DOI: 10.3390/s24206759.
Abstract
This study introduces an integrated computational environment that leverages Brain-Computer Interface (BCI) technology to enhance information access for individuals with severe disabilities. Traditional assistive technologies often rely on physical interactions, which can be challenging for this demographic. Our innovation focuses on creating new assistive technologies that use novel human-computer interfaces to provide a more intuitive and accessible experience. The proposed system offers users four key applications, each controlled by one of four thoughts: an email client, a web browser, an e-learning tool, and both command-line and graphical user interfaces for managing computer resources. The BCI framework translates electroencephalography (EEG) signals into commands or events using advanced signal processing and machine learning techniques. These identified commands are then processed by an integrative strategy that triggers the appropriate actions and provides real-time feedback on the screen. Our study shows that the framework achieved an 82% average classification accuracy across four distinct thoughts from 62 subjects and a 95% recognition rate for P300 signals from two users, highlighting its effectiveness in translating brain signals into actionable commands. Unlike most existing prototypes that rely on visual stimulation, our system is controlled by thought, using induced brain activity to manage the system's Application Programming Interfaces (APIs); it switches to a P300 mode for a virtual keyboard and text input. The proposed BCI system significantly improves the ability of individuals with severe disabilities to interact with various applications and manage computer resources. Our approach demonstrates superior performance in terms of classification accuracy and signal recognition compared to existing methods.
Affiliation(s)
- Kais Belwafi
- Department of Computer Engineering, College of Computing & Informatics, University of Sharjah, Sharjah 26666, United Arab Emirates
- Fakhreddine Ghaffari
- Équipes de Traitement de l’Information et Systèmes, UMR 8051, CY Cergy Paris Université, École Nationale Supérieure de l’Electronique et de ses Applications (ENSEA), Centre National de la Recherche Scientifique (CNRS), 95000 Cergy, France
3
Mahalungkar SP, Shrivastava R, Angadi S. A brief survey on human activity recognition using motor imagery of EEG signals. Electromagn Biol Med 2024; 43:312-327. PMID: 39425602; DOI: 10.1080/15368378.2024.2415089.
Abstract
Human beings' biological processes and psychological activities are jointly connected to the brain, so the examination of human activity is significant for human well-being. Various models exist for brain activity detection using neuroimaging, aiming at decreased time requirements, increased control commands, and enhanced accuracy. Motor Imagery (MI)-based Brain-Computer Interface (BCI) systems create a way in which the brain can interact with the environment by processing Electroencephalogram (EEG) signals. Human Activity Recognition (HAR) deals with identifying the physiological activities of human beings based on sensory signals. This survey reviews the different methods available for HAR based on MI-EEG signals. A total of 50 research articles on HAR from EEG signals are considered. The survey discusses the challenges faced by various HAR techniques, and assesses the papers with respect to parameters, techniques, publication year, performance metrics, tools, databases, and related criteria. The many techniques developed to solve the problem of HAR are classified as Machine Learning (ML) and Deep Learning (DL) models. Finally, the research gaps and limitations of these techniques are discussed, contributing to the development of effective HAR.
Affiliation(s)
- Seema Pankaj Mahalungkar
- Department of Computer Science and Engineering, Mansarovar Global University, Bhopal, Madhya Pradesh, India
- Computer Science and Engineering, Nutan College of Engineering and Research, Talegaon Dabhade, Pune, India
- Rahul Shrivastava
- School of Computer Science and Engineering, VIT Bhopal University, Bhopal, Madhya Pradesh, India
- Sanjeevkumar Angadi
- Computer Science and Engineering, Nutan College of Engineering and Research, Talegaon Dabhade, Pune, India
4
Ail BE, Ramele R, Gambini J, Santos JM. An Intrinsically Explainable Method to Decode P300 Waveforms from EEG Signal Plots Based on Convolutional Neural Networks. Brain Sci 2024; 14:836. PMID: 39199527; PMCID: PMC11487430; DOI: 10.3390/brainsci14080836.
Abstract
This work proposes an intrinsically explainable, straightforward method to decode P300 waveforms from electroencephalography (EEG) signals, overcoming the black-box nature of deep learning techniques. The proposed method allows convolutional neural networks to decode information from images, an area where they have achieved astonishing performance. By plotting the EEG signal as an image, it can be both visually interpreted by physicians and technicians and detected by the network, offering a straightforward way of explaining the decision. The identification of this pattern is used to implement a P300-based speller device, which can serve as an alternative communication channel for persons affected by amyotrophic lateral sclerosis (ALS). The method is validated through a brain-computer interface simulation on a public dataset from ALS patients. Letter identification rates from the speller show that the method can identify the P300 signature across the set of eight patients. The proposed approach achieves performance similar to other state-of-the-art proposals while providing clinically relevant explainability (XAI).
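The core idea above, rendering the EEG trace itself as an image that a CNN (or a clinician) can inspect, can be sketched as a simple rasterization. This is an illustrative sketch only, not the authors' exact plotting pipeline; the function name and image height are assumptions:

```python
import numpy as np

def signal_to_plot_image(signal, height=32):
    """Rasterize a 1-D EEG segment into a binary 2-D 'plot' image.

    Each column corresponds to one sample; the pixel set to 1 marks the
    signal's normalized amplitude there, mimicking a line plot.
    """
    sig = np.asarray(signal, dtype=float)
    lo, hi = sig.min(), sig.max()
    if hi == lo:
        # Flat segment: draw a horizontal line through the middle.
        rows = np.full(sig.size, height // 2)
    else:
        # Map amplitude onto row indices 0 .. height-1.
        rows = np.round((sig - lo) / (hi - lo) * (height - 1)).astype(int)
    img = np.zeros((height, sig.size), dtype=np.uint8)
    # Flip vertically so larger amplitudes sit toward the top of the image.
    img[height - 1 - rows, np.arange(sig.size)] = 1
    return img
```

Such an image can then be fed to any standard image classifier, which is what makes the decision visually traceable.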
Affiliation(s)
- Brian Ezequiel Ail
- Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires C1437, Argentina
- Rodrigo Ramele
- Instituto Tecnológico de Buenos Aires (ITBA), Buenos Aires C1437, Argentina
- Juliana Gambini
- Centro de Investigación en Informática Aplicada (CIDIA), Universidad Nacional de Hurlingham (UNAHUR), Hurlingham B1688, Argentina
- CPSI—Universidad Tecnológica Nacional, FRBA, Buenos Aires C1041, Argentina
- Juan Miguel Santos
- Centro de Investigación en Informática Aplicada (CIDIA), Universidad Nacional de Hurlingham (UNAHUR), Hurlingham B1688, Argentina
5
Yuan Z, Zhou Q, Wang B, Zhang Q, Yang Y, Zhao Y, Guo Y, Zhou J, Wang C. PSAEEGNet: pyramid squeeze attention mechanism-based CNN for single-trial EEG classification in RSVP task. Front Hum Neurosci 2024; 18:1385360. PMID: 38756843; PMCID: PMC11097777; DOI: 10.3389/fnhum.2024.1385360.
Abstract
Introduction: Accurate classification of single-trial electroencephalogram (EEG) is crucial for EEG-based target image recognition in rapid serial visual presentation (RSVP) tasks. P300 is an important component of a single-trial EEG for RSVP tasks. However, single-trial EEG is usually characterized by a low signal-to-noise ratio and limited sample sizes. Methods: Given these challenges, it is necessary to optimize existing convolutional neural networks (CNNs) to improve the performance of P300 classification. The proposed CNN model, called PSAEEGNet, integrates standard convolutional layers, pyramid squeeze attention (PSA) modules, and deep convolutional layers. This approach raises the extraction of temporal and spatial features of the P300 to a finer level of granularity. Results: Compared with several existing single-trial EEG classification methods for RSVP tasks, the proposed model shows significantly improved performance. The mean true positive rate for PSAEEGNet is 0.7949, and the mean area under the receiver operating characteristic curve (AUC) is 0.9341 (p < 0.05). Discussion: These results suggest that the proposed model effectively extracts features from both temporal and spatial dimensions of P300, leading to more accurate classification of single-trial EEG during RSVP tasks. This model therefore has the potential to significantly enhance the performance of EEG-based target recognition systems, contributing to the advancement and practical implementation of target recognition in this field.
Affiliation(s)
- Zijian Yuan
- School of Intelligent Medicine and Biotechnology, Guilin Medical University, Guangxi, China
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Qian Zhou
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Baozeng Wang
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Qi Zhang
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Yang Yang
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Yuwei Zhao
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Yong Guo
- School of Intelligent Medicine and Biotechnology, Guilin Medical University, Guangxi, China
- Jin Zhou
- Beijing Institute of Basic Medical Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Changyong Wang
- Beijing Institute of Basic Medical Sciences, Beijing, China
6
Deng H, Li M, Li J, Guo M, Xu G. A robust multi-branch multi-attention-mechanism EEGNet for motor imagery BCI decoding. J Neurosci Methods 2024; 405:110108. PMID: 38458260; DOI: 10.1016/j.jneumeth.2024.110108.
Abstract
BACKGROUND: Motor-Imagery-based Brain-Computer Interface (MI-BCI) is a promising technology to assist communication, movement, and neurological rehabilitation for motor-impaired individuals. Electroencephalography (EEG) decoding techniques using deep learning (DL) possess noteworthy advantages due to automatic feature extraction and end-to-end learning. However, DL-based EEG decoding models tend to show large variations due to the intersubject variability of EEG, which results from inconsistencies in different subjects' optimal hyperparameters. NEW METHODS: This study proposes a multi-branch multi-attention-mechanism EEGNet model (MBMANet) for robust decoding. It applies the multi-branch EEGNet structure to achieve varied feature extraction. Further, the different attention mechanisms introduced in each branch attain diverse adaptive weight adjustments. This combination of multi-branch and multi-attention mechanisms allows multi-level feature fusion that provides robust decoding across subjects. RESULTS: The MBMANet model achieves a four-classification accuracy of 83.18% and a kappa of 0.776 on the BCI Competition IV-2a dataset, outperforming eight other CNN-based decoding models. This consistently satisfactory performance across all nine subjects indicates that the proposed model is robust. CONCLUSIONS: The combination of multi-branch and multi-attention mechanisms empowers DL-based models to adaptively learn different EEG features, which provides a feasible solution for dealing with data variability. It also gives the MBMANet model more accurate decoding of motion intentions at lower training cost, thus improving the MI-BCI's utility and robustness.
Affiliation(s)
- Haodong Deng
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Mengfan Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Jundi Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Miaomiao Guo
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
7
Forenzo D, Zhu H, Shanahan J, Lim J, He B. Continuous Tracking using Deep Learning-based Decoding for Non-invasive Brain-Computer Interface. bioRxiv [Preprint] 2024:2023.10.12.562084. PMID: 37905046; PMCID: PMC10614823; DOI: 10.1101/2023.10.12.562084.
Abstract
Brain-computer interfaces (BCI) using electroencephalography (EEG) provide a non-invasive method for users to interact with external devices without the need for muscle activation. While non-invasive BCIs have the potential to improve the quality of lives of healthy and motor-impaired individuals, they currently have limited applications due to inconsistent performance and low degrees of freedom. In this study, we use deep learning (DL)-based decoders for online Continuous Pursuit (CP), a complex BCI task requiring the user to track an object in two-dimensional space. We developed a labeling system to use CP data for supervised learning, trained DL-based decoders based on two architectures, including a newly proposed adaptation of the PointNet architecture, and evaluated the performance over several online sessions. We rigorously evaluated the DL-based decoders in a total of 28 human participants, and found that the DL-based models improved throughout the sessions as more training data became available and significantly outperformed a traditional BCI decoder by the last session. We also performed additional experiments to test an implementation of transfer learning by pre-training models on data from other subjects, and mid-session training to reduce inter-session variability. The results from these experiments showed that pre-training did not significantly improve performance, but updating the models mid-session may have some benefit. Overall, these findings support the use of DL-based decoders for improving BCI performance in complex tasks like CP, which can expand the potential applications of BCI devices and help improve the quality of lives of healthy and motor-impaired individuals.
Significance Statement: Brain-computer interfaces (BCI) have the potential to replace or restore motor functions for patients and can benefit the general population by providing a direct link between the brain and robotics or other devices. In this work, we developed a paradigm using deep learning (DL)-based decoders for continuous control of a BCI system and demonstrated its capabilities through extensive online experiments. We also investigated how DL performance is affected by varying amounts of training data and collected more than 150 hours of BCI data that can be used to train new models. The results of this study provide valuable information for developing future DL-based BCI decoders, which can improve performance and help bring BCIs closer to practical applications and widespread use.
8
Forenzo D, Zhu H, Shanahan J, Lim J, He B. Continuous tracking using deep learning-based decoding for noninvasive brain-computer interface. PNAS Nexus 2024; 3:pgae145. PMID: 38689706; PMCID: PMC11060102; DOI: 10.1093/pnasnexus/pgae145.
Abstract
Brain-computer interfaces (BCI) using electroencephalography provide a noninvasive method for users to interact with external devices without the need for muscle activation. While noninvasive BCIs have the potential to improve the quality of lives of healthy and motor-impaired individuals, they currently have limited applications due to inconsistent performance and low degrees of freedom. In this study, we use deep learning (DL)-based decoders for online continuous pursuit (CP), a complex BCI task requiring the user to track an object in 2D space. We developed a labeling system to use CP data for supervised learning, trained DL-based decoders based on two architectures, including a newly proposed adaptation of the PointNet architecture, and evaluated the performance over several online sessions. We rigorously evaluated the DL-based decoders in a total of 28 human participants, and found that the DL-based models improved throughout the sessions as more training data became available and significantly outperformed a traditional BCI decoder by the last session. We also performed additional experiments to test an implementation of transfer learning by pretraining models on data from other subjects, and midsession training to reduce intersession variability. The results from these experiments showed that pretraining did not significantly improve performance, but updating the models midsession may have some benefit. Overall, these findings support the use of DL-based decoders for improving BCI performance in complex tasks like CP, which can expand the potential applications of BCI devices and help to improve the quality of lives of healthy and motor-impaired individuals.
Affiliation(s)
- Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Hao Zhu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jenn Shanahan
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jaehyun Lim
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
9
Yu S, Wang Z, Wang F, Chen K, Yao D, Xu P, Zhang Y, Wang H, Zhang T. Multiclass classification of motor imagery tasks based on multi-branch convolutional neural network and temporal convolutional network model. Cereb Cortex 2024; 34:bhad511. PMID: 38183186; DOI: 10.1093/cercor/bhad511.
Abstract
Motor imagery (MI) is a cognitive process wherein an individual mentally rehearses a specific movement without physically executing it. Recently, MI-based brain-computer interface (BCI) has attracted widespread attention. However, accurate decoding of MI and understanding of neural mechanisms still face huge challenges. These seriously hinder the clinical application and development of BCI systems based on MI. Thus, it is necessary to develop new methods to decode MI tasks. In this work, we propose a multi-branch convolutional neural network (MBCNN) with a temporal convolutional network (TCN), an end-to-end deep learning framework to decode multi-class MI tasks. We first used MBCNN to capture the MI electroencephalography signals information on temporal and spectral domains through different convolutional kernels. Then, we introduce TCN to extract more discriminative features. The within-subject cross-session strategy is used to validate the classification performance on the dataset of BCI Competition IV-2a. The results showed that we achieved 75.08% average accuracy for 4-class MI task classification, outperforming several state-of-the-art approaches. The proposed MBCNN-TCN-Net framework successfully captures discriminative features and decodes MI tasks effectively, improving the performance of MI-BCIs. Our findings could provide significant potential for improving the clinical application and development of MI-based BCI systems.
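The temporal-convolution building block that gives a TCN its long, strictly causal receptive field can be illustrated with a minimal sketch. The function name, kernel, and dilation values here are illustrative, not the paper's configuration:

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation=1):
    """Causal dilated 1-D convolution, the core operation of a TCN layer.

    Output y[t] depends only on x[t], x[t-d], ..., x[t-(k-1)*d], so no
    future samples leak into the prediction; stacking layers with growing
    dilation widens the receptive field exponentially.
    """
    x = np.asarray(x, dtype=float)
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad to stay causal
    return np.array([sum(kernel[i] * xp[t + pad - i * dilation]
                         for i in range(k))
                     for t in range(x.size)])
```

For example, with kernel `[1, 1]` and `dilation=2`, each output sums the current sample and the sample two steps earlier, never a future one.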
Affiliation(s)
- Shiqi Yu
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Zedong Wang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Fei Wang
- School of Computer and Software, Chengdu Jincheng College, Chengdu 610097, China
- Kai Chen
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Dezhong Yao
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Peng Xu
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yong Zhang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Hesong Wang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Tao Zhang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
10
Wang L, Liu Y, Li Y, Chen R, Liu X, Fu L, Wang Y. Navigation Learning Assessment Using EEG-Based Multi-Time Scale Spatiotemporal Compound Model. IEEE Trans Neural Syst Rehabil Eng 2024; 32:537-547. PMID: 38145526; DOI: 10.1109/tnsre.2023.3346766.
Abstract
This study presents a novel method to assess learning effectiveness using an Electroencephalography (EEG)-based deep learning model. It is difficult to objectively assess how effectively professional courses cultivate students' abilities using questionnaires or other conventional assessment methods. Research on the brain has shown that innovation ability can be reflected in cognitive ability, which can in turn be embodied in EEG signal features. Three navigation tasks of increasing cognitive difficulty were designed, and a total of 41 subjects participated in the experiment. For the classification and tracking of the subjects' EEG signals, a convolutional neural network (CNN)-based Multi-Time Scale Spatiotemporal Compound Model (MTSC) is proposed in this paper to extract and classify the features of the subjects' EEG signals. Furthermore, a spiking neural network (SNN)-based NeuCube is used to assess learning effectiveness and demonstrate cognitive processes, as NeuCube is well suited to displaying the spatiotemporal differences between spikes emitted by neurons. The results of the classification experiment show that the cognitive training traces of different students solving three navigational problems can be effectively distinguished. More importantly, new information about navigation is revealed through the analysis of feature-vector visualization and model dynamics. This work provides a foundation for future research on cognitive navigation and the training of students' navigational skills.
11
Forenzo D, Liu Y, Kim J, Ding Y, Yoon T, He B. Integrating Simultaneous Motor Imagery and Spatial Attention for EEG-BCI Control. IEEE Trans Biomed Eng 2024; 71:282-294. PMID: 37494151; PMCID: PMC10803074; DOI: 10.1109/tbme.2023.3298957.
Abstract
OBJECTIVE: EEG-based brain-computer interfaces (BCI) are non-invasive approaches for replacing or restoring motor functions in impaired patients, and for direct brain-to-device communication in the general population. Motor imagery (MI) is one of the most used BCI paradigms, but its performance varies across individuals, and certain users require substantial training to develop control. In this study, we propose to integrate an MI paradigm simultaneously with a recently proposed Overt Spatial Attention (OSA) paradigm to accomplish BCI control. METHODS: We evaluated a cohort of 25 human subjects' ability to control a virtual cursor in one and two dimensions over 5 BCI sessions. The subjects used 5 different BCI paradigms: MI alone, OSA alone, MI and OSA simultaneously toward the same target (MI+OSA), and MI for one axis while OSA controls the other (MI/OSA and OSA/MI). RESULTS: Our results show that MI+OSA reached the highest average online performance in 2D tasks at 49% Percent Valid Correct (PVC) and statistically outperforms both MI alone (42%) and OSA alone (45%). MI+OSA had a similar performance to each subject's best individual method between MI alone and OSA alone (50%), and 9 subjects reached their highest average BCI performance using MI+OSA. CONCLUSION: Integrating MI and OSA leads to improved performance over both individual methods at the group level and is the best BCI paradigm option for some subjects. SIGNIFICANCE: This work proposes a new BCI control paradigm that integrates two existing paradigms and demonstrates its value by showing that it can improve users' BCI performance.
Affiliation(s)
- Dylan Forenzo
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Yixuan Liu
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Jeehyun Kim
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Yidan Ding
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Taehyung Yoon
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
- Bin He
- Department of Biomedical Engineering at Carnegie Mellon University, Pittsburgh, PA
12
Wang L, Li M, Zhang L. Recognize enhanced temporal-spatial-spectral features with a parallel multi-branch CNN and GRU. Med Biol Eng Comput 2023. PMID: 37294411; DOI: 10.1007/s11517-023-02857-4.
Abstract
Deep learning has been applied to the recognition of motor imagery electroencephalograms (MI-EEG) in brain-computer interfaces, and performance depends on both the data representation and the neural network structure. MI-EEG is complex: it is non-stationary, carries task-specific rhythms, and is unevenly distributed across channels, and existing recognition methods struggle to fuse and enhance its multidimensional feature information simultaneously. In this paper, a novel channel importance (NCI) measure based on time-frequency analysis is proposed to develop an image sequence generation method (NCI-ISG) that enhances the integrity of the data representation while highlighting the unequal contributions of different channels. Each MI-EEG electrode is converted to a time-frequency spectrum using the short-time Fourier transform; the 8-30 Hz portion is combined with a random forest algorithm to compute the NCI, and is further divided into three sub-images covering the α (8-13 Hz), β1 (13-21 Hz), and β2 (21-30 Hz) bands. Their spectral powers are weighted by the NCI and interpolated onto 2-dimensional electrode coordinates, producing three main sub-band image sequences. A parallel multi-branch convolutional neural network and gated recurrent unit (PMBCG) network is then designed to successively extract and identify the spatial-spectral and temporal features of the image sequences. On two public four-class MI-EEG datasets, the proposed classification method achieves average accuracies of 98.26% and 80.62% in 10-fold cross-validation, and its statistical performance is evaluated with multiple indexes, such as the Kappa value, confusion matrix, and ROC curve. Extensive experimental results show that NCI-ISG + PMBCG yields strong performance on MI-EEG classification compared to state-of-the-art methods.
The proposed NCI-ISG enhances the feature representation across the time, frequency, and space domains and matches well with PMBCG, improving the recognition accuracy of MI tasks and demonstrating reliable, distinguishable performance.
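The spectral step of the NCI-ISG pipeline described above can be sketched as follows: each channel is passed through a short-time Fourier transform and its mean power is taken in the α (8-13 Hz), β1 (13-21 Hz), and β2 (21-30 Hz) sub-bands. This is only a sketch on synthetic data; the random-forest channel-importance weighting and the 2-D electrode interpolation are omitted, and all names and parameters are illustrative:

```python
import numpy as np
from scipy.signal import stft

fs = 250  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
eeg = rng.standard_normal((16, fs * 4))  # 16 channels x 4 s of synthetic EEG

bands = {"alpha": (8, 13), "beta1": (13, 21), "beta2": (21, 30)}

def band_powers(signals, fs, bands):
    """Mean spectral power per channel in each sub-band, via STFT.

    Returns a dict mapping band name -> (n_channels,) power vector.
    """
    f, _, Z = stft(signals, fs=fs, nperseg=fs)  # Z: (channels, freqs, frames)
    power = np.abs(Z) ** 2
    out = {}
    for name, (lo, hi) in bands.items():
        mask = (f >= lo) & (f < hi)
        out[name] = power[:, mask, :].mean(axis=(1, 2))
    return out

bp = band_powers(eeg, fs, bands)  # e.g. bp["alpha"] has one value per channel
```

In the full method these per-band powers would then be weighted by the NCI scores and interpolated onto a 2-D head map to form the image sequences fed to the network.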
Collapse
Affiliation(s)
- Linlin Wang
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
| | - Mingai Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China.
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing, 100124, China.
- Engineering Research Center of Digital Community, Ministry of Education, Beijing, 100124, China.
| | - Liyuan Zhang
- Faculty of Environment and Life, Beijing University of Technology, Beijing, 100124, China
| |
Collapse
|
13
|
Nam H, Kim JM, Choi W, Bak S, Kam TE. The effects of layer-wise relevance propagation-based feature selection for EEG classification: a comparative study on multiple datasets. Front Hum Neurosci 2023; 17:1205881. [PMID: 37342822 PMCID: PMC10277566 DOI: 10.3389/fnhum.2023.1205881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 05/17/2023] [Indexed: 06/23/2023] Open
Abstract
Introduction The brain-computer interface (BCI) allows individuals to control external devices using their neural signals. One popular BCI paradigm is motor imagery (MI), which involves imagining movements to induce neural signals that can be decoded to control devices according to the user's intention. Electroencephalography (EEG) is frequently used for acquiring neural signals from the brain in MI-BCI due to its non-invasiveness and high temporal resolution. However, EEG signals can be affected by noise and artifacts, and patterns of EEG signals vary across subjects. Therefore, selecting the most informative features is one of the essential processes for enhancing classification performance in MI-BCI. Methods In this study, we design a layer-wise relevance propagation (LRP)-based feature selection method which can be easily integrated into deep learning (DL)-based models. We assess its effectiveness for reliable class-discriminative EEG feature selection on two different publicly available EEG datasets with various DL-based backbone models in the subject-dependent scenario. Results and discussion The results show that LRP-based feature selection enhances performance for MI classification on both datasets for all DL-based backbone models. Based on our analysis, we believe that the approach can be broadened to other research domains.
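The ε-rule of layer-wise relevance propagation, which the feature selection above builds on, redistributes a layer's output relevance back onto its inputs in proportion to each input's contribution. A generic single-linear-layer sketch (not the authors' code; the feature count and the number of selected features are illustrative):

```python
import numpy as np

def lrp_linear(x, W, b, R_out, eps=1e-6):
    """epsilon-LRP backward pass through one linear layer y = x @ W + b.

    Each input i receives relevance proportional to its contribution
    z_ij = x_i * W_ij to every output j, stabilized by a small eps term.
    """
    z = x[:, None] * W                                # (in, out) contributions
    s = z.sum(axis=0) + b                             # pre-activations
    denom = s + eps * np.sign(s)                      # stabilized denominator
    return (z * (R_out / denom)).sum(axis=1)          # relevance per input

rng = np.random.default_rng(0)
x = rng.standard_normal(8)            # 8 input features (e.g. channels)
W = rng.standard_normal((8, 3))
b = np.zeros(3)
y = x @ W + b
R_in = lrp_linear(x, W, b, y)         # relevance attributed to each input
keep = np.argsort(np.abs(R_in))[-4:]  # select the 4 most relevant features
```

The ε-rule approximately conserves relevance (the input relevances sum to the output relevance), which is what makes the resulting scores usable as a feature-selection criterion.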
Collapse
Affiliation(s)
| | | | | | | | - Tae-Eui Kam
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
| |
Collapse
|
14
|
Forenzo D, Liu Y, Kim J, Ding Y, Yoon T, He B. Integrating simultaneous motor imagery and spatial attention for EEG-BCI control. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.02.20.529307. [PMID: 36865207 PMCID: PMC9980047 DOI: 10.1101/2023.02.20.529307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
OBJECTIVE EEG-based brain-computer interfaces (BCI) are non-invasive approaches for replacing or restoring motor functions in impaired patients and for direct brain-to-device communication in the general population. Motor imagery (MI) is one of the most widely used BCI paradigms, but its performance varies across individuals, and certain users require substantial training to develop control. In this study, we propose to integrate an MI paradigm simultaneously with a recently proposed Overt Spatial Attention (OSA) paradigm to accomplish BCI control. METHODS We evaluated a cohort of 25 human subjects' ability to control a virtual cursor in one and two dimensions over 5 BCI sessions. The subjects used 5 different BCI paradigms: MI alone, OSA alone, MI and OSA simultaneously towards the same target (MI+OSA), and MI for one axis while OSA controls the other (MI/OSA and OSA/MI). RESULTS Our results show that MI+OSA reached the highest average online performance in 2D tasks at 49% Percent Valid Correct (PVC), statistically outperformed MI alone (42%), and was higher, though not significantly so, than OSA alone (45%). MI+OSA performed similarly to each subject's best individual method between MI alone and OSA alone (50%), and 9 subjects reached their highest average BCI performance using MI+OSA. CONCLUSION Integrating MI and OSA leads to improved performance over MI alone at the group level and is the best BCI paradigm option for some subjects. SIGNIFICANCE This work proposes a new BCI control paradigm that integrates two existing paradigms and demonstrates its value by showing that it can improve users' BCI performance.
Collapse
|
15
|
Pham TD. Classification of Motor-Imagery Tasks Using a Large EEG Dataset by Fusing Classifiers Learning on Wavelet-Scattering Features. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1097-1107. [PMID: 37022234 DOI: 10.1109/tnsre.2023.3241241] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Brain-computer or brain-machine interface technology allows humans to control machines using their thoughts via brain signals. In particular, these interfaces can assist people with neurological diseases in speech understanding, or people with physical disabilities in operating devices such as wheelchairs. Motor-imagery tasks play a fundamental role in brain-computer interfaces. This study introduces an approach for classifying motor-imagery tasks in a brain-computer interface environment, which remains a challenge for rehabilitation technology using electroencephalogram sensors. The methods used and developed to address the classification include wavelet time and image scattering networks, fuzzy recurrence plots, support vector machines, and classifier fusion. The rationale for combining the outputs of two classifiers, learned respectively on wavelet-time and wavelet-image scattering features of brain signals, is that they are complementary and can be effectively fused using a novel fuzzy rule-based system. A large-scale, challenging electroencephalogram dataset of motor imagery-based brain-computer interface was used to test the efficacy of the proposed approach. Experimental results from within-session classification show the potential of the new model, which achieves an improvement of 7% in classification accuracy over the best existing classifier using state-of-the-art artificial intelligence (76% versus 69%, respectively). For the cross-session experiment, which imposes a more challenging and practical classification task, the proposed fusion model improves the accuracy by 11% (65% versus 54%). The technical novelty presented herein, and its further exploration, are promising for developing a reliable sensor-based intervention to help people with neurodisability improve their quality of life.
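The fusion step above combines the outputs of two classifiers trained on complementary feature sets. The paper uses a novel fuzzy rule-based system; as a generic stand-in, score-level fusion of the two classifiers' class probabilities looks like the sketch below (the weights and probability values are illustrative, not from the paper):

```python
import numpy as np

def fuse_scores(p1, p2, w1=0.5, w2=0.5):
    """Score-level fusion of two classifiers' class-probability matrices.

    Each row is one trial's probability distribution over classes; the
    fused distribution is a renormalized weighted sum of the two inputs.
    """
    fused = w1 * p1 + w2 * p2
    return fused / fused.sum(axis=1, keepdims=True)

p_time  = np.array([[0.7, 0.3], [0.4, 0.6]])  # classifier on wavelet-time features
p_image = np.array([[0.6, 0.4], [0.2, 0.8]])  # classifier on wavelet-image features
fused = fuse_scores(p_time, p_image)
labels = fused.argmax(axis=1)                 # fused class decisions
```

A rule-based fuzzy fusion replaces the fixed weights with membership functions over the scores, but the overall shape of the step (two probability matrices in, one decision per trial out) is the same.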
Collapse
|
16
|
Hossain KM, Islam MA, Hossain S, Nijholt A, Ahad MAR. Status of deep learning for EEG-based brain-computer interface applications. Front Comput Neurosci 2023; 16:1006763. [PMID: 36726556 PMCID: PMC9885375 DOI: 10.3389/fncom.2022.1006763] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Accepted: 12/23/2022] [Indexed: 01/18/2023] Open
Abstract
In the previous decade, breakthroughs in central nervous system bioinformatics and computational innovation have prompted significant developments in brain-computer interfaces (BCI), elevating the field to the forefront of applied science and research. BCI revitalization enables neurorehabilitation strategies for physically disabled patients (e.g., patients with hemiplegia) and patients with brain injury (e.g., patients with stroke). Different methods have been developed for electroencephalogram (EEG)-based BCI applications. Due to the lack of large EEG datasets, methods using matrix factorization and machine learning were long the most popular. This has changed recently, however, as a number of large, high-quality EEG datasets have been made public and used in deep learning-based BCI applications. Deep learning is demonstrating great promise for solving complex, relevant tasks such as motor imagery classification, epileptic seizure detection, and driver attention recognition using EEG data, and deep learning-based approaches are now an active area of BCI research. There is also great demand for a study that focuses solely on deep learning models for EEG-based BCI applications. This study therefore reviews recently proposed deep learning-based approaches in BCI using EEG data (from 2017 to 2022). Their main differences, such as merits, drawbacks, and applications, are introduced. Furthermore, we point out current challenges and directions for future studies. We argue that this review will help the EEG research community in their future research.
Collapse
Affiliation(s)
- Khondoker Murad Hossain
- Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, United States
| | - Md. Ariful Islam
- Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh
| | | | - Anton Nijholt
- Human Media Interaction, University of Twente, Enschede, Netherlands
| | - Md Atiqur Rahman Ahad
- Department of Computer Science and Digital Technology, University of East London, London, United Kingdom
| |
Collapse
|
17
|
Kim J, Jiang X, Forenzo D, Liu Y, Anderson N, Greco CM, He B. Immediate effects of short-term meditation on sensorimotor rhythm-based brain-computer interface performance. Front Hum Neurosci 2022; 16:1019279. [PMID: 36606248 PMCID: PMC9807599 DOI: 10.3389/fnhum.2022.1019279] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Accepted: 11/25/2022] [Indexed: 12/24/2022] Open
Abstract
Introduction Meditation has been shown to enhance a user's ability to control a sensorimotor rhythm (SMR)-based brain-computer interface (BCI). For example, prior work has demonstrated that long-term meditation practice and an 8-week mindfulness-based stress reduction (MBSR) training have positive behavioral and neurophysiological effects on SMR-based BCI. However, the effects of short-term meditation practice on SMR-based BCI control are still unknown. Methods In this study, we investigated the immediate effects of a short, 20-minute meditation on SMR-based BCI control. Thirty-seven subjects performed several runs of one-dimensional cursor control tasks before and after two types of 20-minute interventions: a guided mindfulness meditation exercise and a recording of a narrator reading a journal article. Results We found no significant change in BCI performance or in the electroencephalography (EEG) BCI control signal following either 20-minute intervention. Moreover, the change in BCI performance between the meditation group and the control group was not significant. Discussion The present results suggest that a longer period of meditation is needed to improve SMR-based BCI control.
Collapse
Affiliation(s)
- Jeehyun Kim
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Xiyuan Jiang
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Dylan Forenzo
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Yixuan Liu
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Nancy Anderson
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| | - Carol M. Greco
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, United States
| | - Bin He
- Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
| |
Collapse
|
18
|
Exploiting Asymmetric EEG Signals with EFD in Deep Learning Domain for Robust BCI. Symmetry (Basel) 2022. [DOI: 10.3390/sym14122677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Motor imagery (MI) is a dominant paradigm in brain-computer interface (BCI) design, translating imagined limb motion into digital commands for neural rehabilitation and automation tasks. While many researchers have explored solutions for classifying asymmetric MI EEG signals, a robust, low-complexity, subject-invariant system remains out of reach. We therefore propose an MI EEG classification pipeline in the deep-learning domain that aims to curtail these limitations. Our method combines multiscale principal component analysis (MSPCA) and a novel empirical Fourier decomposition (EFD) signal resolution method with the Hilbert transform (HT), followed by four pre-trained convolutional neural networks for automatic feature estimation and classification. The proposed architecture is validated on three binary-class datasets (IVa and IVb from BCI Competition III, and GigaDB from the GigaScience repository) and one three-class dataset (V from BCI Competition III). The average 10-fold results yield 98.63%, 96.33%, and 89.96%, the highest classification accuracies for the aforementioned datasets, using the AlexNet CNN model in a subject-dependent context, while in the subject-independent case the highest success score was 97.69%, outperforming contemporary studies by a fair margin. Further experiments, such as varying the resolution scale of EFD, comparison with other signal decomposition (SD) methods, and deep feature extraction with machine-learning classifiers, also support the proposed EEG signal processing pipeline. Overall, the findings imply that pre-trained models are reliable for identifying EEG signals due to their capacity to preserve the time-frequency structure of EEG signals, their non-complex architecture, and their potential for robust classification performance.
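The Hilbert-transform step in the pipeline above extracts the instantaneous amplitude (and phase) of each decomposed signal component. A sketch on a synthetic 10 Hz mu-rhythm burst, with the EFD and MSPCA stages omitted; the sampling rate and modulation parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 250                       # sampling rate in Hz (illustrative)
t = np.arange(0, 2, 1 / fs)
# synthetic 10 Hz oscillation with slow 1 Hz amplitude modulation,
# standing in for a single band-limited component from EFD
component = (1 + 0.5 * np.sin(2 * np.pi * 1 * t)) * np.sin(2 * np.pi * 10 * t)

analytic = hilbert(component)  # analytic signal via the Hilbert transform
envelope = np.abs(analytic)    # instantaneous amplitude (tracks the modulation)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)  # ~10 Hz away from edges
```

For a band-limited component like this, the envelope and instantaneous frequency form a compact time-frequency description, which is what makes HT a natural companion to a decomposition such as EFD before CNN-based feature extraction.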
Collapse
|