1. Chai C, Yang X, Zheng Y, Bin Heyat MB, Li Y, Yang D, Chen YH, Sawan M. Multimodal fusion of magnetoencephalography and photoacoustic imaging based on optical pump: Trends for wearable and noninvasive brain-computer interface. Biosens Bioelectron 2025; 278:117321. [PMID: 40049046] [DOI: 10.1016/j.bios.2025.117321]
Abstract
Wearable noninvasive brain-computer interface (BCI) technologies, such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), have made significant progress since their inception. However, these technologies have not achieved revolutionary advances, largely because of their inherently low signal-to-noise ratio and limited penetration depth. In recent years, the application of quantum-theory-based optically pumped (OP) technologies, particularly optically pumped magnetometers (OPMs) for magnetoencephalography (MEG) and photoacoustic imaging (PAI) utilizing OP pulsed laser sources, has opened new avenues for noninvasive BCIs. These advanced technologies have garnered considerable attention owing to their high sensitivity in tracking neural activity and detecting blood oxygen saturation. This paper represents the first attempt to discuss and compare technologies grounded in OP theory by examining the technical advantages of OPM-MEG and PAI over traditional EEG and fNIRS. Furthermore, the paper investigates the theoretical and structural feasibility of hardware reuse between OPM-MEG and PAI applications.
Affiliation(s)
- Chengpeng Chai, Xi Yang, Yuqiao Zheng, Md Belal Bin Heyat, Yun-Hsuan Chen, Mohamad Sawan: CenBRAIN Neurotech, School of Engineering, Westlake University, 600 Dunyu Road, Xihu District, Hangzhou, Zhejiang, 310030, China; Institute of Advanced Technology, Westlake Institute for Advanced Study, 18 Shilongshan Street, Xihu District, Hangzhou, Zhejiang, 310024, China
- Yifan Li: Faculty of Engineering, University of Bristol, Bristol, BS8 1QU, United Kingdom
- Dingbo Yang: Department of Neurosurgery, Affiliated Hangzhou First People's Hospital, Westlake University School of Medicine, Hangzhou, 310000, China; Department of Neurosurgery, Nanjing Medical University Affiliated Hangzhou Hospital, Hangzhou First People's Hospital, Hangzhou, 310000, China
2. Akhter J, Nazeer H, Naseer N, Naeem R, Kallu KD, Lee J, Ko SY. Improved performance of fNIRS-BCI by stacking of deep learning-derived frequency domain features. PLoS One 2025; 20:e0314447. [PMID: 40245060] [PMCID: PMC12005509] [DOI: 10.1371/journal.pone.0314447]
Abstract
Functional near-infrared spectroscopy-based brain-computer interface (fNIRS-BCI) systems recognize patterns in brain signals and generate control commands, thereby enabling individuals with motor disabilities to regain autonomy. In this study, hand-gripping data were acquired with an fNIRS neuroimaging system, preprocessing was performed in nirsLAB, and features were extracted with deep learning (DL) algorithms. Two methods, termed stack and FFT, are proposed for feature extraction and classification. Convolutional neural networks (CNN), long short-term memory (LSTM), and bidirectional LSTM (Bi-LSTM) networks are employed to extract features. The stack method classifies these features with a stacked model, whereas the FFT method first enhances the features by applying a fast Fourier transform and then classifies them with the stacked model. The proposed methods are applied to fNIRS signals from twenty participants performing a two-class hand-gripping motor activity. Their classification performance is compared with conventional CNN, LSTM, and Bi-LSTM algorithms and with one another. The proposed FFT and stack methods yield classification accuracies of 90.11% and 87.00%, respectively, which are significantly higher than those achieved by the conventional CNN (85.16%), LSTM (79.46%), and Bi-LSTM (81.88%) algorithms. The results show that the proposed stack and FFT methods can be used effectively for two- and three-class classification problems in fNIRS-BCI applications.
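To make the general idea concrete, the following is a minimal sketch of frequency-domain features (FFT magnitudes) fed to a stacked ensemble classifier. The base learners, meta-learner, window length, and synthetic data are illustrative assumptions; they do not reproduce the authors' DL-derived features or exact configuration.

```python
# Sketch: FFT-magnitude features + a stacking classifier (assumed components).
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_time = rng.standard_normal((200, 64))       # 200 trials x 64 time samples (synthetic)
y = rng.integers(0, 2, size=200)              # two-class labels (e.g., hand gripping vs rest)

X_freq = np.abs(np.fft.rfft(X_time, axis=1))  # FFT magnitude as frequency-domain features

stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)), ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(),
)
print("CV accuracy:", cross_val_score(stack, X_freq, y, cv=5).mean())
```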
Affiliation(s)
- Jamila Akhter, Hammad Nazeer, Noman Naseer, Rehan Naeem: Department of Mechatronics and Biomedical Engineering, Air University, Islamabad, Pakistan
- Karam Dad Kallu, Jiye Lee, Seong Young Ko: MeRIC-Lab (Medical Robotics & Intelligent Control Laboratory), School of Mechanical Engineering, Chonnam National University, Gwangju, South Korea
3. Nicora G, Pe S, Santangelo G, Billeci L, Aprile IG, Germanotta M, Bellazzi R, Parimbelli E, Quaglini S. Systematic review of AI/ML applications in multi-domain robotic rehabilitation: trends, gaps, and future directions. J Neuroeng Rehabil 2025; 22:79. [PMID: 40205472] [PMCID: PMC11984262] [DOI: 10.1186/s12984-025-01605-z]
Abstract
Robotic technology is expected to transform rehabilitation settings by providing precise, repetitive, and task-specific interventions, thereby potentially improving patients' clinical outcomes. Artificial intelligence (AI) and machine learning (ML) have been widely applied in different areas to support robotic rehabilitation, from controlling robot movements to real-time patient assessment. To provide an overview of the current landscape and the impact of AI/ML use in robotic rehabilitation, we performed a systematic review focusing on the use of AI and robotics in rehabilitation from a broad perspective, encompassing different pathologies and body regions and considering both motor and neurocognitive rehabilitation. We searched the Scopus and IEEE Xplore databases, focusing on studies involving human participants. After article retrieval, a tagging phase was carried out to devise a comprehensive and easily interpretable taxonomy: its categories include the aim of the AI/ML within the rehabilitation system, the type of algorithms used, and the location of robots and sensors. The 201 selected articles span multiple domains and diverse aims, such as movement classification, trajectory prediction, and patient evaluation, demonstrating the potential of ML to revolutionize personalized therapy and improve patient engagement. ML is reported as highly effective in predicting movement intentions, assessing clinical outcomes, and detecting compensatory movements, providing insights into the future of personalized rehabilitation interventions. Our analysis also reveals pitfalls in the current use of AI/ML in this area, such as potential explainability issues and poor generalization ability when these systems are applied in real-world settings.
Grants
- PNC0000007, Ministero dell'Istruzione, dell'Università e della Ricerca
Affiliation(s)
- Giovanna Nicora, Samuele Pe, Gabriele Santangelo, Riccardo Bellazzi, Enea Parimbelli, Silvana Quaglini: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Lucia Billeci: Institute of Clinical Physiology, National Research Council of Italy (CNR-IFC), Pisa, Italy
- Irene Giovanna Aprile, Marco Germanotta: Neuromotor Rehabilitation Department, IRCCS Fondazione Don Carlo Gnocchi ONLUS, Florence, Italy
4. Arpaia P, Esposito A, Galasso E, Galdieri F, Natalizio A. A wearable brain-computer interface to play an endless runner game by self-paced motor imagery. J Neural Eng 2025; 22:026032. [PMID: 40101362] [DOI: 10.1088/1741-2552/adc205]
Abstract
Objective. A wearable brain-computer interface is proposed and validated experimentally for the real-time control of an endless runner game by self-paced motor imagery (MI). Approach. Electroencephalographic signals were recorded via eight wet electrodes. The processing pipeline involved a filter-bank common spatial pattern approach and the combination of three binary classifiers exploiting linear discriminant analysis. This enabled discrimination between imagining left-hand movement, right-hand movement, and no movement. Each mental task corresponded to a horizontal motion of the avatar within the game. Twenty-three healthy subjects participated in the experiments, and their data are made publicly available. A custom metric was proposed to assess avatar control performance during the gaming phase. The game consisted of two levels, and after each, participants completed a questionnaire to self-assess their engagement and gaming experience. Main results. The mean classification accuracies were 73%, 73%, and 67% for left-rest, right-rest, and left-right discrimination, respectively. In the gaming phase, subjects with higher accuracies for the left-rest and right-rest pairs exhibited higher performance in terms of the custom metric. The correlation between offline and real-time performance was investigated. Left-right MI did not correlate with gaming-phase performance because of the poor mean accuracy of the calibration. Finally, the engagement questionnaires revealed that levels 1 and 2 were not perceived as frustrating, despite the increasing difficulty. Significance. The work contributes to the development of wearable and self-paced interfaces for real-time control. These enhance user experience by guaranteeing more natural interaction than synchronous neural interfaces. Moving beyond benchmark datasets, the work paves the way for future applications on mobile devices for everyday use.
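To make the decoding pipeline concrete, here is a minimal sketch of a filter-bank CSP plus LDA classifier of the kind described above (a single binary discrimination, e.g. left hand versus rest, is shown). The band edges, number of CSP component pairs, and the synthetic data are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch: filter-bank CSP feature extraction followed by one binary LDA classifier.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250
X = rng.standard_normal((80, 8, 2 * fs))   # 80 trials x 8 channels x 2 s (synthetic EEG)
y = rng.integers(0, 2, size=80)            # 0 = rest, 1 = left-hand MI

def bandpass(X, lo, hi, fs):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)

def csp_filters(Xb, y, n_pairs=2):
    # Average class covariances, then solve the generalized eigenvalue problem.
    covs = [np.mean([np.cov(t) for t in Xb[y == c]], axis=0) for c in (0, 1)]
    w, V = eigh(covs[1], covs[0] + covs[1])
    order = np.argsort(w)
    return V[:, np.r_[order[:n_pairs], order[-n_pairs:]]]

def log_var(Xb, W):
    Z = np.stack([W.T @ t for t in Xb])    # trials x components x samples
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

feats = []
for lo, hi in [(8, 12), (12, 16), (16, 20), (20, 24)]:   # simple filter bank
    Xb = bandpass(X, lo, hi, fs)
    W = csp_filters(Xb, y)
    feats.append(log_var(Xb, W))
F = np.hstack(feats)

clf = LinearDiscriminantAnalysis().fit(F, y)   # one binary LDA; repeat per class pair
print("training accuracy:", clf.score(F, y))
```

A self-paced system would combine three such binary classifiers (left-rest, right-rest, left-right) and map their joint output to the avatar commands.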
Affiliation(s)
- Pasquale Arpaia: Augmented Reality for Health Monitoring Laboratory (ARHeMLab), DIETI, University of Naples Federico II, Naples, Italy; Department of Electrical Engineering and Information Technology (DIETI), Università degli Studi di Napoli Federico II, Naples, Italy; Centro Interdipartimentale di Ricerca in Management Sanitario e Innovazione in Sanità (CIRMIS), Università degli Studi di Napoli Federico II, Naples, Italy
- Antonio Esposito: Augmented Reality for Health Monitoring Laboratory (ARHeMLab), DIETI, University of Naples Federico II, Naples, Italy; Department of Electrical Engineering and Information Technology (DIETI), Università degli Studi di Napoli Federico II, Naples, Italy
- Enza Galasso: Augmented Reality for Health Monitoring Laboratory (ARHeMLab), DIETI, University of Naples Federico II, Naples, Italy; Department of Chemical, Materials and Industrial Production Engineering (DICMaPI), Università degli Studi di Napoli Federico II, Naples, Italy
- Fortuna Galdieri: Augmented Reality for Health Monitoring Laboratory (ARHeMLab), DIETI, University of Naples Federico II, Naples, Italy; Department of Electrical Engineering and Information Technology (DIETI), Università degli Studi di Napoli Federico II, Naples, Italy
- Angela Natalizio: Augmented Reality for Health Monitoring Laboratory (ARHeMLab), DIETI, University of Naples Federico II, Naples, Italy; Department of Electronics and Telecommunications (DET), Polytechnic of Turin, Turin, Italy
5. Blanco-Diaz CF, Serafini ERDS, Bastos-Filho T, Dantas AFODA, Santo CCDE, Delisle-Rodriguez D. A Gait Imagery-Based Brain-Computer Interface With Visual Feedback for Spinal Cord Injury Rehabilitation on Lokomat. IEEE Trans Biomed Eng 2025; 72:102-111. [PMID: 39110553] [DOI: 10.1109/tbme.2024.3440036]
Abstract
OBJECTIVE: Motor imagery (MI)-based brain-computer interfaces (BCIs) have been proposed for the rehabilitation of people with disabilities, and their successful application to restoring motor function in individuals with spinal cord injury (SCI) remains a major challenge. This work proposes an electroencephalography (EEG) gait imagery-based BCI to promote motor recovery on the Lokomat platform, allowing a clinical intervention that acts simultaneously on both central and peripheral nervous mechanisms. METHODS: As a novelty, our BCI system accurately discriminates gait imagery tasks during walking and further provides multi-channel EEG-based visual neurofeedback (VNFB) linked to the 8-12 Hz and 15-20 Hz rhythms around Cz. VNFB is carried out through a cluster-analysis strategy based on Euclidean distance, in which the weighted mean MI feature vector is used as a reference to teach individuals with SCI to modulate their cortical rhythms. RESULTS: The developed BCI reached an average classification accuracy of 74.4%. In addition, feature analysis demonstrated a reduction in cluster variance after several sessions, whereas metrics associated with self-modulation indicated a greater distance between the two classes: passive walking with gait MI and passive walking without MI. CONCLUSION: The results suggest that intervention with a gait MI-based BCI with VNFB may allow individuals to appropriately modulate their rhythms of interest around Cz. SIGNIFICANCE: This work contributes to the development of advanced systems for gait rehabilitation by integrating machine learning and neurofeedback techniques to restore lower-limb function in individuals with SCI.
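As a rough illustration of the distance-based neurofeedback idea, the sketch below compares the current trial's feature vector with a weighted mean MI feature vector used as the reference target and maps the Euclidean distance onto a feedback value. The feature dimensionality, trial weights, and distance-to-feedback mapping are illustrative assumptions, not the authors' exact scheme.

```python
# Sketch: Euclidean-distance visual neurofeedback relative to a weighted mean MI reference.
import numpy as np

def feedback_score(features, reference, scale=1.0):
    """Map distance to the reference MI feature vector onto a 0-1 feedback value."""
    d = np.linalg.norm(features - reference)
    return float(np.exp(-d / scale))            # 1.0 = on target, -> 0 far from target

rng = np.random.default_rng(0)
calib = rng.standard_normal((40, 16))           # calibration MI feature vectors
weights = np.linspace(0.5, 1.0, 40)             # e.g., weight later trials more heavily
reference = np.average(calib, axis=0, weights=weights)

current = rng.standard_normal(16)               # features of the ongoing trial
print("feedback bar level:", feedback_score(current, reference))
```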
6. Blanco-Diaz CF, Guerrero-Mendez CD, de Andrade RM, Badue C, De Souza AF, Delisle-Rodriguez D, Bastos-Filho T. Decoding lower-limb kinematic parameters during pedaling tasks using deep learning approaches and EEG. Med Biol Eng Comput 2024; 62:3763-3779. [PMID: 39028484] [DOI: 10.1007/s11517-024-03147-3]
Abstract
Stroke is a neurological condition that usually results in the loss of voluntary control of body movements, making it difficult for individuals to perform activities of daily living (ADLs). Brain-computer interfaces (BCIs) integrated into robotic systems, such as motorized mini exercise bikes (MMEBs), have been demonstrated to be suitable for restoring gait-related functions. However, kinematic estimation of continuous motion in BCI systems based on electroencephalography (EEG) remains a challenge for the scientific community. This study proposes a comparative analysis of two artificial neural network (ANN)-based decoders for estimating three lower-limb kinematic parameters: the x- and y-axis positions of the ankle and the knee joint angle during pedaling tasks. Long short-term memory (LSTM) was used as a recurrent neural network (RNN), which reached Pearson correlation coefficient (PCC) scores close to 0.58 by reconstructing the kinematic parameters from delta-band EEG features using a 250 ms time window. These estimates were evaluated through kinematic variance analysis, where the proposed algorithm showed promising results for identifying pedaling and rest periods, which could increase the usability of classification tasks. Additionally, negative linear correlations were found between pedaling speed and decoder performance, indicating that kinematic parameters at slower speeds may be easier to estimate. The results allow the conclusion that deep learning (DL)-based methods are feasible for estimating lower-limb kinematic parameters during pedaling tasks using EEG signals. This study opens new possibilities for implementing more robust controllers for MMEBs and BCIs based on continuous decoding, which may help maximize the degrees of freedom and personalize rehabilitation.
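The following is a minimal sketch of an LSTM regressor of the kind described above, mapping windowed EEG features to three kinematic outputs (ankle x, ankle y, knee angle). The layer sizes, feature dimensionality, and synthetic data are illustrative assumptions, not the authors' architecture.

```python
# Sketch: LSTM decoder regressing lower-limb kinematics from windowed EEG features.
import torch
import torch.nn as nn

class KinematicsLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_outputs=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, x):                      # x: batch x time x features
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # predict kinematics at the window end

model = KinematicsLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 63, 8)                     # 32 windows of ~250 ms delta-band features (synthetic)
y = torch.randn(32, 3)                         # targets: ankle x, ankle y, knee angle
for _ in range(5):                             # a few synthetic training steps
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```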
Affiliation(s)
- Claudine Badue: Department of Informatics, Federal University of Espirito Santo, Vitoria, Brazil
- Denis Delisle-Rodriguez: Edmond and Lily Safra International Institute of Neurosciences, Santos Dumont Institute, Macaiba, RN, Brazil
- Teodiano Bastos-Filho: Postgraduate Program in Electrical Engineering, Federal University of Espirito Santo, Vitoria, Brazil
7. Li LL, Cao GZ, Zhang YP, Li WC, Cui F. MACNet: A Multidimensional Attention-Based Convolutional Neural Network for Lower-Limb Motor Imagery Classification. Sensors (Basel) 2024; 24:7611. [PMID: 39686148] [DOI: 10.3390/s24237611]
Abstract
Decoding lower-limb motor imagery (MI) is highly important in brain-computer interfaces (BCIs) and rehabilitation engineering. However, it is challenging to classify lower-limb MI from electroencephalogram (EEG) signals, because lower-limb motions (LLMs), including imagined ones, have closely overlapping physiological representations in the human brain and generate low-quality EEG signals. To address this challenge, this paper proposes a multidimensional attention-based convolutional neural network (CNN), termed MACNet, which is specifically designed for lower-limb MI classification. MACNet integrates a temporal refining module and an attention-enhanced convolutional module by leveraging the local and global feature representation abilities of CNNs and attention mechanisms. The temporal refining module adaptively investigates critical information from each electrode channel to refine EEG signals along the temporal dimension. The attention-enhanced convolutional module extracts temporal and spatial features while refining the feature maps across the channel and spatial dimensions. Owing to the scarcity of public datasets for lower-limb MI, a dedicated lower-limb MI dataset involving four routine LLMs was built, consisting of 10 subjects over 20 sessions. Comparison experiments and ablation studies were conducted on this dataset and the public BCI Competition IV 2a EEG dataset. The experimental results show that MACNet achieves state-of-the-art performance and outperforms alternative models in the subject-specific mode. Visualization analysis reveals the excellent feature-learning capability of MACNet and the potential relationship between lower-limb MI and brain activity. The effectiveness and generalizability of MACNet are verified.
Affiliation(s)
- Ling-Long Li, Guang-Zhong Cao: Guangdong Key Laboratory of Electromagnetic Control and Intelligent Robots, College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, China
- Yue-Peng Zhang: Shenzhen Institute of Information Technology, Shenzhen 518172, China
- Wan-Chen Li, Fang Cui: School of Psychology, Shenzhen University, Shenzhen 518060, China
8. Sung DJ, Kim KT, Jeong JH, Kim L, Lee SJ, Kim H, Kim SJ. Improving inter-session performance via relevant session-transfer for multi-session motor imagery classification. Heliyon 2024; 10:e37343. [PMID: 39296025] [PMCID: PMC11409124] [DOI: 10.1016/j.heliyon.2024.e37343]
Abstract
Motor imagery (MI)-based brain-computer interfaces (BCIs) using electroencephalography (EEG) have found practical applications in external device control. However, the non-stationary nature of EEG signals continues to hinder BCI performance across multiple sessions, even for the same user. In this study, we aim to address the impact of non-stationarity, also known as inter-session variability, on multi-session MI classification performance by introducing a novel approach, the relevant session-transfer (RST) method. Using cosine similarity as the relevance measure, the RST method transfers relevant EEG data from the previous session to the current one. The effectiveness of the proposed RST method was investigated through performance comparisons with the self-calibrating method, which uses only the data from the current session, and the whole-session transfer method, which utilizes data from all prior sessions. We validated these methods using two datasets: a large public MI dataset (Shu Dataset) and our own dataset of gait-related MI, which includes both healthy participants and individuals with spinal cord injuries. Our experimental results revealed that the proposed RST method leads to a 2.29% improvement (p < 0.001) on the Shu Dataset and up to a 6.37% improvement on our dataset when compared to the self-calibrating method. Moreover, our method surpassed the performance of the recent highest-performing method that utilized the Shu Dataset, providing further support for the efficacy of the RST method in improving multi-session MI classification performance. Consequently, our findings confirm that the proposed RST method can improve classification performance across multiple sessions in practical MI-BCIs.
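The sketch below illustrates the core of a cosine-similarity-based session transfer: previous-session trials are ranked by their similarity to a current-session reference vector, and only the most similar ones are added to the training set. The choice of reference (the current session's mean feature vector), the retention threshold, and the synthetic data are illustrative assumptions, not the authors' exact criterion.

```python
# Sketch: transfer of "relevant" previous-session trials selected by cosine similarity.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

rng = np.random.default_rng(0)
prev_feats = rng.standard_normal((120, 32))     # previous-session trial features
prev_labels = rng.integers(0, 2, size=120)
curr_feats = rng.standard_normal((20, 32))      # current-session calibration trials
curr_labels = rng.integers(0, 2, size=20)

reference = curr_feats.mean(axis=0)             # current-session reference vector (assumption)
sims = np.array([cosine_sim(f, reference) for f in prev_feats])
keep = sims >= np.quantile(sims, 0.75)          # transfer the top 25% most relevant trials

X_train = np.vstack([curr_feats, prev_feats[keep]])
y_train = np.concatenate([curr_labels, prev_labels[keep]])
print(f"transferred {keep.sum()} of {len(prev_feats)} previous-session trials")
```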
Affiliation(s)
- Dong-Jin Sung: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea; Department of Biomedical Engineering, Korea University College of Medicine, Seoul, 02841, Republic of Korea
- Keun-Tae Kim: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea; College of Information Science, Hallym University, Chuncheon, 24252, Republic of Korea
- Ji-Hyeok Jeong: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
- Laehyun Kim: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Song Joo Lee: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul, 02792, Republic of Korea
- Hyungmin Kim: Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea; Division of Bio-Medical Science and Technology, KIST School, Korea University of Science and Technology, Seoul, 02792, Republic of Korea
- Seung-Jong Kim: Department of Biomedical Engineering, Korea University College of Medicine, Seoul, 02841, Republic of Korea
9. Li Z, Zhang R, Li W, Li M, Chen X, Cui H. Enhancement of Hybrid BCI System Performance Based on Motor Imagery and SSVEP by Transcranial Alternating Current Stimulation. IEEE Trans Neural Syst Rehabil Eng 2024; 32:3222-3230. [PMID: 39196738] [DOI: 10.1109/tnsre.2024.3451015]
Abstract
Hybrid brain-computer interfaces (BCIs) have been shown to reduce the disadvantages of conventional BCI systems. Transcranial electrical stimulation (tES) can also improve the performance and applicability of BCIs. However, the enhancement in BCI performance attainable solely from the perspective of the user, or solely from the angle of BCI system design, is limited. In this study, a hybrid BCI system combining MI and SSVEP was proposed, and transcranial alternating current stimulation (tACS) was utilized to enhance its performance. The stimulation interface depicted grabbing a ball with both hands, with the left-hand and right-hand targets flickering at 34 Hz and 35 Hz, respectively. Subjects watched the interface and imagined grabbing the ball with either the left or the right hand to perform the SSVEP and MI tasks. The MI and SSVEP signals were processed separately using the filter-bank common spatial pattern (FBCSP) and filter-bank canonical correlation analysis (FBCCA) algorithms, and a fusion method was proposed to combine the features extracted from MI and SSVEP. Twenty healthy subjects took part in the online experiment and subsequently underwent tACS. The fusion accuracy after tACS reached 90.25% ± 11.40%, which was significantly different from that before tACS, and it also surpassed the MI and SSVEP accuracies individually. These results indicate the superior performance of the hybrid BCI system and suggest that tACS can further improve it.
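As a concrete illustration of the CCA-based SSVEP scoring used in such hybrid systems, the sketch below correlates an EEG segment with sine/cosine reference templates at the two stimulation frequencies (34 and 35 Hz) and picks the frequency with the highest canonical correlation. The harmonic count, segment length, and synthetic data are illustrative assumptions; the fusion with MI scores is not shown.

```python
# Sketch: CCA-based SSVEP frequency scoring against sine/cosine reference templates.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, dur, n_ch = 250, 2.0, 8
t = np.arange(0, dur, 1 / fs)

def references(freq, n_harmonics=2):
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.stack(refs, axis=1)                      # samples x (2 * harmonics)

def cca_score(eeg, ref):
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, ref)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])    # first canonical correlation

rng = np.random.default_rng(0)
eeg = rng.standard_normal((len(t), n_ch))              # synthetic occipital EEG segment
scores = {f: cca_score(eeg, references(f)) for f in (34.0, 35.0)}
print("SSVEP scores:", scores, "-> target:", max(scores, key=scores.get))
```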
10. Mwata-Velu T, Zamora E, Vasquez-Gomez JI, Ruiz-Pinales J, Sossa H. Multiclass Classification of Visual Electroencephalogram Based on Channel Selection, Minimum Norm Estimation Algorithm, and Deep Network Architectures. Sensors (Basel) 2024; 24:3968. [PMID: 38931751] [PMCID: PMC11207572] [DOI: 10.3390/s24123968]
Abstract
This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface applications using deep learning architectures. The visual multiclass classification approach offers BCI applications a significant advantage, since it allows more than one BCI interaction to be supervised, with each class label supervising a distinct BCI task. However, because of the nonlinearity and nonstationarity of EEG signals, multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual-information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Deep EEGNet and convolutional recurrent neural networks were then implemented separately to classify the EEG data for image visualization into 40 labels. Using the k-fold cross-validation approach, average classification accuracies of 94.8% and 89.8% were obtained with the respective network architectures. The satisfactory results obtained with this method offer a new implementation opportunity for multitask embedded BCI applications utilizing a reduced number of both channels (<50%) and network parameters (<110 K).
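The sketch below shows one simple way to perform mutual-information-based channel selection of the kind mentioned above: each channel is scored by the mutual information between a per-trial channel feature (here, log-variance) and the class labels, and only the top-ranked channels are kept. The feature choice, the number of channels retained, and the synthetic data are illustrative assumptions.

```python
# Sketch: rank EEG channels by mutual information with the class labels and keep the best half.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 400, 64, 500
X = rng.standard_normal((n_trials, n_channels, n_samples))   # synthetic EEG trials
y = rng.integers(0, 40, size=n_trials)                       # 40 visual classes

chan_feat = np.log(X.var(axis=2))                 # trials x channels feature matrix
mi = mutual_info_classif(chan_feat, y, random_state=0)

n_keep = n_channels // 2                          # keep fewer than 50% of the channels
selected = np.argsort(mi)[::-1][:n_keep]
X_reduced = X[:, selected, :]
print("kept", n_keep, "channels; reduced data shape:", X_reduced.shape)
```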
Affiliation(s)
- Tat’y Mwata-Velu: Robotics and Mechatronics Lab, Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC–IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Mexico City 07738, Mexico; Section Électricité, Institut Supérieur Pédagogique Technique de Kinshasa (I.S.P.T.-KIN), Av. de la Science 5, Gombe, Kinshasa 03287, Democratic Republic of the Congo; Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
- Erik Zamora, Humberto Sossa: Robotics and Mechatronics Lab, Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC–IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Mexico City 07738, Mexico
- Juan Irving Vasquez-Gomez: Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial, Gustavo A. Madero, Mexico City 07738, Mexico
- Jose Ruiz-Pinales: Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
11. Ferrero L, Soriano-Segura P, Navarro J, Jones O, Ortiz M, Iáñez E, Azorín JM, Contreras-Vidal JL. Brain-machine interface based on deep learning to control asynchronously a lower-limb robotic exoskeleton: a case-of-study. J Neuroeng Rehabil 2024; 21:48. [PMID: 38581031] [PMCID: PMC10996198] [DOI: 10.1186/s12984-024-01342-9]
Abstract
BACKGROUND This research focused on the development of a motor imagery (MI) based brain-machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at their own will. METHODS A total of five healthy able-bodied subjects were enrolled in this study to participate in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. Subsequently, this model was fine-tuned during the remaining sessions and subjected to evaluation. Three distinct deep learning approaches were compared: one that did not undergo fine-tuning, another that fine-tuned all layers of the model, and a third one that fine-tuned only the last three layers. The evaluation phase involved the exclusive closed-loop control of the exoskeleton device by the participants' neural activity using the second deep learning approach for the decoding. RESULTS The three deep learning approaches were assessed in comparison to an approach based on spatial features that was trained for each subject and experimental session, demonstrating their superior performance. Interestingly, the deep learning approach without fine-tuning achieved comparable performance to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can yield similar efficacy. Among the three deep learning approaches compared, fine-tuning all layer weights demonstrated the highest performance. CONCLUSION This research represents an initial stride toward future calibration-free methods. Despite the efforts to diminish calibration time by leveraging data from other subjects, complete elimination proved unattainable. The study's discoveries hold notable significance for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol employed in this study aimed to replicate real-life scenarios, granting participants a higher degree of autonomy in decision-making regarding actions such as walking or stopping gait.
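To illustrate the fine-tuning strategies compared above (updating all layers versus only the last few of a generic pre-trained decoder), the following is a minimal PyTorch sketch. The stand-in network, the checkpoint path, and the calibration data are illustrative assumptions, not the authors' architecture.

```python
# Sketch: fine-tune only the last layers of a generic pre-trained EEG decoder.
import torch
import torch.nn as nn

decoder = nn.Sequential(                 # stand-in for a pre-trained EEG decoder
    nn.Conv1d(8, 16, kernel_size=25), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),
    nn.Linear(16 * 8, 32), nn.ReLU(),
    nn.Linear(32, 2),                    # e.g., MI vs relax
)
# decoder.load_state_dict(torch.load("generic_model.pt"))  # hypothetical checkpoint path

def set_trainable(model, last_n):
    """Freeze everything, then unfreeze only the last `last_n` modules."""
    for p in model.parameters():
        p.requires_grad = False
    for layer in list(model)[-last_n:]:
        for p in layer.parameters():
            p.requires_grad = True

set_trainable(decoder, last_n=3)         # fine-tune only the last three layers
opt = torch.optim.Adam([p for p in decoder.parameters() if p.requires_grad], lr=1e-4)

x = torch.randn(16, 8, 500)              # target-subject calibration trials (synthetic)
y = torch.randint(0, 2, (16,))
loss = nn.CrossEntropyLoss()(decoder(x), y)
loss.backward()
opt.step()
print("fine-tuning step done, loss:", float(loss))
```

Calling `set_trainable(decoder, last_n=len(list(decoder)))` instead would correspond to the "fine-tune all layers" variant.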
Affiliation(s)
- Laura Ferrero: Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain; NSF IUCRC BRAIN, University of Houston, Houston, USA; Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Paula Soriano-Segura: Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- Jacobo Navarro: NSF IUCRC BRAIN, University of Houston, Houston, USA; International Affiliate NSF IUCRC BRAIN Site, Tecnológico de Monterrey, Monterrey, Mexico; Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Oscar Jones: NSF IUCRC BRAIN, University of Houston, Houston, USA; Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Mario Ortiz: Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- Eduardo Iáñez: Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- José M Azorín: Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain; Valencian Graduate School and Research Network of Artificial Intelligence-valgrAI, Valencia, Spain
- José L Contreras-Vidal: NSF IUCRC BRAIN, University of Houston, Houston, USA; Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
12. Qin Y, Yang B, Ke S, Liu P, Rong F, Xia X. M-FANet: Multi-Feature Attention Convolutional Neural Network for Motor Imagery Decoding. IEEE Trans Neural Syst Rehabil Eng 2024; 32:401-411. [PMID: 38194394] [DOI: 10.1109/tnsre.2024.3351863]
Abstract
Motor imagery (MI) decoding methods are pivotal in advancing rehabilitation and motor control research. Effective extraction of spectral-spatial-temporal features is crucial for MI decoding from limited and low signal-to-noise ratio electroencephalogram (EEG) signal samples based on brain-computer interface (BCI). In this paper, we propose a lightweight Multi-Feature Attention Neural Network (M-FANet) for feature extraction and selection of multi-feature data. M-FANet employs several unique attention modules to eliminate redundant information in the frequency domain, enhance local spatial feature extraction and calibrate feature maps. We introduce a training method called Regularized Dropout (R-Drop) to address training-inference inconsistency caused by dropout and improve the model's generalization capability. We conduct extensive experiments on the BCI Competition IV 2a (BCIC-IV-2a) dataset and the 2019 World robot conference contest-BCI Robot Contest MI (WBCIC-MI) dataset. M-FANet achieves superior performance compared to state-of-the-art MI decoding methods, with 79.28% 4-class classification accuracy (kappa: 0.7259) on the BCIC-IV-2a dataset and 77.86% 3-class classification accuracy (kappa: 0.6650) on the WBCIC-MI dataset. The application of multi-feature attention modules and R-Drop in our lightweight model significantly enhances its performance, validated through comprehensive ablation experiments and visualizations.
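The R-Drop training trick mentioned above can be summarized in a few lines: each batch is passed through the dropout-containing model twice, and a symmetric KL divergence between the two output distributions is added to the usual cross-entropy loss. The toy model, the weight of the consistency term, and the data below are illustrative assumptions, not the M-FANet architecture.

```python
# Sketch: Regularized Dropout (R-Drop) training step with a symmetric KL consistency term.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Dropout(0.5), nn.Linear(128, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.3                                        # weight of the consistency term (assumption)

x = torch.randn(32, 64)                            # synthetic feature batch
y = torch.randint(0, 4, (32,))                     # 4 MI classes

logits1, logits2 = model(x), model(x)              # two stochastic forward passes (dropout active)
ce = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
p1, p2 = F.log_softmax(logits1, dim=1), F.log_softmax(logits2, dim=1)
kl = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
            + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))
loss = ce + alpha * kl
loss.backward()
opt.step()
print("R-Drop loss:", float(loss))
```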
13. Chen X, An J, Wu H, Li S, Liu B, Wu D. Front-End Replication Dynamic Window (FRDW) for Online Motor Imagery Classification. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3906-3914. [PMID: 37792658] [DOI: 10.1109/tnsre.2023.3321640]
Abstract
Motor imagery (MI) is a classical paradigm in electroencephalogram (EEG)-based brain-computer interfaces (BCIs). Accurate and fast online decoding is very important to its successful application. This paper proposes a simple yet effective front-end replication dynamic window (FRDW) algorithm for this purpose. Dynamic windows enable classification based on a test EEG trial shorter than those used in training, improving the decision speed; front-end replication fills a short test EEG trial to the length used in training, improving the classification accuracy. Within-subject and cross-subject online MI classification experiments on three public datasets, with three different classifiers and three different data augmentation approaches, demonstrated that FRDW can significantly increase the information transfer rate in MI decoding. Additionally, front-end replication can also be used for training data augmentation. FRDW helped win the national championship of the China BCI Competition in 2022.
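One way to read the front-end replication step is sketched below: a test trial shorter than the training window is padded to the training length by repeating its initial samples, so a classifier trained on full-length trials can still be applied early. This is a plausible interpretation of the abstract, not the authors' exact padding rule; lengths and data are illustrative assumptions.

```python
# Sketch: pad a short test trial to the training length by replicating its front end.
import numpy as np

def front_end_replicate(trial, target_len):
    """trial: channels x samples; returns channels x target_len."""
    n = trial.shape[1]
    if n >= target_len:
        return trial[:, :target_len]
    reps = int(np.ceil(target_len / n))
    return np.tile(trial, (1, reps))[:, :target_len]   # repeat the available front-end samples

rng = np.random.default_rng(0)
short_trial = rng.standard_normal((22, 250))            # 1 s of 22-channel EEG at 250 Hz
padded = front_end_replicate(short_trial, target_len=1000)  # e.g., training used 4 s windows
print(padded.shape)                                      # (22, 1000)
```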
14. Tortora S, Tonin L, Sieghartsleitner S, Ortner R, Guger C, Lennon O, Coyle D, Menegatti E, Del Felice A. Effect of Lower Limb Exoskeleton on the Modulation of Neural Activity and Gait Classification. IEEE Trans Neural Syst Rehabil Eng 2023; 31:2988-3003. [PMID: 37432820] [DOI: 10.1109/tnsre.2023.3294435]
Abstract
Neurorehabilitation with robotic devices requires a paradigm shift to enhance human-robot interaction. The coupling of robot-assisted gait training (RAGT) with a brain-machine interface (BMI) represents an important step in this direction but requires better elucidation of the effect of RAGT on the user's neural modulation. Here, we investigated how different exoskeleton walking modes modify brain and muscular activity during exoskeleton-assisted gait. We recorded electroencephalographic (EEG) and electromyographic (EMG) activity from ten healthy volunteers walking with an exoskeleton under three modes of user assistance (transparent, adaptive, and full assistance) and during free overground gait. The results showed that exoskeleton walking (irrespective of the exoskeleton mode) induces a stronger modulation of central mid-line mu (8-13 Hz) and low-beta (14-20 Hz) rhythms compared to free overground walking. These modifications are accompanied by a significant reorganization of the EMG patterns during exoskeleton walking. On the other hand, we observed no significant differences in neural activity during exoskeleton walking across the different assistance levels. We subsequently implemented four gait classifiers based on deep neural networks trained on the EEG data from the different walking conditions. Our hypothesis was that the exoskeleton modes could impact the creation of a BMI-driven RAGT. We demonstrated that all classifiers achieved an average accuracy of 84.13±3.49% in classifying swing and stance phases on their respective datasets. In addition, we demonstrated that the classifier trained on the transparent-mode exoskeleton data can classify gait phases during the adaptive and full modes with an accuracy of 78.3±4.8%, while the classifier trained on free overground walking data fails to classify gait during exoskeleton walking (accuracy of 59.4±11.8%). These findings provide important insights into the effect of robotic training on neural activity and contribute to the advancement of BMI technology for improving robotic gait rehabilitation therapy.
15. An Y, Wong JKW, Ling SH. An EEG-based brain-computer interface for real-time multi-task robotic control. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082620] [DOI: 10.1109/embc40787.2023.10340310]
Abstract
A brain-computer interface (BCI) provides communication between the human brain and a computer. The electroencephalogram (EEG) is one of the biomedical signals that can be obtained by attaching electrodes to the scalp, and EEG-based applications such as wheelchairs or robotic arms can be developed to help disabled people. A hybrid BCI real-time control system is proposed to control a multi-task BCI robot. In this system, a sliding-window-based online data segmentation strategy is proposed to segment the training data, which enables the system to learn the dynamic features present when the subject's brain state transfers from a rest state to a task-execution state. These features help the system achieve real-time control and ensure the continuity of executed actions. In addition, the common spatial pattern (CSP) method can better extract the spatial features of these continuous actions from the dynamic data, ensuring that multiple control commands are accurately classified. In the experiment, EEG data from three subjects were collected to train and test the performance and reliability of the proposed control system, which records the robot's elapsed time, distance traveled, and the number of objects pushed down. Experimental results show the feasibility of the real-time control system; compared with a real-time remote controller, the proposed system achieves similar performance. Thus, the proposed hybrid BCI real-time control system can control the robot in a real-time environment and can be used to develop robot-aided arm training methods based on neurological rehabilitation principles for stroke and brain-injury patients.
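The sliding-window segmentation step can be pictured with the short sketch below, in which overlapping windows are cut from a continuous EEG stream so that a classifier can follow the transition from rest to task execution. The window and step sizes, and the downstream functions named in the comments, are illustrative assumptions.

```python
# Sketch: sliding-window segmentation of a continuous EEG stream for online decoding.
import numpy as np

def sliding_windows(stream, win, step):
    """stream: channels x samples; yields (start index, channels x win segment)."""
    for start in range(0, stream.shape[1] - win + 1, step):
        yield start, stream[:, start:start + win]

rng = np.random.default_rng(0)
fs = 256
stream = rng.standard_normal((16, 10 * fs))        # 10 s of 16-channel EEG (synthetic)
for start, seg in sliding_windows(stream, win=2 * fs, step=fs // 4):
    # In a real system: features = csp_features(seg); command = classifier.predict(features)
    pass
print("last window starts at", start / fs, "s")
```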
16. Ferrero L, Quiles V, Ortiz M, Iáñez E, Gil-Agudo Á, Azorín JM. Brain-computer interface enhanced by virtual reality training for controlling a lower limb exoskeleton. iScience 2023; 26:106675. [PMID: 37250318] [PMCID: PMC10214472] [DOI: 10.1016/j.isci.2023.106675]
Abstract
This study explores the use of a brain-computer interface (BCI) based on motor imagery (MI) for the control of a lower limb exoskeleton to aid in motor recovery after a neural injury. The BCI was evaluated in ten able-bodied subjects and two patients with spinal cord injuries. Five able-bodied subjects underwent a virtual reality (VR) training session to accelerate training with the BCI. Results from this group were compared with a control group of five able-bodied subjects, and it was found that the employment of shorter training by VR did not reduce the effectiveness of the BCI and even improved it in some cases. Patients gave positive feedback about the system and were able to handle experimental sessions without reaching high levels of physical and mental exertion. These results are promising for the inclusion of BCI in rehabilitation programs, and future research should investigate the potential of the MI-based BCI system.
Affiliation(s)
- Laura Ferrero: Brain-Machine Interface System Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; The European University of Brain and Technology (NeurotechEU)
- Vicente Quiles: Brain-Machine Interface System Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- Mario Ortiz: Brain-Machine Interface System Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; The European University of Brain and Technology (NeurotechEU)
- Eduardo Iáñez: Brain-Machine Interface System Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- José M. Azorín: Brain-Machine Interface System Lab, Miguel Hernández University of Elche, Elche, Spain; Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain; Valencian Graduate School and Research Network of Artificial Intelligence (valgrAI), Valencia, Spain; The European University of Brain and Technology (NeurotechEU)
17. Wang F, Wen Y, Bi J, Li H, Sun J. A portable SSVEP-BCI system for rehabilitation exoskeleton in augmented reality environment. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104664]
18. Zhang X, Li H, Dong R, Lu Z, Li C. Electroencephalogram and surface electromyogram fusion-based precise detection of lower limb voluntary movement using convolution neural network-long short-term memory model. Front Neurosci 2022; 16:954387. [PMID: 36213740] [PMCID: PMC9538146] [DOI: 10.3389/fnins.2022.954387]
Abstract
Fusion of the electroencephalogram (EEG) and surface electromyogram (sEMG) has been widely used to detect human movement intention for human-robot interaction, but the internal relationship between EEG and sEMG signals is not clear, so their fusion still has some shortcomings. In this study, a precise EEG-sEMG fusion method using a CNN-LSTM model was investigated to detect lower-limb voluntary movement. First, the signal processing stages of EEG and sEMG were analyzed so that the response-time difference between EEG and sEMG relevant to detecting lower-limb voluntary movement could be estimated; this difference can be calculated with the symbolic transfer entropy. Second, both data-level and feature-level fusion of EEG and sEMG were used to obtain the model's input matrix, and a hybrid CNN-LSTM model was established as the EEG- and sEMG-based decoding model of lower-limb voluntary movement; the estimated time difference was about 24-26 ms, and the calculated value was between 25 and 45 ms. Finally, the offline experimental results showed that data fusion yielded significantly higher accuracy than feature fusion in 5-fold cross-validation, with an average accuracy of more than 95% for EEG-sEMG data fusion, and eliminating the response-time difference between EEG and sEMG improved the average data-fusion accuracy by about 0.7 ± 0.26%. Meanwhile, the online average accuracy of the data-fusion-based CNN-LSTM was more than 87% in all subjects. These results demonstrate that the time difference influences EEG-sEMG fusion for detecting lower-limb voluntary movement and that the proposed CNN-LSTM model achieves high performance. This work provides a stable and reliable basis for human-robot interaction with lower-limb exoskeletons.
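As a rough illustration of a CNN-LSTM operating on data-level EEG + sEMG fusion, the sketch below stacks the two modalities along the channel dimension, extracts local features with a 1-D CNN, and models their temporal evolution with an LSTM before a movement/no-movement decision. Channel counts, layer sizes, and the synthetic data are illustrative assumptions, not the authors' model.

```python
# Sketch: CNN-LSTM classifier on channel-stacked (data-level fused) EEG + sEMG windows.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_eeg=32, n_emg=4, hidden=64, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_eeg + n_emg, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: batch x (EEG+sEMG channels) x samples
        z = self.cnn(x)                    # batch x 32 x samples/2
        z = z.transpose(1, 2)              # batch x time x features for the LSTM
        out, _ = self.lstm(z)
        return self.fc(out[:, -1])

model = CNNLSTM()
eeg = torch.randn(8, 32, 500)              # 8 windows of EEG (synthetic)
emg = torch.randn(8, 4, 500)               # time-aligned sEMG (after compensating the lag)
logits = model(torch.cat([eeg, emg], dim=1))
print(logits.shape)                         # (8, 2)
```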
Affiliation(s)
- Xiaodong Zhang: School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China; Shaanxi Key Laboratory of Intelligent Robots, Xi’an Jiaotong University, Xi’an, Shaanxi, China; Wearable Human Enhancement Technology Innovation Center, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Hanzhe Li (corresponding author): School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China; Wearable Human Enhancement Technology Innovation Center, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Runlin Dong, Zhufeng Lu, Cunxin Li: School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
19. Triana-Guzman N, Orjuela-Cañon AD, Jutinico AL, Mendoza-Montoya O, Antelis JM. Decoding EEG rhythms offline and online during motor imagery for standing and sitting based on a brain-computer interface. Front Neuroinform 2022; 16:961089. [PMID: 36120085] [PMCID: PMC9481272] [DOI: 10.3389/fninf.2022.961089]
Abstract
Motor imagery (MI)-based brain-computer interface (BCI) systems have shown promising advances for lower limb motor rehabilitation. The purpose of this study was to develop an MI-based BCI for the actions of standing and sitting. Thirty-two healthy subjects participated in the study using 17 active EEG electrodes. We used a combination of the filter bank common spatial pattern (FBCSP) method and the regularized linear discriminant analysis (RLDA) technique for decoding EEG rhythms offline and online during motor imagery for standing and sitting. The offline analysis indicated the classification of motor imagery and idle state provided a mean accuracy of 88.51 ± 1.43% and 85.29 ± 1.83% for the sit-to-stand and stand-to-sit transitions, respectively. The mean accuracies of the sit-to-stand and stand-to-sit online experiments were 94.69 ± 1.29% and 96.56 ± 0.83%, respectively. From these results, we believe that the MI-based BCI may be useful to future brain-controlled standing systems.
Affiliation(s)
- Andres L. Jutinico: Facultad de Ingeniería Mecánica, Electrónica y Biomédica, Universidad Antonio Nariño, Bogota, Colombia
- Omar Mendoza-Montoya (corresponding author): Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
- Javier M. Antelis: Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Monterrey, Mexico
20. High-Frequency Vibrating Stimuli Using the Low-Cost Coin-Type Motors for SSSEP-Based BCI. Biomed Res Int 2022; 2022:4100381. [PMID: 36060141] [PMCID: PMC9436568] [DOI: 10.1155/2022/4100381]
Abstract
Steady-state somatosensory-evoked potential (SSSEP)-based brain-computer interfaces (BCIs) have been applied to assist people with physical disabilities, since they do not require gaze fixation or long training. Despite the advancement of various noninvasive electroencephalogram (EEG)-based BCI paradigms, research on SSSEP across different frequency ranges and on the related classification algorithms remains relatively unsettled. In this study, we investigated the feasibility of classifying SSSEP elicited by high-frequency vibration stimuli delivered by a versatile coin-type eccentric rotating mass (ERM) motor. Seven healthy subjects performed selective attention (SA) tasks with vibration stimuli attached to the left and right index fingers. Three EEG feature extraction methods, followed by a support vector machine (SVM) classifier, were tested: common spatial pattern (CSP), filter-bank CSP (FBCSP), and mutual information-based best individual feature (MIBIF) selection after the FBCSP. The FBCSP yielded the highest accuracy for classifying the left- and right-hand SA tasks, outperforming the other two methods (CSP and FBCSP-MIBIF). Based on our findings and approach, high-frequency vibration stimuli using low-cost coin motors combined with FBCSP-based feature selection can potentially be applied to developing practical SSSEP-based BCI systems.
|
21
|
A Comprehensive Review of Endogenous EEG-Based BCIs for Dynamic Device Control. SENSORS 2022; 22:s22155802. [PMID: 35957360 PMCID: PMC9370865 DOI: 10.3390/s22155802] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 07/23/2022] [Accepted: 07/30/2022] [Indexed: 11/28/2022]
Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) provide a novel approach for controlling external devices. BCI technologies can be important enabling technologies for people with severe mobility impairment. Endogenous paradigms, which depend on user-generated commands and do not need external stimuli, can provide intuitive control of external devices. This paper discusses BCIs to control various physical devices such as exoskeletons, wheelchairs, mobile robots, and robotic arms. These technologies must be able to navigate complex environments or execute fine motor movements. Brain control of these devices presents an intricate research problem that merges signal processing and classification techniques with control theory. In particular, obtaining strong classification performance for endogenous BCIs is challenging, and EEG decoder output signals can be unstable. These issues present myriad research questions that are discussed in this review paper. This review covers papers published until the end of 2021 that presented BCI-controlled dynamic devices. It discusses the devices controlled, EEG paradigms, shared control, stabilization of the EEG signal, traditional machine learning and deep learning techniques, and user experience. The paper concludes with a discussion of open questions and avenues for future work.
|
22
|
Jeong JH, Kim KT, Kim DJ, Lee SJ, Kim H. Subject-Transfer Decoding using the Convolutional Neural Network for Motor Imagery-based Brain-Computer Interface. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:48-51. [PMID: 36086005 DOI: 10.1109/embc48229.2022.9871463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Various pattern-recognition and machine-learning-based methods have recently been developed to improve the accuracy of motor imagery (MI)-based brain-computer interfaces (BCIs). However, more research is needed to reduce the training time so that such systems can be applied in real-world environments. In this study, we propose a subject-transfer decoding method based on a convolutional neural network (CNN) that remains robust even with a small number of training trials. The proposed CNN was pre-trained on other subjects' MI data and then fine-tuned on the target subject's training MI data. We evaluated the proposed method using BCI competition IV dataset 2a, which contains four MI classes. On the same test dataset, while varying the number of training trials, the proposed method achieved higher accuracy than the self-training method, which used only the target subject's data for training: on average 86.54 ± 7.78% (288 trials), 85.76 ± 8.00% (240 trials), 84.65 ± 8.11% (192 trials), and 83.29 ± 8.25% (144 trials), which was 4.94 (288 trials), 6.10 (240 trials), 9.03 (192 trials), and 12.31 (144 trials) percentage points higher than the self-training method. These results show that the proposed method is effective in maintaining classification accuracy even with a reduced number of training trials.
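A subject-transfer scheme of this kind, pre-training a CNN on pooled source-subject trials and then fine-tuning it on the target subject's few trials, can be outlined in PyTorch. The compact network below is only a placeholder architecture rather than the authors' model, and the data loaders, channel/sample counts, epoch counts, and learning rates are illustrative assumptions.
```python
import torch
import torch.nn as nn

class SmallEEGNet(nn.Module):
    """Compact CNN for epoched EEG shaped (batch, 1, channels, samples); illustrative only."""
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),   # temporal conv
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False), # spatial conv
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 8)), nn.Dropout(0.5),
        )
        with torch.no_grad():  # probe the flattened feature size once
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

# model = SmallEEGNet()
# train(model, source_loader, epochs=50, lr=1e-3)   # pre-train on other subjects' MI trials
# for p in model.features.parameters():             # optionally freeze the feature extractor
#     p.requires_grad = False
# train(model, target_loader, epochs=20, lr=1e-4)   # fine-tune on the target subject's trials
```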
|
23
|
Liu C, Jin J, Daly I, Sun H, Huang Y, Wang X, Cichocki A. Bispectrum-based Hybrid Neural Network for Motor Imagery Classification. J Neurosci Methods 2022; 375:109593. [DOI: 10.1016/j.jneumeth.2022.109593] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/27/2022] [Accepted: 03/29/2022] [Indexed: 10/18/2022]
|
24
|
Le DT, Watanabe K, Ogawa H, Matsushita K, Imada N, Taki S, Iwamoto Y, Imura T, Araki H, Araki O, Ono T, Nishijo H, Fujita N, Urakawa S. Involvement of the Rostromedial Prefrontal Cortex in Human-Robot Interaction: fNIRS Evidence From a Robot-Assisted Motor Task. Front Neurorobot 2022; 16:795079. [PMID: 35370598 PMCID: PMC8970051 DOI: 10.3389/fnbot.2022.795079] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Accepted: 02/17/2022] [Indexed: 11/28/2022] Open
Abstract
Assistive exoskeleton robots are being widely applied in neurorehabilitation to improve upper-limb motor and somatosensory functions. During robot-assisted exercises, the central nervous system appears to highly attend to external information-processing (IP) to efficiently interact with robotic assistance. However, the neural mechanisms underlying this process remain unclear. The rostromedial prefrontal cortex (rmPFC) may be the core of the executive resource allocation that generates biases in the allocation of processing resources toward an external IP according to current behavioral demands. Here, we used functional near-infrared spectroscopy to investigate the cortical activation associated with executive resource allocation during a robot-assisted motor task. During data acquisition, participants performed a right-arm motor task using elbow flexion-extension movements in three different loading conditions: robotic assistive loading (ROB), resistive loading (RES), and non-loading (NON). Participants were asked to strive for kinematic consistency in their movements. A one-way repeated measures analysis of variance and general linear model-based methods were employed to examine task-related activity. We demonstrated that hemodynamic responses in the ventral and dorsal rmPFC were higher during ROB than during NON. Moreover, greater hemodynamic responses in the ventral rmPFC were observed during ROB than during RES. Increased activation in ventral and dorsal rmPFC subregions may be involved in the executive resource allocation that prioritizes external IP during human-robot interactions. In conclusion, these findings provide novel insights regarding the involvement of executive control during a robot-assisted motor task.
Affiliation(s)
- Duc Trung Le
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Department of Neurology, Vietnam Military Medical University, Hanoi, Vietnam
| | - Kazuki Watanabe
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Hiroki Ogawa
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Kojiro Matsushita
- Department of Mechanical Engineering, Facility of Engineering, Gifu University, Gifu, Japan
| | - Naoki Imada
- Department of Rehabilitation, Araki Neurosurgical Hospital, Hiroshima, Japan
| | - Shingo Taki
- Department of Rehabilitation, Araki Neurosurgical Hospital, Hiroshima, Japan
| | - Yuji Iwamoto
- Department of Rehabilitation, Araki Neurosurgical Hospital, Hiroshima, Japan
| | - Takeshi Imura
- Department of Rehabilitation, Faculty of Health Sciences, Hiroshima Cosmopolitan University, Hiroshima, Japan
| | - Hayato Araki
- Department of Neurosurgery, Araki Neurosurgical Hospital, Hiroshima, Japan
| | - Osamu Araki
- Department of Neurosurgery, Araki Neurosurgical Hospital, Hiroshima, Japan
| | - Taketoshi Ono
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama, Japan
| | - Hisao Nishijo
- Department of System Emotional Science, Faculty of Medicine, University of Toyama, Toyama, Japan
- Research Center for Idling Brain Science (RCIBS), University of Toyama, Toyama, Japan
| | - Naoto Fujita
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
| | - Susumu Urakawa
- Department of Musculoskeletal Functional Research and Regeneration, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- *Correspondence: Susumu Urakawa
| |
|
25
|
Asanza V, Peláez E, Loayza F, Lorente-Leyva LL, Peluffo-Ordóñez DH. Identification of Lower-Limb Motor Tasks via Brain-Computer Interfaces: A Topical Overview. SENSORS (BASEL, SWITZERLAND) 2022; 22:2028. [PMID: 35271175 PMCID: PMC8914806 DOI: 10.3390/s22052028] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 02/11/2022] [Accepted: 02/23/2022] [Indexed: 02/01/2023]
Abstract
Recent engineering and neuroscience applications have led to the development of brain-computer interface (BCI) systems that improve the quality of life of people with motor disabilities. In this area, a significant number of studies have addressed the identification or classification of upper-limb movement intentions, whereas few works have been concerned with movement-intention identification for the lower limbs. Nevertheless, lower-limb neurorehabilitation is a major topic in medical settings, as many people suffer from mobility problems in their lower limbs, such as those diagnosed with neurodegenerative disorders (e.g., multiple sclerosis) and people with hemiplegia or quadriplegia. In particular, conventional pattern recognition (PR) systems are among the most suitable computational tools for electroencephalography (EEG) signal analysis, as explicit knowledge of the features involved in the PR process itself is crucial both for improving signal classification performance and for providing interpretability. In this regard, there is a real need for overview and comparative studies gathering benchmark and state-of-the-art PR techniques that allow for a deeper understanding of them and a well-grounded selection of a specific technique. This study conducted a topical overview of specialized papers covering lower-limb motor task identification through PR-based BCI/EEG signal-analysis systems. To do so, we first established search terms and inclusion and exclusion criteria to find the most relevant papers on the subject, identifying the 22 most relevant ones. Next, we reviewed their experimental methodologies for recording EEG signals during the execution of lower-limb tasks. We also reviewed the algorithms used in the preprocessing, feature extraction, and classification stages. Finally, we compared all the algorithms and determined which of them are most suitable in terms of accuracy.
Affiliation(s)
- Víctor Asanza
- Facultad de Ingeniería en Electricidad y Computación, Escuela Superior Politécnica del Litoral (ESPOL), Campus Gustavo Galindo km 30.5 Vía Perimetral, Guayaquil P.O. Box 09-01-5863, Ecuador;
| | - Enrique Peláez
- Facultad de Ingeniería en Electricidad y Computación, Escuela Superior Politécnica del Litoral (ESPOL), Campus Gustavo Galindo km 30.5 Vía Perimetral, Guayaquil P.O. Box 09-01-5863, Ecuador;
| | - Francis Loayza
- Neuroimaging and Bioengineering Laboratory (LNB), Facultad de Ingeniería en Mecánica y Ciencias de la Producción, Escuela Superior Politécnica del Litoral (ESPOL), Campus Gustavo Galindo km 30.5 Vía Perimetral, Guayaquil P.O. Box 09-01-5863, Ecuador;
| | | | - Diego H. Peluffo-Ordóñez
- Faculty of Engineering, Corporación Universitaria Autónoma de Nariño, Pasto 520001, Colombia;
- Modeling, Simulation and Data Analysis (MSDA) Research Program, Mohammed VI Polytechnic University, Ben Guerir 43150, Morocco
| |
|
26
|
Liu C, Jin J, Daly I, Li S, Sun H, Huang Y, Wang X, Cichocki A. SincNet-based Hybrid Neural Network for Motor Imagery EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2022; 30:540-549. [PMID: 35235515 DOI: 10.1109/tnsre.2022.3156076] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
It is difficult to identify optimal cut-off frequencies for the filters used with the common spatial pattern (CSP) method in motor imagery (MI)-based brain-computer interfaces (BCIs). Most current studies choose filter cut-off frequencies based on experience or intuition, resulting in sub-optimal use of MI-related spectral information in the electroencephalogram (EEG). To improve information utilization, we propose a SincNet-based hybrid neural network (SHNN) for MI-based BCIs. First, raw EEG is segmented into different time windows and mapped into the CSP feature space. Then, SincNets are used as filter-bank band-pass filters to automatically filter the data. Next, squeeze-and-excitation modules learn a sparse representation of the filtered data. The resulting sparse data are fed into convolutional neural networks to learn deep feature representations. Finally, these deep features are fed into a gated recurrent unit module to capture sequential relations, and a fully connected layer is used for classification. We used BCI competition IV datasets 2a and 2b to verify the effectiveness of our SHNN method. The mean classification accuracies (kappa values) of our SHNN method are 0.7426 (0.6648) on dataset 2a and 0.8349 (0.6697) on dataset 2b, respectively. The statistical test results demonstrate that our SHNN significantly outperforms other state-of-the-art methods on these datasets.
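The core idea borrowed from SincNet, band-pass filters parameterized only by their cut-off frequencies and realized as windowed sinc kernels, can be illustrated in a few lines of PyTorch. In the full method the cut-offs are learnable parameters inside the network; the sketch below keeps them fixed for clarity, and the band edges, kernel size, and sampling rate are assumptions for the example, not values from the paper.
```python
import torch

def sinc_bandpass_kernels(low_hz, high_hz, kernel_size, fs):
    """Band-pass FIR kernels built as differences of two windowed ideal low-pass sinc filters.
    low_hz, high_hz: 1-D tensors of band edges, one band per output filter."""
    t = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1, dtype=torch.float32) / fs
    window = torch.hamming_window(kernel_size)
    kernels = []
    for lo, hi in zip(low_hz, high_hz):
        lp_hi = 2 * hi * torch.special.sinc(2 * hi * t)   # ideal low-pass with cut-off hi
        lp_lo = 2 * lo * torch.special.sinc(2 * lo * t)   # ideal low-pass with cut-off lo
        band = (lp_hi - lp_lo) * window                   # band-pass = difference, windowed
        kernels.append(band / band.abs().max())
    return torch.stack(kernels).unsqueeze(1)              # (n_filters, 1, kernel_size)

# Example: an 8-band filter bank between 4 and 36 Hz applied to single-channel epochs
# edges = torch.arange(4, 40, 4).float()
# kernels = sinc_bandpass_kernels(edges[:-1], edges[1:], kernel_size=101, fs=250.0)
# x: (batch, 1, n_samples) -> filtered: (batch, 8, n_samples)
# filtered = torch.nn.functional.conv1d(x, kernels, padding=50)
```
Making low_hz and high_hz nn.Parameter tensors (with suitable constraints) turns this fixed filter bank into the learnable front end that SincNet-style models use.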
|
27
|
Tiboni M, Borboni A, Vérité F, Bregoli C, Amici C. Sensors and Actuation Technologies in Exoskeletons: A Review. SENSORS (BASEL, SWITZERLAND) 2022; 22:884. [PMID: 35161629 PMCID: PMC8839165 DOI: 10.3390/s22030884] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/31/2021] [Revised: 01/16/2022] [Accepted: 01/19/2022] [Indexed: 02/06/2023]
Abstract
Exoskeletons are robots that closely interact with humans and that are increasingly used for different purposes, such as rehabilitation, assistance in the activities of daily living (ADLs), performance augmentation or as haptic devices. In the last few decades, the research activity on these robots has grown exponentially, and sensors and actuation technologies are two fundamental research themes for their development. In this review, an in-depth study of the works related to exoskeletons and specifically to these two main aspects is carried out. A preliminary phase investigates the temporal distribution of scientific publications to capture the interest in studying and developing novel ideas, methods or solutions for exoskeleton design, actuation and sensors. The distribution of the works is also analyzed with respect to the device purpose, body part to which the device is dedicated, operation mode and design methods. Subsequently, actuation and sensing solutions for the exoskeletons described by the studies in literature are analyzed in detail, highlighting the main trends in their development and spread. The results are presented with a schematic approach, and cross analyses among taxonomies are also proposed to emphasize emerging peculiarities.
Affiliation(s)
- Monica Tiboni
- Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy; (M.T.); (C.A.)
| | - Alberto Borboni
- Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy; (M.T.); (C.A.)
| | - Fabien Vérité
- Agathe Group INSERM U 1150, UMR 7222 CNRS, ISIR (Institute of Intelligent Systems and Robotics), Sorbonne Université, 75005 Paris, France;
| | - Chiara Bregoli
- Institute of Condensed Matter Chemistry and Technologies for Energy (ICMATE), National Research Council (CNR), Via Previati 1/E, 23900 Lecco, Italy;
| | - Cinzia Amici
- Department of Mechanical and Industrial Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy; (M.T.); (C.A.)
| |
|
28
|
Motor Imagination of Lower Limb Movements at Different Frequencies. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2021:4073739. [PMID: 34976324 PMCID: PMC8716247 DOI: 10.1155/2021/4073739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 11/10/2021] [Accepted: 11/20/2021] [Indexed: 11/26/2022]
Abstract
Motor imagination (MI) is the mental process of imagining an action without actually performing the movement. Research on MI has made significant progress in feature detection and machine-learning decoding algorithms, but problems remain, such as a low overall recognition rate and large differences in performance across individuals, which have created a bottleneck in MI development. To address this bottleneck, the current study optimized the quality of the raw MI signal by increasing the difficulty of the imagination tasks, conducted qualitative and quantitative analyses of EEG rhythm characteristics, and used quantitative indicators such as the mean ERD value and the recognition rate. A comparative analysis of lower-limb MI under two task types, high-frequency motor imagination (HFMI) and low-frequency motor imagination (LFMI), was conducted. The results validate the following: the average ERD of HFMI (−1.827) is lower than that of LFMI (−1.3487) in the alpha band, as it is in the beta band (−3.4756 < −2.2891). In the alpha and beta characteristic frequency bands, the average ERD of HFMI is smaller than that of LFMI, and the ERD values of the two differ significantly (p=0.0074 < 0.01; r = 0.945). The standard deviations of ERD intensity for HFMI are smaller than those for LFMI, which suggests that individual differences in ERD intensity among subjects are smaller in the HFMI mode than in the LFMI mode. The average recognition rate of HFMI is higher than that of LFMI (87.84% > 76.46%), and the recognition rates of the two modes differ significantly (p=0.0034 < 0.01; r = 0.429). In summary, this research optimizes the quality of MI brain signal sources by increasing the difficulty of the imagination tasks, thereby improving the overall recognition rate of lower-limb MI and reducing the differences in performance and signal quality among subjects.
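The ERD values quoted above follow the standard band-power definition: the relative power change of a task window with respect to a baseline (reference) window, where negative values indicate desynchronization. A minimal computation, assuming epoched trials, a 250 Hz sampling rate, and illustrative window boundaries, could look like this sketch:
```python
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(epochs, fs, band, baseline, task):
    """Event-related desynchronization (percent) for one frequency band.
    epochs: (n_trials, n_channels, n_samples); baseline/task: (start_s, end_s) windows."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    power = filtfilt(b, a, epochs, axis=-1) ** 2          # band-limited instantaneous power

    def mean_power(win):
        i0, i1 = int(win[0] * fs), int(win[1] * fs)
        return power[..., i0:i1].mean()

    p_ref, p_task = mean_power(baseline), mean_power(task)
    return 100.0 * (p_task - p_ref) / p_ref                # negative values = desynchronization

# Assumed usage: alpha-band ERD with a 0-2 s baseline and a 3-6 s imagination window
# erd_alpha = erd_percent(epochs, fs=250, band=(8, 13), baseline=(0.0, 2.0), task=(3.0, 6.0))
```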
|
29
|
Gutierrez-Martinez J, Mercado-Gutierrez JA, Carvajal-Gámez BE, Rosas-Trigueros JL, Contreras-Martinez AE. Artificial Intelligence Algorithms in Visual Evoked Potential-Based Brain-Computer Interfaces for Motor Rehabilitation Applications: Systematic Review and Future Directions. Front Hum Neurosci 2021; 15:772837. [PMID: 34899220 PMCID: PMC8656949 DOI: 10.3389/fnhum.2021.772837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 11/04/2021] [Indexed: 11/13/2022] Open
Abstract
Brain-Computer Interface (BCI) is a technology that uses electroencephalographic (EEG) signals to control external devices, such as Functional Electrical Stimulation (FES) systems. Visual BCI paradigms based on the P300 and Steady-State Visually Evoked Potentials (SSVEP) have shown high potential for clinical purposes. Numerous studies have been published on P300- and SSVEP-based non-invasive BCIs, but many of them present two shortcomings: (1) they are not aimed at motor rehabilitation applications, and (2) they do not report in detail the artificial intelligence (AI) methods used for classification or their performance metrics. To address this gap, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology was applied in this paper to prepare a systematic literature review (SLR). Papers older than 10 years, duplicated, or not related to a motor rehabilitation application were excluded. Of all the studies, 51.02% referred to theoretical analyses of classification algorithms; of the remainder, 28.48% addressed spelling, 12.73% diverse applications (control of wheelchairs or home appliances), and only 7.77% focused on motor rehabilitation. After the inclusion and exclusion criteria were applied and quality screening was performed, 34 articles were selected. Of them, 26.47% used the P300 and 55.8% the SSVEP signal. Five application categories were established: rehabilitation systems (17.64%), virtual reality environments (23.52%), FES (17.64%), orthoses (29.41%), and prostheses (11.76%). Of all the works, only four performed tests with patients. The most frequently reported machine learning (ML) algorithms were linear discriminant analysis (LDA) (48.64%) and the support vector machine (16.21%), while only one study used a deep learning algorithm: a Convolutional Neural Network (CNN). The reported accuracy ranged from 38.02% to 100%, and the Information Transfer Rate from 1.55 to 49.25 bits per minute. While LDA is still the most used AI algorithm, CNNs have shown promising results, but because of their high technical implementation requirements, many researchers do not consider their implementation worthwhile. To achieve fast and accurate online BCIs for motor rehabilitation applications, future work on SSVEP-based, P300-based, and hybrid BCIs should focus on optimizing the visual stimulation module and the training stage of ML and DL algorithms.
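As a concrete example of the kind of SSVEP detector that sits upstream of the classifiers surveyed here, standard canonical correlation analysis (CCA) against sinusoidal references is a widely used baseline; note that this is an illustrative alternative rather than a method reported in the review, and the segment shape, sampling rate, stimulation frequencies, and harmonic count are assumptions.
```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_score(eeg, fs, freq, n_harmonics=2):
    """Canonical correlation between a multichannel EEG segment (channels x samples)
    and sine/cosine reference signals at one stimulation frequency."""
    n = eeg.shape[1]
    t = np.arange(n) / fs
    ref = np.vstack([f(2 * np.pi * freq * h * t)
                     for h in range(1, n_harmonics + 1)
                     for f in (np.sin, np.cos)])           # (2*n_harmonics, n_samples)
    u, v = CCA(n_components=1).fit_transform(eeg.T, ref.T)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Assumed usage: pick the stimulation frequency with the largest canonical correlation
# stim_freqs = [8.0, 10.0, 12.0, 15.0]
# target = max(stim_freqs, key=lambda f: ssvep_cca_score(segment, fs=250, freq=f))
```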
Affiliation(s)
- Josefina Gutierrez-Martinez
- División de Investigación en Ingeniería Médica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
| | - Jorge A. Mercado-Gutierrez
- División de Investigación en Ingeniería Médica, Instituto Nacional de Rehabilitación Luis Guillermo Ibarra Ibarra, Mexico City, Mexico
| | | | | | | |
|
30
|
Kubacki A. Use of Force Feedback Device in a Hybrid Brain-Computer Interface Based on SSVEP, EOG and Eye Tracking for Sorting Items. SENSORS 2021; 21:s21217244. [PMID: 34770554 PMCID: PMC8588340 DOI: 10.3390/s21217244] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 10/27/2021] [Accepted: 10/29/2021] [Indexed: 11/16/2022]
Abstract
Research on signals derived from the human body is becoming increasingly popular. In this field, a special role is played by brain-computer interfaces based on brainwaves, which are gaining popularity thanks to the downsizing of EEG recording devices and their ever-lower prices. Unfortunately, such systems are substantially limited in the number of commands they can generate, especially in the case of sets that are not medical devices. This article proposes a hybrid brain-computer system based on the Steady-State Visual Evoked Potential (SSVEP), EOG, eye tracking, and a force feedback system. Such an expanded system eliminates many of the shortcomings of the individual systems and provides much better results. The first part of the paper presents information on the methods applied in the hybrid brain-computer system. The system was tested in terms of the operator's ability to place the robot's tip at a designated position. A virtual model of an industrial robot was proposed and used in the testing, and the tests were then repeated on a real industrial robot. The positioning accuracy of the system was verified with the force feedback both enabled and disabled. The results of tests conducted both on the model and on the real object clearly demonstrate that force feedback improves the positioning accuracy of the robot's tip when controlled by the operator; moreover, the results for the model and the real industrial robot are very similar. In the next stage, research was carried out on the possibility of sorting items using the BCI system, again on both the model and the real robot. The results show that sorting is possible using biosignals from the human body.
Affiliation(s)
- Arkadiusz Kubacki
- Institute of Mechanical Technology, Poznan University of Technology, ul. Piotrowo 3, 60-965 Poznań, Poland
| |
|
31
|
Jeong JH, Choi JH, Kim KT, Lee SJ, Kim DJ, Kim HM. Multi-Domain Convolutional Neural Networks for Lower-Limb Motor Imagery Using Dry vs. Wet Electrodes. SENSORS 2021; 21:s21196672. [PMID: 34640992 PMCID: PMC8513081 DOI: 10.3390/s21196672] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 10/04/2021] [Accepted: 10/05/2021] [Indexed: 11/29/2022]
Abstract
Motor imagery (MI) brain–computer interfaces (BCIs) have been used for a wide variety of applications due to the intuitive match between the user's intentions and the performed tasks. Applying dry electroencephalography (EEG) electrodes to MI BCI applications can resolve many constraints and improve practicality. In this study, we propose a multi-domain convolutional neural network (MD-CNN) model that learns subject-specific and electrode-dependent EEG features using a multi-domain structure to improve the classification accuracy of dry-electrode MI BCIs. The proposed MD-CNN model is composed of learning layers for three domain representations (time, spatial, and phase). We first evaluated the proposed MD-CNN model using a public dataset, confirming 78.96% classification accuracy for multi-class classification (chance-level accuracy: 30%). Then, 10 healthy subjects participated and performed three classes of MI tasks related to lower-limb movement (gait, sitting down, and resting) over two sessions (dry and wet electrodes). Consequently, the proposed MD-CNN model achieved the highest classification accuracy (dry: 58.44%; wet: 58.66%; chance-level accuracy: 43.33%) with a three-class classifier and the lowest difference in accuracy between the two electrode types (0.22%, d = 0.0292) compared with the conventional classifiers (FBCSP, EEGNet, ShallowConvNet, and DeepConvNet) that used only a single domain. We expect that the proposed MD-CNN model could be applied to develop robust MI BCI systems with dry electrodes.
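A phase-domain input branch of the kind mentioned above can be derived from the analytic signal of band-limited EEG. The following sketch is one plausible way to build such a representation and is not taken from the authors' code; the band limits and sampling rate are assumptions.
```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def phase_features(epochs, fs, band=(8, 30)):
    """Instantaneous-phase representation of band-limited EEG via the analytic signal.
    epochs: (n_trials, n_channels, n_samples) -> phases in radians, same shape."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=-1)
    return np.angle(hilbert(filtered, axis=-1))

# One way to feed a separate "phase domain" stream alongside the raw time-domain epochs:
# phase = phase_features(epochs, fs=250)
```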
Affiliation(s)
- Ji-Hyeok Jeong
- Biomedical Research Division, Bionics Research Center, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.-H.J.); (J.-H.C.); (K.-T.K.); (S.-J.L.)
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea
| | - Jun-Hyuk Choi
- Biomedical Research Division, Bionics Research Center, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.-H.J.); (J.-H.C.); (K.-T.K.); (S.-J.L.)
- Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Korea
| | - Keun-Tae Kim
- Biomedical Research Division, Bionics Research Center, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.-H.J.); (J.-H.C.); (K.-T.K.); (S.-J.L.)
| | - Song-Joo Lee
- Biomedical Research Division, Bionics Research Center, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.-H.J.); (J.-H.C.); (K.-T.K.); (S.-J.L.)
- Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Korea
| | - Dong-Joo Kim
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea
- Department of Neurology, Korea University College of Medicine, Seoul 02841, Korea
- Department of Artificial Intelligence, Korea University, Seoul 02841, Korea
- Correspondence: (D.-J.K.); (H.-M.K.)
| | - Hyung-Min Kim
- Biomedical Research Division, Bionics Research Center, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.-H.J.); (J.-H.C.); (K.-T.K.); (S.-J.L.)
- Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Korea
- Correspondence: (D.-J.K.); (H.-M.K.)
| |
|
32
|
Sarmiento LC, Villamizar S, López O, Collazos AC, Sarmiento J, Rodríguez JB. Recognition of EEG Signals from Imagined Vowels Using Deep Learning Methods. SENSORS (BASEL, SWITZERLAND) 2021; 21:6503. [PMID: 34640824 PMCID: PMC8512781 DOI: 10.3390/s21196503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 09/17/2021] [Accepted: 09/24/2021] [Indexed: 01/27/2023]
Abstract
The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCI) that seeks communication between areas of the cerebral cortex related to language and devices or machines. However, the complexity of this brain process makes the analysis and classification of this type of signals a relevant topic of research. The goals of this study were: to develop a new algorithm based on Deep Learning (DL), referred to as CNNeeg1-1, to recognize EEG signals in imagined vowel tasks; to create an imagined speech database with 50 subjects specialized in imagined vowels from the Spanish language (/a/,/e/,/i/,/o/,/u/); and to contrast the performance of the CNNeeg1-1 algorithm with the DL Shallow CNN and EEGNet benchmark algorithms using an open access database (BD1) and the newly developed database (BD2). In this study, a mixed variance analysis of variance was conducted to assess the intra-subject and inter-subject training of the proposed algorithms. The results show that for intra-subject training analysis, the best performance among the Shallow CNN, EEGNet, and CNNeeg1-1 methods in classifying imagined vowels (/a/,/e/,/i/,/o/,/u/) was exhibited by CNNeeg1-1, with an accuracy of 65.62% for BD1 database and 85.66% for BD2 database.
Affiliation(s)
- Luis Carlos Sarmiento
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia; (O.L.); (A.C.C.); (J.S.)
| | - Sergio Villamizar
- Department of Electrical and Electronics Engineering, School of Engineering, Universidad Nacional de Colombia, Bogotá 111321, Colombia; (S.V.); (J.B.R.)
| | - Omar López
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia; (O.L.); (A.C.C.); (J.S.)
| | - Ana Claros Collazos
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia; (O.L.); (A.C.C.); (J.S.)
| | - Jhon Sarmiento
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia; (O.L.); (A.C.C.); (J.S.)
| | - Jan Bacca Rodríguez
- Department of Electrical and Electronics Engineering, School of Engineering, Universidad Nacional de Colombia, Bogotá 111321, Colombia; (S.V.); (J.B.R.)
| |
|
33
|
Brain Symmetry Analysis during the Use of a BCI Based on Motor Imagery for the Control of a Lower-Limb Exoskeleton. Symmetry (Basel) 2021. [DOI: 10.3390/sym13091746] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Brain–Computer Interfaces (BCI) are systems that allow external devices to be controlled by means of brain activity. Several such technologies exist, and electroencephalography (EEG) is one example. One of the most common EEG control methods is based on detecting changes in sensorimotor rhythms (SMRs) during motor imagery (MI). The aim of this study was to assess the laterality of cortical function when performing MI of the lower limb. Brain signals from five subjects were analyzed in two conditions: during exoskeleton-assisted gait and while static. Three EEG electrode configurations were evaluated: covering both hemispheres, covering the non-dominant hemisphere, and covering the dominant hemisphere. In addition, the evolution of performance and laterality with practice was assessed. Although slightly superior results were achieved with information from all electrodes, the differences between electrode configurations were not statistically significant. Regarding the evolution over the experimental sessions, BCI performance generally improved as the subjects gained experience.
|
34
|
Ha J, Park S, Im CH, Kim L. A Hybrid Brain-Computer Interface for Real-Life Meal-Assist Robot Control. SENSORS 2021; 21:s21134578. [PMID: 34283122 PMCID: PMC8271393 DOI: 10.3390/s21134578] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Revised: 06/30/2021] [Accepted: 07/01/2021] [Indexed: 11/16/2022]
Abstract
Assistive devices such as meal-assist robots aid individuals with disabilities and support the elderly in performing daily activities. However, existing meal-assist robots are inconvenient to operate due to non-intuitive user interfaces that require additional time and effort. We therefore developed a hybrid brain-computer interface-based meal-assist robot system built on three features that can be measured with scalp electroencephalography (EEG) electrodes. A single meal cycle comprises the following three procedures. (1) Triple eye-blinks (EBs) from the prefrontal channel are treated as the activation signal for initiating the cycle. (2) Steady-state visual evoked potentials (SSVEPs) from occipital channels are used to select the food according to the user's intention. (3) Electromyograms (EMGs) are recorded from temporal channels as the user chews the food, marking the end of a cycle and indicating readiness for the next one. The accuracy, information transfer rate (ITR), and false positive rate (FPR) during experiments on five subjects were as follows: accuracy (EBs/SSVEPs/EMGs) (%): 94.67/83.33/97.33; FPR (EBs/EMGs) (times/min): 0.11/0.08; ITR (SSVEPs) (bit/min): 20.41. These results demonstrate the feasibility of the proposed assistive system, which allows users to eat on their own more naturally. Furthermore, it can increase the self-esteem of disabled and elderly people and enhance their quality of life.
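The triple eye-blink trigger can be approximated by simple peak counting on a prefrontal channel. The sketch below is illustrative only; the amplitude threshold, refractory distance, and time window are assumed values rather than those used in the study.
```python
import numpy as np
from scipy.signal import find_peaks

def detect_triple_blink(prefrontal, fs, amp_uv=100.0, window_s=1.5):
    """Return True if three blink-like peaks occur within window_s seconds.
    prefrontal: 1-D EEG trace from a prefrontal channel, in microvolts."""
    peaks, _ = find_peaks(prefrontal, height=amp_uv, distance=int(0.2 * fs))
    for i in range(len(peaks) - 2):
        if (peaks[i + 2] - peaks[i]) / fs <= window_s:
            return True
    return False

# Assumed usage on a sliding 2 s buffer sampled at 250 Hz:
# if detect_triple_blink(buffer, fs=250):
#     start_meal_cycle()
```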
Affiliation(s)
- Jihyeon Ha
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.H.); (S.P.)
- Department of Biomedical Engineering, Hanyang University, Seoul 04763, Korea;
| | - Sangin Park
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.H.); (S.P.)
| | - Chang-Hwan Im
- Department of Biomedical Engineering, Hanyang University, Seoul 04763, Korea;
| | - Laehyun Kim
- Center for Bionics, Korea Institute of Science and Technology, Seoul 02792, Korea; (J.H.); (S.P.)
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul 04763, Korea
- Correspondence: ; Tel.: +82-2-958-6726
| |
|