1
Luo S, Meng Q, Li S, Yu H. Research of intent recognition in rehabilitation robots: a systematic review. Disabil Rehabil Assist Technol 2024;19:1307-1318. PMID: 36695473. DOI: 10.1080/17483107.2023.2170477.
Abstract
PURPOSE Rehabilitation robots with intent recognition help people with dysfunction enjoy better lives. Many such robots have been developed by academic institutions and commercial companies. However, there is no systematic summary of the application of intent recognition in the field of rehabilitation robots. Therefore, the purpose of this paper is to summarize the application of intent recognition in rehabilitation robots, analyze the current status of the research, and suggest cutting-edge research directions. MATERIALS AND METHODS Literature searches were conducted on Web of Science, IEEE Xplore, ScienceDirect, SpringerLink, and Medline. Search terms included "rehabilitation robot", "intent recognition", "exoskeleton", "prosthesis", "surface electromyography (sEMG)", and "electroencephalogram (EEG)". References listed in the relevant literature were further screened according to inclusion and exclusion criteria. RESULTS In this field, most studies have recognized movement intent from kinematic, sEMG, and EEG signals. In practice, however, the development of intent recognition in rehabilitation robots is limited by the hysteresis of kinematic signals and the weak anti-interference ability of sEMG and EEG signals. CONCLUSIONS Intent recognition has achieved a lot in the field of rehabilitation robotics, but the key factors limiting its development are still timeliness and accuracy. In the future, intent recognition strategies with multi-sensor information fusion may be a good solution.
Affiliation(s)
- Shengli Luo
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Sujiao Li
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Hongliu Yu
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
2
Lorenz EA, Su X, Skjæret-Maroni N. A review of combined functional neuroimaging and motion capture for motor rehabilitation. J Neuroeng Rehabil 2024;21:3. PMID: 38172799. PMCID: PMC10765727. DOI: 10.1186/s12984-023-01294-6.
Abstract
BACKGROUND Technological advancements in functional neuroimaging and motion capture have led to the development of novel methods that facilitate the diagnosis and rehabilitation of motor deficits. These advancements allow for the synchronous acquisition and analysis of complex signal streams of neurophysiological data (e.g., EEG, fNIRS) and behavioral data (e.g., motion capture). The fusion of those data streams has the potential to provide new insights into cortical mechanisms during movement, guide the development of rehabilitation practices, and become a tool for assessment and therapy in neurorehabilitation. RESEARCH OBJECTIVE This paper aims to review the existing literature on the combined use of motion capture and functional neuroimaging in motor rehabilitation. The objective is to understand the diversity and maturity of the technological solutions employed and explore the clinical advantages of this multimodal approach. METHODS This paper reviews literature on the combined use of functional neuroimaging and motion capture for motor rehabilitation following the PRISMA guidelines. Besides study and participant characteristics, technological aspects of the systems used, signal processing methods, and the nature of multimodal feature synchronization and fusion were extracted. RESULTS Out of 908 publications, 19 were included in the final review. Basic or translational studies were mainly represented, based predominantly on healthy participants or stroke patients. EEG and mechanical motion capture were the most frequently used technologies for data acquisition, and their subsequent processing was based mainly on traditional methods. System synchronization techniques were largely underreported. The fusion of multimodal features mainly supported the identification of movement-related cortical activity, and statistical methods were occasionally employed to examine cortico-kinematic relationships.
CONCLUSION The fusion of motion capture and functional neuroimaging might offer advantages for motor rehabilitation in the future. Besides facilitating the assessment of cognitive processes in real-world settings, it could also improve rehabilitative devices' usability in clinical environments. Further, by better understanding cortico-peripheral coupling, new neuro-rehabilitation methods can be developed, such as personalized proprioceptive training. However, further research is needed to advance our knowledge of cortical-peripheral coupling, evaluate the validity and reliability of multimodal parameters, and enhance user-friendly technologies for clinical adaptation.
Affiliation(s)
- Emanuel A Lorenz
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Xiaomeng Su
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
- Nina Skjæret-Maroni
- Department of Neuromedicine and Movement Science, Norwegian University of Science and Technology, Trondheim, Norway
3
Dominijanni G, Pinheiro DL, Pollina L, Orset B, Gini M, Anselmino E, Pierella C, Olivier J, Shokur S, Micera S. Human motor augmentation with an extra robotic arm without functional interference. Sci Robot 2023;8:eadh1438. PMID: 38091424. DOI: 10.1126/scirobotics.adh1438.
Abstract
Extra robotic arms (XRAs) are gaining interest in neuroscience and robotics, offering potential tools for daily activities. However, this compelling opportunity poses new challenges for sensorimotor control strategies and human-machine interfaces (HMIs). A key unsolved challenge is allowing users to proficiently control XRAs without hindering their existing functions. To address this, we propose a pipeline to identify suitable HMIs given a defined task to accomplish with the XRA. Following such a scheme, we assessed a multimodal motor HMI based on gaze detection and diaphragmatic respiration in a purposely designed modular neurorobotic platform integrating virtual reality and a bilateral upper limb exoskeleton. Our results show that the proposed HMI does not interfere with speaking or visual exploration and that it can be used to control an extra virtual arm independently from the biological ones or in coordination with them. Participants showed significant improvements in performance with daily training and retention of learning, with no further improvements when artificial haptic feedback was provided. As a final proof of concept, naïve and experienced participants used a simplified version of the HMI to control a wearable XRA. Our analysis indicates how the presented HMI can be effectively used to control XRAs. The observation that experienced users achieved a success rate 22.2% higher than that of naïve users, combined with the result that naïve users showed average success rates of 74% when they first engaged with the system, endorses the viability of both the virtual reality-based testing and training and the proposed pipeline.
Affiliation(s)
- Giulia Dominijanni
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Daniel Leal Pinheiro
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Neuroengineering and Neurocognition Laboratory, Escola Paulista de Medicina, Department of Neurology and Neurosurgery, Division of Neuroscience, Universidade Federal de São Paulo, São Paulo, Brazil
- Leonardo Pollina
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Bastien Orset
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Martina Gini
- BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Neuroelectronic Interfaces, Faculty of Electrical Engineering and IT, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen 52074, Germany
- Eugenio Anselmino
- BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Camilla Pierella
- Department of Neurosciences, Rehabilitation, Ophthalmology, Genetics, and Maternal and Children's Sciences (DINOGMI), University of Genoa, Genoa, Italy
- Jérémy Olivier
- Institute for Industrial Sciences and Technologies, Haute Ecole du Paysage, d'Ingénierie et d'Architecture (HEPIA), HES-SO University of Applied Sciences and Arts Western Switzerland, Geneva, Switzerland
- Solaiman Shokur
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
- Silvestro Micera
- Neuro-X Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- BioRobotics Institute, Health Interdisciplinary Center, and Department of Excellence in AI and Robotics, Scuola Superiore Sant'Anna, Pisa, Italy
4
Jiang H, Shen F, Chen L, Peng Y, Guo H, Gao H. Joint domain symmetry and predictive balance for cross-dataset EEG emotion recognition. J Neurosci Methods 2023;400:109978. PMID: 37806390. DOI: 10.1016/j.jneumeth.2023.109978.
Abstract
BACKGROUND Cross-dataset EEG emotion recognition is an extremely challenging task, since the data distributions of EEG from different datasets differ greatly, which makes universal models yield unsatisfactory results. Although many methods have been proposed to reduce cross-dataset distribution discrepancies, they still neglect the following two problems. (1) Label space inconsistency: the emotional label spaces of subjects from different datasets are different. (2) Uncertainty propagation: the uncertainty of misclassified emotion samples propagates between datasets. NEW METHOD To solve these problems, we propose a novel method called domain symmetry and predictive balance (DSPB). For the problem of label space inconsistency, a domain symmetry module is designed to make the label spaces of the source and target domains the same; it randomly selects samples from the source domain and puts them into the target domain. For the problem of uncertainty propagation, a predictive balance module is proposed to reduce the prediction scores of incorrect samples and thereby effectively reduce the distribution differences between EEG from different datasets. RESULTS Experimental results show that our method achieves a 61.48% average accuracy on the three cross-dataset tasks. Moreover, we find that gamma is the most emotion-relevant of the five frequency bands, and that the prefrontal and temporal regions carry the most emotional information among the 62 channels. COMPARISON WITH EXISTING METHODS Compared with a partial domain adaptation method (SPDA) and an unsupervised domain adaptation method (MS-MDA), our method improves average accuracies by 15.60% and 23.11%, respectively. CONCLUSION The data distributions of EEG from different datasets but with the same emotional labels are well aligned, which demonstrates the effectiveness of DSPB.
Affiliation(s)
- Haiting Jiang
- College of Physics and Electronic Information Engineering, Zhejiang Normal University, Jinhua, 321004, China
- Fangyao Shen
- School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua, 321004, China
- Lina Chen
- School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua, 321004, China
- Yong Peng
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Hongjie Guo
- School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua, 321004, China
- Hong Gao
- School of Computer Science and Technology (School of Artificial Intelligence), Zhejiang Normal University, Jinhua, 321004, China
5
Lin C, Zhang C, Xu J, Liu R, Leng Y, Fu C. Neural Correlation of EEG and Eye Movement in Natural Grasping Intention Estimation. IEEE Trans Neural Syst Rehabil Eng 2023;31:4329-4337. PMID: 37883284. DOI: 10.1109/tnsre.2023.3327907.
Abstract
Decoding the user's natural grasp intent enhances the application of wearable robots, improving the daily lives of individuals with disabilities. Electroencephalogram (EEG) and eye movement signals are two natural representations of a user's grasp intent, and current studies decode human intent by fusing them. However, the neural correlation between these two signals remains unclear. Thus, this paper explores the consistency between EEG and eye movements in natural grasping intention estimation. Specifically, six grasp intent pairs are decoded by combining feature vectors and utilizing the optimal classifier. Extensive experimental results indicate that the coupling between EEG and eye movement intent patterns remains intact when the user generates a natural grasp intent, and that the EEG pattern is consistent with the eye movement pattern across the task pairs. Moreover, the findings reveal a solid connection between EEG and eye movements even when considering cortical EEG alone (originating from the visual cortex or motor cortex) or a suboptimal classifier. Overall, this work uncovers the coupling between EEG and eye movements and provides a reference for intention estimation.
6
Xu D, Tang F, Li Y, Zhang Q, Feng X. FB-CCNN: A Filter Bank Complex Spectrum Convolutional Neural Network with Artificial Gradient Descent Optimization. Brain Sci 2023;13:780. PMID: 37239253. DOI: 10.3390/brainsci13050780.
Abstract
The brain-computer interface (BCI) provides direct communication between human brains and machines, including robots, drones and wheelchairs, without the involvement of peripheral systems. BCI based on electroencephalography (EEG) has been applied in many fields, including aiding people with physical disabilities, rehabilitation, education and entertainment. Among the different EEG-based BCI paradigms, steady-state visual evoked potential (SSVEP)-based BCIs are known for their lower training requirements, high classification accuracy and high information transfer rate (ITR). In this article, a filter bank complex spectrum convolutional neural network (FB-CCNN) was proposed, and it achieved leading classification accuracies of 94.85 ± 6.18% and 80.58 ± 14.43%, respectively, on two open SSVEP datasets. An optimization algorithm named artificial gradient descent (AGD) was also proposed to generate and optimize the hyperparameters of the FB-CCNN. AGD also revealed correlations between different hyperparameters and their corresponding performances. It was experimentally demonstrated that FB-CCNN performed better when the hyperparameters were fixed values rather than channel number-based. In conclusion, a deep learning model named FB-CCNN and a hyperparameter-optimizing algorithm named AGD were proposed and demonstrated to be effective in classifying SSVEP through experiments. The hyperparameter design process and analysis were carried out using AGD, and advice on choosing hyperparameters for deep learning models in classifying SSVEP was provided.
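The abstract names the input representation (a filter bank complex spectrum) without specifying its construction. A minimal sketch of how such an input could be assembled is shown below; the band edges, filter order, and retained frequency range are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def filter_bank_complex_spectrum(eeg, fs, bands=((6, 16), (16, 32), (32, 64)),
                                 freq_range=(3, 35)):
    """Build a filter-bank complex-spectrum tensor from one EEG trial.

    eeg: array of shape (channels, samples).
    For each sub-band: bandpass filter, take the FFT, and stack the real and
    imaginary parts of the bins inside freq_range as separate rows, giving a
    (n_bands, 2 * channels, n_freqs) array a CNN can consume.
    """
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    keep = (freqs >= freq_range[0]) & (freqs <= freq_range[1])
    out = []
    for lo, hi in bands:
        # 4th-order Butterworth bandpass, applied forward-backward
        # (zero-phase) along the time axis.
        b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
        spec = np.fft.rfft(filtfilt(b, a, eeg, axis=1), axis=1)[:, keep]
        out.append(np.concatenate([spec.real, spec.imag], axis=0))
    return np.stack(out)
```

Stacking real and imaginary parts (rather than magnitudes) preserves phase information, which is informative for SSVEP responses.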
Affiliation(s)
- Dongcen Xu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Fengzhen Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Yiping Li
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Qifeng Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Xisheng Feng
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
7
Catalán JM, Trigili E, Nann M, Blanco-Ivorra A, Lauretti C, Cordella F, Ivorra E, Armstrong E, Crea S, Alcañiz M, Zollo L, Soekadar SR, Vitiello N, García-Aracil N. Hybrid brain/neural interface and autonomous vision-guided whole-arm exoskeleton control to perform activities of daily living (ADLs). J Neuroeng Rehabil 2023;20:61. PMID: 37149621. PMCID: PMC10164333. DOI: 10.1186/s12984-023-01185-w.
Abstract
BACKGROUND The aging of the population and the progressive increase of life expectancy in developed countries are leading to a high incidence of age-related cerebrovascular diseases, which affect people's motor and cognitive capabilities and might result in the loss of arm and hand functions. Such conditions have a detrimental impact on people's quality of life. Assistive robots have been developed to help people with motor or cognitive disabilities perform activities of daily living (ADLs) independently. Most of the robotic systems proposed in the state of the art for assisting with ADLs are external manipulators and exoskeletal devices. The main objective of this study is to compare the performance of a hybrid EEG/EOG interface to perform ADLs when the user is controlling an exoskeleton rather than an external manipulator. METHODS Ten impaired participants (5 males and 5 females, mean age 52 ± 16 years) were instructed to use both systems to perform a drinking task and a pouring task comprising multiple subtasks. For each device, two modes of operation were studied: a synchronous mode (the user received a visual cue indicating the subtask to be performed at each time) and an asynchronous mode (the user started and finished each subtask independently). Fluent control was assumed when the time for successful initializations remained below 3 s, and reliable control when it remained below 5 s. The NASA-TLX questionnaire was used to evaluate task workload. For the trials involving the exoskeleton, a custom Likert-scale questionnaire was used to evaluate the user's experience in terms of perceived comfort, safety, and reliability. RESULTS All participants were able to control both systems fluently and reliably. However, the results suggest better performance of the exoskeleton over the external manipulator (75% of successful initializations remained below 3 s with the exoskeleton and below 5 s with the external manipulator).
CONCLUSIONS Although the results of our study in terms of fluency and reliability of EEG control suggest better performances of the exoskeleton over the external manipulator, such results cannot be considered conclusive, due to the heterogeneity of the population under test and the relatively limited number of participants.
Affiliation(s)
- José M Catalán
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
- Emilio Trigili
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- Marius Nann
- Clinical Neurotechnology Laboratory, Charité, Universitätsmedizin Berlin, 10117, Berlin, Germany
- Andrea Blanco-Ivorra
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
- Clemente Lauretti
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Francesca Cordella
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Eugenio Ivorra
- University Institute for Human-Centered Technology Research (Human-Tech), Universitat Politècnica de València, 46022, Valencia, Spain
- Simona Crea
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, Milan, Italy
- Mariano Alcañiz
- University Institute for Human-Centered Technology Research (Human-Tech), Universitat Politècnica de València, 46022, Valencia, Spain
- Loredana Zollo
- Laboratory of Biomedical Robotics and Biomicrosystems, Università Campus Bio-Medico di Roma, 00128, Rome, Italy
- Surjo R Soekadar
- Clinical Neurotechnology Laboratory, Charité, Universitätsmedizin Berlin, 10117, Berlin, Germany
- Nicola Vitiello
- BioRobotics Institute, Scuola Superiore Sant'Anna, 56025, Pontedera, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant'Anna, Pisa, Italy
- IRCCS, Fondazione Don Carlo Gnocchi, Milan, Italy
- Nicolás García-Aracil
- Robotics and Artificial Intelligence Group of the Bioengineering Institute, Miguel Hernandez University, 03202, Elche, Spain
8
Bleuzé A, Mattout J, Congedo M. Tangent space alignment: Transfer learning for Brain-Computer Interface. Front Hum Neurosci 2022;16:1049985. DOI: 10.3389/fnhum.2022.1049985.
Abstract
Statistical variability of electroencephalography (EEG) between subjects and between sessions is a common problem in the field of Brain-Computer Interfaces (BCI). Such variability prevents the use of pre-trained machine learning models and requires calibration for every new session. This paper presents a new transfer learning (TL) method that deals with this variability. The method aims to reduce calibration time, and even improve the accuracy of BCI systems, by aligning EEG data from one subject to another in the tangent space of the Riemannian manifold of positive definite matrices. We tested the method on 18 BCI databases comprising a total of 349 subjects across three BCI paradigms, namely event-related potentials (ERP), motor imagery (MI), and steady-state visually evoked potentials (SSVEP). We employ a support vector classifier for feature classification. The results demonstrate a significant improvement of classification accuracy, as compared to a classical training-test pipeline, for the ERP paradigm, whereas for both the MI and SSVEP paradigms no deterioration of performance is observed. A global 2.7% accuracy improvement is obtained compared to a previously published Riemannian method, Riemannian Procrustes Analysis (RPA). Interestingly, tangent space alignment has an intrinsic ability to handle transfer learning for sets of data with different numbers of channels, naturally applying to inter-dataset transfer learning.
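The central operation described here, mapping EEG spatial covariance matrices into the tangent space of the SPD manifold at a reference point, can be sketched in a few lines. This is a generic illustration of tangent-space projection, not the authors' alignment code: the arithmetic mean serves as the reference point for brevity (Riemannian pipelines typically use the geometric mean), and the usual sqrt(2) weighting of off-diagonal terms is omitted.

```python
import numpy as np

def _eig_fun(C, fun):
    # Apply a scalar function to the eigenvalues of a symmetric PD matrix.
    w, V = np.linalg.eigh(C)
    return (V * fun(w)) @ V.T

def tangent_vectors(covs, C_ref):
    # Whiten each covariance by the reference, take the matrix logarithm:
    #   S_i = log(C_ref^{-1/2} C_i C_ref^{-1/2})
    # then vectorize the upper triangle as a Euclidean feature vector.
    P = _eig_fun(C_ref, lambda w: 1.0 / np.sqrt(w))
    iu = np.triu_indices(covs[0].shape[0])
    return np.array([_eig_fun(P @ C @ P, np.log)[iu] for C in covs])

def align_subject(covs):
    # Re-center a subject's trials at their own mean covariance, so tangent
    # features from different subjects share a common origin (identity).
    C_ref = np.mean(covs, axis=0)
    return tangent_vectors(covs, C_ref)
```

The resulting vectors live in a Euclidean space and can be fed directly to a standard classifier such as a support vector machine.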
9
Execution and perception of upper limb exoskeleton for stroke patients: a systematic review. Intell Serv Robot 2022. DOI: 10.1007/s11370-022-00435-5.
10
A Comprehensive Review of Endogenous EEG-Based BCIs for Dynamic Device Control. Sensors 2022;22:5802. PMID: 35957360. PMCID: PMC9370865. DOI: 10.3390/s22155802.
Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) provide a novel approach for controlling external devices. BCI technologies can be important enabling technologies for people with severe mobility impairment. Endogenous paradigms, which depend on user-generated commands and do not need external stimuli, can provide intuitive control of external devices. This paper discusses BCIs to control various physical devices such as exoskeletons, wheelchairs, mobile robots, and robotic arms. These technologies must be able to navigate complex environments or execute fine motor movements. Brain control of these devices presents an intricate research problem that merges signal processing and classification techniques with control theory. In particular, obtaining strong classification performance for endogenous BCIs is challenging, and EEG decoder output signals can be unstable. These issues present myriad research questions that are discussed in this review paper. This review covers papers published until the end of 2021 that presented BCI-controlled dynamic devices. It discusses the devices controlled, EEG paradigms, shared control, stabilization of the EEG signal, traditional machine learning and deep learning techniques, and user experience. The paper concludes with a discussion of open questions and avenues for future work.
11
Computer Vision-Based Adaptive Semi-Autonomous Control of an Upper Limb Exoskeleton for Individuals with Tetraplegia. Appl Sci 2022;12:4374. DOI: 10.3390/app12094374.
Abstract
We propose the use of computer vision for adaptive semi-autonomous control of an upper limb exoskeleton to assist users with severe tetraplegia and increase their independence and quality of life. A tongue-based interface was used together with the semi-autonomous control so that individuals with complete tetraplegia could use the system despite being paralyzed from the neck down. The semi-autonomous control uses computer vision to detect nearby objects and estimate how to grasp them, assisting the user in controlling the exoskeleton. Three control schemes were tested: non-autonomous control (i.e., manual control using the tongue), semi-autonomous control with a fixed level of autonomy, and semi-autonomous control with a confidence-based adaptive level of autonomy. Studies with participants with and without tetraplegia were carried out. The control schemes were evaluated both in terms of performance, such as the time and number of commands needed to complete a given task, and in terms of user ratings. The studies showed a clear and significant improvement in both performance and user ratings with either of the semi-autonomous control schemes. The adaptive semi-autonomous control outperformed the fixed version in some scenarios, namely in the more complex tasks and with users who had more training with the system.
12
Fitzsimons K, Murphey TD. Ergodic Shared Control: Closing the Loop on pHRI Based on Information Encoded in Motion. ACM Trans Hum-Robot Interact 2022. DOI: 10.1145/3526106.
Abstract
Advances in exoskeletons and robot arms have given us increasing opportunities for providing physical support and meaningful feedback in training and rehabilitation settings. However, the chosen control strategies must support motor learning and provide mathematical task definitions that are actionable for the actuation. Typical robot control architectures rely on measuring error from a reference trajectory. In physical human-robot interaction, this leads to low engagement, invariant practice, and few errors, which are not conducive to motor learning. A reliance on reference trajectories means that the task definition is both over-specified (requiring specific timings not critical to task success) and lacking information about normal variability. In this article, we examine a way to define tasks and close the loop using an ergodic measure that quantifies how much information about a task is encoded in the human-robot motion. This measure can capture the natural variability that exists in typical human motion, enabling therapy based on scientific principles of motor learning. We implement an ergodic hybrid shared controller (HSC) on a robotic arm, as well as an error-based controller (virtual fixtures), in a timed drawing task. In a study of 24 participants, we compare the ergodic HSC with virtual fixtures and find that the ergodic HSC leads to improved training outcomes.
Affiliation(s)
- Todd D Murphey
- Department of Mechanical Engineering, Northwestern University, USA
13
Ma X, Qiu S, He H. Time-Distributed Attention Network for EEG-based Motor Imagery Decoding from the Same Limb. IEEE Trans Neural Syst Rehabil Eng 2022;30:496-508. PMID: 35201988. DOI: 10.1109/tnsre.2022.3154369.
Abstract
A brain-computer interface (BCI) based on motor imagery (MI) of the same limb can provide an intuitive control pathway but has received limited attention, and it remains a challenge to classify multiple MI tasks from the same limb. The goal of this study is to propose a novel decoding method to classify MI tasks of four joints of the same upper limb and the resting state. EEG signals were collected from 20 participants. A time-distributed attention network (TD-Atten) was proposed to adaptively assign different weights to different classes and frequency bands of the input multiband Common Spatial Pattern (CSP) features. Long short-term memory (LSTM) and dense layers were then used to learn sequential information from the reweighted features and perform the classification. Our proposed method outperformed other baseline and deep learning-based methods, obtaining accuracies of 46.8% in the 5-class scenario and 53.4% in the 4-class scenario. Visualization of the attention weights indicated that the proposed framework can adaptively attend to alpha-band-related features in MI tasks, consistent with the analysis of brain activation patterns. These results demonstrate the feasibility and interpretability of the attention mechanism in MI decoding and the potential of this fine MI paradigm for controlling a robotic arm or a neural prosthesis.
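The core reweighting step described in the abstract (attention weights applied to multiband CSP features before the LSTM stage) can be sketched as follows. This is an illustrative NumPy toy with assumed shapes, and a random projection stands in for the learned attention parameters; it is not the authors' TD-Atten implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_reweight(csp_feats, w_query):
    """Reweight filter-bank CSP features with attention scores.

    csp_feats: (n_bands, n_csp) features, one row per frequency band.
    w_query:   (n_csp,) projection vector scoring each band's relevance
               (learned end-to-end in the real network; random here).
    Returns the reweighted features and the per-band attention weights.
    """
    scores = csp_feats @ w_query            # one relevance score per band
    attn = softmax(scores)                  # normalize scores into weights
    reweighted = attn[:, None] * csp_feats  # scale each band's features
    return reweighted, attn

# toy example: 9 frequency bands, 4 CSP components per band
rng = np.random.default_rng(0)
feats = rng.standard_normal((9, 4))
w = rng.standard_normal(4)
rew, attn = attention_reweight(feats, w)
```

In the paper's pipeline, the reweighted band features would then feed an LSTM over time windows; here the sketch stops at the attention step, which is what the visualized weights in the abstract refer to.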
|
14
|
Continuous Hybrid BCI Control for Robotic Arm Using Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking. Mathematics 2022. [DOI: 10.3390/math10040618]
Abstract
Controlling robotic arms through a brain–computer interface (BCI) could revolutionize the quality of life and living conditions of individuals with physical disabilities. Invasive electroencephalography (EEG)-based BCIs have been able to control robotic arms with multiple degrees of freedom (DOFs) in three dimensions. However, it is still hard to control a multi-DOF robotic arm to reach and grasp a desired target accurately in complex three-dimensional (3D) space with a noninvasive system, mainly due to the limitations of EEG decoding performance. In this study, we propose a noninvasive EEG-based BCI for a robotic arm control system that enables users to complete multitarget reach-and-grasp tasks and avoid obstacles by hybrid control. The results obtained from seven subjects demonstrated that motor imagery (MI) training could modulate brain rhythms, and six of them completed the online tasks using the hybrid-control-based robotic arm system. The proposed system shows effective performance due to the combination of MI-based EEG, computer vision, gaze detection, and partially autonomous guidance, which drastically improves the accuracy of online tasks and reduces the brain burden caused by long-term mental activity.
|
15
|
Li H, Zhang W, Zhang J, Huang W. Fiber optic jerk sensor. Optics Express 2022; 30:5585-5595. [PMID: 35209517] [DOI: 10.1364/oe.448132]
Abstract
Jerk is directly related to physical processes of abrupt change, such as structural damage, and to human comfort. A fiber optic jerk sensor (FOJS) based on a fiber optic differentiating Mach-Zehnder interferometer is proposed. It can measure jerk directly by demodulating the phase of the interference light, which avoids the high-frequency noise interference caused by differentiating acceleration. The sensing theory and sensor design are given in detail. The experimental and theoretical results agree, demonstrating that the FOJS has high sensitivity, an ultralow phase noise floor, a wide measuring range, and good linearity. An impact test shows that the FOJS can measure jerk directly and agrees well with a standard piezoelectric accelerometer. The FOJS has potential applications in earthquake engineering, comfort evaluation, and railway design. To our knowledge, this is the first report of direct jerk measurement with an optical sensor.
|
16
|
Meng J, Wu Z, Li S, Zhu X. Effects of Gaze Fixation on the Performance of a Motor Imagery-Based Brain-Computer Interface. Front Hum Neurosci 2022; 15:773603. [PMID: 35140593] [PMCID: PMC8818858] [DOI: 10.3389/fnhum.2021.773603]
Abstract
Motor imagery-based brain-computer interfaces (BCIs) have previously been studied without controlling subjects' gaze fixation position. The effect of gaze fixation and covert attention on the behavioral performance of BCI is still unknown. This study designed a gaze-fixation-controlled experiment. Subjects were required to perform a secondary task of gaze fixation while performing the primary task of motor imagination. Subjects' performance was analyzed according to the relationship between the motor imagery target and the gaze fixation position, resulting in three BCI control conditions: congruent, incongruent, and center-cross trials. A group of fourteen subjects was recruited. The average group performances under the three conditions did not show statistically significant differences in terms of BCI control accuracy, feedback duration, or trajectory length. Further analysis of gaze shift response time revealed a significantly shorter response time for congruent trials than for incongruent trials. Meanwhile, the parietal-occipital cortex also showed active neural activity for congruent and incongruent trials, as revealed by a contrast analysis of R-square values and the lateralization index. However, the lateralization index computed from the parietal and occipital areas was not correlated with BCI behavioral performance. Subjects' BCI behavioral performance was not affected by the position of gaze fixation and covert attention, indicating that motor imagery-based BCI could be used freely in robotic arm control without sacrificing performance.
Affiliation(s)
- Jianjun Meng (correspondence)
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zehan Wu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Songwei Li
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiangyang Zhu
- Department of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
|
17
|
Mohammadi M, Knoche H, Thøgersen M, Bengtson SH, Gull MA, Bentsen B, Gaihede M, Severinsen KE, Andreasen Struijk LNS. Eyes-Free Tongue Gesture and Tongue Joystick Control of a Five DOF Upper-Limb Exoskeleton for Severely Disabled Individuals. Front Neurosci 2022; 15:739279. [PMID: 34975367] [PMCID: PMC8718615] [DOI: 10.3389/fnins.2021.739279]
Abstract
Spinal cord injury can leave the affected individual severely disabled, with a low level of independence and quality of life. Assistive upper-limb exoskeletons are one of the solutions that can enable an individual with tetraplegia (paralysis in both arms and legs) to perform simple activities of daily living by mobilizing the arm. Providing an efficient user interface that allows full, continuous control of such a device, safely and intuitively, with multiple degrees of freedom (DOFs) still remains a challenge. In this study, a control interface for an assistive upper-limb exoskeleton with five DOFs based on an intraoral tongue-computer interface (ITCI) for individuals with tetraplegia was proposed. Furthermore, we evaluated eyes-free use of the ITCI for the first time and compared two tongue-operated control methods, one based on tongue gestures and the other based on dynamic virtual buttons and joystick-like control. Ten able-bodied participants tongue-controlled the exoskeleton in a drinking task, with and without visual feedback on a screen, in three experimental sessions. As a baseline, the participants performed the drinking task with a standard gamepad. The results showed that it was possible to control the exoskeleton with the tongue even without visual feedback, and to perform the drinking task at 65.1% of the speed achieved with the gamepad. In a clinical case study, an individual with tetraplegia further succeeded in fully controlling the exoskeleton and performing the drinking task only 5.6% slower than the able-bodied group. This study demonstrated the first single-modal control interface that can enable individuals with complete tetraplegia to fully and continuously control a five-DOF upper-limb exoskeleton and perform a drinking task after only 2 h of training. The interface was used both with and without visual feedback.
Affiliation(s)
- Mostafa Mohammadi
- Neurorehabilitation Robotics and Engineering, Center for Rehabilitation Robotics, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Hendrik Knoche
- Human Machine Interaction, Department of Architecture, Design and Media Technology, Aalborg University, Aalborg, Denmark
- Mikkel Thøgersen
- Neurorehabilitation Robotics and Engineering, Center for Rehabilitation Robotics, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Stefan Hein Bengtson
- Human Machine Interaction, Department of Architecture, Design and Media Technology, Aalborg University, Aalborg, Denmark
- Muhammad Ahsan Gull
- Department of Materials and Production, Aalborg University, Aalborg, Denmark
- Bo Bentsen
- Neurorehabilitation Robotics and Engineering, Center for Rehabilitation Robotics, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Michael Gaihede
- Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Lotte N S Andreasen Struijk
- Neurorehabilitation Robotics and Engineering, Center for Rehabilitation Robotics, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
|
18
|
Chen H, Jin M, Li Z, Fan C, Li J, He H. MS-MDA: Multisource Marginal Distribution Adaptation for Cross-Subject and Cross-Session EEG Emotion Recognition. Front Neurosci 2021; 15:778488. [PMID: 34949983] [PMCID: PMC8688841] [DOI: 10.3389/fnins.2021.778488]
Abstract
As an essential element of the diagnosis and rehabilitation of psychiatric disorders, electroencephalogram (EEG)-based emotion recognition has achieved significant progress due to its high precision and reliability. However, one obstacle to practicality lies in the variability between subjects and sessions. Although several studies have adopted domain adaptation (DA) approaches to tackle this problem, most of them treat multiple EEG datasets from different subjects and sessions together as a single source domain for transfer, which either fails to satisfy the assumption of domain adaptation that the source has a certain marginal distribution, or increases the difficulty of adaptation. We therefore propose multi-source marginal distribution adaptation (MS-MDA) for EEG emotion recognition, which takes both domain-invariant and domain-specific features into consideration. First, we assume that different EEG datasets share the same low-level features; then we construct independent branches for the multiple EEG source domains to adopt one-to-one domain adaptation and extract domain-specific features. Finally, the inference is made by the multiple branches. We evaluate our method on SEED and SEED-IV for recognizing three and four emotions, respectively. Experimental results show that MS-MDA outperforms the comparison methods and state-of-the-art models in cross-session and cross-subject transfer scenarios in our settings. Code is available at https://github.com/VoiceBeer/MS-MDA.
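The one-to-one marginal distribution adaptation in each source branch is typically driven by a discrepancy measure such as the maximum mean discrepancy (MMD). The sketch below is a hedged illustration, not the authors' code: it assumes a linear kernel (under which squared MMD reduces to the distance between domain feature means) and synthetic features standing in for EEG.

```python
import numpy as np

def linear_mmd2(source, target):
    """Squared maximum mean discrepancy with a linear kernel.

    source, target: (n_samples, n_features) arrays of features.
    With a linear kernel, MMD^2 reduces to the squared Euclidean distance
    between the two domains' feature means; MS-MDA-style training would
    minimize such a discrepancy in each one-to-one source->target branch.
    """
    diff = source.mean(axis=0) - target.mean(axis=0)
    return float(diff @ diff)

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, size=(200, 8))           # one source domain's features
tgt_matched = rng.normal(0.05, 1.0, size=(200, 8))  # target with a similar marginal
tgt_shifted = rng.normal(2.0, 1.0, size=(200, 8))   # target with a shifted marginal
m_matched = linear_mmd2(src, tgt_matched)
m_shifted = linear_mmd2(src, tgt_shifted)
```

A shifted target marginal yields a much larger discrepancy than a matched one, which is exactly the signal a per-branch adaptation loss exploits.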
Affiliation(s)
- Hao Chen
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China; Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Ming Jin
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China; Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Zhunan Li
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China; Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Cunhang Fan
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Jinpeng Li
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China; Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Huiguang He
- Research Center for Brain-inspired Intelligence and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
|
19
|
Dalla Gasperina S, Roveda L, Pedrocchi A, Braghin F, Gandolla M. Review on Patient-Cooperative Control Strategies for Upper-Limb Rehabilitation Exoskeletons. Front Robot AI 2021; 8:745018. [PMID: 34950707] [PMCID: PMC8688994] [DOI: 10.3389/frobt.2021.745018]
Abstract
Technology-supported rehabilitation therapy for neurological patients has gained increasing interest over the last decades. The literature agrees that the goal of robots should be to induce motor plasticity in subjects undergoing rehabilitation by providing repetitive, intensive, and task-oriented treatment. As a key element, robot controllers should adapt to the patient's status and recovery stage. Thus, the design of effective training modalities and their hardware implementation play a crucial role in robot-assisted rehabilitation and strongly influence the treatment outcome. The objective of this paper is to provide a multi-disciplinary vision of patient-cooperative control strategies for upper-limb rehabilitation exoskeletons, to help researchers bridge the gap between human motor control aspects, desired rehabilitation training modalities, and their hardware implementations. To this aim, we propose a three-level classification based on 1) "high-level" training modalities, 2) "low-level" control strategies, and 3) "hardware-level" implementation. We then provide examples of upper-limb exoskeletons from the literature to show how the three levels of implementation have been combined to obtain a given high-level behavior, specifically designed to promote motor relearning during rehabilitation. Finally, we emphasize the need for compliant control strategies based on collaboration between the exoskeleton and the wearer, report key findings on promoting the desired physical human-robot interaction for neurorehabilitation, and provide insights and suggestions for future work.
Affiliation(s)
- Stefano Dalla Gasperina
- NearLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; WE-COBOT Lab, Polo Territoriale di Lecco, Politecnico di Milano, Lecco, Italy
- Loris Roveda
- Istituto Dalle Molle di studi sull'Intelligenza Artificiale (IDSIA), USI-SUPSI, Lugano, Switzerland
- Alessandra Pedrocchi
- NearLab, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy; WE-COBOT Lab, Polo Territoriale di Lecco, Politecnico di Milano, Lecco, Italy
- Francesco Braghin
- WE-COBOT Lab, Polo Territoriale di Lecco, Politecnico di Milano, Lecco, Italy; Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy
- Marta Gandolla
- WE-COBOT Lab, Polo Territoriale di Lecco, Politecnico di Milano, Lecco, Italy; Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy
|
20
|
Mouchoux J, Bravo-Cabrera MA, Dosen S, Schilling AF, Markovic M. Impact of Shared Control Modalities on Performance and Usability of Semi-autonomous Prostheses. Front Neurorobot 2021; 15:768619. [PMID: 34975446] [PMCID: PMC8718752] [DOI: 10.3389/fnbot.2021.768619]
Abstract
Semi-autonomous (SA) control of upper-limb prostheses can improve performance and decrease the cognitive burden on the user. In this approach, the prosthesis is equipped with additional sensors (e.g., computer vision) that provide contextual information and enable the system to accomplish some tasks automatically. Autonomous control is fused with the volitional input of the user to compute the commands sent to the prosthesis. Although several promising prototypes demonstrating the potential of this approach have been presented, methods to integrate the two control streams (i.e., autonomous and volitional) have not been systematically investigated. In the present study, we implemented three shared control modalities (sequential, simultaneous, and continuous) and compared their performance, as well as the cognitive and physical burdens imposed on the user. In the sequential approach, the volitional input disabled the autonomous control. In the simultaneous approach, volitional input to a specific degree of freedom (DoF) activated autonomous control of the other DoFs, whereas in the continuous approach, autonomous control was always active except for the DoFs controlled by the user. The experiment was conducted with ten able-bodied subjects, who used an SA prosthesis to perform reach-and-grasp tasks while reacting to audio cues (dual tasking). The results demonstrated that, compared to the manual baseline (volitional control only), all three SA modalities accomplished the task in a shorter time and required less volitional control input. The simultaneous SA modality performed worse than the sequential and continuous SA approaches. When systematic errors were introduced in the autonomous controller to generate a mismatch between the goals of the user and the controller, the performance of the SA modalities decreased substantially, even below the manual baseline; the sequential SA scheme was the least affected by such errors. The present study demonstrates that the specific approach used to integrate volitional and autonomous control significantly affects performance and physical and cognitive load, and should therefore be considered when designing SA prostheses.
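The three shared-control modalities compared above can be caricatured as different gating rules for fusing per-DoF volitional and autonomous commands. The following toy sketch illustrates the distinction under stated assumptions (dictionary-valued commands, with 0.0 meaning "no user input on this DoF"); it is not the study's controller.

```python
def fuse_commands(volitional, autonomous, mode):
    """Fuse per-DoF volitional and autonomous commands (toy model).

    volitional, autonomous: dicts mapping DoF name -> command value.
    mode: 'sequential'   - any user input disables autonomous control;
          'simultaneous' - user input on one DoF lets autonomy drive the others;
          'continuous'   - autonomy always runs except on user-driven DoFs.
    """
    user_active = {d: v != 0.0 for d, v in volitional.items()}
    any_active = any(user_active.values())
    fused = {}
    for dof in volitional:
        if user_active[dof]:
            fused[dof] = volitional[dof]  # the user always wins on their own DoF
        elif mode == "sequential":
            fused[dof] = 0.0 if any_active else autonomous[dof]
        elif mode == "simultaneous":
            fused[dof] = autonomous[dof] if any_active else 0.0
        else:  # continuous
            fused[dof] = autonomous[dof]
    return fused

vol = {"wrist": 0.5, "grasp": 0.0}   # user is driving the wrist only
auto = {"wrist": 0.2, "grasp": 0.8}  # autonomy wants to close the grasp
```

For the same inputs, the sequential rule freezes the grasp while the user moves the wrist, whereas the simultaneous and continuous rules let autonomy close it, mirroring the behavioral differences the study measured.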
Affiliation(s)
- Jérémy Mouchoux
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Miguel A. Bravo-Cabrera
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Strahinja Dosen
- Faculty of Medicine, Department of Health Science and Technology, Center for Sensory-Motor Interaction, Aalborg University, Aalborg, Denmark
- Arndt F. Schilling
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
- Marko Markovic
- Applied Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedics and Plastic Surgery, University Medical Center Göttingen, Georg-August University, Göttingen, Germany
|
21
|
A Wearable Soft Fabric Sleeve for Upper Limb Augmentation. Sensors 2021; 21:7638. [PMID: 34833719] [PMCID: PMC8620533] [DOI: 10.3390/s21227638]
Abstract
Soft actuators (SAs) have been used in many compliant robotic structures and wearable devices, due to their safe interaction with wearers. Despite advances, the capability of current SAs is limited by scalability, high hysteresis, and slow responses. In this paper, a new class of soft, scalable, high-aspect-ratio, fiber-reinforced hydraulic SAs is introduced. The new SA uses a simple fabrication process of insertion, in which a hollow elastic rubber tube is directly inserted into a constrained hollow coil, eliminating the need to manually wrap an inextensible fiber around a long elastic structure. To provide high adaptation to the user's skin for wearable applications, the new SAs are integrated into flexible fabrics to form a wearable fabric sleeve. To monitor SA elongation, a soft liquid-metal-based fabric piezoresistive sensor is also developed. To capture the nonlinear hysteresis of the SA, a novel asymmetric hysteresis model requiring only five parameters is developed and experimentally validated. The SA-driven wearable robotic sleeve is scalable, highly flexible, and lightweight. It can also produce a large force of around 23 N per muscle at around 30% elongation, providing useful assistance to the human upper limbs. Experimental results show that the soft fabric sleeve can augment a user's performance when working against a load, evidenced by a significant reduction in muscular effort, as monitored by electromyogram (EMG) signals. The performance of the developed SAs, soft fabric sleeve, soft liquid-metal fabric sensor, and nonlinear hysteresis model reveals that they can effectively modulate the level of assistance for the wearer. The technologies developed in this work can potentially be applied in emerging assistive settings such as rehabilitation, defense, and industry.
|
22
|
Qiu W, Yang B, Ma J, Gao S, Zhu Y, Wang W. The Paradigm Design of a Novel 2-class Unilateral Upper Limb Motor Imagery Tasks and its EEG Signal Classification. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:152-155. [PMID: 34891260] [DOI: 10.1109/embc46164.2021.9630837]
Abstract
Multitasking motor imagery (MI) of the unilateral upper limb is potentially more valuable in stroke rehabilitation than the current conventional MI of both hands. In this paper, a novel experimental paradigm was designed in which subjects imagine two motions of the unilateral upper limb: hand gripping and releasing, and elbow reciprocating left and right. During the experiment, electroencephalogram (EEG) signals were collected from 10 subjects. The time and frequency domains of the EEG signals were analyzed and visualized, indicating the presence of distinct event-related desynchronization (ERD) or event-related synchronization (ERS) for the two tasks. The two tasks were then classified with three different EEG decoding methods, of which an optimized convolutional neural network (CNN) based on FBCNet achieved an average accuracy of 67.8%, a good recognition result. This work not only advances the study of MI decoding of the unilateral upper limb, but also provides a basis for better upper-limb stroke rehabilitation in MI-BCI.
|
23
|
Crocher V, Singh R, Newn J, Oetomo D. Towards a Gaze-Informed Movement Intention Model for Robot-Assisted Upper-Limb Rehabilitation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6155-6158. [PMID: 34892521] [DOI: 10.1109/embc46164.2021.9629610]
Abstract
Gaze-based intention detection has been explored for robot-assisted neuro-rehabilitation in recent years. As eye movements often precede hand movements, robotic devices can use gaze information to augment the detection of movement intention in upper-limb rehabilitation. However, due to the likely practical drawbacks of head-mounted eye trackers and the limited generalisability of the algorithms, gaze-informed approaches have not yet been used in clinical practice. This paper introduces a preliminary model for gaze-informed movement intention that separates the spatial component of intention, obtained from gaze, from the temporal component, obtained from movement. We leverage the latter to isolate the relevant gaze information occurring just before movement initiation. We evaluated our approach with six healthy individuals using an experimental setup that employed a screen-mounted eye tracker. The results showed a prediction accuracy of 60% and 73% for an arbitrary target choice and an imposed target choice, respectively. From these findings, we expect that the model could 1) generalise better to individuals with movement impairment (by not considering movement direction), 2) allow generalisation to more complex, multi-stage actions including several sub-movements, and 3) facilitate more natural human-robot interaction and empower patients with the agency to decide movement onset. Overall, the paper demonstrates the potential of a gaze-movement model and of screen-based eye trackers for robot-assisted upper-limb rehabilitation.
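The separation described above (spatial intention from gaze, timing from movement) can be illustrated with a minimal sketch: gaze samples falling in a short window just before the detected movement onset vote for their nearest on-screen target. The function names, window length, and majority-vote rule are assumptions for illustration, not the authors' model.

```python
import math

def predict_target(gaze_samples, movement_onset_t, targets, window=0.5):
    """Predict the intended target from gaze just before movement onset.

    gaze_samples: list of (t, x, y) gaze points.
    movement_onset_t: time (s) at which movement initiation was detected.
    targets: dict of target name -> (x, y) screen position.
    window: how far before onset (s) gaze is considered relevant.
    Each in-window sample votes for its nearest target; the majority wins.
    Returns None if no gaze sample falls inside the window.
    """
    votes = {}
    for t, x, y in gaze_samples:
        if movement_onset_t - window <= t <= movement_onset_t:
            nearest = min(targets, key=lambda k: math.dist((x, y), targets[k]))
            votes[nearest] = votes.get(nearest, 0) + 1
    return max(votes, key=votes.get) if votes else None

targets = {"left": (0.0, 0.0), "right": (10.0, 0.0)}
# the subject glances left early, then fixates near the right target
gaze = [(0.1, 9.0, 1.0), (0.2, 1.0, 0.0), (0.45, 9.5, 0.2), (0.48, 9.8, -0.1)]
pred = predict_target(gaze, movement_onset_t=0.5, targets=targets)
```

Note that the prediction depends only on where the gaze was, not on the movement direction, which is what would let such a model generalise to users whose movements are impaired.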
|
24
|
Poy I, Wu L, Shi BE. A Multimodal Direct Gaze Interface for Wheelchairs and Teleoperated Robots. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4796-4800. [PMID: 34892283] [DOI: 10.1109/embc46164.2021.9630471]
Abstract
Gaze-based interfaces are especially useful for people with disabilities involving the upper limbs or hands. Typically, users select from a number of options (e.g., letters or commands) displayed on a screen by gazing at the desired option. However, in some applications, e.g., gaze-based driving, it may be dangerous to direct gaze away from the environment towards a separate display. In addition, a purely gaze-based interface can impose a high cognitive load on users, as gaze is not normally used for selection and/or control, but rather for other purposes, such as information gathering. To address these issues, this paper presents a cost-effective multimodal system for gaze-based driving which combines appearance-based gaze estimates derived from webcam images with push-button inputs that trigger command execution. This system uses an intuitive "direct interface", where users determine the direction of motion by gazing in the corresponding direction in the environment. We have implemented the system for both wheelchair control and robotic teleoperation. The use of our system should provide substantial benefits for patients with severe motor disabilities, such as ALS, by giving them a more natural and affordable method of wheelchair control. We compare the performance of our system to the more conventional and common "indirect" system, where gaze is used to select commands from a separate display, showing that our system enables faster and more efficient navigation.
|
25
|
Esposito D, Centracchio J, Andreozzi E, Gargiulo GD, Naik GR, Bifulco P. Biosignal-Based Human-Machine Interfaces for Assistance and Rehabilitation: A Survey. Sensors 2021; 21:6863. [PMID: 34696076] [PMCID: PMC8540117] [DOI: 10.3390/s21206863]
Abstract
By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey aims to review the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art, and to identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The studies found were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotential, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over the last years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.
Affiliation(s)
- Daniele Esposito
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Jessica Centracchio
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Emilio Andreozzi
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
- Gaetano D. Gargiulo
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia; The MARCS Institute, Western Sydney University, Penrith, NSW 2751, Australia
- Ganesh R. Naik (correspondence)
- School of Engineering, Design and Built Environment, Western Sydney University, Penrith, NSW 2747, Australia; The Adelaide Institute for Sleep Health, Flinders University, Bedford Park, SA 5042, Australia
- Paolo Bifulco
- Department of Electrical Engineering and Information Technologies, Polytechnic and Basic Sciences School, University of Naples “Federico II”, 80125 Naples, Italy
26
Qin K, Wang R, Zhang Y. Filter Bank-Driven Multivariate Synchronization Index for Training-Free SSVEP BCI. IEEE Trans Neural Syst Rehabil Eng 2021; 29:934-943. [PMID: 33852389] [DOI: 10.1109/tnsre.2021.3073165]
Abstract
In recent years, the multivariate synchronization index (MSI) algorithm, a novel frequency detection method, has attracted increasing attention in the study of brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs). However, the MSI algorithm struggles to fully exploit SSVEP-related harmonic components in the electroencephalogram (EEG), which limits its application in BCI systems. In this paper, we propose a novel filter bank-driven MSI algorithm (FBMSI) to overcome this limitation and further improve the accuracy of SSVEP recognition. We evaluate the efficacy of the FBMSI method by developing a 6-command SSVEP-NAO robot system with extensive experimental analyses. An offline experimental study is first performed with EEG collected from nine subjects to investigate the effects of varying parameters on model performance. Offline results show that the proposed method achieves a stable improvement. We further conduct an online experiment with six subjects to assess the efficacy of the developed FBMSI algorithm in a real-time BCI application. The online experimental results show that the FBMSI algorithm yields a promising average accuracy of 83.56% using a data length of only one second, 12.26% higher than the standard MSI algorithm. These extensive experimental results confirm the effectiveness of the FBMSI algorithm in SSVEP recognition and demonstrate its potential in the development of improved BCI systems.
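To illustrate the synchronization-index idea underlying this entry, the sketch below implements the standard MSI score (not the paper's FBMSI code; the filter-bank variant would apply this function to several band-passed copies of the EEG and combine the per-band scores):

```python
import numpy as np

def _inv_sqrt(m):
    # Inverse matrix square root via eigendecomposition (m symmetric positive definite).
    w, v = np.linalg.eigh(m)
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

def msi(eeg, freq, fs, n_harmonics=2):
    """Multivariate synchronization index between EEG (channels x samples)
    and sine/cosine references at `freq`; larger = stronger SSVEP response."""
    t = np.arange(eeg.shape[1]) / fs
    ref = np.vstack([f(2 * np.pi * (h + 1) * freq * t)
                     for h in range(n_harmonics) for f in (np.sin, np.cos)])
    z = np.vstack([eeg, ref])
    z = z - z.mean(axis=1, keepdims=True)
    c = (z @ z.T) / z.shape[1]                 # joint covariance matrix
    n1 = eeg.shape[0]
    d = np.zeros_like(c)
    d[:n1, :n1] = _inv_sqrt(c[:n1, :n1])       # whiten the EEG block
    d[n1:, n1:] = _inv_sqrt(c[n1:, n1:])       # whiten the reference block
    lam = np.linalg.eigvalsh(d @ c @ d)
    lam = np.clip(lam, 1e-12, None)
    lam = lam / lam.sum()
    p = c.shape[0]
    # S = 1 + sum(lam * log lam) / log(p): ~0 for no coupling, 1 for full synchrony
    return 1.0 + np.sum(lam * np.log(lam)) / np.log(p)
```

Scoring every candidate stimulation frequency and picking the maximum gives the training-free SSVEP decision.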
27
Lillo PD, Arrichiello F, Vito DD, Antonelli G. BCI-Controlled Assistive Manipulator: Developed Architecture and Experimental Results. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2020.2979375]
28
Upper Limb Bionic Orthoses: General Overview and Forecasting Changes. Appl Sci (Basel) 2020. [DOI: 10.3390/app10155323]
Abstract
Using robotics in modern medicine is slowly becoming a common practice. However, there are still important life science fields which are currently devoid of such advanced technology. A noteworthy example of a life sciences field which would benefit from process automation and advanced robotic technology is rehabilitation of the upper limb with the use of an orthosis. Here, we present the state-of-the-art and prospects for development of mechanical design, actuator technology, control systems, sensor systems, and machine learning methods in rehabilitation engineering. Moreover, current technical solutions, as well as forecasts on improvement, for exoskeletons are presented and reviewed. The overview presented might be the cornerstone for future research on advanced rehabilitation engineering technology, such as an upper limb bionic orthosis.
29
Zhang D, Yao L, Chen K, Wang S, Chang X, Liu Y. Making Sense of Spatio-Temporal Preserving Representations for EEG-Based Human Intention Recognition. IEEE Trans Cybern 2020; 50:3033-3044. [PMID: 31021810] [DOI: 10.1109/tcyb.2019.2905157]
Abstract
A brain-computer interface (BCI) is a system empowering humans to communicate with or control the outside world using brain intentions alone. Electroencephalography (EEG)-based BCI is one of the most promising solutions owing to its convenient and portable instrumentation. Despite extensive research on EEG in recent years, it is still challenging to interpret EEG signals effectively due to their noisy nature and the difficulty of capturing the inconspicuous relations between EEG signals and specific brain activities. Most existing works either consider EEG only as chain-like sequences, neglecting complex dependencies between adjacent signals, or require complex preprocessing. In this paper, we introduce two deep learning-based frameworks with novel spatio-temporal preserving representations of raw EEG streams to precisely identify human intentions. The two frameworks combine convolutional and recurrent neural networks, exploring the preserved spatial and temporal information in either a cascade or a parallel manner. Extensive experiments on a large-scale movement intention EEG dataset (108 subjects, 3,145,160 EEG records) demonstrate that the proposed frameworks achieve a high accuracy of 98.3% and outperform a set of state-of-the-art and baseline models. The developed models are further evaluated with a real-world brain typing BCI and achieve a recognition accuracy of 93% over five instruction intentions, suggesting good generalization across different kinds of intentions and BCI systems.
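The spatial-preserving representation can be illustrated with a toy example: the 1-D channel vector at each time step is mapped onto a 2-D electrode grid so that a ConvNet sees neighboring electrodes as neighboring pixels. The 3×3 mesh below is an illustrative subset of the 10-20 layout, not the paper's actual mapping:

```python
import numpy as np

# Illustrative 3x3 electrode mesh (subset of the 10-20 layout); a real
# system would cover the full montage.
MESH = [["F3", "Fz", "F4"],
        ["C3", "Cz", "C4"],
        ["P3", "Pz", "P4"]]
CHANNELS = ["F3", "Fz", "F4", "C3", "Cz", "C4", "P3", "Pz", "P4"]

def to_spatial_frames(eeg):
    """Turn (channels x time) EEG into a (time x rows x cols) stack of 2-D
    frames preserving electrode adjacency; a CNN then reads each frame and
    an RNN reads the frame sequence (cascade) or both read in parallel."""
    idx = {name: i for i, name in enumerate(CHANNELS)}
    frames = np.zeros((eeg.shape[1], len(MESH), len(MESH[0])))
    for r, row in enumerate(MESH):
        for c, name in enumerate(row):
            frames[:, r, c] = eeg[idx[name], :]
    return frames
```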
30
Ma X, Qiu S, He H. Multi-channel EEG recording during motor imagery of different joints from the same limb. Sci Data 2020; 7:191. [PMID: 32561769] [PMCID: PMC7305171] [DOI: 10.1038/s41597-020-0535-2]
Abstract
Motor imagery (MI) is one of the important brain-computer interface (BCI) paradigms, which can be used to control peripherals without an external stimulus. Imagining the movements of different joints of the same limb allows intuitive control of external devices. In this report, we describe an open-access multi-subject dataset for MI of different joints from the same limb. The experiment collected data from twenty-five healthy subjects on three tasks: 1) imagining the movement of the right hand, 2) imagining the movement of the right elbow, and 3) resting with eyes open, resulting in a total of 22,500 trials. The dataset includes data at three stages: 1) raw recorded data, 2) pre-processed data after operations such as artifact removal, and 3) trial data that can be directly used for feature extraction and classification. Researchers can reuse the dataset according to their needs. We expect that this dataset will facilitate the analysis of brain activation patterns of the same limb and the study of decoding techniques for MI.
Affiliation(s)
- Xuelin Ma: The Research Center for Brain-Inspired Intelligence & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, 100190, China; The School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Shuang Qiu: The Research Center for Brain-Inspired Intelligence & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, 100190, China
- Huiguang He: The Research Center for Brain-Inspired Intelligence & National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, 100190, China; The School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China; The Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, 100190, China
31
Khan MA, Das R, Iversen HK, Puthusserypady S. Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application. Comput Biol Med 2020; 123:103843. [PMID: 32768038] [DOI: 10.1016/j.compbiomed.2020.103843]
Abstract
Strokes are a growing cause of mortality, and many stroke survivors suffer from motor impairment as well as other types of disability in their daily activities. To treat these sequelae, motor imagery (MI) based brain-computer interface (BCI) systems have shown potential to serve as an effective neurorehabilitation tool for post-stroke rehabilitation therapy. In this review, different MI-BCI based strategies, including "Functional Electric Stimulation, Robotics Assistance and Hybrid Virtual Reality based Models," are comprehensively reported for upper-limb neurorehabilitation. Each of these approaches is presented to illustrate the in-depth advantages and challenges of the respective BCI systems. Additionally, the current state-of-the-art and main concerns regarding BCI based post-stroke neurorehabilitation devices are discussed. Finally, recommendations for future development of BCI neurorehabilitation systems are proposed.
Affiliation(s)
- Muhammad Ahmed Khan: Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
- Rig Das: Department of Health Technology, Technical University of Denmark, 2800, Kgs. Lyngby, Denmark
- Helle K Iversen: Department of Neurology, University of Copenhagen, Rigshospitalet, 2600, Glostrup, Denmark
32
Krausz NE, Lamotte D, Batzianoulis I, Hargrove LJ, Micera S, Billard A. Intent Prediction Based on Biomechanical Coordination of EMG and Vision-Filtered Gaze for End-Point Control of an Arm Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1471-1480. [PMID: 32386160] [DOI: 10.1109/tnsre.2020.2992885]
Abstract
We propose a novel controller for powered prosthetic arms, in which fused EMG and gaze data predict the desired end-point for a full arm prosthesis, which could drive the forward motion of individual joints. We recorded EMG, gaze, and motion-tracking during pick-and-place trials with 7 able-bodied subjects. Subjects positioned an object above a random target on a virtual interface, each completing around 600 trials. On average, across all trials and subjects, gaze preceded EMG and followed a repeatable pattern that allowed for prediction. A computer vision algorithm was used to extract the initial and target fixations and estimate the target position in 2D space. Two SVRs were trained with EMG data to predict the x- and y-position of the hand; results showed that the y-estimate was significantly better than the x-estimate. The EMG and gaze predictions were fused using a Kalman filter-based approach, and the positional error using EMG only was significantly higher than with the fusion of EMG and gaze. The final target position root mean squared error (RMSE) decreased from 9.28 cm with an EMG-only prediction to 6.94 cm with gaze-EMG fusion. This error also increased significantly when some or all arm muscle signals were removed. However, using fused EMG and gaze, there was no significant difference between predictors that included all muscles or only a subset of muscles.
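The fusion idea in this entry can be illustrated by its simplest static special case, inverse-variance weighting; the paper itself uses a full Kalman filter, and the variances below are placeholders, not values from the study:

```python
import numpy as np

def fuse_estimates(emg_xy, gaze_xy, emg_var, gaze_var):
    """Combine two noisy 2-D end-point estimates by inverse-variance
    weighting, the static special case of a Kalman update: the noisier
    source receives the smaller weight."""
    k = gaze_var / (emg_var + gaze_var)   # weight on the EMG estimate
    return k * np.asarray(emg_xy, float) + (1.0 - k) * np.asarray(gaze_xy, float)
```

With equal variances the fused point is the midpoint; as gaze noise grows, the estimate leans toward EMG, and vice versa.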
33
Bacomics: a comprehensive cross area originating in the studies of various brain-apparatus conversations. Cogn Neurodyn 2020; 14:425-442. [PMID: 32655708] [DOI: 10.1007/s11571-020-09577-7]
Abstract
The brain is the most important organ of the human body, and the conversations between the brain and an apparatus can not only reveal a normally functioning or a dysfunctional brain but also modulate it. Here, the apparatus may be a nonbiological instrument, such as a computer, and the consequent brain-computer interface is now a very popular research area with various applications. The apparatus may also be a biological organ or system, such as the gut or muscle, whose efficient conversations with the brain are vital for a healthy life. Are there any common bases that bind these different scenarios? Here, we propose a new comprehensive cross area: Bacomics, which comes from brain-apparatus conversations (BAC) + omics. We take Bacomics to cover at least three situations: (1) the brain is normal, but the conversation channel is disabled, as in amyotrophic lateral sclerosis; the task is to reconstruct or open up new channels to reactivate brain function. (2) The brain is in disorder, as in Parkinson's disease, and the work is to utilize existing channels or open up new ones to intervene in, repair, and modulate the brain by medication or stimulation. (3) Both the brain and the channels are in order, and the goal is to enhance coordinated development between brain and apparatus. In this paper, we elaborate the connotation of BAC in three aspects according to the information flow: output to the outside world (BAC-1), input to the brain (BAC-2), and the unity of brain and apparatus (BAC-3). More importantly, no fewer than five principles may be taken as the cornerstones of Bacomics, such as feedforward and feedback control, brain plasticity, harmony, the unity of opposites, and systems principles. Clearly, Bacomics integrates these seemingly disparate domains; more importantly, it opens a much wider door for the research and development of the brain, and the principles provide the general framework in which to realize or optimize these various conversations.
34
de Freitas AM, Sanchez G, Lecaignard F, Maby E, Soares AB, Mattout J. EEG artifact correction strategies for online trial-by-trial analysis. J Neural Eng 2020; 17:016035. [DOI: 10.1088/1741-2552/ab581d]
35
Zeng H, Shen Y, Hu X, Song A, Xu B, Li H, Wang Y, Wen P. Semi-Autonomous Robotic Arm Reaching With Hybrid Gaze-Brain Machine Interface. Front Neurorobot 2020; 13:111. [PMID: 32038219] [PMCID: PMC6992643] [DOI: 10.3389/fnbot.2019.00111]
Abstract
Recent developments in non-muscular human-robot interfaces (HRIs) and shared control strategies have shown potential for controlling an assistive robotic arm by people with no residual movement or muscular activity in the upper limbs. However, most non-muscular HRIs only produce discrete-valued commands, resulting in non-intuitive and less effective control of a dexterous assistive robotic arm. Furthermore, the user commands and the robot autonomy commands usually switch in the shared control strategies of such applications. According to previous user studies, this characteristic yields a reduced sense of agency as well as frustration for the user. In this study, we first propose an intuitive and easy-to-learn-and-use hybrid HRI combining a brain-machine interface (BMI) and a gaze-tracking interface. In the proposed hybrid gaze-BMI, continuous modulation of the movement speed via motor intention occurs seamlessly and simultaneously with unconstrained movement-direction control via the gaze signals. We then propose a shared control paradigm that always combines user input and autonomy, with dynamic regulation of the combination. The proposed hybrid gaze-BMI and shared control paradigm were validated in a robotic arm reaching task performed with healthy subjects. All users were able to employ the hybrid gaze-BMI to move the end-effector sequentially to reach the target across the horizontal plane while avoiding collisions with obstacles. The shared control paradigm maintained as much volitional control as possible, while providing assistance for the most difficult parts of the task. The presented semi-autonomous robotic system yielded continuous, smooth, and collision-free motion trajectories for the end-effector approaching the target. Compared to a system without assistance from robot autonomy, it significantly reduced the rate of failure as well as the time and effort spent by the user to complete the tasks.
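The "always combine, never switch" shared-control idea can be sketched as a continuous blend of the two command streams. The linear rule and the 0.2 user-weight floor below are illustrative assumptions, not the paper's exact regulation scheme:

```python
import numpy as np

def shared_control(user_cmd, auto_cmd, difficulty, min_user_weight=0.2):
    """Blend user and autonomy velocity commands with a dynamically
    regulated weight: the harder the local situation (e.g. near an
    obstacle), the more the autonomy contributes, while the user always
    keeps at least `min_user_weight` of the control authority."""
    d = float(np.clip(difficulty, 0.0, 1.0))
    alpha = 1.0 - (1.0 - min_user_weight) * d    # user weight in [min_user_weight, 1]
    return alpha * np.asarray(user_cmd, float) + (1.0 - alpha) * np.asarray(auto_cmd, float)
```

Because both terms are always present, the output never switches discontinuously between user and autonomy, which is the property linked above to a preserved sense of agency.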
Affiliation(s)
- Hong Zeng, Yitao Shen, Xuhui Hu, Baoguo Xu, Huijun Li, Yanxin Wang: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Aiguo Song: State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Pengcheng Wen: AVIC Aeronautics Computing Technique Research Institute, Xi’an, China
36
Ma X, Qiu S, Wei W, Wang S, He H. Deep Channel-Correlation Network for Motor Imagery Decoding From the Same Limb. IEEE Trans Neural Syst Rehabil Eng 2019; 28:297-306. [PMID: 31725383] [DOI: 10.1109/tnsre.2019.2953121]
Abstract
Motor imagery (MI) is an important brain-computer interface (BCI) paradigm, which can be applied without an external stimulus. Imagining different joint movements from the same limb allows intuitive control of external devices. However, little research has focused on this field, and limited decoding accuracy has restricted practical applications. In this study, we aim to use deep learning methods to explore the ceiling of decoding performance on three tasks: the resting state, MI of the right hand, and MI of the right elbow. To represent brain functional relationships, the correlation matrix consisting of correlation coefficients between electrodes (channels) was calculated as the feature set. We propose the Channel-Correlation Network to learn the overall representation among channels for classification. Ensemble learning was applied to integrate the output of multiple Channel-Correlation Networks. Our proposed method achieved a decoding accuracy of up to 87.03% in the 3-class scenario. The results demonstrate the effectiveness of deep learning for decoding MI of different joints from the same limb and the potential of this fine-grained paradigm to be applied in practice.
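The channel-correlation feature described above is straightforward to compute; a minimal sketch (the network that consumes the matrix is omitted):

```python
import numpy as np

def correlation_features(eeg):
    """Pearson correlation matrix between EEG channels (channels x samples),
    plus its flattened upper triangle as a feature vector for a classifier."""
    corr = np.corrcoef(eeg)
    iu = np.triu_indices_from(corr, k=1)   # unique channel pairs only
    return corr, corr[iu]
```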
37
Abstract
State-of-the-art high-end prostheses are electro-mechanically able to provide a great variety of movements. Nevertheless, in order to functionally replace a human limb, it is essential that each movement is properly controlled. This is the goal of prosthesis control, which has become a growing research field in recent decades, with the ultimate goal of reproducing biological limb control. Exploration and development of prosthesis control are therefore crucial to improve many aspects of an amputee’s life. Nowadays, a large divergence between academia and industry has become evident in commercial systems. Although several studies propose more natural control systems with promising results, a basic one-degree-of-freedom (DoF) control-switching system is the most widely used option in industry because of simplicity, robustness and inertia. A few classification-controlled prostheses have emerged in recent years, but they still represent a small fraction of those in use. One factor generating this situation is the lack of robustness of more advanced control algorithms in daily-life activities outside laboratory conditions. Because of this, research has shifted towards more functional prosthesis control. This work reviews the most recent literature on upper limb prosthetic control. It covers commonly used variants of possible biological inputs and their processing and translation to actual control, focusing mostly on electromyograms, as well as the problems that will have to be overcome in the near future.
38
de Neeling M, Van Hulle MM. Single-paradigm and hybrid brain computing interfaces and their use by disabled patients. J Neural Eng 2019; 16:061001. [DOI: 10.1088/1741-2552/ab2706]
39
Zuo C, Jin J, Yin E, Saab R, Miao Y, Wang X, Hu D, Cichocki A. Novel hybrid brain-computer interface system based on motor imagery and P300. Cogn Neurodyn 2019; 14:253-265. [PMID: 32226566] [DOI: 10.1007/s11571-019-09560-x]
Abstract
Motor imagery (MI) is a mental representation of motor behavior and has been widely used in electroencephalogram-based brain-computer interfaces (BCIs). Several studies have demonstrated the efficacy of MI-based BCI-feedback training in post-stroke rehabilitation. However, in the earliest stage of training, calibration data typically contain insufficient discriminability, resulting in unreliable feedback, which may decrease subjects' motivation and even hinder their training. To improve performance in the early stages of MI training, a novel hybrid BCI paradigm based on MI and P300 is proposed in this study. In this paradigm, subjects are instructed to imagine writing a Chinese character, following the stroke-flash order of the desired character displayed on the screen. The event-related desynchronization/synchronization (ERD/ERS) phenomenon is produced by the imagined writing. Simultaneously, the P300 potential is evoked by the flash of each stroke. Moreover, a fusion method for P300 and MI classification is proposed, in which unreliable P300 classifications are corrected by reliable MI classifications. Twelve healthy MI-naïve subjects participated in this study. Results demonstrated that the proposed hybrid BCI paradigm yielded significantly better performance than the single-modality BCI paradigms. The recognition accuracy of the fusion method is significantly higher than that of P300 (p < 0.05) and MI (p < 0.01). Moreover, the training data size can be reduced through fusion of these two modalities.
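The "correct unreliable P300 outputs with MI" idea can be sketched as a simple decision-level rule; the 0.7 threshold and the fallback condition below are illustrative assumptions, not the paper's exact fusion scheme:

```python
def fuse_decisions(p300_label, p300_conf, mi_label, mi_conf, threshold=0.7):
    """Decision-level fusion sketch: keep the P300 output when it is
    confident; otherwise defer to the MI classifier, but only if the MI
    output is the more confident of the two."""
    if p300_conf >= threshold:
        return p300_label        # reliable P300 decision stands
    if mi_conf > p300_conf:
        return mi_label          # unreliable P300 corrected by MI
    return p300_label            # neither is reliable; keep P300 by default
```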
Affiliation(s)
- Cili Zuo, Jing Jin, Yangyang Miao, Xingyu Wang: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, People's Republic of China
- Erwei Yin: Unmanned Systems Research Center, National Institute of Defense Technology Innovation, Academy of Military Sciences China, Beijing, 100081, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin, People's Republic of China
- Rami Saab: Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Canada
- Dewen Hu: College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, 410073, Hunan, People's Republic of China
- Andrzej Cichocki: Skolkovo Institute of Science and Technology (SKOLTECH), Moscow, 143026, Russia; Systems Research Institute PAS, Warsaw, Poland; Nicolaus Copernicus University (UMK), Torun, Poland
40
Motor-Imagery-Based Teleoperation of a Dual-Arm Robot Performing Manipulation Tasks. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2018.2875052]
41
Brain-Computer Interface Channel-Selection Strategy Based on Analysis of Event-Related Desynchronization Topography in Stroke Patients. J Healthc Eng 2019; 2019:3817124. [PMID: 31559004] [PMCID: PMC6735216] [DOI: 10.1155/2019/3817124]
Abstract
In the last decade, technology-assisted stroke rehabilitation has been a focus of research. Electroencephalogram (EEG) based brain-computer interfaces (BCIs) have great potential for motor rehabilitation in stroke patients, since the closed loop between motor intention and actual movement established by the BCI can stimulate the neural pathways of motor control. Due to deficits in the brain, motor intention expression may shift to other brain regions during and even after neural reorganization. The objective of this paper was to study the event-related desynchronization (ERD) topography during motor attempt tasks of the paretic hand in stroke patients and to compare classification performance using different channel-selection strategies in an EEG-based BCI. Fifteen stroke patients were recruited. A cue-based experimental paradigm was applied, in which each patient was required to open the palm of the paretic or the unaffected hand. EEG was recorded and analyzed to measure motor intention and indicate the activated brain regions. A support vector machine (SVM) combined with the common spatial pattern (CSP) algorithm was used to calculate the offline classification accuracy between the motor attempt of the paretic hand and the resting state under different channel-selection strategies. Results showed individualized ERD topography during the motor attempt of the paretic hand due to the deficits caused by stroke. Statistical analysis showed a significant increase in classification accuracy when analyzing the channels showing ERD rather than the channels over the contralateral sensorimotor cortex (SM1). The results indicate that for stroke patients whose affected motor cortex is extensively damaged, the compensating brain regions should be considered when implementing EEG-based BCI for motor rehabilitation, as the closed loop between the altered activated brain regions and the paretic hand can be stimulated more accurately using an individualized channel-selection strategy.
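The ERD measure behind this channel-selection strategy is a simple band-power ratio; a minimal sketch (the top-k selection rule is an illustrative stand-in for the paper's topography analysis):

```python
import numpy as np

def erd_percent(task, rest):
    """Per-channel event-related desynchronization in percent: positive
    values mean band power drops during the motor attempt relative to
    rest. Inputs are band-pass-filtered epochs (channels x samples)."""
    p_task = np.mean(task ** 2, axis=1)
    p_rest = np.mean(rest ** 2, axis=1)
    return 100.0 * (p_rest - p_task) / p_rest

def select_erd_channels(task, rest, k=3):
    """Individualized channel selection sketch: keep the k channels with
    the strongest ERD instead of a fixed contralateral-SM1 montage."""
    return np.argsort(erd_percent(task, rest))[::-1][:k]
```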
42
Mick S, Lapeyre M, Rouanet P, Halgand C, Benois-Pineau J, Paclet F, Cattaert D, Oudeyer PY, de Rugy A. Reachy, a 3D-Printed Human-Like Robotic Arm as a Testbed for Human-Robot Control Strategies. Front Neurorobot 2019; 13:65. [PMID: 31474846] [PMCID: PMC6703080] [DOI: 10.3389/fnbot.2019.00065]
Abstract
To this day, despite the increasing motor capability of robotic devices, elaborating efficient control strategies is still a key challenge in the field of humanoid robotic arms. In particular, providing a human “pilot” with efficient ways to drive such a robotic arm requires thorough testing prior to integration into a finished system. Additionally, when anatomical consistency between pilot and robot must be preserved, such testing requires devices showing human-like features. To fulfill this need for a biomimetic test platform, we present Reachy, a human-like life-scale robotic arm with seven joints from shoulder to wrist. Although Reachy does not include a poly-articulated hand and is therefore more suitable for studying reaching than manipulation, a robotic hand prototype from available third-party projects could be integrated into it. Its 3D-printed structure and off-the-shelf actuators make it inexpensive relative to an industrial-grade robot. Using an open-source architecture, its design makes it broadly connectable and customizable, so it can be integrated into many applications. To illustrate how Reachy can connect to external devices, this paper presents several proofs of concept in which it is operated with various control strategies, such as tele-operation or gaze-driven control. In this way, Reachy can help researchers explore, develop, and test innovative control strategies and interfaces on a human-like robot.
Affiliation(s)
- Sébastien Mick
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287 CNRS & Univ. Bordeaux, Bordeaux, France
- Christophe Halgand
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287 CNRS & Univ. Bordeaux, Bordeaux, France
- Jenny Benois-Pineau
- Laboratoire Bordelais de Recherche en Informatique, UMR 5800, CNRS & Univ. Bordeaux & Bordeaux INP, Talence, France
- Florent Paclet
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287 CNRS & Univ. Bordeaux, Bordeaux, France
- Daniel Cattaert
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287 CNRS & Univ. Bordeaux, Bordeaux, France
- Aymar de Rugy
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, UMR 5287 CNRS & Univ. Bordeaux, Bordeaux, France
- Centre for Sensorimotor Performance, School of Human Movement and Nutrition Sciences, University of Queensland, Brisbane, QLD, Australia
43
Orand A, Erdal Aksoy E, Miyasaka H, Weeks Levy C, Zhang X, Menon C. Bilateral Tactile Feedback-Enabled Training for Stroke Survivors Using Microsoft Kinect™. SENSORS 2019; 19:s19163474. [PMID: 31398957 PMCID: PMC6719092 DOI: 10.3390/s19163474] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 08/01/2019] [Accepted: 08/05/2019] [Indexed: 02/06/2023]
Abstract
Rehabilitation and mobility training of post-stroke patients is crucial for their functional recovery. While traditional methods can still help patients, new rehabilitation and mobility training methods are necessary to facilitate better recovery at lower costs. In this work, our objective was to design and develop a rehabilitation training system targeting the functional recovery of post-stroke users with high efficiency. To accomplish this goal, we applied a bilateral training method, which has been shown to be effective in enhancing motor recovery, with tactile feedback provided during training. One participant with hemiparesis underwent six weeks of training. Two protocols, “contralateral arm matching” and “both arms moving together”, were carried out by the participant. Each of the protocols consisted of “shoulder abduction” and “shoulder flexion” at angles close to 30 and 60 degrees. The participant carried out 15 repetitions at each angle for each task. For example, in the “contralateral arm matching” protocol, the unaffected arm of the participant was set to an angle close to 30 degrees. He was then requested to keep the unaffected arm at the specified angle while trying to match the position with the affected arm. Whenever the two arms matched, a vibration was given on both brachialis muscles. For the “both arms moving together” protocol, the two arms were first set approximately to an angle of either 30 or 60 degrees. The participant was asked to return both arms to a relaxed position before moving both arms back to the remembered specified angle. The arm that was slower in moving to the specified angle received a vibration. We performed clinical assessments before, midway through, and after the training period using a Fugl-Meyer assessment (FMA), a Wolf motor function test (WMFT), and a proprioceptive assessment.
For the assessments, two ipsilateral and contralateral arm matching tasks, each consisting of three movements (shoulder abduction, shoulder flexion, and elbow flexion), were used. Movements were performed at two angles, 30 and 60 degrees. For both tasks, the same procedure was used. For example, in the case of the ipsilateral arm matching task, an experimenter positioned the affected arm of the participant at 30 degrees of shoulder abduction. The participant was requested to keep the arm in that position for ~5 s before returning to a relaxed initial position. Then, after another ~5-s delay, the participant moved the affected arm back to the remembered position. An experimenter measured this shoulder abduction angle manually using a goniometer. The same procedure was repeated for the 60 degree angle and for the other two movements. We applied a low-cost Kinect to extract the participant’s body joint position data. Tactile feedback was given based on the arm position detected by the Kinect sensor. By using a Kinect sensor, we demonstrated the feasibility of the system for the training of a post-stroke user. The proposed system can further be employed for self-training of patients at home. The results of the FMA, WMFT, and goniometer angle measurements showed improvements in several tasks, suggesting a positive effect of the training system and its feasibility for further application for stroke survivors’ rehabilitation.
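The arm-matching rule described in this study, comparing the shoulder angles of the two arms computed from Kinect joint positions and vibrating when they agree, can be sketched in Python. This is an illustrative reconstruction, not the authors' code; the joint coordinates, the trunk reference vector, and the 5-degree tolerance are assumptions.

```python
import math

def joint_angle_deg(shoulder, elbow, reference):
    """Angle (degrees) at the shoulder between the upper-arm vector
    (shoulder -> elbow) and a reference vector (shoulder -> trunk point),
    all given as 3D coordinates from a Kinect-style skeleton."""
    v1 = [e - s for e, s in zip(elbow, shoulder)]
    v2 = [r - s for r, s in zip(reference, shoulder)]
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(dot / (norm1 * norm2)))

def arms_match(angle_affected_deg, angle_unaffected_deg, tolerance_deg=5.0):
    """Decide whether to trigger the vibration feedback: the arms are
    considered matched when their angles agree within a tolerance."""
    return abs(angle_affected_deg - angle_unaffected_deg) <= tolerance_deg
```

For example, with the arm straight out to the side (elbow at (1, 0, 0) relative to the shoulder) and the trunk reference pointing straight down (0, -1, 0), `joint_angle_deg` returns 90 degrees of abduction.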
Affiliation(s)
- Abbas Orand
- Department of Intelligent Systems and Digital Design, School of Information Technology, Halmstad University, Spetsvinkelgatan 29, 30250 Halmstad, Sweden
- Eren Erdal Aksoy
- Department of Intelligent Systems and Digital Design, School of Information Technology, Halmstad University, Spetsvinkelgatan 29, 30250 Halmstad, Sweden
- Hiroyuki Miyasaka
- Department of Rehabilitation, Fujita Health University Nanakuri Memorial Hospital, 424-1 Oodori-cho, Tsu, Mie 514-1296, Japan
- Carolyn Weeks Levy
- Schools of Mechatronics Systems Engineering and Engineering Science, Simon Fraser University, 250-13450 102 Avenue, Surrey, BC V3T 0A3, Canada
- Xin Zhang
- Schools of Mechatronics Systems Engineering and Engineering Science, Simon Fraser University, 250-13450 102 Avenue, Surrey, BC V3T 0A3, Canada
- Carlo Menon
- Schools of Mechatronics Systems Engineering and Engineering Science, Simon Fraser University, 250-13450 102 Avenue, Surrey, BC V3T 0A3, Canada
44
Li Z, Yuan W, Zhao S, Yu Z, Kang Y, Chen CLP. Brain-Actuated Control of Dual-Arm Robot Manipulation With Relative Motion. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2017.2770168] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
45
Abiri R, Borhani S, Sellers EW, Jiang Y, Zhao X. A comprehensive review of EEG-based brain–computer interface paradigms. J Neural Eng 2019; 16:011001. [DOI: 10.1088/1741-2552/aaf12e] [Citation(s) in RCA: 270] [Impact Index Per Article: 54.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
46
Enhanced neural network control of lower limb rehabilitation exoskeleton by add-on repetitive learning. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.09.085] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
47
Miao Q, Zhang M, Cao J, Xie SQ. Reviewing high-level control techniques on robot-assisted upper-limb rehabilitation. Adv Robot 2018. [DOI: 10.1080/01691864.2018.1546617] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Affiliation(s)
- Qing Miao
- School of Mechanical and Electronic Engineering, Wuhan University of Technology, Wuhan, People’s Republic of China
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, People’s Republic of China
- Mingming Zhang
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, People’s Republic of China
- Jinghui Cao
- Department of Mechanical Engineering, The University of Auckland, Auckland, New Zealand
- Sheng Q. Xie
- School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK
48
Shu X, Chen S, Meng J, Yao L, Sheng X, Jia J, Farina D, Zhu X. Tactile Stimulation Improves Sensorimotor Rhythm-based BCI Performance in Stroke Patients. IEEE Trans Biomed Eng 2018; 66:1987-1995. [PMID: 30452349 DOI: 10.1109/tbme.2018.2882075] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE BCI decoding accuracy plays a crucial role in practical applications. With accurate feedback, BCI-based therapy induces beneficial neural plasticity in stroke patients. In this study, we aimed at improving sensorimotor rhythm (SMR)-based BCI performance by integrating motor tasks with tactile stimulation. METHODS Eleven stroke patients were recruited for three experimental conditions, i.e., motor attempt (MA) condition, tactile stimulation (TS) condition, and tactile stimulation-assisted motor attempt (TS-MA) condition. Tactile stimulation was delivered to the paretic hand wrist during both task and idle states using a DC vibrator. RESULTS We observed that the TS-MA condition achieved greater motor-related cortical activation (MRCA) in the alpha-beta band than both the TS and MA conditions. Consequently, online BCI decoding accuracies between task and idle states were significantly improved from 74.5% in the MA condition to 85.1% in the TS-MA condition (p < 0.001), whereas the accuracy in the TS condition was 54.6% (approaching the chance level of 50%). CONCLUSION This finding demonstrates that sensory afferent input from peripheral nerves benefits the neural processing of the sensorimotor cortex in stroke patients. With appropriate sensory stimulation, MRCA is enhanced and the corresponding brain patterns are more discriminative. SIGNIFICANCE This novel SMR-BCI paradigm shows great promise for facilitating the practical application of BCI-based stroke rehabilitation.
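The claim that 54.6% accuracy "approaches the chance level of 50%" while 85.1% is significantly above it can be checked with an exact two-sided binomial test. The sketch below is illustrative only: the per-condition trial count is not stated in the abstract, so 80 trials is an assumption.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial test p-value: sum the probabilities of
    all outcomes no more likely than the observed count k (the same
    "minlike" method used by standard statistics packages)."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return min(1.0, sum(q for q in probs if q <= observed * (1 + 1e-9)))

n_trials = 80                           # assumed trial count per condition
ts_correct = round(0.546 * n_trials)    # TS condition: 54.6% accuracy
tsma_correct = round(0.851 * n_trials)  # TS-MA condition: 85.1% accuracy
p_ts = binom_two_sided_p(ts_correct, n_trials)      # not significant
p_tsma = binom_two_sided_p(tsma_correct, n_trials)  # highly significant
```

Under this assumption, the TS accuracy is statistically indistinguishable from chance (p well above 0.05), whereas the TS-MA accuracy is far above it, consistent with the abstract's conclusion.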
49
EEG-Based Control for Upper and Lower Limb Exoskeletons and Prostheses: A Systematic Review. SENSORS 2018; 18:s18103342. [PMID: 30301238 PMCID: PMC6211123 DOI: 10.3390/s18103342] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/14/2018] [Revised: 09/12/2018] [Accepted: 09/28/2018] [Indexed: 12/13/2022]
Abstract
Electroencephalography (EEG) signals have a great impact on the development of assistive rehabilitation devices, and in recent research they have become a popular tool for investigating the function and behavior of human motion. The study of EEG-based control of assistive devices is still in its early stages. Although EEG-based control of assistive devices has attracted considerable attention over the last few years, few studies have systematically reviewed this work so as to offer researchers and experts a comprehensive summary of the present, state-of-the-art EEG-based control techniques used for assistive technology. Therefore, this research has three main goals. The first is to systematically gather, summarize, evaluate and synthesize information regarding the accuracy and the value of previous research published in the literature between 2011 and 2018. The second is to report extensively on the holistic, experimental outcomes of this domain in relation to current research, providing experts and scientists with a rich picture and grounded evidence of the current state of research on EEG-based control for assistive rehabilitation devices. The third is to identify the gaps in knowledge that demand further investigation and to recommend directions for future research in this area.
50
Sullivan JL, Bhagat NA, Yozbatiran N, Paranjape R, Losey CG, Grossman RG, Contreras-Vidal JL, Francisco GE, O'Malley MK. Improving robotic stroke rehabilitation by incorporating neural intent detection: Preliminary results from a clinical trial. IEEE Int Conf Rehabil Robot 2018; 2017:122-127. [PMID: 28813805 DOI: 10.1109/icorr.2017.8009233] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This paper presents the preliminary findings of a multi-year clinical study evaluating the effectiveness of adding a brain-machine interface (BMI) to the MAHI-Exo II, a robotic upper limb exoskeleton, for elbow flexion/extension rehabilitation in chronic stroke survivors. The BMI was used to trigger robot motion when movement intention was detected from subjects' neural signals, thus requiring that subjects be mentally engaged during robotic therapy. The first six subjects to complete the program have shown improvements in both Fugl-Meyer Upper-Extremity scores as well as in kinematic movement quality measures that relate to movement planning, coordination, and control. These results are encouraging and suggest that increasing subject engagement during therapy through the addition of an intent-detecting BMI enhances the effectiveness of standard robotic rehabilitation.
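The BMI-triggered robot motion described in this entry, where the exoskeleton moves only when movement intention is detected from neural signals, can be sketched as a simple gating rule. This is a hedged illustration, not the study's actual decoder: the probability threshold and the consecutive-window debouncing are assumed design choices commonly used to suppress spurious online detections.

```python
from collections import deque

class IntentTrigger:
    """Gate robot motion on decoded movement intention: fire only when
    the decoder's intent probability exceeds a threshold for several
    consecutive windows, debouncing spurious single-window detections."""
    def __init__(self, threshold=0.7, consecutive=3):
        self.threshold = threshold
        self.history = deque(maxlen=consecutive)

    def update(self, intent_probability):
        """Feed one decoder output; return True when the robot
        should execute the flexion/extension movement."""
        self.history.append(intent_probability >= self.threshold)
        fired = len(self.history) == self.history.maxlen and all(self.history)
        if fired:
            self.history.clear()  # re-arm for the next trial
        return fired
```

For example, with the defaults above, a sequence of decoder outputs 0.8, 0.5, 0.9, 0.8, 0.75 fires only on the final window, once three consecutive windows have cleared the threshold.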