1
Le Guillou R, Froger J, Morin M, Couderc M, Cormier C, Azevedo-Coste C, Gasq D. Specifications and functional impact of a self-triggered grasp neuroprosthesis developed to restore prehension in hemiparetic post-stroke subjects. Biomed Eng Online 2024;23:129. [PMID: 39709421] [DOI: 10.1186/s12938-024-01323-y]
Abstract
BACKGROUND Stroke is the leading cause of acquired motor deficiencies in adults. Restoring prehension abilities is challenging for individuals who have not recovered active hand-opening capacities after rehabilitation. A grasp neuroprosthesis (GNP) applies self-triggered functional electrical stimulation (FES) to the finger extensor muscles to restore grasping abilities in daily life, but such devices remain poorly accessible to the post-stroke population. We therefore developed a GNP prototype with self-triggering control modalities adapted to the characteristics of the post-stroke population and assessed its impact on prehension abilities. METHODS Across two clinical research protocols, 22 stroke participants used the GNP and its control modalities (EMG activity of a pre-defined muscle, IMU motion detection, foot switches and voice commands) for 3 to 5 sessions over a week. The NeuroPrehens software interpreted user commands from electromyographic, inertial, foot-switch or microphone sensors to trigger an external electrical stimulator using two bipolar channels with surface electrodes. Users tested a panel of 9 control modalities, rated each subjectively for ease of use and reliability with scores out of 10, and selected a preferred one before training with the GNP to perform standardized unimanual functional prehension tasks in a seated position. The responsiveness and functional impact of the GNP were assessed through a posteriori analysis of video recordings of these tasks across two multi-crossover N-of-1 randomized controlled trials with blinded evaluation. RESULTS Non-paretic foot triggering, whether by EMG or IMU, received the highest scores for both ease of use (median scores out of 10: EMG 10, IMU 9) and reliability (EMG 9, IMU 9) and was found viable and appreciated by users, as were the voice control and lateral head-inclination modalities. The assessment of the system's general responsiveness combined with the control modality latencies revealed median (95% confidence interval) durations between user intent and FES triggering of 333 ms (211 to 561), 217 ms (167 to 355) and 467 ms (147 to 728) for the IMU, EMG and voice control types of modalities, respectively. The functional improvement with the use of the GNP was significant in the two prehension tasks evaluated, with a median (95% confidence interval) improvement of 3 (-1 to 5) points out of 5. CONCLUSIONS The GNP prototype and its control modalities were well suited to the post-stroke population in terms of self-triggering, responsiveness and restoration of functional grasping abilities. A wearable version of this device is being developed to improve prehension abilities at home. TRIAL REGISTRATION Both studies are registered on clinicaltrials.gov: NCT03946488, registered May 10, 2019, and NCT04804384, registered March 18, 2021.
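The self-triggering chain summarized above (a command signal crosses a detection criterion, then the stimulator fires) can be illustrated with a minimal EMG envelope-threshold detector. This is a hypothetical sketch, not the NeuroPrehens implementation; the 50 ms RMS window, the threshold, and the 100 ms hold time are assumed values.

```python
import numpy as np

def moving_rms(emg, window):
    """Rectify and smooth a raw EMG trace with a moving RMS window."""
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(np.square(emg), kernel, mode="same"))

def detect_trigger(emg, fs, threshold, hold_ms=100):
    """Return the first sample index at which the RMS envelope has stayed
    above `threshold` for `hold_ms` ms (simple debouncing), or None."""
    envelope = moving_rms(emg, window=int(0.05 * fs))  # 50 ms RMS window
    hold = int(hold_ms / 1000 * fs)
    run = 0
    for i, above in enumerate(envelope > threshold):
        run = run + 1 if above else 0
        if run >= hold:
            return i - hold + 1  # onset of the sustained contraction
    return None
```

In a real neuroprosthesis the threshold would be calibrated per user, and the detected onset would gate the FES channels driving the finger extensors.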
Affiliation(s)
- R Le Guillou
- Department of Clinical Physiology, Motion Analysis Center, University Hospital of Toulouse, Hôpital de Purpan, Toulouse, France.
- INRIA, University of Montpellier, Montpellier, France.
- ToNIC, Toulouse NeuroImaging Center, University of Toulouse, Inserm, UPS, Toulouse, France.
- J Froger
- Department of Physical Medicine and Rehabilitation, University Hospital Center of Nîmes, University of Montpellier, Le Grau du Roi, France
- EuroMov Digital Health in Motion, University of Montpellier, IMT Mines Ales, Montpellier, France
- M Morin
- Department of Clinical Physiology, Motion Analysis Center, University Hospital of Toulouse, Hôpital de Purpan, Toulouse, France
- M Couderc
- Department of Clinical Physiology, Motion Analysis Center, University Hospital of Toulouse, Hôpital de Purpan, Toulouse, France
- C Cormier
- Department of Clinical Physiology, Motion Analysis Center, University Hospital of Toulouse, Hôpital de Purpan, Toulouse, France
- ToNIC, Toulouse NeuroImaging Center, University of Toulouse, Inserm, UPS, Toulouse, France
- D Gasq
- Department of Clinical Physiology, Motion Analysis Center, University Hospital of Toulouse, Hôpital de Purpan, Toulouse, France
- ToNIC, Toulouse NeuroImaging Center, University of Toulouse, Inserm, UPS, Toulouse, France
2
Zhou Y, Yu T, Gao W, Huang W, Lu Z, Huang Q, Li Y. Shared Three-Dimensional Robotic Arm Control Based on Asynchronous BCI and Computer Vision. IEEE Trans Neural Syst Rehabil Eng 2023;31:3163-3175. [PMID: 37498753] [DOI: 10.1109/tnsre.2023.3299350]
Abstract
OBJECTIVE A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. APPROACH In this study, a shared robotic arm control system based on hybrid asynchronous BCI and computer vision was presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and reach targets with a wide movement range, while computer vision can identify objects and assist a robotic arm in completing more precise tasks, such as grasping a target automatically. RESULTS Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and a high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer error commands and shorter completion time than with direct BCI control. SIGNIFICANCE Our results demonstrated the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging hybrid asynchronous BCI and computer vision-based recognition.
3
Neťuková S, Bejtic M, Malá C, Horáková L, Kutílek P, Kauler J, Krupička R. Lower Limb Exoskeleton Sensors: State-of-the-Art. Sensors (Basel) 2022;22:9091. [PMID: 36501804] [PMCID: PMC9738474] [DOI: 10.3390/s22239091]
Abstract
Due to the ever-increasing proportion of older people in the total population and the growing awareness of the importance of protecting workers against physical overload during prolonged hard work, the idea of supportive exoskeletons has progressed from high-tech fiction to almost-commercialized products within the last six decades. Sensors, as part of the perception layer, play a crucial role in enhancing the functionality of exoskeletons by providing real-time data as accurate as possible to generate reliable input for the control layer. The processed sensor data yields information about current limb position, movement intention, and the support needed. With this review article, we want to clarify which criteria are important for sensors used in exoskeletons and how standard sensor types, such as kinematic and kinetic sensors, are used in lower limb exoskeletons. We also want to outline the possibilities and limitations of special medical signal sensors detecting, e.g., brain or muscle signals to improve data perception at the human-machine interface. A topic-based literature and product search was conducted to gain the best possible overview of the newest developments, research results, and products in the field. The paper provides an extensive overview of sensor criteria that need to be considered for the use of sensors in exoskeletons, as well as a collection of sensors and their placement in current exoskeleton products. Additionally, the article points out several types of sensors detecting physiological or environmental signals that might be beneficial for future exoskeleton developments.
4
Abstract
For patients with partial-hand amputation who would benefit from myoelectric prosthetic digits for enhanced prehensile function, the Starfish Procedure provides muscle transfers that generate intuitively controlled electromyographic signals for individual digital control with minimal myoelectric cross-talk. Thoughtful preoperative planning allows multiple sources of high-quality myoelectric signal to be created in a single operation without microsurgery, making the procedure widely applicable to hand surgeons of all backgrounds.
Affiliation(s)
- Bryan J Loeffler
- Reconstructive Center for Lost Limbs, OrthoCarolina Hand Center, 1915 Randolph Road, Charlotte, NC 28207, USA; Department of Orthopaedic Surgery, Atrium Healthcare, Charlotte, NC, USA
- Raymond Glenn Gaston
- Reconstructive Center for Lost Limbs, OrthoCarolina Hand Center, 1915 Randolph Road, Charlotte, NC 28207, USA; Department of Orthopaedic Surgery, Atrium Healthcare, Charlotte, NC, USA
5
Saha S, Mamun KA, Ahmed K, Mostafa R, Naik GR, Darvishi S, Khandoker AH, Baumert M. Progress in Brain Computer Interface: Challenges and Opportunities. Front Syst Neurosci 2021;15:578875. [PMID: 33716680] [PMCID: PMC7947348] [DOI: 10.3389/fnsys.2021.578875]
Abstract
Brain-computer interfaces (BCI) provide a direct communication link between the brain and a computer or other external devices. They offer an extended degree of freedom, either by strengthening or by substituting human peripheral working capacity, and have potential applications in various fields such as rehabilitation, affective computing, robotics, gaming, and neuroscience. Significant research efforts on a global scale have delivered common platforms for technology standardization and helped tackle the highly complex and non-linear brain dynamics and the related feature extraction and classification challenges. Time-variant psycho-neurophysiological fluctuations and their impact on brain signals impose a further challenge for BCI researchers seeking to transform the technology from laboratory experiments into plug-and-play daily-life use. This review summarizes state-of-the-art progress in the BCI field over the last decades and highlights critical challenges.
Affiliation(s)
- Simanto Saha
- School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA, Australia
- Department of Electrical and Electronic Engineering, United International University, Dhaka, Bangladesh
- Khondaker A. Mamun
- Advanced Intelligent Multidisciplinary Systems (AIMS) Lab, Department of Computer Science and Engineering, United International University, Dhaka, Bangladesh
- Khawza Ahmed
- Department of Electrical and Electronic Engineering, United International University, Dhaka, Bangladesh
- Raqibul Mostafa
- Department of Electrical and Electronic Engineering, United International University, Dhaka, Bangladesh
- Ganesh R. Naik
- Adelaide Institute for Sleep Health, College of Medicine and Public Health, Flinders University, Adelaide, SA, Australia
- Sam Darvishi
- School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA, Australia
- Ahsan H. Khandoker
- Healthcare Engineering Innovation Center, Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Mathias Baumert
- School of Electrical and Electronic Engineering, The University of Adelaide, Adelaide, SA, Australia
6
Paek AY, Brantley JA, Sujatha Ravindran A, Nathan K, He Y, Eguren D, Cruz-Garza JG, Nakagome S, Wickramasuriya DS, Chang J, Rashed-Al-Mahfuz M, Amin MR, Bhagat NA, Contreras-Vidal JL. A Roadmap Towards Standards for Neurally Controlled End Effectors. IEEE Open J Eng Med Biol 2021;2:84-90. [PMID: 35402986] [PMCID: PMC8979628] [DOI: 10.1109/ojemb.2021.3059161]
Abstract
The control and manipulation of various types of end effectors, such as powered exoskeletons, prostheses, and 'neural' cursors, by brain-machine interface (BMI) systems has been the target of many research projects. A seamless "plug and play" interface between any BMI and end effector is desired, wherein similar user intents cause similar end effectors to behave identically. This report is based on the outcomes of an IEEE Standards Association Industry Connections working group on End Effectors for Brain-Machine Interfacing that convened to identify and address gaps in the existing standards for BMI-based solutions, with a focus on the end-effector component. A roadmap towards standardization of end effectors for BMI systems is discussed by identifying current device standards that are applicable to end effectors. While current standards address basic electrical and mechanical safety and, to some extent, performance requirements, several gaps exist pertaining to unified terminologies, data communication protocols, patient safety and risk mitigation.
Affiliation(s)
- Justin A Brantley
- University of Houston, Houston, TX 77204, USA
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jesus G Cruz-Garza
- University of Houston, Houston, TX 77204, USA
- Department of Design and Environmental Analysis, Cornell University, Ithaca, NY 14853, USA
- Md Rashed-Al-Mahfuz
- University of Houston, Houston, TX 77204, USA
- Department of Computer Science and Engineering, University of Rajshahi, Rajshahi 6205, Bangladesh
- Nikunj A Bhagat
- University of Houston, Houston, TX 77204, USA
- Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
7
Roy S, Rathee D, Chowdhury A, McCreadie K, Prasad G. Assessing impact of channel selection on decoding of motor and cognitive imagery from MEG data. J Neural Eng 2020;17:056037. [PMID: 32998113] [DOI: 10.1088/1741-2552/abbd21]
Abstract
OBJECTIVE Magnetoencephalography (MEG) based brain-computer interfaces (BCI) involve a large number of sensors, allowing better spatiotemporal resolution for assessing brain activity patterns. There have been many efforts to develop high-accuracy BCI using MEG, though an increase in the number of channels (NoC) means an increase in computational complexity. However, not all sensors necessarily contribute significantly to an increase in classification accuracy (CA), and no channel selection methodology has previously been applied to MEG-based BCI specifically. Therefore, this study investigates the effect of channel selection on the performance of MEG-based BCI. APPROACH MEG data were recorded over two sessions from 15 healthy participants performing motor imagery, cognitive imagery and a mixed imagery task pair using a unique paradigm. The performance of four state-of-the-art channel selection methods (Class-Correlation, ReliefF, Random Forest, and Infinite Latent Feature Selection) was evaluated across six binary tasks in three different frequency bands on two state-of-the-art feature types, i.e. bandpower and common spatial patterns (CSP). MAIN RESULTS All four methods provided a statistically significant increase in CA compared to a baseline using all 204 gradiometer sensors with bandpower features from the alpha (8-12 Hz), beta (13-30 Hz), or broadband (α + β, 8-30 Hz) bands. The alpha band performed better than the beta and broadband bands, with the beta band giving the lowest CA of the three. Channel selection improved accuracy irrespective of feature type. Moreover, all the methods reduced the NoC significantly, from 204 to a range of 1-25 with bandpower features and from 15 to 105 with CSP. The optimal channel number varied not only between sessions but also between participants. Reducing the NoC helps to decrease the computational cost and maintain numerical stability in cases of low trial numbers. SIGNIFICANCE The study showed significant improvement in the performance of MEG-BCI with channel selection irrespective of feature type, which can therefore be successfully applied in BCI applications.
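To make the pipeline above concrete, the sketch below computes per-channel bandpower features and ranks channels with a Fisher-style class-separability score, a simple stand-in for the paper's Class-Correlation method; the array shapes, band edges and scoring rule are assumptions for illustration, not the study's actual code.

```python
import numpy as np

def bandpower(trials, fs, band):
    """Mean FFT-periodogram power per trial and channel inside `band` (Hz).
    `trials` has shape (n_trials, n_channels, n_samples)."""
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(trials, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)  # -> (n_trials, n_channels)

def fisher_rank(features, labels):
    """Order channels from most to least class-discriminative
    for a binary task (labels are 0 or 1)."""
    a, b = features[labels == 0], features[labels == 1]
    score = (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0) + 1e-12)
    return np.argsort(score)[::-1]
```

Keeping only the top-ranked channels before classification mimics the NoC reduction reported above.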
Affiliation(s)
- Sujit Roy
- School of Computing, Engineering & Intelligent Systems, Ulster University, Derry, Londonderry, United Kingdom
8
Zhang Y, Gao Q, Song Y, Wang Z. Implementation of an SSVEP-based intelligent home service robot system. Technol Health Care 2020;29:541-556. [PMID: 33074201] [DOI: 10.3233/thc-202442]
Abstract
BACKGROUND People with severe neuromuscular disorders caused by accident or congenital disease cannot interact normally with the physical environment. Intelligent robot technology offers a possible solution to this problem. However, a robot can hardly carry out a task without understanding the subject's intention, as it normally relies on speech or gestures. Brain-computer interface (BCI), a communication system that operates external devices by directly converting brain activity into digital signals, provides a solution for this. OBJECTIVE In this study, a noninvasive BCI-based humanoid robotic system was designed and implemented for home service. METHODS A humanoid robot equipped with multiple sensors navigates to the object placement area under the guidance of a specific symbol, "Naomark", which has a unique ID, and then sends the information about the scanned objects back to the user interface. Based on this information, the subject commands the robot to grab the wanted object and bring it to the subject. To identify the subject's intention, the channel projection-based canonical correlation analysis (CP-CCA) method was utilized for the steady-state visual evoked potential-based BCI system. RESULTS The offline results showed that the average classification accuracy of all subjects reached 90%, and the online task completion rate was over 95%. CONCLUSION Users can complete the grasping task with a minimum of commands, avoiding the control burden caused by complex command sets. This could provide a useful means of assistance for people with severe motor impairment in daily life.
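The CP-CCA decoder mentioned above is a variant of standard canonical correlation analysis (CCA), the common baseline for SSVEP frequency recognition. A minimal plain-CCA classifier (not the channel-projection variant; the segment length, harmonic count and candidate frequencies below are assumed) might look like:

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    qx, _ = np.linalg.qr(X - X.mean(axis=0))
    qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def classify_ssvep(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sine/cosine reference set correlates
    best with an EEG segment of shape (n_samples, n_channels)."""
    t = np.arange(eeg.shape[0]) / fs
    def reference(f):
        parts = [np.sin(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
        parts += [np.cos(2 * np.pi * h * f * t) for h in range(1, n_harmonics + 1)]
        return np.column_stack(parts)
    return max(freqs, key=lambda f: cca_corr(eeg, reference(f)))
```

Each recognized frequency would then be mapped to one robot command (navigate, grab, deliver, and so on) in a system like the one described.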
9
Yanagisawa T, Fukuma R, Seymour B, Tanaka M, Hosomi K, Yamashita O, Kishima H, Kamitani Y, Saitoh Y. BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 2020;95:e417-e426. [PMID: 32675074] [PMCID: PMC7455320] [DOI: 10.1212/wnl.0000000000009858]
Abstract
OBJECTIVE To determine whether training with a brain-computer interface (BCI) to control an image of a phantom hand, which moves based on cortical currents estimated from magnetoencephalographic signals, reduces phantom limb pain. METHODS Twelve patients with chronic phantom limb pain of the upper limb due to amputation or brachial plexus root avulsion participated in a randomized single-blinded crossover trial. Patients were trained to move the virtual hand image controlled by the BCI with a real decoder, which was constructed to classify intact hand movements from motor cortical currents, by moving their phantom hands for 3 days ("real training"). Pain was evaluated using a visual analogue scale (VAS) before and after training, and at follow-up for an additional 16 days. As a control, patients engaged in the training with the same hand image controlled by randomly changing values ("random training"). The 2 trainings were randomly assigned to the patients. This trial is registered at UMIN-CTR (UMIN000013608). RESULTS VAS at day 4 was significantly reduced from baseline after real training (mean [SD] from 45.3 [24.2] to 30.9 [20.6] mm on a 100-mm scale; p = 0.009 < 0.025), but not after random training (p = 0.047 > 0.025). Compared to VAS at day 1, VAS at days 4 and 8 was significantly reduced by 32% and 36%, respectively, after real training and was significantly lower than VAS after random training (p < 0.01). CONCLUSION Three-day training to move the hand images controlled by BCI significantly reduced pain for 1 week. CLASSIFICATION OF EVIDENCE This study provides Class III evidence that BCI reduces phantom limb pain.
Affiliation(s)
- Takufumi Yanagisawa
- From the Institute for Advanced Co-Creation Studies (T.Y.), Osaka University; Departments of Neurosurgery (T.Y., R.F., M.T., K.H., H.K., Y.S.) and Neuromodulation and Neurosurgery (K.H., Y.S.), Osaka University Graduate School of Medicine; Department of Neuroinformatics (T.Y., R.F., Y.K.), ATR Computational Neuroscience Laboratories, Kyoto, Japan; Computational and Biological Learning Laboratory, Department of Engineering (B.S.), University of Cambridge, UK; Center for Information and Neural Networks (B.S.), National Institute for Information and Communications Technology, Osaka; RIKEN Center for Advanced Intelligence Project (O.Y.), Tokyo; Department of Computational Brain Imaging (O.Y.), ATR Neural Information Analysis Laboratories, Kyoto; and Graduate School of Informatics (Y.K.), Kyoto University, Japan.
- Ryohei Fukuma
- Ben Seymour
- Masataka Tanaka
- Koichi Hosomi
- Okito Yamashita
- Haruhiko Kishima
- Yukiyasu Kamitani
- Youichi Saitoh
10
Mikhaylov A, Pimashkin A, Pigareva Y, Gerasimova S, Gryaznov E, Shchanikov S, Zuev A, Talanov M, Lavrov I, Demin V, Erokhin V, Lobov S, Mukhina I, Kazantsev V, Wu H, Spagnolo B. Neurohybrid Memristive CMOS-Integrated Systems for Biosensors and Neuroprosthetics. Front Neurosci 2020;14:358. [PMID: 32410943] [PMCID: PMC7199501] [DOI: 10.3389/fnins.2020.00358]
Abstract
Here we provide a perspective concept of a neurohybrid memristive chip based on the combination of living neural networks cultivated in a microfluidic/microelectrode system with metal-oxide memristive devices or arrays integrated with a mixed-signal CMOS layer to control the analog memristive circuits, process the decoded information, and arrange feedback stimulation of the biological culture, as parts of a bidirectional neurointerface. Our main focus is on state-of-the-art approaches for the cultivation and spatial ordering of networks of dissociated hippocampal neurons; the fabrication of large-scale cross-bar arrays of memristive devices tailored using device engineering, resistive-state programming, or non-linear dynamics; and the hardware implementation of spiking neural networks (SNNs) based on arrays of memristive devices and integrated CMOS electronics. The concept represents an example of a brain-on-chip system belonging to a more general class of memristive neurohybrid systems for new-generation robotics, artificial intelligence, and personalized medicine, discussed in the framework of the proposed roadmap for the next decade.
Affiliation(s)
- Alexey Mikhaylov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Alexey Pimashkin
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Yana Pigareva
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Evgeny Gryaznov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Sergey Shchanikov
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Anton Zuev
- Department of Information Technologies, Vladimir State University, Murom, Russia
- Max Talanov
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Igor Lavrov
- Department of Neurologic Surgery, Mayo Clinic, Rochester, MN, United States
- Laboratory of Motor Neurorehabilitation, Kazan Federal University, Kazan, Russia
- Victor Erokhin
- Neuroscience Laboratory, Kazan Federal University, Kazan, Russia
- Kurchatov Institute, Moscow, Russia
- CNR-Institute of Materials for Electronics and Magnetism, Italian National Research Council, Parma, Italy
- Sergey Lobov
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Irina Mukhina
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Cell Technology Group, Privolzhsky Research Medical University, Nizhny Novgorod, Russia
- Victor Kazantsev
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
- Huaqiang Wu
- Institute of Microelectronics, Tsinghua University, Beijing, China
- Bernardo Spagnolo
- Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia
- Dipartimento di Fisica e Chimica-Emilio Segrè, Group of Interdisciplinary Theoretical Physics, Università di Palermo and CNISM, Unità di Palermo, Palermo, Italy
- Istituto Nazionale di Fisica Nucleare, Sezione di Catania, Catania, Italy
Collapse
11
Lee SB, Kim HJ, Kim H, Jeong JH, Lee SW, Kim DJ. Comparative analysis of features extracted from EEG spatial, spectral and temporal domains for binary and multiclass motor imagery classification. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.06.008] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
12
Belkacem AN, Nishio S, Suzuki T, Ishiguro H, Hirata M. Neuromagnetic Decoding of Simultaneous Bilateral Hand Movements for Multidimensional Brain-Machine Interfaces. IEEE Trans Neural Syst Rehabil Eng 2019; 26:1301-1310. [PMID: 29877855 DOI: 10.1109/tnsre.2018.2837003] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
To provide multidimensional control, we describe the first reported decoding of bilateral hand movements using single-trial magnetoencephalography (MEG) signals, a new approach to enhance a user's ability to interact with a complex environment through a multidimensional brain-machine interface. Ten healthy participants performed or imagined four types of bilateral hand movements during neuromagnetic measurements. Applying a support vector machine (SVM) to classify the four movements from sensor data obtained over the sensorimotor area, we found the mean accuracy of two-class classification using the amplitudes of neuromagnetic fields to be particularly suitable for real-time applications, with accuracies comparable to those obtained in previous studies involving unilateral movement. The sensor data from over the sensorimotor cortex showed discriminative time-series waveforms and time-frequency maps in the bilateral hemispheres according to the four tasks. Furthermore, we used four-class classification algorithms based on the SVM method to decode all types of bilateral movements. Our results provide further proof that the slow components of neuromagnetic fields carry sufficient neural information to classify even bilateral hand movements, and they demonstrate the potential utility of decoding bilateral movements for engineering purposes such as multidimensional motor control.
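As an editor's illustration only (not the authors' code), the two-class decoding step described above can be sketched with a toy linear SVM trained by full-batch subgradient descent on hinge loss; the "trial x sensor amplitude" data, dimensions, and variable names below are all synthetic assumptions.

```python
# Toy linear SVM (hinge loss, subgradient descent) on two synthetic
# classes of "sensor amplitude" feature vectors; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20  # trials per class, number of "sensors"
X = np.vstack([rng.normal(-0.5, 1.0, (n, d)),   # class -1 trials
               rng.normal(+0.5, 1.0, (n, d))])  # class +1 trials
y = np.array([-1] * n + [+1] * n)

w, b = np.zeros(d), 0.0
lam, lr = 1e-3, 0.05
for _ in range(300):
    margins = y * (X @ w + b)
    viol = margins < 1  # trials violating the margin
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```

With this degree of class separation the learned hyperplane classifies nearly all trials correctly; real MEG decoding accuracies are of course far lower.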
13
Ofner P, Schwarz A, Pereira J, Wyss D, Wildburger R, Müller-Putz GR. Attempted Arm and Hand Movements can be Decoded from Low-Frequency EEG from Persons with Spinal Cord Injury. Sci Rep 2019; 9:7134. [PMID: 31073142 PMCID: PMC6509331 DOI: 10.1038/s41598-019-43594-9] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2019] [Accepted: 04/26/2019] [Indexed: 01/08/2023] Open
Abstract
We show that persons with spinal cord injury (SCI) retain decodable neural correlates of attempted arm and hand movements. We investigated hand open, palmar grasp, lateral grasp, pronation, and supination in 10 persons with cervical SCI. Discriminative movement information was provided by the time domain of low-frequency electroencephalography (EEG) signals. Based on these signals, we obtained a maximum average classification accuracy of 45% (chance level 20%) across the five investigated classes. Pattern analysis indicates central motor areas as the origin of the discriminative signals. Furthermore, we introduce a proof-of-concept for classifying movement attempts online in a closed loop, tested on one person with cervical SCI, achieving a modest classification performance of 68.4% for palmar grasp versus hand open (chance level 50%).
Affiliation(s)
- Patrick Ofner: Graz University of Technology, Institute of Neural Engineering, BCI-Lab, Graz, Austria
- Andreas Schwarz: Graz University of Technology, Institute of Neural Engineering, BCI-Lab, Graz, Austria
- Joana Pereira: Graz University of Technology, Institute of Neural Engineering, BCI-Lab, Graz, Austria
- Gernot R Müller-Putz: Graz University of Technology, Institute of Neural Engineering, BCI-Lab, Graz, Austria
14
Yanagisawa T, Fukuma R, Seymour B, Hosomi K, Kishima H, Shimizu T, Yokoi H, Hirata M, Yoshimine T, Kamitani Y, Saitoh Y. MEG-BMI to Control Phantom Limb Pain. Neurol Med Chir (Tokyo) 2018; 58:327-333. [PMID: 29998936 PMCID: PMC6092605 DOI: 10.2176/nmc.st.2018-0099] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
A brachial plexus root avulsion (BPRA) causes intractable pain in the insensate affected hand. Such pain is partly phantom limb pain, a neuropathic pain that occurs after amputation of a limb or partial or complete deafferentation. Previous studies suggested that the pain is attributable to maladaptive plasticity of the sensorimotor cortex, but there is little evidence demonstrating a causal link between the pain and the cortical representation, or how much cortical factors affect the pain. Here, we applied lesioning of the dorsal root entry zone (DREZotomy) and training with a brain–machine interface (BMI) based on real-time magnetoencephalography signals to reconstruct affected hand movements with a robotic hand. The DREZotomy successfully reduced the shooting pain after BPRA, but part of the pain remained. The BMI training induced plastic changes in the sensorimotor representation of the phantom hand movements and helped control the remaining pain. When the patient tried to control the robotic hand by moving the phantom hand in association with the representation of the intact hand, the pain decreased markedly, while the classification accuracy of the phantom hand movements also decreased. These results strongly suggest that pain after BPRA is partly attributable to the cortical representation of phantom hand movements and that the BMI training controlled the pain by inducing appropriate cortical reorganization. For the treatment of chronic pain, we need to know how to modulate cortical representations with such novel methods.
Affiliation(s)
- Takufumi Yanagisawa: Department of Neurosurgery, Osaka University Graduate School of Medicine; Osaka University Institute for Advanced Co-Creation Studies; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories; Division of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University
- Ryohei Fukuma: Department of Neurosurgery, Osaka University Graduate School of Medicine; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories
- Ben Seymour: Computational and Biological Learning Laboratory, Department of Engineering, University of Cambridge; Center for Information and Neural Networks, National Institute for Information and Communications Technology
- Koichi Hosomi: Department of Neurosurgery, Osaka University Graduate School of Medicine; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine
- Haruhiko Kishima: Department of Neurosurgery, Osaka University Graduate School of Medicine
- Takeshi Shimizu: Department of Neurosurgery, Osaka University Graduate School of Medicine; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine
- Hiroshi Yokoi: Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications
- Masayuki Hirata: Department of Neurosurgery, Osaka University Graduate School of Medicine; Division of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University
- Toshiki Yoshimine: Department of Neurosurgery, Osaka University Graduate School of Medicine; Division of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University
- Yukiyasu Kamitani: Department of Neuroinformatics, ATR Computational Neuroscience Laboratories; Graduate School of Informatics, Kyoto University
- Youichi Saitoh: Department of Neurosurgery, Osaka University Graduate School of Medicine; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine
15
Fukuma R, Yanagisawa T, Yokoi H, Hirata M, Yoshimine T, Saitoh Y, Kamitani Y, Kishima H. Training in Use of Brain-Machine Interface-Controlled Robotic Hand Improves Accuracy Decoding Two Types of Hand Movements. Front Neurosci 2018; 12:478. [PMID: 30050405 PMCID: PMC6050372 DOI: 10.3389/fnins.2018.00478] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 06/25/2018] [Indexed: 11/21/2022] Open
Abstract
Objective: Brain-machine interfaces (BMIs) are useful for inducing plastic changes in cortical representation. A BMI first decodes hand movements using cortical signals and then converts the decoded information into movements of a robotic hand. By using the BMI robotic hand, the cortical representation decoded by the BMI is modulated to improve decoding accuracy. We developed a BMI based on real-time magnetoencephalography (MEG) signals to control a robotic hand using decoded hand movements. Subjects were trained to use the BMI robotic hand freely for 10 min to evaluate plastic changes in the cortical representation due to the training. Method: We trained nine young healthy subjects with normal motor function. In open-loop conditions, they were instructed to grasp or open their right hands during MEG recording. Time-averaged MEG signals were then used to train a real decoder to control the robotic arm in real time. Then, subjects were instructed to control the BMI-controlled robotic hand by moving their right hands for 10 min while watching the robot's movement. During this closed-loop session, subjects tried to improve their ability to control the robot. Finally, subjects performed the same offline task to compare cortical activities related to the hand movements. As a control, we used a random decoder trained by the MEG signals with shuffled movement labels. We performed the same experiments with the random decoder as a crossover trial. To evaluate the cortical representation, cortical currents were estimated using a source localization technique. Hand movements were also decoded by a support vector machine using the MEG signals during the offline task. The classification accuracy of the movements was compared among offline tasks. Results: During the BMI training with the real decoder, the subjects succeeded in improving their accuracy in controlling the BMI robotic hand with correct rates of 0.28 ± 0.13 to 0.50 ± 0.11 (p = 0.017, n = 8, paired Student's t-test). 
Moreover, the classification accuracy of hand movements during the offline task was significantly increased after BMI training with the real decoder from 62.7 ± 6.5 to 70.0 ± 11.1% (p = 0.022, n = 8, t(7) = 2.93, paired Student's t-test), whereas accuracy did not significantly change after BMI training with the random decoder from 63.0 ± 8.8 to 66.4 ± 9.0% (p = 0.225, n = 8, t(7) = 1.33). Conclusion: BMI training is a useful tool to train the cortical activity necessary for BMI control and to induce some plastic changes in the activity.
Affiliation(s)
- Ryohei Fukuma: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Seika-cho, Japan
- Takufumi Yanagisawa: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Seika-cho, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Institute for Advanced Co-Creation Studies, Osaka University, Suita, Japan; Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Suita, Japan
- Hiroshi Yokoi: Department of Mechanical Engineering and Intelligent Systems, University of Electro-Communications, Chofu, Japan
- Masayuki Hirata: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan; Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Suita, Japan
- Toshiki Yoshimine: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan; Endowed Research Department of Clinical Neuroengineering, Global Center for Medical Engineering and Informatics, Osaka University, Suita, Japan
- Youichi Saitoh: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan; Department of Neuromodulation and Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Yukiyasu Kamitani: Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Seika-cho, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Haruhiko Kishima: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
16
Marjaninejad A, Taherian B, Valero-Cuevas FJ. Finger movements are mainly represented by a linear transformation of energy in band-specific ECoG signals. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:986-989. [PMID: 29060039 DOI: 10.1109/embc.2017.8036991] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Electrocorticography (ECoG) recordings are attractive for Brain Machine Interface (BMI) applications because they balance good signal-to-noise ratio against minimal invasiveness. The design of ECoG signal decoders remains an open research area that requires a better understanding of the nature of these signals and how information is encoded in them. In this study, a linear and a non-linear method, a Linear Regression Model (LRM) and an Artificial Neural Network (ANN) respectively, were used to decode finger movements from energy in band-specific ECoG signals. The ANN only slightly outperformed the LRM, which suggests that finger movements are mainly represented by a linear transformation of energy in band-specific ECoG signals. In addition, comparing our results to similar Electroencephalogram (EEG) studies illustrated that the spatio-temporal summation of multiple neural signals is itself linearly correlated with movement and is not an artifact introduced by the scalp or cranium. Furthermore, a new algorithm was employed to reduce the number of spectral features of the input signals required by either decoding method.
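A hedged sketch of the LRM-style decoding the abstract describes: regressing a continuous movement trace on band-power features with ordinary least squares. The data, dimensions, and names below are synthetic assumptions, not the study's materials.

```python
# Ordinary-least-squares linear decoder: predict a "finger flexion" trace
# from band-specific power features; all data here are synthetic.
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_features = 500, 12     # e.g. 4 electrodes x 3 frequency bands
power = rng.normal(size=(n_samples, n_features))   # band-power features
true_w = rng.normal(size=n_features)               # hidden linear mapping
flexion = power @ true_w + 0.1 * rng.normal(size=n_samples)  # target trace

w, *_ = np.linalg.lstsq(power, flexion, rcond=None)  # fit linear decoder
pred = power @ w
r = np.corrcoef(pred, flexion)[0, 1]   # decoding quality as correlation
print(f"correlation: {r:.3f}")
```

Because the synthetic target really is a linear function of the features, the fit is nearly perfect; the paper's point is that real ECoG decoding behaves close to this linear regime.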
17
Chen X, Zhao B, Wang Y, Xu S, Gao X. Control of a 7-DOF Robotic Arm System With an SSVEP-Based BCI. Int J Neural Syst 2018; 28:1850018. [PMID: 29768990 DOI: 10.1142/s0129065718500181] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Although robot technology has been successfully used to empower people with motor disabilities to increase their interaction with their physical environment, it remains a challenge for individuals with severe motor impairment who lack the motor control needed to move robots or prosthetic devices manually. In this study, to mitigate this issue, a noninvasive brain-computer interface (BCI)-based robotic arm control system using gaze-based steady-state visual evoked potentials (SSVEP) was designed and implemented with a portable wireless electroencephalogram (EEG) system. A 15-target SSVEP-based BCI using a filter bank canonical correlation analysis (FBCCA) method allowed users to control the robotic arm directly without system calibration. Online results from 12 healthy subjects indicated that a command for the proposed brain-controlled robot system could be selected from 15 possible choices in 4 s (i.e. 2 s for visual stimulation and 2 s for gaze shifting) with an average accuracy of 92.78%, corresponding to a transfer rate of 15 commands/min. Furthermore, all subjects (even naive users) successfully completed the entire move-grasp-lift task without user training. These results demonstrate that an SSVEP-based BCI can provide accurate and efficient high-level control of a robotic arm, showing the feasibility of a BCI-based robotic arm control system for hand assistance.
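The core of FBCCA is canonical correlation between the EEG window and sinusoidal references at each candidate stimulus frequency; the filter bank then weights this score across sub-bands. The minimal sketch below shows only the CCA core (filter bank omitted), with entirely synthetic signals and frequencies.

```python
# Minimal SSVEP frequency detection by canonical correlation (CCA core of
# FBCCA; filter bank omitted). Signals and frequencies are synthetic.
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

fs, dur = 250, 2.0                       # sampling rate (Hz), window (s)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(1)
# Two noisy "EEG" channels driven by a 10 Hz flicker.
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
                       for _ in range(2)])

def refs(f):
    """Sinusoidal reference set at frequency f plus its second harmonic."""
    return np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t),
                            np.sin(4 * np.pi * f * t), np.cos(4 * np.pi * f * t)])

candidates = [8.0, 10.0, 12.0]           # candidate stimulus frequencies
scores = [cca_corr(eeg, refs(f)) for f in candidates]
detected = candidates[int(np.argmax(scores))]
print(f"detected: {detected} Hz")
```

Extending this to the full 15-target, calibration-free detector amounts to widening the candidate list and adding the band-pass filter bank with weighted score fusion.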
Affiliation(s)
- Xiaogang Chen: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, P. R. China
- Bing Zhao: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, P. R. China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, P. R. China
- Shengpu Xu: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, P. R. China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, P. R. China
18
Shen HM, Hu L, Fu X. Integrated Giant Magnetoresistance Technology for Approachable Weak Biomagnetic Signal Detections. Sensors (Basel) 2018; 18:E148. [PMID: 29316670 PMCID: PMC5795475 DOI: 10.3390/s18010148] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2017] [Revised: 12/27/2017] [Accepted: 01/05/2018] [Indexed: 01/19/2023]
Abstract
With the extensive application of biomagnetic signals derived from active biological tissue in both clinical diagnosis and human-computer interaction, there is an increasing need for approachable weak-biomagnetic-signal sensing technology. The inherent merits of giant magnetoresistance (GMR) and its high integration with multiple technologies make it possible to detect weak biomagnetic signals with micron-sized, non-cooled, and low-cost sensors, considering that magnetic field intensity attenuates rapidly with distance. This paper focuses on the state of the art in integrated GMR technology for approachable biomagnetic sensing from the perspective of disciplinary fusion. The progress of integrated GMR toward overcoming the challenges of weak biomagnetic signal detection for high-resolution portable applications is addressed, and the various strategies for 1/f noise reduction and sensitivity enhancement for sub-pT biomagnetic signal recording are discussed. We review developments in integrated GMR technology for in vivo/in vitro biomagnetic source imaging and demonstrate how integrated GMR can be utilized for biomagnetic field detection. Since the field sensitivity of integrated GMR technology is being pushed toward fT/Hz^0.5, it is believed that integrated GMR will become a preferred choice for weak biomagnetic signal detection in the future.
Affiliation(s)
- Hui-Min Shen: School of Mechanical Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Liang Hu: State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310028, China
- Xin Fu: State Key Laboratory of Fluid Power and Mechatronic Systems, Zhejiang University, Hangzhou 310028, China
19
Zeng H, Wang Y, Wu C, Song A, Liu J, Ji P, Xu B, Zhu L, Li H, Wen P. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback. Front Neurorobot 2017; 11:60. [PMID: 29163123 PMCID: PMC5671634 DOI: 10.3389/fnbot.2017.00060] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Accepted: 10/18/2017] [Indexed: 01/25/2023] Open
Abstract
A brain-machine interface (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, controlling the process of grasping and lifting objects with a robotic arm remains a complex task for BMI users, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we propose an augmented reality (AR) guiding method that provides enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI with eye tracking for intuitive and effective control of the robotic arm. Experiments on object-manipulation tasks that required avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. The experimental results from eight subjects verified the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only): the number of trigger commands used to grasp and lift the objects was significantly reduced, and the height gaps of the gripper during lifting decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that hybrid Gaze-BMI users can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes.
Affiliation(s)
- Hong Zeng: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Yanxin Wang: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Changcheng Wu: College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Aiguo Song: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Jia Liu: Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology, Nanjing University of Information Sciences and Technology, Nanjing, China
- Peng Ji: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Baoguo Xu: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Lifeng Zhu: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Huijun Li: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Pengcheng Wen: AVIC Aeronautics Computing Technique Research Institute, Xi'an, China
20
Decoding finger movement in humans using synergy of EEG cortical current signals. Sci Rep 2017; 7:11382. [PMID: 28900188 PMCID: PMC5595824 DOI: 10.1038/s41598-017-09770-5] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 07/31/2017] [Indexed: 12/31/2022] Open
Abstract
The synchronized activity of neuronal populations across multiple distant brain areas may reflect coordinated interactions of large-scale brain networks. Currently, there is no established method to investigate the temporal transitions between these large-scale networks that would allow, for example, to decode finger movements. Here we applied a matrix factorization method employing principal component and temporal independent component analyses to identify brain activity synchronizations. In accordance with previous studies investigating “muscle synergies”, we refer to this activity as “brain activity synergy”. Using electroencephalography (EEG), we first estimated cortical current sources (CSs) and then identified brain activity synergies within the estimated CS signals. A decoding analysis for finger movement in eight directions showed that such CS synergies provided more information for dissociating between movements than EEG sensor signals, EEG synergy, or CS signals, suggesting that temporal activation patterns of the synchronizing CSs may contain information related to motor control. A quantitative analysis of features selected by the decoders further revealed temporal transitions among the primary motor area, dorsal and ventral premotor areas, pre-supplementary motor area, and supplementary motor area, which may reflect transitions in motor planning and execution. These results provide a proof of concept for brain activity synergy estimation using CSs.
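The factorization the abstract describes starts with a dimensionality-reduction stage before the temporal ICA rotation. As an editor's sketch under invented assumptions (synthetic "cortical current" channels mixed from a few latent synergies), the PCA stage alone can be shown recovering the dimensionality of the synergy subspace; the ICA step is omitted.

```python
# PCA stage only of a PCA + temporal-ICA synergy factorization, on
# synthetic "cortical current" signals; dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(3)
T, k, ch = 1000, 3, 30        # time points, latent synergies, channels
latents = rng.normal(size=(T, k))          # synergy activation time courses
mixing = rng.normal(size=(k, ch))          # projection of synergies to channels
signals = latents @ mixing + 0.05 * rng.normal(size=(T, ch))

Xc = signals - signals.mean(axis=0)        # center before PCA
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = S**2 / (S**2).sum()            # variance explained per component
n_dominant = int((var_ratio > 0.01).sum()) # components above 1% variance
print(f"dominant components: {n_dominant}")
```

A temporal ICA applied to the retained components would then rotate them toward temporally independent activation patterns, the "synergies" used for decoding.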
21
Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks. Sci Rep 2016; 6:38565. [PMID: 27966546 PMCID: PMC5155290 DOI: 10.1038/srep38565] [Citation(s) in RCA: 207] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Accepted: 11/10/2016] [Indexed: 12/27/2022] Open
Abstract
Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCIs to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, efficient control of a robotic arm to complete reach-and-grasp tasks using a non-invasive BCI had yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for tasks requiring multiple degrees of freedom by combining two sequential low-dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions, and they maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.
22
Yanagisawa T, Fukuma R, Seymour B, Hosomi K, Kishima H, Shimizu T, Yokoi H, Hirata M, Yoshimine T, Kamitani Y, Saitoh Y. Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nat Commun 2016; 7:13209. [PMID: 27807349 PMCID: PMC5095287 DOI: 10.1038/ncomms13209] [Citation(s) in RCA: 49] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2015] [Accepted: 09/12/2016] [Indexed: 12/02/2022] Open
Abstract
The cause of pain in a phantom limb after partial or complete deafferentation is an important problem. A popular but increasingly controversial theory is that it results from maladaptive reorganization of the sensorimotor cortex, suggesting that experimental induction of further reorganization should affect the pain, especially if it results in functional restoration. Here we use a brain–machine interface (BMI) based on real-time magnetoencephalography signals to reconstruct affected hand movements with a robotic hand. BMI training induces significant plasticity in the sensorimotor cortex, manifested as improved discriminability of movement information and enhanced prosthetic control. Contrary to our expectation that functional restoration would reduce pain, the BMI training with the phantom hand intensifies the pain. In contrast, BMI training designed to dissociate the prosthetic and phantom hands actually reduces pain. These results reveal a functional relevance between sensorimotor cortical plasticity and pain, and may provide a novel treatment with BMI neurofeedback. Pain in a phantom limb after limb deafferentation may be due to maladaptive sensorimotor representation. Here the authors find that sensorimotor plasticity induced by BMI training with the phantom hand, contrary to expectation, increased pain while dissociating prosthetic movements from the phantom arm relieved the pain.
Affiliation(s)
- Takufumi Yanagisawa
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Functional Diagnostic Science, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Kyoto 619-0288, Japan; Department of Neuroinformatics, CiNet Computational Neuroscience Laboratories, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; JST PRESTO, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Clinical Neuroengineering, Osaka University, Global Center for Medical Engineering and Informatics, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Ryohei Fukuma
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Kyoto 619-0288, Japan; Department of Neuroinformatics, CiNet Computational Neuroscience Laboratories, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayamacho, Ikoma, Nara 630-0192, Japan
- Ben Seymour
- Department of Engineering, University of Cambridge, Computational and Biological Learning Laboratory, Trumpington Street, Cambridge CB2 1PZ, UK; National Institute for Information and Communications Technology, Center for Information and Neural Networks, 1-3 Suita, Osaka 565-0871, Japan
- Koichi Hosomi
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Haruhiko Kishima
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Takeshi Shimizu
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hiroshi Yokoi
- Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Masayuki Hirata
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuroinformatics, CiNet Computational Neuroscience Laboratories, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Clinical Neuroengineering, Osaka University, Global Center for Medical Engineering and Informatics, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Toshiki Yoshimine
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuroinformatics, CiNet Computational Neuroscience Laboratories, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Clinical Neuroengineering, Osaka University, Global Center for Medical Engineering and Informatics, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Yukiyasu Kamitani
- Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika-cho, Kyoto 619-0288, Japan; Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayamacho, Ikoma, Nara 630-0192, Japan; Graduate School of Informatics, Kyoto University, Yoshidahonmachi, Sakyoku, Kyoto 606-8501, Japan
- Youichi Saitoh
- Department of Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Department of Neuromodulation and Neurosurgery, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan
|
23
|
Common neural correlates of real and imagined movements contributing to the performance of brain-machine interfaces. Sci Rep 2016; 6:24663. [PMID: 27090735 PMCID: PMC4835797 DOI: 10.1038/srep24663] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Accepted: 04/04/2016] [Indexed: 02/06/2023] Open
Abstract
The relationship between primary motor cortex (M1) activity representing motor information in real and imagined movements has not been investigated with high spatiotemporal resolution using non-invasive measurements. We examined the similarities and differences in M1 activity during real and imagined movements. Ten subjects performed or imagined three types of right upper limb movements. To infer the movement type, we used 40 virtual channels in the M1 contralateral to the movement side (cM1), estimated with a beamforming approach. For both real and imagined movements, cM1 activity increased around response onset, after which its intensity differed significantly between the two conditions. Similarly, although decoding accuracies surpassed chance level for both real and imagined movements, they differed significantly after onset. Single virtual channel-based analysis showed that decoding accuracy increased significantly around the hand and arm areas during both real and imagined movements, and that the accuracy maps were spatially correlated between conditions. The temporal correlation of decoding accuracy also increased significantly around the hand and arm areas, except for the period immediately after response onset. Our results suggest that cM1 engages in similar neural activity representing motor information during real and imagined movements, except for the presence or absence of sensorimotor integration induced by sensory feedback.
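To make the decoding analysis described above concrete, the sketch below illustrates the general idea of inferring a movement type from multi-channel M1 activity and comparing the resulting decoding accuracy against chance (1/3 for three movement classes). This is a minimal, hypothetical example on simulated data using a nearest-centroid classifier; the study's actual pipeline (beamformer source estimation and its specific decoder) is not reproduced here, and all labels and parameters are illustrative assumptions.

```python
import random

random.seed(0)

N_CHANNELS = 40  # number of virtual channels in contralateral M1 (as in the abstract)
N_CLASSES = 3    # three right-upper-limb movement types (labels hypothetical)

def simulate_trial(cls_idx):
    # Simulated trial: each class elevates the mean activity of a different
    # subset of channels (a stand-in for hand/arm-area selectivity).
    return [random.gauss(1.0 if ch % N_CLASSES == cls_idx else 0.0, 0.5)
            for ch in range(N_CHANNELS)]

def centroid(trials):
    # Per-channel mean across a class's training trials.
    n = len(trials)
    return [sum(t[ch] for t in trials) / n for ch in range(N_CHANNELS)]

def classify(trial, centroids):
    # Nearest-centroid decoding: assign the class whose centroid is closest
    # in squared Euclidean distance.
    dists = [sum((a - b) ** 2 for a, b in zip(trial, c)) for c in centroids]
    return dists.index(min(dists))

# Train on 30 trials per class, test on 20 per class.
train = {i: [simulate_trial(i) for _ in range(30)] for i in range(N_CLASSES)}
centroids = [centroid(train[i]) for i in range(N_CLASSES)]
test = [(i, simulate_trial(i)) for i in range(N_CLASSES) for _ in range(20)]

accuracy = sum(classify(t, centroids) == i for i, t in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / N_CLASSES:.2f})")
```

Restricting the classifier to a single channel at a time, as in the abstract's single-virtual-channel analysis, would yield a per-channel accuracy map whose peaks mark the most informative source locations.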
|