1. Stavisky SD. Restoring Speech Using Brain-Computer Interfaces. Annu Rev Biomed Eng 2025;27:29-54. PMID: 39745941. DOI: 10.1146/annurev-bioeng-110122-012818.
Abstract
People who have lost the ability to speak due to neurological injuries would greatly benefit from assistive technology that provides a fast, intuitive, and naturalistic means of communication. This need can be met with brain-computer interfaces (BCIs): medical devices that bypass injured parts of the nervous system and directly transform neural activity into outputs such as text or sound. BCIs for restoring movement and typing have progressed rapidly in recent clinical trials; speech BCIs are the next frontier. This review covers the clinical need for speech BCIs, surveys foundational studies that point to where and how speech can be decoded in the brain, describes recent progress in both discrete and continuous speech decoding and closed-loop speech BCIs, provides metrics for assessing these systems' performance, and highlights key remaining challenges on the road toward clinically useful speech neuroprostheses.
Affiliation(s)
- Sergey D Stavisky: Department of Neurological Surgery, University of California, Davis, California, USA
2. Gusman JT, Hosman T, Crawford R, Singer-Clark T, Kapitonava A, Kelemen JN, Hahn N, Henderson JM, Hochberg LR, Simeral JD, Vargas-Irwin CE. Multi-gesture drag-and-drop decoding in a 2D iBCI control task. J Neural Eng 2025;22:026054. PMID: 39899980. PMCID: PMC11983719. DOI: 10.1088/1741-2552/adb180.
Abstract
Objective. Intracortical brain-computer interfaces (iBCIs) have demonstrated the ability to enable point-and-click as well as reach-and-grasp control for people with tetraplegia. However, few studies have investigated iBCIs during long-duration discrete movements that would enable common computer interactions such as 'click-and-hold' or 'drag-and-drop'. Approach. Here, we examined the performance of multi-class and binary (attempt/no-attempt) classification of neural activity in the left precentral gyrus of two BrainGate2 clinical trial participants performing hand gestures for 1, 2, and 4 s in duration. We then designed a novel 'latch decoder' that utilizes parallel multi-class and binary decoding processes and evaluated its performance on data from isolated sustained gesture attempts and a multi-gesture drag-and-drop task. Main results. Neural activity during sustained gestures revealed a marked decrease in the discriminability of hand gestures sustained beyond 1 s. Compared to standard direct decoding methods, the latch decoder demonstrated substantial improvement in decoding accuracy for gestures performed independently or in conjunction with simultaneous 2D cursor control. Significance. This work highlights the unique neurophysiologic response patterns of sustained gesture attempts in human motor cortex and demonstrates a promising decoding approach that could enable individuals with tetraplegia to intuitively control a wider range of consumer electronics using an iBCI.
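A minimal sketch of the latch-decoder idea described above, assuming per-timestep neural feature vectors and scikit-learn classifiers; the classifier choices and latching rule are illustrative assumptions, not the authors' implementation:

```python
# Latch-decoder sketch: a multi-class gesture classifier runs in parallel with
# a binary attempt/no-attempt detector. The gesture label is "latched" at
# attempt onset and held until the detector reports release, so decay in
# gesture discriminability during a long hold does not break the output.
import numpy as np
from sklearn.linear_model import LogisticRegression

class LatchDecoder:
    def __init__(self):
        self.gesture_clf = LogisticRegression(max_iter=1000)   # which gesture
        self.attempt_clf = LogisticRegression(max_iter=1000)   # attempt vs rest
        self.latched = None                                     # currently held gesture

    def fit(self, X, gesture_labels, attempt_labels):
        # Train the gesture classifier on timesteps labeled as attempts,
        # where gesture tuning is strongest.
        onset = attempt_labels == 1
        self.gesture_clf.fit(X[onset], gesture_labels[onset])
        self.attempt_clf.fit(X, attempt_labels)
        return self

    def step(self, x):
        """Decode one timestep; returns the latched gesture or None (rest)."""
        attempting = self.attempt_clf.predict(x[None, :])[0] == 1
        if attempting and self.latched is None:
            self.latched = self.gesture_clf.predict(x[None, :])[0]  # latch at onset
        elif not attempting:
            self.latched = None                                      # release
        return self.latched
```

The point of the latch is that the gesture identity is committed when discriminability is highest and only the easier binary hold/release decision has to be sustained.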
Affiliation(s)
- Jacob T Gusman: Biomedical Engineering Graduate Program, School of Engineering, Brown University, Providence, RI, USA; School of Engineering, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA; VA Center for Neurorestoration and Neurotechnology, Office of Research and Development, VA Providence Healthcare System, Providence, RI, USA
- Tommy Hosman: School of Engineering, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA; VA Center for Neurorestoration and Neurotechnology, Office of Research and Development, VA Providence Healthcare System, Providence, RI, USA
- Rekha Crawford: Department of Neuroscience, Brown University, Providence, RI, USA
- Tyler Singer-Clark: Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Anastasia Kapitonava: Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Jessica N Kelemen: Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Nick Hahn: Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Jaimie M Henderson: Department of Neurosurgery, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA; Bio-X Program, Stanford University, Stanford, CA, USA
- Leigh R Hochberg: School of Engineering, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA; VA Center for Neurorestoration and Neurotechnology, Office of Research and Development, VA Providence Healthcare System, Providence, RI, USA; Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; Department of Neurology, Harvard Medical School, Boston, MA, USA
- John D Simeral: School of Engineering, Brown University, Providence, RI, USA; Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA; VA Center for Neurorestoration and Neurotechnology, Office of Research and Development, VA Providence Healthcare System, Providence, RI, USA
- Carlos E Vargas-Irwin: Robert J. and Nancy D. Carney Institute for Brain Science, Brown University, Providence, RI, USA; VA Center for Neurorestoration and Neurotechnology, Office of Research and Development, VA Providence Healthcare System, Providence, RI, USA; Department of Neuroscience, Brown University, Providence, RI, USA
3. Noel JP, Bockbrader M, Bertoni T, Colachis S, Solca M, Orepic P, Ganzer PD, Haggard P, Rezai A, Blanke O, Serino A. Neuronal responses in the human primary motor cortex coincide with the subjective onset of movement intention in brain-machine interface-mediated actions. PLoS Biol 2025;23:e3003118. PMID: 40244939. PMCID: PMC12005534. DOI: 10.1371/journal.pbio.3003118.
Abstract
Self-initiated behavior is accompanied by the experience of intending our actions. Here, we leverage the unique opportunity to examine the full intentional chain (from intention to action to environmental effects) in a tetraplegic person outfitted with a primary motor cortex (M1) brain-machine interface (BMI) that generates real hand movements via neuromuscular electrical stimulation (NMES). This combined BMI-NMES approach allowed us to selectively manipulate each element of the intentional chain (intention, action, effect) while probing subjective experience and performing extracellular recordings in human M1. Behaviorally, we reveal a novel form of intentional binding: motor intentions are reflected in a perceived temporal attraction between the onset of intentions and that of actions. Neurally, we demonstrate that evoked spiking activity in M1 largely coincides in time with the onset of the experience of intention and that M1 spike counts and the onset of subjective intention may co-vary on a trial-by-trial basis. Further, population-level dynamics, as indexed by a decoder instantiating movement, reflect intention-action temporal binding. These results fill a significant knowledge gap by relating human spiking activity in M1 to the onset of subjective intention, and they complement prior human intracranial work examining premotor and parietal areas.
Affiliation(s)
- Jean-Paul Noel: Department of Neuroscience, University of Minnesota, Minneapolis, Minnesota, USA; Minnesota Robotics Institute, University of Minnesota, Minneapolis, Minnesota, USA
- Marcie Bockbrader: Department of Physical Medicine and Rehabilitation, The Ohio State University, Columbus, Ohio, USA
- Tommaso Bertoni: MySpace Lab, Department of Clinical Neuroscience, University Hospital Lausanne (CHUV), Lausanne, Switzerland; Department of Clinical Neurosciences, University Hospital, Geneva, Switzerland
- Sam Colachis: Medical Devices and Neuromodulation, Battelle Memorial Institute, Columbus, Ohio, USA
- Marco Solca: Neuro-X Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Pavo Orepic: Neuro-X Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Patrick D. Ganzer: Department of Biomedical Engineering, University of Miami, Miami, Florida, USA
- Patrick Haggard: Institute of Cognitive Neuroscience, University College London, London, UK
- Ali Rezai: Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, USA
- Olaf Blanke: Neuro-X Institute, Faculty of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Andrea Serino: MySpace Lab, Department of Clinical Neuroscience, University Hospital Lausanne (CHUV), Lausanne, Switzerland; Department of Clinical Neurosciences, University Hospital, Geneva, Switzerland
4. Khan S, Kallis L, Mee H, El Hadwe S, Barone D, Hutchinson P, Kolias A. Invasive Brain-Computer Interface for Communication: A Scoping Review. Brain Sci 2025;15:336. PMID: 40309789. PMCID: PMC12026362. DOI: 10.3390/brainsci15040336.
Abstract
BACKGROUND: The rapid expansion of brain-computer interfaces (BCIs) for patients with neurological deficits has garnered significant interest, offering patients an additional route where conventional rehabilitation reaches its limits. This has particularly been the case for patients who lose the ability to communicate. Circumventing neural injuries by recording from the intact cortex and subcortex has the potential to allow patients to communicate and restore self-expression. Discoveries over the last 10-15 years have been made possible by advances in technology, neuroscience, and computing. By examining studies involving intracranial brain-computer interfaces that aim to restore communication, we sought to characterize the advances made and to explore where the technology is heading. METHODS: For this scoping review, we systematically searched PubMed and OVID Embase. After processing, the search yielded 41 articles that we included in this review. RESULTS: The articles predominantly assessed patients who had amyotrophic lateral sclerosis (ALS), cervical spinal cord injury, or brainstem stroke, resulting in tetraplegia and, in some cases, difficulty speaking. Of the participants with intracranial implants, ten had ALS, six had brainstem stroke, and thirteen had a spinal cord injury. Stereoelectroencephalography was also used, but the results, whilst promising, are still in their infancy. Studies in which patients moved cursors on a screen showed that movement speed could be improved by optimising the interface and using better decoding methods. In recent years, intracortical devices have been used successfully for accurate speech-to-text and speech-to-audio decoding in patients who are unable to speak. CONCLUSIONS: Here, we summarise the progress made by BCIs used for communication. Speech decoding directly from the cortex can provide a novel therapeutic method to restore full, embodied communication to patients with tetraplegia who otherwise cannot communicate.
Affiliation(s)
- Shujhat Khan: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK
- Leonie Kallis: Department of Medicine, University of Cambridge, Trinity Ln, Cambridge CB2 1TN, UK
- Harry Mee: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK; Department of Rehabilitation, Addenbrookes Hospital, Hills Rd., Cambridge CB2 0QQ, UK
- Salim El Hadwe: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK; Bioelectronics Laboratory, Department of Electrical Engineering, University of Cambridge, Cambridge CB2 1PZ, UK
- Damiano Barone: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK; Department of Neurosurgery, Houston Methodist, Houston, TX 77079, USA
- Peter Hutchinson: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK; Department of Neurosurgery, Addenbrookes Hospital, Hills Rd., Cambridge CB2 0QQ, UK
- Angelos Kolias: Department of Clinical Neuroscience, University of Cambridge, Cambridge CB2 1TN, UK; Department of Neurosurgery, Addenbrookes Hospital, Hills Rd., Cambridge CB2 0QQ, UK
5. Prakash PR, Lei T, Flint RD, Hsieh JK, Fitzgerald Z, Mugler E, Templer J, Goldrick MA, Tate MC, Rosenow J, Glaser J, Slutzky MW. Decoding speech intent from non-frontal cortical areas. J Neural Eng 2025;22:016024. PMID: 39808939. PMCID: PMC11822885. DOI: 10.1088/1741-2552/adaa20.
Abstract
Objective. Brain-machine interfaces (BMIs) that can restore speech have predominantly focused on decoding speech signals from the speech motor cortices. A few studies have shown that some information outside the speech motor cortices, such as in the parietal and temporal lobes, may also be useful for BMIs. The ability to use information from outside the frontal lobe could be useful not only for people with locked-in syndrome, but also for people with frontal lobe damage, which can cause nonfluent aphasia or apraxia of speech. However, the temporal and parietal lobes are predominantly involved in perceptive speech processing and comprehension. Therefore, to be able to use signals from these areas in a speech BMI, it is important to ascertain that they are related to speech production. Here, using intracranial recordings, we sought evidence for whether, when, and where neural information related to speech intent could be found in the temporal and parietal cortices. Approach. Using intracranial recordings, we examined neural activity across temporal and parietal cortices to identify signals associated with speech intent. We employed causal information to distinguish speech intent from resting states and from other language-related processes, such as comprehension and working memory. Neural signals were analyzed for their spatial distribution and temporal dynamics to determine their relevance to speech production. Main results. Causal information enabled us to distinguish speech intent from the resting state and from other processes involved in language processing or working memory. Information related to speech intent was distributed widely across the temporal and parietal lobes, including the superior temporal, medial temporal, angular, and supramarginal gyri. Significance. Loss of communication due to neurological disease can be devastating. While speech BMIs have made strides in decoding speech from frontal lobe signals, our study reveals that the temporal and parietal cortices contain information about speech production intent that can be causally decoded prior to voice onset. This information is distributed across a large network and can be used to improve current speech BMIs, potentially expanding the patient population for speech BMIs to include people with frontal lobe damage from stroke or traumatic brain injury.
Affiliation(s)
- Prashanth Ravi Prakash: Department of Biomedical Engineering, Northwestern University, Chicago, IL 60611, USA
- Tianhao Lei: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Robert D Flint: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Jason K Hsieh: Department of Neurosurgery, Neurological Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Zachary Fitzgerald: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Emily Mugler: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Jessica Templer: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Matthew A Goldrick: Department of Linguistics, Northwestern University, Chicago, IL 60611, USA
- Matthew C Tate: Department of Neurosurgery, Northwestern University, Chicago, IL 60611, USA
- Joshua Rosenow: Department of Neurosurgery, Northwestern University, Chicago, IL 60611, USA
- Joshua Glaser: Department of Neurology, Northwestern University, Chicago, IL 60611, USA
- Marc W Slutzky: Departments of Biomedical Engineering, Neurology, Neuroscience, and Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL 60611, USA
6. Ottenhoff MC, Verwoert M, Goulis S, Wagner L, van Dijk JP, Kubben PL, Herff C. Global motor dynamics - invariant neural representations of motor behavior in distributed brain-wide recordings. J Neural Eng 2024;21:056034. PMID: 39383883. DOI: 10.1088/1741-2552/ad851c.
Abstract
Objective. Motor-related neural activity is more widespread than previously thought, as pervasive brain-wide neural correlates of motor behavior have been reported in various animal species. Brain-wide movement-related neural activity has also been observed in individual brain areas in humans, but it is unknown to what extent global patterns exist. Approach. Here, we use a decoding approach to capture and characterize brain-wide neural correlates of movement. We recorded invasive electrophysiological data from stereotactic electroencephalographic electrodes implanted in eight epilepsy patients who performed both an executed and an imagined grasping task. Combined, these electrodes cover the whole brain, including deeper structures such as the hippocampus, insula, and basal ganglia. We extracted a low-dimensional representation and classified movement versus rest trials using a Riemannian decoder. Main results. We reveal global neural dynamics that are predictive across tasks and participants. Using an ablation analysis, we demonstrate that these dynamics remain remarkably stable under loss of information. Similarly, the dynamics remain stable across participants, as we were able to predict movement across participants using transfer learning. Significance. Our results show that decodable global motor-related neural dynamics exist within a low-dimensional space. The dynamics are predictive of movement, nearly brain-wide, and present in all our participants. These results broaden the scope to brain-wide investigations and may allow combining datasets from multiple participants with varying electrode locations, as well as calibration-free neural decoders.
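The "Riemannian decoder" is only named here; the sketch below illustrates one common variant of the idea (trial covariance matrices mapped into a tangent space and classified linearly), using a log-Euclidean simplification rather than the affine-invariant metric and stand-in data, so it is an assumption about the approach rather than the authors' code:

```python
# Riemannian-style movement-vs-rest decoder sketch: each trial is summarized
# by its channel covariance matrix, mapped into a log-matrix (tangent) space,
# and classified with a regularized linear model.
import numpy as np
from scipy.linalg import logm
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def trial_covariances(X, shrinkage=1e-3):
    """X: (n_trials, n_channels, n_samples) -> (n_trials, n_channels, n_channels)."""
    n_trials, n_channels, _ = X.shape
    covs = np.empty((n_trials, n_channels, n_channels))
    for i, trial in enumerate(X):
        c = np.cov(trial)
        covs[i] = c + shrinkage * np.trace(c) / n_channels * np.eye(n_channels)
    return covs

def tangent_features(covs):
    """Vectorize the upper triangle of each matrix logarithm."""
    iu = np.triu_indices(covs.shape[1])
    return np.array([np.real(logm(c))[iu] for c in covs])

# Usage with random stand-in data (replace with epoched sEEG trials).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 16, 500))   # 40 trials, 16 channels, 500 samples
y = rng.integers(0, 2, size=40)          # 1 = move, 0 = rest
feats = tangent_features(trial_covariances(X))
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(feats, y)
print("training accuracy:", clf.score(feats, y))
```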
Affiliation(s)
- Maarten C Ottenhoff: Department of Neurosurgery, Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, The Netherlands
- Maxime Verwoert: Department of Neurosurgery, Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, The Netherlands
- Sophocles Goulis: Department of Neurosurgery, Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, The Netherlands
- Louis Wagner: Academic Center of Epileptology Kempenhaeghe/Maastricht University Medical Center, Maastricht, The Netherlands; Academic Center of Epileptology Kempenhaeghe/Maastricht University Medical Center, Heeze, The Netherlands
- Johannes P van Dijk: Academic Center of Epileptology Kempenhaeghe/Maastricht University Medical Center, Heeze, The Netherlands; Department of Orthodontics, Ulm University, Ulm, Germany; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Pieter L Kubben: Department of Neurosurgery, Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, The Netherlands; Academic Center of Epileptology Kempenhaeghe/Maastricht University Medical Center, Maastricht, The Netherlands
- Christian Herff: Department of Neurosurgery, Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, The Netherlands
7. Oliveira DS, Ponfick M, Braun DI, Osswald M, Sierotowicz M, Chatterjee S, Weber D, Eskofier B, Castellini C, Farina D, Kinfe TM, Del Vecchio A. A direct spinal cord-computer interface enables the control of the paralysed hand in spinal cord injury. Brain 2024;147:3583-3595. PMID: 38501612. PMCID: PMC11449141. DOI: 10.1093/brain/awae088.
Abstract
Paralysis of the muscles controlling the hand dramatically limits the quality of life of individuals living with spinal cord injury (SCI). Here, with a non-invasive neural interface, we demonstrate that eight individuals with motor-complete SCI (C5-C6) are still able to task-modulate, in real time, the activity of populations of spinal motor neurons with residual neural pathways. In all SCI participants tested, we identified groups of motor units under voluntary control that encoded various hand movements. The motor unit discharges were mapped onto more than 10 degrees of freedom, ranging from grasping to individual hand-digit flexion and extension. We then mapped the neural dynamics onto a real-time-controlled virtual hand. The SCI participants were able to match the cued hand posture by proportionally controlling four degrees of freedom (opening and closing the hand and index flexion/extension). These results demonstrate that wearable muscle sensors provide access to spared motor neurons that remain fully under voluntary control in individuals with complete cervical SCI. This non-invasive neural interface allows the investigation of motor neuron changes after injury and has the potential to promote movement restoration when integrated with assistive devices.
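As an illustration of the proportional-control step only (not the authors' pipeline), the sketch below assumes the high-density sEMG has already been decomposed upstream into motor-unit spike trains and maps smoothed discharge rates onto four cued degrees of freedom with a ridge regression; all shapes and parameters are stand-ins:

```python
# Proportional control sketch: smoothed motor-unit discharge rates -> 4 DoF.
import numpy as np
from sklearn.linear_model import Ridge

def smoothed_rates(spikes, fs=2048, win_s=0.25):
    """spikes: (n_units, n_samples) binary trains -> (n_samples, n_units) rates in Hz."""
    win = int(fs * win_s)
    kernel = np.ones(win) / win_s              # moving spike count scaled to spikes/second
    return np.stack([np.convolve(s, kernel, mode="same") for s in spikes], axis=1)

# Stand-in data: 12 motor units, 10 s of recording, 4 degrees of freedom
# (hand open/close and index flexion/extension) given by the visual cue.
rng = np.random.default_rng(1)
spikes = (rng.random((12, 2048 * 10)) < 0.01).astype(float)
cued_dof = rng.random((2048 * 10, 4))          # normalized cue trajectories

rates = smoothed_rates(spikes)
decoder = Ridge(alpha=1.0).fit(rates, cued_dof)            # calibration phase
dof_command = np.clip(decoder.predict(rates), 0.0, 1.0)    # drive the virtual hand
```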
Affiliation(s)
- Daniela Souza Oliveira: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Matthias Ponfick: Querschnittzentrum Rummelsberg, Krankenhaus Rummelsberg GmbH, 90592 Schwarzenbruck, Germany
- Dominik I Braun: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Marius Osswald: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Marek Sierotowicz: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany; Institute of Robotics and Mechatronics, German Aerospace Center (DLR), 82234 Oberpfaffenhofen, Germany
- Satyaki Chatterjee: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
- Douglas Weber: Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Bjoern Eskofier: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany; Translational Digital Health Group, Institute of AI for Health, Helmholtz Zentrum München, German Research Center for Environmental Health, 85764 Neuherberg, Germany
- Claudio Castellini: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany; Institute of Robotics and Mechatronics, German Aerospace Center (DLR), 82234 Oberpfaffenhofen, Germany
- Dario Farina: Department of Bioengineering, Imperial College London, London, SW7 2AZ, UK
- Thomas Mehari Kinfe: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany; Division of Functional Neurosurgery and Stereotaxy, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91054 Erlangen, Germany
- Alessandro Del Vecchio: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91052 Erlangen, Germany
8. Deo DR, Okorokova EV, Pritchard AL, Hahn NV, Card NS, Nason-Tomaszewski SR, Jude J, Hosman T, Choi EY, Qiu D, Meng Y, Wairagkar M, Nicolas C, Kamdar FB, Iacobacci C, Acosta A, Hochberg LR, Cash SS, Williams ZM, Rubin DB, Brandman DM, Stavisky SD, AuYong N, Pandarinath C, Downey JE, Bensmaia SJ, Henderson JM, Willett FR. A mosaic of whole-body representations in human motor cortex. bioRxiv [Preprint] 2024:2024.09.14.613041. PMID: 39345372. PMCID: PMC11429821. DOI: 10.1101/2024.09.14.613041.
Abstract
Understanding how the body is represented in motor cortex is key to understanding how the brain controls movement. The precentral gyrus (PCG) has long been thought to contain largely distinct regions for the arm, leg and face (represented by the "motor homunculus"). However, mounting evidence has begun to reveal a more intermixed, interrelated and broadly tuned motor map. Here, we revisit the motor homunculus using microelectrode array recordings from 20 arrays that broadly sample PCG across 8 individuals, creating a comprehensive map of human motor cortex at single neuron resolution. We found whole-body representations throughout all sampled points of PCG, contradicting traditional leg/arm/face boundaries. We also found two speech-preferential areas with a broadly tuned, orofacial-dominant area in between them, previously unaccounted for by the homunculus. Throughout PCG, movement representations of the four limbs were interlinked, with homologous movements of different limbs (e.g., toe curl and hand close) having correlated representations. Our findings indicate that, while the classic homunculus aligns with each area's preferred body region at a coarse level, at a finer scale, PCG may be better described as a mosaic of functional zones, each with its own whole-body representation.
9. Silva AB, Liu JR, Metzger SL, Bhaya-Grossman I, Dougherty ME, Seaton MP, Littlejohn KT, Tu-Chan A, Ganguly K, Moses DA, Chang EF. A bilingual speech neuroprosthesis driven by cortical articulatory representations shared between languages. Nat Biomed Eng 2024;8:977-991. PMID: 38769157. PMCID: PMC11554235. DOI: 10.1038/s41551-024-01207-5.
Abstract
Advancements in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language. This was achieved without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
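The transfer-learning step can be pictured as pretraining a shared syllable classifier on trials recorded in one language and fine-tuning it, at a lower learning rate, on fewer trials from the other language; the sketch below is a hedged illustration with assumed feature dimensions and hyperparameters, not the authors' model:

```python
# Cross-language transfer-learning sketch for a shared syllable classifier.
import torch
from torch import nn

n_features, n_syllables = 253, 51     # e.g., ECoG channels x bands, shared syllable set

def make_classifier():
    return nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, n_syllables))

def train(model, X, y, lr, epochs=20):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

# Stand-in tensors for neural features and syllable labels in each language.
X_en, y_en = torch.randn(2000, n_features), torch.randint(0, n_syllables, (2000,))
X_es, y_es = torch.randn(300, n_features), torch.randint(0, n_syllables, (300,))

model = train(make_classifier(), X_en, y_en, lr=1e-3)   # pretrain on English trials
model = train(model, X_es, y_es, lr=1e-4)               # fine-tune on fewer Spanish trials
```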
Affiliation(s)
- Alexander B Silva: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Jessie R Liu: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Sean L Metzger: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Ilina Bhaya-Grossman: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Maximilian E Dougherty: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Margaret P Seaton: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Adelyn Tu-Chan: Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Karunesh Ganguly: Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- David A Moses: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang: Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
10. Kosnoff J, Yu K, Liu C, He B. Transcranial focused ultrasound to V5 enhances human visual motion brain-computer interface by modulating feature-based attention. Nat Commun 2024;15:4382. PMID: 38862476. PMCID: PMC11167030. DOI: 10.1038/s41467-024-48576-8.
Abstract
A brain-computer interface (BCI) enables users to control devices with their minds. Despite advancements, non-invasive BCIs still exhibit high error rates, prompting investigation into whether these errors can be reduced through concurrent targeted neuromodulation. Transcranial focused ultrasound (tFUS) is an emerging non-invasive neuromodulation technology with high spatiotemporal precision. This study examines whether tFUS neuromodulation can improve BCI outcomes and explores the underlying mechanism of action using high-density electroencephalography (EEG) source imaging (ESI). V5-targeted tFUS significantly reduced error in a BCI speller task. Source analyses revealed a significant increase in theta and alpha activity in the tFUS condition, both at V5 and downstream along the dorsal visual processing pathway. Correlation analysis indicated that connectivity within the dorsal processing pathway was preserved during tFUS stimulation, while the ventral connection was weakened. These findings suggest that V5-targeted tFUS enhances feature-based attention to visual motion.
Affiliation(s)
- Joshua Kosnoff: Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15237, USA
- Kai Yu: Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15237, USA
- Chang Liu: Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15237, USA; Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Bin He: Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15237, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15237, USA
11. Hansen TC, Tully TN, John Mathews V, Warren DJ. A Multimodal Assistive-Robotic-Arm Control System to Increase Independence After Tetraplegia. IEEE Trans Neural Syst Rehabil Eng 2024;32:2124-2133. PMID: 38829756. DOI: 10.1109/tnsre.2024.3408833.
Abstract
Following tetraplegia, independence in completing essential daily tasks, such as opening doors and eating, declines significantly. Assistive robotic manipulators (ARMs) could restore independence, but input devices for these manipulators typically require functional use of the hands. We created and validated a hands-free multimodal input system for controlling an ARM in virtual reality using combinations of a gyroscope, eye tracking, and heterologous surface electromyography (sEMG). These input modalities are mapped to ARM functions based on the user's preferences and to maximize the utility of their residual volitional capabilities following tetraplegia. The two participants with tetraplegia in this study preferred to use the control mapping with sEMG button functions and disliked winking commands. Non-disabled participants were more varied in their preferences and performance, further suggesting that customizability is an advantageous component of the control system. Replacing buttons from a traditional handheld controller with sEMG did not substantively reduce performance. The system provided all participants with adequate control to complete functional tasks in virtual reality, such as opening door handles, turning stove dials, eating, and drinking, all of which enable independence and improved quality of life for these individuals.
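A hypothetical sketch of the customizable input-to-function mapping described above; the event names and the command set are invented for illustration and are not taken from the paper:

```python
# Per-user routing of multimodal input events (gyroscope, eye tracking, sEMG
# "buttons") to assistive-robotic-arm commands.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ArmCommand:
    name: str
    execute: Callable[[], None]

def make_user_mapping(arm: Dict[str, ArmCommand]) -> Dict[str, ArmCommand]:
    """One user's preferred mapping; another user could swap any entry."""
    return {
        "gyro_tilt_left":  arm["translate_left"],
        "gyro_tilt_right": arm["translate_right"],
        "gaze_dwell":      arm["select_target"],
        "semg_button_1":   arm["toggle_gripper"],   # replaces a handheld-controller button
        "semg_button_2":   arm["switch_mode"],
    }

def dispatch(event: str, mapping: Dict[str, ArmCommand]) -> None:
    cmd = mapping.get(event)
    if cmd is not None:
        cmd.execute()

# Usage with stub commands standing in for the virtual-reality ARM API.
arm_api = {k: ArmCommand(k, lambda k=k: print(f"ARM: {k}"))
           for k in ["translate_left", "translate_right", "select_target",
                     "toggle_gripper", "switch_mode"]}
mapping = make_user_mapping(arm_api)
dispatch("semg_button_1", mapping)   # -> ARM: toggle_gripper
```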
12. Wandelt SK, Bjånes DA, Pejsa K, Lee B, Liu C, Andersen RA. Representation of internal speech by single neurons in human supramarginal gyrus. Nat Hum Behav 2024;8:1136-1149. PMID: 38740984. PMCID: PMC11199147. DOI: 10.1038/s41562-024-01867-y.
Abstract
Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost their speech abilities due to disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, with implanted microelectrode arrays located in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population level in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level 12.5%), and during an online internal speech BMI task we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.
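The offline word-decoding result can be pictured as a cross-validated classifier over binned spike counts; the sketch below uses LDA and simulated data as assumptions (not the authors' analysis code), with chance at 1/8 = 12.5% for the eight-class vocabulary of six words and two pseudowords:

```python
# Offline 8-class word decoding sketch on trial-wise spike counts.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units, n_classes = 128, 96, 8
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
# Stand-in binned spike counts with weak class-dependent tuning.
X = rng.poisson(5.0, size=(n_trials, n_units)) + 0.5 * y[:, None]

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=8).mean()
print(f"cross-validated accuracy: {acc:.2f} (chance = {1 / n_classes:.3f})")
```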
Affiliation(s)
- Sarah K Wandelt: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- David A Bjånes: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA; Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- Kelsie Pejsa: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Brian Lee: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA; USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Charles Liu: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA; Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA; USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Richard A Andersen: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA; T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
13. Brain-machine-interface device translates internal speech into text. Nat Hum Behav 2024;8:1014-1015. PMID: 38740991. DOI: 10.1038/s41562-024-01869-w.
14. Rabut C, Norman SL, Griggs WS, Russin JJ, Jann K, Christopoulos V, Liu C, Andersen RA, Shapiro MG. Functional ultrasound imaging of human brain activity through an acoustically transparent cranial window. Sci Transl Med 2024;16:eadj3143. PMID: 38809965. DOI: 10.1126/scitranslmed.adj3143.
Abstract
Visualization of human brain activity is crucial for understanding normal and aberrant brain function. Currently available neural activity recording methods are highly invasive, have low sensitivity, and cannot be conducted outside of an operating room. Functional ultrasound imaging (fUSI) is an emerging technique that offers sensitive, large-scale, high-resolution neural imaging; however, fUSI cannot be performed through the adult human skull. Here, we used a polymeric skull replacement material to create an acoustic window compatible with fUSI to monitor adult human brain activity in a single individual. Using an in vitro cerebrovascular phantom to mimic brain vasculature and an in vivo rodent cranial defect model, we first evaluated the fUSI signal intensity and signal-to-noise ratio through polymethyl methacrylate (PMMA) cranial implants of different thicknesses or a titanium mesh implant. We found that rat brain neural activity could be recorded with high sensitivity through a PMMA implant using a dedicated fUSI pulse sequence. We then designed a custom ultrasound-transparent cranial window implant for an adult patient undergoing reconstructive skull surgery after traumatic brain injury. We showed that fUSI could record brain activity in an awake human outside of the operating room. In a video game "connect the dots" task, we demonstrated mapping and decoding of task-modulated cortical activity in this individual. In a guitar-strumming task, we mapped additional task-specific cortical responses. Our proof-of-principle study shows that fUSI can be used as a high-resolution (200 μm) functional imaging modality for measuring adult human brain activity through an acoustically transparent cranial window.
Affiliation(s)
- Claire Rabut: Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Sumner L Norman: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Whitney S Griggs: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Jonathan J Russin: USC Neurorestoration Center and the Departments of Neurosurgery and Neurology, University of Southern California, Los Angeles, CA 90033, USA
- Kay Jann: Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA 90033, USA
- Charles Liu: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA; USC Neurorestoration Center and the Departments of Neurosurgery and Neurology, University of Southern California, Los Angeles, CA 90033, USA; Rancho Los Amigos National Rehabilitation Center, Downey, CA 90242, USA
- Richard A Andersen: Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA 91125, USA; T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA 91125, USA
- Mikhail G Shapiro: Division of Chemistry and Chemical Engineering, California Institute of Technology, Pasadena, CA 91125, USA; Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA 91125, USA; Howard Hughes Medical Institute, Pasadena, CA 91125, USA
15. Ikegawa Y, Fukuma R, Sugano H, Oshino S, Tani N, Tamura K, Iimura Y, Suzuki H, Yamamoto S, Fujita Y, Nishimoto S, Kishima H, Yanagisawa T. Text and image generation from intracranial electroencephalography using an embedding space for text and images. J Neural Eng 2024;21:036019. PMID: 38648781. DOI: 10.1088/1741-2552/ad417a.
Abstract
Objective. Invasive brain-computer interfaces (BCIs) are promising communication devices for severely paralyzed patients. Recent advances in intracranial electroencephalography (iEEG) coupled with natural language processing have enhanced communication speed and accuracy. It should be noted that such speech BCIs use signals from the motor cortex. However, BCIs based on motor cortical activity may experience signal deterioration in users with motor cortical degenerative diseases such as amyotrophic lateral sclerosis. An alternative to using iEEG of the motor cortex is necessary to support patients with such conditions. Approach. In this study, a multimodal embedding of text and images was used to decode visual semantic information from iEEG signals of the visual cortex to generate text and images. We used contrastive language-image pretraining (CLIP) embeddings to represent images presented to 17 patients implanted with electrodes in the occipital and temporal cortices. A CLIP image vector was inferred from the high-γ power of the iEEG signals recorded while the patients viewed the images. Main results. Text was generated by CLIPCAP from the inferred CLIP vector with better-than-chance accuracy. An image was then created from the generated text using StableDiffusion, with accuracy significantly above chance. Significance. The text and images generated from iEEG through the CLIP embedding vector can be used for improved communication.
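A hedged sketch of the central decoding step only (regressing high-γ power features onto the CLIP image-embedding space and retrieving the closest candidate image); the feature dimensions and candidate set are stand-ins, and the CLIPCAP and StableDiffusion stages would operate downstream of the inferred vector:

```python
# High-gamma iEEG features -> CLIP embedding via ridge regression, then
# nearest-neighbor retrieval by cosine similarity.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_feat, clip_dim, n_candidates = 600, 320, 512, 50

X_train = rng.standard_normal((n_train, n_feat))        # high-gamma power per trial
Z_train = rng.standard_normal((n_train, clip_dim))      # CLIP vectors of shown images
candidate_clip = rng.standard_normal((n_candidates, clip_dim))

reg = Ridge(alpha=10.0).fit(X_train, Z_train)           # neural features -> CLIP space

def infer_image(x_trial):
    z_hat = reg.predict(x_trial[None, :])[0]
    sims = candidate_clip @ z_hat / (
        np.linalg.norm(candidate_clip, axis=1) * np.linalg.norm(z_hat))
    return int(np.argmax(sims))                          # index of best-matching image

print("predicted candidate:", infer_image(rng.standard_normal(n_feat)))
```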
Affiliation(s)
- Yuya Ikegawa: Institute for Advanced Co-Creation Studies, Osaka University, Suita, Japan
- Ryohei Fukuma: Institute for Advanced Co-Creation Studies, Osaka University, Suita, Japan; Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Hidenori Sugano: Department of Neurosurgery, Juntendo University, Tokyo, Japan
- Satoru Oshino: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Naoki Tani: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Kentaro Tamura: Department of Neurosurgery, Nara Medical University, Kashihara, Japan
- Yasushi Iimura: Department of Neurosurgery, Juntendo University, Tokyo, Japan
- Hiroharu Suzuki: Department of Neurosurgery, Juntendo University, Tokyo, Japan
- Shota Yamamoto: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Yuya Fujita: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Shinji Nishimoto: National Institute of Information and Communications Technology (NICT), Center for Information and Neural Networks (CiNet), Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan
- Haruhiko Kishima: Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
- Takufumi Yanagisawa: Institute for Advanced Co-Creation Studies, Osaka University, Suita, Japan; Department of Neurosurgery, Graduate School of Medicine, Osaka University, Suita, Japan
16. Berezutskaya J, Freudenburg ZV, Vansteensel MJ, Aarnoutse EJ, Ramsey NF, van Gerven MAJ. Direct speech reconstruction from sensorimotor brain activity with optimized deep learning models. J Neural Eng 2023;20:056010. PMID: 37467739. PMCID: PMC10510111. DOI: 10.1088/1741-2552/ace8be.
Abstract
Objective. Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. Approach. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. Main results. We show that (1) dedicated machine learning optimization of reconstruction models is key for achieving the best reconstruction performance; (2) individual word decoding in reconstructed speech achieves 92%-100% accuracy (chance level is 8%); (3) direct reconstruction from sensorimotor brain activity produces intelligible speech. Significance. These results underline the need for model optimization to achieve the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex offers for the development of next-generation BCI technology for communication.
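Point (1), that model optimization matters, can be illustrated with a small validation loop over the regularization strength of a linear reconstruction model scored by per-bin correlation; the pipeline below is an assumed stand-in for illustration, not the paper's optimized deep learning models:

```python
# Tune a ridge mapping from ECoG features to mel-spectrogram frames by
# validating the regularization strength against mean per-bin correlation.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_frames, n_feat, n_mels = 3000, 256, 40
X = rng.standard_normal((n_frames, n_feat))             # ECoG features per frame
W = rng.standard_normal((n_feat, n_mels)) * 0.1
Y = X @ W + rng.standard_normal((n_frames, n_mels))     # stand-in mel spectrogram

X_tr, X_val, Y_tr, Y_val = X[:2400], X[2400:], Y[:2400], Y[2400:]

def mean_correlation(Y_true, Y_pred):
    return np.mean([np.corrcoef(Y_true[:, k], Y_pred[:, k])[0, 1]
                    for k in range(Y_true.shape[1])])

scores = {}
for alpha in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    model = Ridge(alpha=alpha).fit(X_tr, Y_tr)
    scores[alpha] = mean_correlation(Y_val, model.predict(X_val))

best = max(scores, key=scores.get)
print("validation correlations:", scores, "-> chosen alpha:", best)
```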
Affiliation(s)
- Julia Berezutskaya: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands; Donders Center for Brain, Cognition and Behaviour, Nijmegen 6525 GD, The Netherlands
- Zachary V Freudenburg: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Mariska J Vansteensel: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Erik J Aarnoutse: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Nick F Ramsey: Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Marcel A J van Gerven: Donders Center for Brain, Cognition and Behaviour, Nijmegen 6525 GD, The Netherlands
17. Kosnoff J, Yu K, Liu C, He B. Transcranial Focused Ultrasound to V5 Enhances Human Visual Motion Brain-Computer Interface by Modulating Feature-Based Attention. bioRxiv [Preprint] 2023:2023.09.04.556252. PMID: 37732253. PMCID: PMC10508752. DOI: 10.1101/2023.09.04.556252.
Abstract
Paralysis affects roughly 1 in 50 Americans. While there is no cure for the condition, brain-computer interfaces (BCIs) can allow users to control a device with their mind, bypassing the paralyzed region. Non-invasive BCIs still have high error rates, which are hypothesized to be reducible with concurrent targeted neuromodulation. This study examines whether transcranial focused ultrasound (tFUS) modulation can improve BCI outcomes, and what the underlying mechanism of action might be, through high-density electroencephalography (EEG)-based source imaging (ESI) analyses. V5-targeted tFUS significantly reduced the error for the BCI speller task. ESI analyses showed significantly increased theta activity in the tFUS condition both at V5 and downstream along the dorsal visual processing pathway. Correlation analysis indicates that the dorsal processing pathway connection was preserved during tFUS stimulation, whereas extraneous connections were severed. These results suggest that the mechanism of action of V5-targeted tFUS is to raise the brain's feature-based attention to visual motion.
18. Rabut C, Norman SL, Griggs WS, Russin JJ, Jann K, Christopoulos V, Liu C, Andersen RA, Shapiro MG. A window to the brain: ultrasound imaging of human neural activity through a permanent acoustic window. bioRxiv [Preprint] 2023:2023.06.14.544094. PMID: 37398368. PMCID: PMC10312699. DOI: 10.1101/2023.06.14.544094.
Abstract
Recording human brain activity is crucial for understanding normal and aberrant brain function. However, available recording methods are either highly invasive or have relatively low sensitivity. Functional ultrasound imaging (fUSI) is an emerging technique that offers sensitive, large-scale, high-resolution neural imaging; however, fUSI cannot be performed through the adult human skull. Here, we use a polymeric skull replacement material to create an acoustic window that allows ultrasound to monitor brain activity in fully intact adult humans. We design the window through experiments in phantoms and rodents, then implement it in a participant undergoing reconstructive skull surgery. Subsequently, we demonstrate fully non-invasive mapping and decoding of cortical responses to finger movement, marking the first instance of high-resolution (200 μm) and large-scale (50 mm × 38 mm) brain imaging through a permanent acoustic window.
19. Verwoert M, Ottenhoff MC, Goulis S, Colon AJ, Wagner L, Tousseyn S, van Dijk JP, Kubben PL, Herff C. Dataset of Speech Production in Intracranial Electroencephalography. Sci Data 2022;9:434. PMID: 35869138. PMCID: PMC9307753. DOI: 10.1038/s41597-022-01542-9.
Abstract
Speech production is an intricate process involving a large number of muscles and cognitive processes. The neural processes underlying speech production are not completely understood. As speech is a uniquely human ability, it cannot be investigated in animal models. High-fidelity human data can only be obtained in clinical settings and are therefore not easily available to all researchers. Here, we provide a dataset of 10 participants reading out individual words while we measured intracranial EEG from a total of 1103 electrodes. The data, with their high temporal resolution and coverage of a large variety of cortical and sub-cortical brain regions, can help in better understanding the speech production process. At the same time, the data can be used to test speech decoding and synthesis approaches from neural data, toward the development of speech brain-computer interfaces and speech neuroprostheses.
Measurement(s): Brain activity
Technology Type(s): Stereotactic electroencephalography
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: Epilepsy monitoring center
Sample Characteristic - Location: The Netherlands
Collapse
|
20
|
Petrosyan A, Voskoboinikov A, Sukhinin D, Makarova A, Skalnaya A, Arkhipova N, Sinkin M, Ossadtchi A. Speech decoding from a small set of spatially segregated minimally invasive intracranial EEG electrodes with a compact and interpretable neural network. J Neural Eng 2022; 19. [PMID: 36356309 DOI: 10.1088/1741-2552/aca1e1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 11/10/2022] [Indexed: 11/12/2022]
Abstract
Objective. Speech decoding, one of the most intriguing brain-computer interface applications, opens up plentiful opportunities, from the rehabilitation of patients to direct and seamless communication between humans. Typical solutions rely on invasive recordings with a large number of distributed electrodes implanted through craniotomy. Here we explored the possibility of creating a speech prosthesis in a minimally invasive setting with a small number of spatially segregated intracranial electrodes. Approach. We collected one hour of data (from two sessions) in two patients implanted with invasive electrodes. We then used only the contacts that pertained to a single stereotactic electroencephalographic (sEEG) shaft or a single electrocorticographic (ECoG) strip to decode neural activity into 26 words and one silence class. We employed a compact convolutional network-based architecture whose spatial and temporal filter weights allow for a physiologically plausible interpretation. Main results. In classifying 26+1 overtly pronounced words, we achieved on average 55% accuracy using only six channels of data recorded with a single minimally invasive sEEG electrode in the first patient and 70% accuracy using only eight channels of data recorded from a single ECoG strip in the second patient. Our compact architecture did not require pre-engineered features, learned quickly, and resulted in a stable, interpretable, and physiologically meaningful decision rule that operated successfully on a contiguous dataset collected during a different time interval than that used for training. The spatial characteristics of the pivotal neuronal populations agree with active and passive speech mapping results and exhibit the inverse space-frequency relationship characteristic of neural activity. Compared with other architectures, our compact solution performed on par with or better than those recently featured in the neural speech decoding literature. Significance. We showcase the possibility of building a speech prosthesis with a small number of electrodes, based on a compact, feature-engineering-free decoder derived from a small amount of training data.
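In the spirit of the compact architecture described above (a learned spatial filter followed by temporal filtering and a linear readout), the sketch below shows one possible PyTorch realization with assumed dimensions: 6 sEEG channels, 1 s windows, and 26 words plus a silence class. This is not the authors' exact network, only a structurally similar stand-in.

```python
# Sketch of a compact, interpretable spatial + temporal filter decoder
# for 26+1-class word classification (assumed dimensions throughout).
import torch
import torch.nn as nn

class CompactSpeechDecoder(nn.Module):
    def __init__(self, n_channels=6, n_samples=1000, n_classes=27,
                 n_spatial=4, n_temporal=8):
        super().__init__()
        # Spatial filters: mix input channels into a few virtual channels.
        self.spatial = nn.Conv1d(n_channels, n_spatial, kernel_size=1, bias=False)
        # Temporal filters: FIR-like filters applied per virtual channel.
        self.temporal = nn.Conv1d(n_spatial, n_spatial * n_temporal,
                                  kernel_size=65, padding=32, groups=n_spatial)
        self.pool = nn.AdaptiveAvgPool1d(8)   # coarse envelope over time
        self.classifier = nn.Linear(n_spatial * n_temporal * 8, n_classes)

    def forward(self, x):                      # x: (batch, n_channels, n_samples)
        z = self.temporal(self.spatial(x))
        z = torch.log1p(z ** 2)                # power-like nonlinearity
        return self.classifier(self.pool(z).flatten(1))

# Forward pass on a dummy batch to confirm shapes.
model = CompactSpeechDecoder()
logits = model(torch.randn(16, 6, 1000))
print(logits.shape)  # torch.Size([16, 27])
```

Because the first layer is a 1x1 convolution and the second is a grouped temporal convolution, the learned spatial and temporal filter weights can be inspected directly, which is the property the abstract highlights.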
Collapse
Affiliation(s)
- Artur Petrosyan
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
| | | | - Dmitrii Sukhinin
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
| | - Anna Makarova
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
| | | | | | - Mikhail Sinkin
- Moscow State University of Medicine and Dentistry; N.V. Sklifosovsky Scientific Research Institute of First Aid, Moscow, Russia
| | - Alexei Ossadtchi
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia; Artificial Intelligence Research Institute, AIRI, Moscow, Russia
| |
Collapse
|
21
|
Feng J, Gu Y, Xu W. Hand surgery in a new "hand-brain" era: change the hand, rebuild the brain. Sci Bull (Beijing) 2022; 67:1932-1934. [PMID: 36546197 DOI: 10.1016/j.scib.2022.08.031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Affiliation(s)
- Juntao Feng
- Hand Surgery Department, The National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai 200040, China; Department of Hand and Upper Extremity Surgery, Center for the Reconstruction of Limb Function, Jing'an District Central Hospital, Fudan University, Shanghai 200040, China
| | - Yudong Gu
- Hand Surgery Department, The National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai 200040, China; Department of Hand and Upper Extremity Surgery, Center for the Reconstruction of Limb Function, Jing'an District Central Hospital, Fudan University, Shanghai 200040, China; Institutes of Brain Science, State Key Laboratory of Medical Neurobiology and Collaborative Innovation Center of Brain Science, Fudan University, Shanghai 200032, China
| | - Wendong Xu
- Hand Surgery Department, The National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai 200040, China; Department of Hand and Upper Extremity Surgery, Center for the Reconstruction of Limb Function, Jing'an District Central Hospital, Fudan University, Shanghai 200040, China; Institutes of Brain Science, State Key Laboratory of Medical Neurobiology and Collaborative Innovation Center of Brain Science, Fudan University, Shanghai 200032, China; Research Unit of Synergistic Reconstruction of Upper and Lower Limbs after Brain Injury, Chinese Academy of Medical Sciences, Beijing 100730, China.
| |
Collapse
|
22
|
Edmondson LR, Saal HP. Getting a grasp on BMIs: Decoding prehension and speech signals. Neuron 2022; 110:1743-1745. [PMID: 35654019 DOI: 10.1016/j.neuron.2022.05.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Wandelt et al. (2022) show that different grasps can be decoded from neural activity in the human supramarginal gyrus (SMG), ventral premotor cortex, and somatosensory cortex during motor imagery and speech, highlighting the attractiveness of higher-level areas such as the SMG for brain-machine interface applications.
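For readers unfamiliar with this kind of analysis, the sketch below illustrates multi-class grasp decoding from trial-wise firing rates with a linear discriminant classifier. The unit count, grasp set, and data are synthetic assumptions, loosely following the type of decoding summarized above rather than the cited study's methods.

```python
# Illustrative sketch: multi-class grasp decoding from firing rates
# (synthetic data, assumed array and task sizes).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_units, n_grasps = 150, 96, 4       # assumed recording/task sizes
rates = rng.poisson(lam=5.0, size=(n_trials, n_units)).astype(float)
grasps = rng.integers(0, n_grasps, size=n_trials)
# Add class-specific tuning so decoding is above chance.
rates += np.eye(n_grasps)[grasps] @ rng.uniform(0, 2, size=(n_grasps, n_units))

scores = cross_val_score(LinearDiscriminantAnalysis(), rates, grasps, cv=5)
print(f"grasp decoding accuracy: {scores.mean():.2f} (chance = {1/n_grasps:.2f})")
```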
Collapse
Affiliation(s)
- Laura R Edmondson
- Active Touch Laboratory, Department of Psychology, University of Sheffield, Sheffield S1 2LT, UK
| | - Hannes P Saal
- Active Touch Laboratory, Department of Psychology, University of Sheffield, Sheffield S1 2LT, UK.
| |
Collapse
|