1
Silva AB, Littlejohn KT, Liu JR, Moses DA, Chang EF. The speech neuroprosthesis. Nat Rev Neurosci 2024. PMID: 38745103; DOI: 10.1038/s41583-024-00819-9.
Abstract
Loss of speech after paralysis is devastating, but circumventing motor-pathway injury by directly decoding speech from intact cortical activity has the potential to restore natural communication and self-expression. Recent discoveries have defined how key features of speech production are facilitated by the coordinated activity of vocal-tract articulatory and motor-planning cortical representations. In this Review, we highlight such progress and how it has led to successful speech decoding, first in individuals implanted with intracranial electrodes for clinical epilepsy monitoring and subsequently in individuals with paralysis as part of early feasibility clinical trials to restore speech. We discuss high-spatiotemporal-resolution neural interfaces and the adaptation of state-of-the-art speech computational algorithms that have driven rapid and substantial progress in decoding neural activity into text, audible speech, and facial movements. Although restoring natural speech is a long-term goal, speech neuroprostheses already have performance levels that surpass communication rates offered by current assistive-communication technology. Given this accelerated rate of progress in the field, we propose key evaluation metrics for speed and accuracy, among others, to help standardize across studies. We finish by highlighting several directions to more fully explore the multidimensional feature space of speech and language, which will continue to accelerate progress towards a clinically viable speech neuroprosthesis.
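The review's proposal to standardize speed and accuracy metrics can be made concrete. Below is a minimal, illustrative Python sketch (not taken from the review itself) of the two metrics most commonly reported for speech neuroprostheses: word error rate, computed as word-level Levenshtein distance normalized by reference length, and words per minute. The function names and example sentences are invented.

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance, normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def words_per_minute(n_words, seconds):
    """Communication rate: decoded words per minute of use."""
    return 60.0 * n_words / seconds

print(word_error_rate("the quick brown fox", "the quick brown dog"))  # 0.25
print(words_per_minute(78, 60.0))  # 78.0
```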
Affiliation(s)
- Alexander B Silva
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
  - Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
  - Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
  - Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Jessie R Liu
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
  - Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
  - Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
  - Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
  - Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
2
Tankus A, Rosenberg N, Ben-Hamo O, Stern E, Strauss I. Machine learning decoding of single neurons in the thalamus for speech brain-machine interfaces. J Neural Eng 2024;21:036009. PMID: 38648783; DOI: 10.1088/1741-2552/ad4179.
Abstract
Objective. Our goal is to decode firing patterns of single neurons in the left ventral intermediate nucleus (Vim) of the thalamus related to speech production, perception, and imagery. For realistic speech brain-machine interfaces (BMIs), we aim to characterize the number of thalamic neurons necessary for high-accuracy decoding. Approach. We intraoperatively recorded single-neuron activity in the left Vim of eight neurosurgical patients undergoing implantation of a deep brain stimulator or RF lesioning, during production, perception, and imagery of the five monophthongal vowel sounds. We utilized the Spade decoder, a machine learning algorithm that dynamically learns specific features of firing patterns and is based on sparse decomposition of the high-dimensional feature space. Main results. Spade outperformed all algorithms it was compared with for all three aspects of speech (production, perception, and imagery), obtaining accuracies of 100%, 96%, and 92%, respectively (chance level: 20%), based on pooling neurons across all patients. Accuracy grew logarithmically in the number of neurons for all three aspects of speech. Regardless of the number of units employed, production yielded the highest accuracies, whereas perception and imagery were comparable to each other. Significance. Our research renders single-neuron activity in the left Vim a promising source of inputs to BMIs for restoring speech faculties in locked-in patients or patients with anarthria or dysarthria, allowing them to communicate again. Our characterization of how many neurons are necessary to achieve a given decoding accuracy is of utmost importance for planning BMI implantation.
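As a toy illustration of the multi-class setup described above (five vowel classes, 20% chance level), the following Python sketch decodes a vowel label from synthetic spike counts with a nearest-centroid classifier. This is not the Spade algorithm; all data, neuron counts, and firing-rate parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, n_vowels = 20, 50, 5

# Each vowel gets a distinct mean firing-rate pattern across neurons,
# and trials are Poisson spike counts around those means.
centroids_true = rng.uniform(5, 30, size=(n_vowels, n_neurons))
X = np.vstack([rng.poisson(centroids_true[v], size=(n_trials, n_neurons))
               for v in range(n_vowels)])
y = np.repeat(np.arange(n_vowels), n_trials)

# Random train/test split, fit per-class centroids on the training half.
idx = rng.permutation(len(y))
split = len(y) // 2
tr, te = idx[:split], idx[split:]
centroids = np.array([X[tr][y[tr] == v].mean(axis=0) for v in range(n_vowels)])

# Predict each test trial as the nearest centroid (squared Euclidean distance).
pred = np.argmin(((X[te][:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y[te]).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance 0.20)")
```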
Affiliation(s)
- Ariel Tankus
  - Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv 6423906, Israel
  - Department of Neurology and Neurosurgery, School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
- Noam Rosenberg
  - School of Electrical Engineering, Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Oz Ben-Hamo
  - School of Electrical Engineering, Iby and Aladar Fleischman Faculty of Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Einat Stern
  - Department of Neurology and Neurosurgery, School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Ido Strauss
  - Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv 6423906, Israel
  - Department of Neurology and Neurosurgery, School of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
3
Özcan F, Alkan A. Neural decoding of inferior colliculus multiunit activity for sound category identification with temporal correlation and transfer learning. Network 2024;35:101-133. PMID: 37982591; DOI: 10.1080/0954898x.2023.2282576.
Abstract
Natural sounds are easily perceived and identified by humans and animals, yet the neural transformations that enable sound perception remain largely unknown. The temporal characteristics of sounds are thought to be reflected in auditory assembly responses at the inferior colliculus (IC), which may play an important role in the identification of natural sounds. In this study, natural sounds were predicted from multi-unit activity (MUA) signals collected in the IC; the data were obtained from a publicly accessible international platform. The temporal correlation values of the MUA signals were converted into images. Using two different segment sizes and a denoising method, we generated four subsets for classification. Features of the images were extracted with pre-trained convolutional neural networks (CNNs) and the type of heard sound was classified, applying transfer learning from the AlexNet, GoogLeNet, and SqueezeNet CNNs. Support vector machine (SVM), k-nearest neighbour (KNN), naive Bayes, and ensemble classifiers were used, with accuracy, sensitivity, specificity, precision, and F1 score measured as evaluation parameters. Using all the tests and removing the noise improved accuracy significantly. These results should help neuroscientists draw conclusions about how natural sounds are represented in the IC.
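The core preprocessing idea above (turning the temporal correlations of MUA signals into images that a CNN or other classifier can consume) can be sketched as follows. The data are synthetic and the channel and sample counts are arbitrary, so this only illustrates the shape of the transform, not the paper's actual pipeline.

```python
import numpy as np

def correlation_image(segment):
    """segment: (channels, samples) array -> (channels, channels) correlation map."""
    return np.corrcoef(segment)

rng = np.random.default_rng(1)
n_channels, n_samples = 16, 500
segment = rng.standard_normal((n_channels, n_samples))

img = correlation_image(segment)
print(img.shape)                        # (16, 16)
print(np.allclose(np.diag(img), 1.0))  # True: each channel correlates 1 with itself
```

Each such map is a fixed-size "image" regardless of segment length, which is what makes transfer learning from image-pretrained CNNs applicable.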
Affiliation(s)
- Fatma Özcan
  - Electrical & Electronics Engineering Department, Kahramanmaras Sutcu Imam University, Kahramanmaraş, Turkey
- Ahmet Alkan
  - Electrical & Electronics Engineering Department, Kahramanmaras Sutcu Imam University, Kahramanmaraş, Turkey
4
Angrick M, Luo S, Rabbani Q, Candrea DN, Shah S, Milsap GW, Anderson WS, Gordon CR, Rosenblatt KR, Clawson L, Tippett DC, Maragakis N, Tenore FV, Fifer MS, Hermansky H, Ramsey NF, Crone NE. Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS. Sci Rep 2024;14:9617. PMID: 38671062; PMCID: PMC11053081; DOI: 10.1038/s41598-024-60277-2.
Abstract
Brain-computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.
Affiliation(s)
- Miguel Angrick
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiyu Luo
  - Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
  - Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Daniel N Candrea
  - Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Samyak Shah
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Griffin W Milsap
  - Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- William S Anderson
  - Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Chad R Gordon
  - Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
  - Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Kathryn R Rosenblatt
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
  - Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lora Clawson
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Donna C Tippett
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
  - Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
  - Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicholas Maragakis
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francesco V Tenore
  - Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Matthew S Fifer
  - Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Hynek Hermansky
  - Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA
  - Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA
- Nick F Ramsey
  - UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
5
Card NS, Wairagkar M, Iacobacci C, Hou X, Singer-Clark T, Willett FR, Kunz EM, Fan C, Nia MV, Deo DR, Srinivasan A, Choi EY, Glasser MF, Hochberg LR, Henderson JM, Shahlaie K, Brandman DM, Stavisky SD. An accurate and rapidly calibrating speech neuroprosthesis. medRxiv [Preprint] 2024. PMID: 38645254; PMCID: PMC11030484; DOI: 10.1101/2023.12.26.23300110.
Abstract
Brain-computer interfaces can enable rapid, intuitive communication for people with paralysis by transforming the cortical activity associated with attempted speech into text on a computer screen. Despite recent advances, communication with brain-computer interfaces has been restricted by extensive training data requirements and inaccurate word output. A man in his 40s with ALS, tetraparesis, and severe dysarthria (ALSFRS-R = 23) was enrolled in the BrainGate2 clinical trial. He underwent surgical implantation of four microelectrode arrays into his left precentral gyrus, which recorded neural activity from 256 intracortical electrodes. We report a speech neuroprosthesis that decoded his neural activity as he attempted to speak in both prompted and unstructured conversational settings. Decoded words were displayed on a screen, then vocalized using text-to-speech software designed to sound like his pre-ALS voice. On the first day of system use, following 30 minutes of attempted speech training data, the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. On the second day, the size of the possible output vocabulary increased to 125,000 words, and, after 1.4 additional hours of training data, the neuroprosthesis achieved 90.2% accuracy. With further training data, the neuroprosthesis sustained 97.5% accuracy beyond eight months after surgical implantation. The participant has used the neuroprosthesis to communicate in self-paced conversations for over 248 hours. In an individual with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore naturalistic communication after a brief training period.
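The accuracy figures above are word-level: the fraction of decoded words that match the words the participant attempted to say. A trivial Python sketch with invented word lists makes the arithmetic explicit:

```python
# Hypothetical intended vs. decoded words; one error out of six.
intended = ["hello", "thirsty", "yes", "tired", "no", "help"]
decoded  = ["hello", "thirsty", "yes", "tired", "no", "hungry"]

accuracy = sum(i == d for i, d in zip(intended, decoded)) / len(intended)
print(f"{accuracy:.1%}")  # 83.3%
```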
Affiliation(s)
- Nicholas S Card
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
- Maitreyee Wairagkar
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
- Carrina Iacobacci
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
- Xianda Hou
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
  - Department of Computer Science, University of California Davis, Davis, CA, USA
- Tyler Singer-Clark
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
  - Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Francis R Willett
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Howard Hughes Medical Institute, Stanford University, Stanford, CA, USA
- Erin M Kunz
  - Department of Electrical Engineering, Stanford University, Stanford, CA, USA
  - Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
- Chaofei Fan
  - Department of Computer Science, Stanford University, Stanford, CA, USA
- Maryam Vahdati Nia
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
  - Department of Computer Science, University of California Davis, Davis, CA, USA
- Darrel R Deo
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Aparna Srinivasan
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
  - Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Eun Young Choi
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Matthew F Glasser
  - Departments of Radiology and Neuroscience, Washington University School of Medicine, Saint Louis, MO, USA
- Leigh R Hochberg
  - School of Engineering and Carney Institute for Brain Sciences, Brown University, Providence, RI, USA
  - VA RR&D Center for Neurorestoration and Neurotechnology, VA Providence Healthcare, Providence, RI, USA
  - Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jaimie M Henderson
  - Department of Neurosurgery, Stanford University, Stanford, CA, USA
  - Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Kiarash Shahlaie
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
- David M Brandman
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
- Sergey D Stavisky
  - Department of Neurological Surgery, University of California Davis, Davis, CA, USA
6
Chen J, Chen X, Wang R, Le C, Khalilian-Gourtani A, Jensen E, Dugan P, Doyle W, Devinsky O, Friedman D, Flinker A, Wang Y. Subject-Agnostic Transformer-Based Neural Speech Decoding from Surface and Depth Electrode Signals. bioRxiv [Preprint] 2024. PMID: 38559163; PMCID: PMC10980022; DOI: 10.1101/2024.03.11.584533.
Abstract
Objective. This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior work handles only electrodes on a 2D grid (i.e., an electrocorticographic, or ECoG, array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG, or sEEG) electrodes, allow training on data from multiple participants with large variability in electrode placements, and perform well on participants unseen during training. Approach. We propose a novel transformer-based model architecture, SwinTW, that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models using data from a single participant and multi-patient models exploiting data from multiple participants. Main results. Subject-specific models using only low-density 8x8 ECoG data achieved a high decoding Pearson correlation coefficient with the ground-truth spectrogram (PCC = 0.817) over N = 43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating the additional strip, depth, and grid electrodes available in each participant (N = 39) led to further improvement (PCC = 0.838). For participants with only sEEG electrodes (N = 9), subject-specific models still achieved comparable performance, with an average PCC = 0.798. The multi-subject models achieved high performance on unseen participants, with an average PCC = 0.765 in leave-one-out cross-validation. Significance. The proposed SwinTW decoder enables future speech neuroprostheses to use any electrode placement that is clinically optimal or feasible for a particular participant, including only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures. Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
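The evaluation metric used above, the Pearson correlation coefficient (PCC) between a decoded spectrogram and the ground truth, can be sketched in a few lines of Python. The spectrograms here are synthetic stand-ins (a noisy copy of a random "ground truth"), invented purely to show the computation.

```python
import numpy as np

def spectrogram_pcc(pred, truth):
    """Pearson correlation over all time-frequency bins of two spectrograms."""
    p, t = pred.ravel(), truth.ravel()
    p = p - p.mean()
    t = t - t.mean()
    return float((p @ t) / (np.linalg.norm(p) * np.linalg.norm(t)))

rng = np.random.default_rng(2)
truth = rng.standard_normal((128, 40))                # time x mel-bin stand-in
pred = truth + 0.5 * rng.standard_normal((128, 40))   # "decoded" noisy copy

print(round(spectrogram_pcc(pred, truth), 2))  # high but below 1.0
```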
Affiliation(s)
- Junbo Chen
  - Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
- Xupeng Chen
  - Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
- Ran Wang
  - Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
- Chenqian Le
  - Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
- A Khalilian-Gourtani
- Erika Jensen
  - Neurology Department, New York University, 223 East 34th Street, Manhattan, 10016, NY, USA
- Patricia Dugan
  - Neurology Department, New York University, 223 East 34th Street, Manhattan, 10016, NY, USA
- Werner Doyle
  - Neurosurgery Department, New York University, 550 1st Avenue, Manhattan, 10016, NY, USA
- Orrin Devinsky
  - Neurology Department, New York University, 223 East 34th Street, Manhattan, 10016, NY, USA
- Daniel Friedman
  - Neurology Department, New York University, 223 East 34th Street, Manhattan, 10016, NY, USA
- Adeen Flinker
  - Neurology Department, New York University, 223 East 34th Street, Manhattan, 10016, NY, USA
  - Biomedical Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
- Yao Wang
  - Electrical and Computer Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
  - Biomedical Engineering Department, New York University, 370 Jay Street, Brooklyn, 11201, NY, USA
7
Ju U, Wallraven C. Decoding the dynamic perception of risk and speed using naturalistic stimuli: A multivariate, whole-brain analysis. Hum Brain Mapp 2024;45:e26652. PMID: 38488473; PMCID: PMC10941534; DOI: 10.1002/hbm.26652.
Abstract
Time-resolved decoding of speed and risk perception in car driving is important for understanding the perceptual processes related to driving safety. In this study, we used an fMRI-compatible trackball with naturalistic stimuli to record dynamic ratings of perceived risk and speed and investigated the degree to which different brain regions were able to decode these. We presented participants with first-person perspective videos of cars racing on the same course. These videos varied in terms of subjectively perceived speed and risk profiles, as determined during a behavioral pilot. During the fMRI experiment, participants used the trackball to dynamically rate subjective risk in a first and speed in a second session and assessed overall risk and speed after watching each video. A standard multivariate correlation analysis based on these ratings revealed sparse decodability in visual areas only for the risk ratings. In contrast, the dynamic rating-based correlation analysis uncovered frontal, visual, and temporal region activation for subjective risk and dorsal visual stream and temporal region activation for subjectively perceived speed. Interestingly, further analyses showed that the brain regions for decoding risk changed over time, whereas those for decoding speed remained constant. Overall, our results demonstrate the advantages of time-resolved decoding to help our understanding of the dynamic networks associated with decoding risk and speed perception in realistic driving scenarios.
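The time-resolved (dynamic rating-based) analysis described above can be illustrated with a sliding-window correlation between a continuous rating trace and a region-of-interest time course, so that decodability can change over time. Everything below is synthetic Python, not the study's fMRI pipeline; the window length and the early-coupled/late-decoupled structure are invented for illustration.

```python
import numpy as np

def sliding_corr(a, b, win):
    """Pearson correlation of a and b inside each length-`win` sliding window."""
    out = []
    for t in range(len(a) - win + 1):
        x, y = a[t:t + win], b[t:t + win]
        x = x - x.mean()
        y = y - y.mean()
        out.append(float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))))
    return np.array(out)

rng = np.random.default_rng(3)
rating = rng.standard_normal(300)  # continuous risk-rating stand-in

# ROI time course: coupled to the rating early, pure noise late.
roi = np.concatenate([rating[:150] + rng.standard_normal(150),
                      rng.standard_normal(150)])

r = sliding_corr(rating, roi, win=50)
print(r.shape)  # (251,)
print(r[:100].mean() > r[-100:].mean())  # early windows correlate more strongly
```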
Affiliation(s)
- Uijong Ju
  - Department of Information Display, Kyung Hee University, Seoul, South Korea
- Christian Wallraven
  - Department of Brain and Cognitive Engineering, Korea University, South Korea
  - Department of Artificial Intelligence, Korea University, South Korea
8
Tao Q, Chao H, Fang D, Dou D. Progress in neurorehabilitation research and the support by the National Natural Science Foundation of China from 2010 to 2022. Neural Regen Res 2024;19:226-232. PMID: 37488871; PMCID: PMC10479845; DOI: 10.4103/1673-5374.375342.
Abstract
The National Natural Science Foundation of China (NSFC) is one of the major funding agencies for neurorehabilitation research in China. This study reviews the frontier directions and achievements in the field of neurorehabilitation in China and worldwide. We used data from the Web of Science Core Collection (WoSCC) database to analyze publications, and data provided by the NSFC to analyze funding information; in addition, the prospects for neurorehabilitation research in China are discussed. From 2010 to 2022, a total of 74,220 publications in neurorehabilitation were identified, with an overall upward trend. During this period, the NSFC funded 476 research projects with total funding of 192.38 million RMB to support neurorehabilitation research in China. With this support, China has made notable achievements in neurorehabilitation research, and the field is believed to be making steady and significant progress in China.
Affiliation(s)
- Qian Tao
  - School of Medicine, Jinan University, Guangzhou, Guangdong Province, China
  - School of Health and Life Science, University of Health and Rehabilitation Sciences, Qingdao, Shandong Province, China
  - Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
- Honglu Chao
  - Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
  - Department of Neurosurgery, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu Province, China
- Dong Fang
  - Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
- Dou Dou
  - Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
9
Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE Trans Cybern 2023;53:7469-7482. PMID: 36251899; DOI: 10.1109/tcyb.2022.3211694.
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been used to help patients regain motor function and have recently been validated for use by healthy people because of their ability to directly decipher human intentions. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication channel between humans and machines. In this study, neural language commands based on speech imagery were decoded directly from brain activity using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural language commands. We successfully demonstrated a BMI system that supports a variety of scenarios, such as essential activities, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.
10
Luo S, Angrick M, Coogan C, Candrea DN, Wyse-Sookoo K, Shah S, Rabbani Q, Milsap GW, Weiss AR, Anderson WS, Tippett DC, Maragakis NJ, Clawson LL, Vansteensel MJ, Wester BA, Tenore FV, Hermansky H, Fifer MS, Ramsey NF, Crone NE. Stable Decoding from a Speech BCI Enables Control for an Individual with ALS without Recalibration for 3 Months. Adv Sci (Weinh) 2023;10:e2304853. PMID: 37875404; PMCID: PMC10724434; DOI: 10.1002/advs.202304853.
Abstract
Brain-computer interfaces (BCIs) can be used to control assistive devices by patients with neurological disorders like amyotrophic lateral sclerosis (ALS) that limit speech and movement. For assistive control, it is desirable for BCI systems to be accurate and reliable, preferably with minimal setup time. In this study, a participant with severe dysarthria due to ALS operates computer applications with six intuitive speech commands via a chronic electrocorticographic (ECoG) implant over the ventral sensorimotor cortex. Speech commands are accurately detected and decoded (median accuracy: 90.59%) throughout a 3-month study period without model retraining or recalibration. Use of the BCI does not require exogenous timing cues, enabling the participant to issue self-paced commands at will. These results demonstrate that a chronically implanted ECoG-based speech BCI can reliably control assistive devices over long time periods with only initial model training and calibration, supporting the feasibility of unassisted home use.
Affiliation(s)
- Shiyu Luo
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Miguel Angrick
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Christopher Coogan
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Daniel N Candrea
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Kimberley Wyse-Sookoo
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Samyak Shah
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Qinwan Rabbani
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
  - Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD 21218, USA
- Griffin W Milsap
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723, USA
- Alexander R Weiss
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- William S Anderson
  - Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Donna C Tippett
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
  - Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Nicholas J Maragakis
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Lora L Clawson
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Mariska J Vansteensel
  - Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, Utrecht 3584, The Netherlands
- Brock A Wester
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723, USA
- Francesco V Tenore
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723, USA
- Hynek Hermansky
  - Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
  - Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD 21218, USA
- Matthew S Fifer
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723, USA
- Nick F Ramsey
  - Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, Utrecht 3584, The Netherlands
| | - Nathan E. Crone
- Department of NeurologyJohns Hopkins University School of MedicineBaltimoreMD21287USA
| |
Collapse
|
11
|
Duraivel S, Rahimpour S, Chiang CH, Trumpis M, Wang C, Barth K, Harward SC, Lad SP, Friedman AH, Southwell DG, Sinha SR, Viventi J, Cogan GB. High-resolution neural recordings improve the accuracy of speech decoding. Nat Commun 2023; 14:6938. PMID: 37932250; PMCID: PMC10628285; DOI: 10.1038/s41467-023-42555-1.
Abstract
Patients suffering from debilitating neurodegenerative diseases often lose the ability to communicate, detrimentally affecting their quality of life. One solution to restore communication is to decode signals directly from the brain to enable neural speech prostheses. However, decoding has been limited by coarse neural recordings that inadequately capture the rich spatio-temporal structure of human brain signals. To resolve this limitation, we performed high-resolution, micro-electrocorticographic (µECoG) neural recordings during intra-operative speech production. We obtained neural signals with 57× higher spatial resolution and 48% higher signal-to-noise ratio compared to macro-ECoG and SEEG. This increased signal quality improved decoding accuracy by 35% compared to standard intracranial signals. Accurate decoding depended on the high spatial resolution of the neural interface. Non-linear decoding models designed to utilize the enhanced spatio-temporal neural information outperformed linear techniques. We show that high-density µECoG can enable high-quality speech decoding for future neural speech prostheses.
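The advantage of non-linear over linear decoders reported above can be illustrated with a toy sketch (this is not the authors' pipeline; the data are synthetic and the interaction structure is an assumption chosen to make the point): when the class label depends on an interaction between channels, a linear decoder cannot separate the classes, but a small non-linear network can.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for spatio-temporal neural features (8 "channels").
# The label depends on an XOR-style interaction between channels 0 and 1,
# which no linear boundary can capture.
n = 800
X = rng.normal(size=(n, 8))
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Linear decoder: logistic regression on the raw features.
linear = LogisticRegression().fit(X_tr, y_tr)
lin_acc = linear.score(X_te, y_te)

# Non-linear decoder: a small multilayer perceptron.
nonlinear = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0).fit(X_tr, y_tr)
nonlin_acc = nonlinear.score(X_te, y_te)
```

On data like this the linear model hovers near chance while the non-linear model recovers the interaction, mirroring the qualitative result in the abstract.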
12
Wang J, Wang T, Liu H, Wang K, Moses K, Feng Z, Li P, Huang W. Flexible Electrodes for Brain-Computer Interface System. Adv Mater 2023; 35:e2211012. PMID: 37143288; DOI: 10.1002/adma.202211012.
Abstract
Brain-computer interfaces (BCIs) have been the subject of extensive research recently, with governments and companies investing substantially in related research and applications. BCIs benefit the restoration of communication and motor function, the treatment of psychological disorders, gaming, and other daily and therapeutic applications. Electrodes are fundamental to any BCI, since the detection and delivery of electrical brain activity depends on them. However, traditional rigid electrodes are limited by their mismatch with the Young's modulus of neural tissue, potential damage to the body, and a decline in signal quality over time. These factors make the development of flexible electrodes vital and urgent. Flexible electrodes made of soft materials have grown in popularity in recent years as an alternative to conventional rigid electrodes because they offer greater conformance, the potential for higher signal-to-noise ratio (SNR) recordings, and a wider range of applications. This paper therefore surveys the latest classifications and future directions for fabricating flexible electrodes, to accelerate their adoption in BCIs, and closes with perspectives and an outlook for this developing discipline.
13
Sankaran N, Moses D, Chiong W, Chang EF. Recommendations for promoting user agency in the design of speech neuroprostheses. Front Hum Neurosci 2023; 17:1298129. PMID: 37920562; PMCID: PMC10619159; DOI: 10.3389/fnhum.2023.1298129.
Abstract
Brain-computer interfaces (BCI) that directly decode speech from brain activity aim to restore communication in people with paralysis who cannot speak. Despite recent advances, neural inference of speech remains imperfect, limiting the ability for speech BCIs to enable experiences such as fluent conversation that promote agency - that is, the ability for users to author and transmit messages enacting their intentions. Here, we make recommendations for promoting agency based on existing and emerging strategies in neural engineering. The focus is on achieving fast, accurate, and reliable performance while ensuring volitional control over when a decoder is engaged, what exactly is decoded, and how messages are expressed. Additionally, alongside neuroscientific progress within controlled experimental settings, we argue that a parallel line of research must consider how to translate experimental successes into real-world environments. While such research will ultimately require input from prospective users, here we identify and describe design choices inspired by human-factors work conducted in existing fields of assistive technology, which address practical issues likely to emerge in future real-world speech BCI applications.
14
Meier A, Kuzdeba S, Jackson L, Daliri A, Tourville JA, Guenther FH, Greenlee JDW. Lateralization and Time-Course of Cortical Phonological Representations during Syllable Production. eNeuro 2023; 10:ENEURO.0474-22.2023. PMID: 37739786; PMCID: PMC10561542; DOI: 10.1523/eneuro.0474-22.2023.
Abstract
Spoken language contains information at a broad range of timescales, from phonetic distinctions on the order of milliseconds to semantic contexts which shift over seconds to minutes. It is not well understood how the brain's speech production systems combine features at these timescales into a coherent vocal output. We investigated the spatial and temporal representations in cerebral cortex of three phonological units with different durations: consonants, vowels, and syllables. Electrocorticography (ECoG) recordings were obtained from five participants while speaking single syllables. We developed a novel clustering and Kalman filter-based trend analysis procedure to sort electrodes into temporal response profiles. A linear discriminant classifier was used to determine how strongly each electrode's response encoded phonological features. We found distinct time-courses of encoding phonological units depending on their duration: consonants were represented more during speech preparation, vowels were represented evenly throughout trials, and syllables during production. Locations of strongly speech-encoding electrodes (the top 30% of electrodes) likewise depended on phonological element duration, with consonant-encoding electrodes left-lateralized, vowel-encoding hemispherically balanced, and syllable-encoding right-lateralized. The lateralization of speech-encoding electrodes depended on onset time, with electrodes active before or after speech production favoring left hemisphere and those active during speech favoring the right. Single-electrode speech classification revealed cortical areas with preferential encoding of particular phonemic elements, including consonant encoding in the left precentral and postcentral gyri and syllable encoding in the right middle frontal gyrus. Our findings support neurolinguistic theories of left hemisphere specialization for processing short-timescale linguistic units and right hemisphere processing of longer-duration units.
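The per-electrode encoding analysis described above can be sketched as follows (an illustrative reconstruction, not the authors' code; the trial counts, electrode counts, and injected effect are synthetic assumptions): score each electrode with a cross-validated linear discriminant classifier, then keep the top 30% as "speech-encoding" electrodes.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for ECoG features: 120 syllable trials x 16 electrodes,
# one response feature (e.g. high-gamma power) per electrode per trial.
n_trials, n_electrodes = 120, 16
labels = rng.integers(0, 3, size=n_trials)  # e.g. 3 consonant classes
features = rng.normal(size=(n_trials, n_electrodes))

# Inject class information into a few "speech-encoding" electrodes.
for e in (2, 7, 11):
    features[:, e] += 0.9 * labels

# Cross-validated LDA accuracy per electrode, a proxy for how strongly that
# electrode's response encodes the phonological label.
scores = np.array([
    cross_val_score(LinearDiscriminantAnalysis(),
                    features[:, [e]], labels, cv=5).mean()
    for e in range(n_electrodes)
])

# Keep the top 30% of electrodes, mirroring the selection rule in the abstract.
n_top = int(np.ceil(0.3 * n_electrodes))
top_electrodes = np.argsort(scores)[::-1][:n_top]
```

The resulting `top_electrodes` set is the analogue of the "top 30% of electrodes" whose anatomical locations the study then compares across hemispheres.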
15
Berezutskaya J, Freudenburg ZV, Vansteensel MJ, Aarnoutse EJ, Ramsey NF, van Gerven MAJ. Direct speech reconstruction from sensorimotor brain activity with optimized deep learning models. J Neural Eng 2023; 20:056010. PMID: 37467739; PMCID: PMC10510111; DOI: 10.1088/1741-2552/ace8be.
Abstract
Objective. Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. Approach. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. Main results. We show that (1) dedicated machine learning optimization of reconstruction models is key for achieving the best reconstruction performance; (2) individual word decoding in reconstructed speech achieves 92%-100% accuracy (chance level is 8%); (3) direct reconstruction from sensorimotor brain activity produces intelligible speech. Significance. These results underline the need for model optimization in achieving the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex can offer for the development of next-generation BCI technology for communication.
16
Merk T, Köhler R, Peterson V, Lyra L, Vanhoecke J, Chikermane M, Binns T, Li N, Walton A, Bush A, Sisterson N, Busch J, Lofredi R, Habets J, Huebl J, Zhu G, Yin Z, Zhao B, Merkl A, Bajbouj M, Krause P, Faust K, Schneider GH, Horn A, Zhang J, Kühn A, Richardson RM, Neumann WJ. Invasive neurophysiology and whole brain connectomics for neural decoding in patients with brain implants. Res Sq 2023 (preprint): rs.3.rs-3212709. PMID: 37790428; PMCID: PMC10543023; DOI: 10.21203/rs.3.rs-3212709/v1.
Abstract
Brain computer interfaces (BCI) provide unprecedented spatiotemporal precision that will enable significant expansion in how numerous brain disorders are treated. Decoding dynamic patient states from brain signals with machine learning is required to leverage this precision, but a standardized framework for identifying and advancing novel clinical BCI approaches does not exist. Here, we developed a platform that integrates brain signal decoding with connectomics and demonstrate its utility across 123 hours of invasively recorded brain data from 73 neurosurgical patients treated for movement disorders, depression and epilepsy. First, we introduce connectomics-informed movement decoders that generalize across cohorts with Parkinson's disease and epilepsy from the US, Europe and China. Next, we reveal network targets for emotion decoding in left prefrontal and cingulate circuits in DBS patients with major depression. Finally, we showcase opportunities to improve seizure detection in responsive neurostimulation for epilepsy. Our platform provides rapid, high-accuracy decoding for precision medicine approaches that can dynamically adapt neuromodulation therapies in response to the individual needs of patients.
17
Xiao J, Provenza NR, Asfouri J, Myers J, Mathura RK, Metzger B, Adkinson JA, Allawala AB, Pirtle V, Oswalt D, Shofty B, Robinson ME, Mathew SJ, Goodman WK, Pouratian N, Schrater PR, Patel AB, Tolias AS, Bijanki KR, Pitkow X, Sheth SA. Decoding Depression Severity From Intracranial Neural Activity. Biol Psychiatry 2023; 94:445-453. PMID: 36736418; PMCID: PMC10394110; DOI: 10.1016/j.biopsych.2023.01.020.
Abstract
BACKGROUND: Disorders of mood and cognition are prevalent, disabling, and notoriously difficult to treat. Fueling this challenge is a significant gap in our understanding of their neurophysiological basis. METHODS: We recorded high-density neural activity from intracranial electrodes implanted in depression-relevant prefrontal cortical regions in 3 human subjects with severe depression. Neural recordings were labeled with depression severity scores across a wide dynamic range using an adaptive assessment that allowed sampling at a temporal frequency greater than that possible with typical rating scales. We modeled these data using regularized regression techniques with region selection to decode depression severity from the prefrontal recordings. RESULTS: Across prefrontal regions, we found that reduced depression severity is associated with decreased low-frequency neural activity and increased high-frequency activity. When constraining our model to decode using a single region, spectral changes in the anterior cingulate cortex best predicted depression severity in all 3 subjects. Relaxing this constraint revealed unique, individual-specific sets of spatiospectral features predictive of symptom severity, reflecting the heterogeneous nature of depression. CONCLUSIONS: The ability to decode depression severity from neural activity increases our fundamental understanding of how depression manifests in the human brain and provides a target neural signature for personalized neuromodulation therapies.
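The "regularized regression with region selection" framing above can be sketched with a sparse linear model on band-power features (an illustrative reconstruction only, not the authors' model; the region and band names, feature layout, and injected effects are synthetic assumptions): L1 regularization zeroes out uninformative spatiospectral features, and the surviving coefficients identify which region-band pairs predict severity.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# Synthetic stand-in: 200 labeled recordings, band power in 5 bands x 4
# prefrontal regions = 20 spatiospectral features.
bands = ["delta", "theta", "alpha", "beta", "gamma"]
regions = ["ACC", "dlPFC", "OFC", "vmPFC"]  # column k = region k//5, band k%5
n_samples = 200
X = rng.normal(size=(n_samples, len(bands) * len(regions)))

# Mimic the reported direction of effect: higher severity goes with more
# low-frequency and less high-frequency activity (two ACC features here).
severity = (10.0 + 3.0 * X[:, 0]            # ACC delta, positive weight
            - 3.0 * X[:, 4]                 # ACC gamma, negative weight
            + rng.normal(scale=1.0, size=n_samples))

# L1-regularized regression with cross-validated penalty strength; the
# sparsity pattern acts as an automatic region/band selector.
model = LassoCV(cv=5, random_state=0).fit(X, severity)
selected = [(regions[k // len(bands)], bands[k % len(bands)])
            for k in np.flatnonzero(model.coef_)]
```

With this construction the model keeps the two informative ACC features with the expected signs, analogous to recovering a subject-specific set of predictive spatiospectral features.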
18
Straiton J. I know what you're thinking; can neuroimaging truly reveal our innermost thoughts? Biotechniques 2023; 75:81-83. PMID: 37622332; DOI: 10.2144/btn-2023-0066.
Abstract
Advances in neuroimaging, combined with developments in artificial intelligence software, have allowed researchers to noninvasively decode the brain and 'read the mind'.
19
Schroeder ML, Sherafati A, Ulbrich RL, Wheelock MD, Svoboda AM, Klein ED, George TG, Tripathy K, Culver JP, Eggebrecht AT. Mapping cortical activations underlying covert and overt language production using high-density diffuse optical tomography. Neuroimage 2023; 276:120190. PMID: 37245559; PMCID: PMC10760405; DOI: 10.1016/j.neuroimage.2023.120190.
Abstract
Gold standard neuroimaging modalities such as functional magnetic resonance imaging (fMRI), positron emission tomography (PET), and more recently electrocorticography (ECoG) have provided profound insights regarding the neural mechanisms underlying the processing of language, but they are limited in applications involving naturalistic language production, especially in developing brains, during face-to-face dialogues, or as a brain-computer interface. High-density diffuse optical tomography (HD-DOT) provides high-fidelity mapping of human brain function with spatial resolution comparable to that of fMRI but in a silent and open scanning environment similar to real-life social scenarios. Therefore, HD-DOT has potential to be used in naturalistic settings where other neuroimaging modalities are limited. While HD-DOT has been previously validated against fMRI for mapping the neural correlates underlying language comprehension and covert (i.e., "silent") language production, HD-DOT has not yet been established for mapping the cortical responses to overt (i.e., "out loud") language production. In this study, we assessed the brain regions supporting a simple hierarchy of language tasks: silent reading of single words, covert production of verbs, and overt production of verbs in normal-hearing, right-handed native English speakers (n = 33). First, we found that HD-DOT brain mapping is resilient to movement associated with overt speaking. Second, we observed that HD-DOT is sensitive to key activations and deactivations in brain function underlying the perception and naturalistic production of language. Specifically, statistically significant results were observed that show recruitment of regions in occipital, temporal, motor, and prefrontal cortices across all three tasks after performing stringent cluster-extent based thresholding. Our findings lay the foundation for future HD-DOT studies of imaging naturalistic language comprehension and production during real-life social interactions and for broader applications such as presurgical language assessment and brain-machine interfaces.
Collapse
Affiliation(s)
- Mariel L Schroeder
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Arefeh Sherafati
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Rachel L Ulbrich
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; University of Missouri School of Medicine, Columbia, MO, USA
- Muriah D Wheelock
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA
- Alexandra M Svoboda
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; University of Cincinnati Medical Center, Cincinnati, OH, USA
- Emma D Klein
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Tessa G George
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA
- Kalyan Tripathy
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Washington University School of Medicine, St Louis, MO, USA
- Joseph P Culver
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Division of Biology & Biomedical Sciences, Washington University School of Medicine, St Louis, MO, USA; Department of Physics, Washington University in St. Louis, St Louis, MO, USA; Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO, USA
- Adam T Eggebrecht
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO, USA; Division of Biology & Biomedical Sciences, Washington University School of Medicine, St Louis, MO, USA; Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO, USA
20
Angrick M, Luo S, Rabbani Q, Candrea DN, Shah S, Milsap GW, Anderson WS, Gordon CR, Rosenblatt KR, Clawson L, Maragakis N, Tenore FV, Fifer MS, Hermansky H, Ramsey NF, Crone NE. Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS. medRxiv 2023:2023.06.30.23291352. PMID: 37425721; PMCID: PMC10327279; DOI: 10.1101/2023.06.30.23291352.
Abstract
Recent studies have shown that speech can be reconstructed and synthesized using only brain activity recorded with intracranial electrodes, but until now this has only been done through retrospective analyses of recordings from able-bodied patients temporarily implanted with electrodes for epilepsy surgery. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a clinical trial participant (ClinicalTrials.gov, NCT03567213) with dysarthria due to amyotrophic lateral sclerosis (ALS). We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the user from a vocabulary of six keywords originally designed to allow intuitive selection of items on a communication board. Our results show, for the first time, that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words that are intelligible to human listeners while preserving the participant's voice profile.
Affiliation(s)
- Miguel Angrick
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Daniel N Candrea
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Samyak Shah
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Griffin W Milsap
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- William S Anderson
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Chad R Gordon
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Kathryn R Rosenblatt
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lora Clawson
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicholas Maragakis
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francesco V Tenore
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Matthew S Fifer
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Hynek Hermansky
- Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA
- Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA
- Nick F Ramsey
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
21
Le Godais G, Roussel P, Bocquelet F, Aubert M, Kahane P, Chabardès S, Yvert B. Overt speech decoding from cortical activity: a comparison of different linear methods. Front Hum Neurosci 2023; 17:1124065. PMID: 37425292; PMCID: PMC10326283; DOI: 10.3389/fnhum.2023.1124065.
Abstract
Introduction: Speech BCIs aim to reconstruct speech in real time from ongoing cortical activity. An ideal BCI would need to reconstruct the speech audio signal frame by frame on a millisecond timescale, which requires fast computation. In this respect, linear decoders are good candidates and have been widely used in motor BCIs. Yet they have seldom been studied for speech reconstruction, and never for reconstruction of articulatory movements from intracranial activity. Here, we compared vanilla linear regression, ridge-regularized linear regression, and partial least squares regression for offline decoding of overt speech from cortical activity. Methods: Two decoding paradigms were investigated: (1) direct decoding of acoustic vocoder features of speech, and (2) indirect decoding of vocoder features through an intermediate articulatory representation chained with a real-time-compatible DNN-based articulatory-to-acoustic synthesizer. The participant's articulatory trajectories were estimated from an electromagnetic-articulography dataset using dynamic time warping. Decoder accuracy was evaluated by computing correlations between original and reconstructed features. Results: All linear methods achieved similar performance, well above chance levels, albeit without reaching intelligibility. Direct and indirect methods achieved comparable performance, with an advantage for direct decoding. Discussion: Future work will address the development of an improved neural speech decoder compatible with fast frame-by-frame speech reconstruction from ongoing activity at a millisecond timescale.
Affiliation(s)
- Gaël Le Godais
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- Philémon Roussel
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- Florent Bocquelet
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- Marc Aubert
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- Philippe Kahane
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- CHU Grenoble Alpes, Department of Neurology, Grenoble, France
- Stéphan Chabardès
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Clinatec, Grenoble, France
- Blaise Yvert
- Univ. Grenoble Alpes, INSERM, U1216, Grenoble Institut Neurosciences, Grenoble, France
22
Sen O, Sheehan AM, Raman PR, Khara KS, Khalifa A, Chatterjee B. Machine-Learning Methods for Speech and Handwriting Detection Using Neural Signals: A Review. Sensors (Basel) 2023; 23:5575. PMID: 37420741; DOI: 10.3390/s23125575.
Abstract
Brain-Computer Interfaces (BCIs) have become increasingly popular in recent years due to their potential applications in diverse fields, ranging from the medical sector (people with motor and/or communication disabilities) to cognitive training, gaming, and Augmented Reality/Virtual Reality (AR/VR). BCIs that can decode and recognize neural signals involved in speech and handwriting have the potential to greatly assist individuals with severe motor impairments in their communication and interaction needs, and cutting-edge advancements in this field could yield a highly accessible and interactive communication platform for these people. The purpose of this review is to analyze the existing research on handwriting and speech recognition from neural signals, so that new researchers interested in this field can gain a thorough grounding in the area. The current research on neural-signal-based recognition of handwriting and speech falls into two main types: invasive and non-invasive studies. We examine the latest papers on converting speech-activity-based and handwriting-activity-based neural signals into text, and discuss the methods used to extract data from the brain. Additionally, this review briefly summarizes the datasets, preprocessing techniques, and methods used in these studies, which were published between 2014 and 2022. In essence, this article is intended to serve as a valuable resource for future researchers who wish to investigate neural-signal-based machine-learning methods in their work.
Affiliation(s)
- Ovishake Sen
- Department of ECE, University of Florida, Gainesville, FL 32611, USA
- Anna M Sheehan
- Department of ECE, University of Florida, Gainesville, FL 32611, USA
- Pranay R Raman
- Department of ECE, University of Florida, Gainesville, FL 32611, USA
- Kabir S Khara
- Department of ECE, University of Florida, Gainesville, FL 32611, USA
- Adam Khalifa
- Department of ECE, University of Florida, Gainesville, FL 32611, USA
23
Liu Y, Zhao Z, Xu M, Yu H, Zhu Y, Zhang J, Bu L, Zhang X, Lu J, Li Y, Ming D, Wu J. Decoding and synthesizing tonal language speech from brain activity. Sci Adv 2023; 9:eadh0478. PMID: 37294753; PMCID: PMC10256166; DOI: 10.1126/sciadv.adh0478.
Abstract
Recent studies have shown the feasibility of speech brain-computer interfaces (BCIs) as a clinically valid treatment for helping patients with communication disorders who speak nontonal languages restore their speech ability. A tonal-language speech BCI, however, is challenging because it additionally requires precise control of laryngeal movements to produce lexical tones, so the model should emphasize features from the tone-related cortex. Here, we designed a modularized multistream neural network that directly synthesizes tonal language speech from intracranial recordings. The network decoded lexical tones and base syllables independently via parallel streams of neural network modules inspired by neuroscience findings. Speech was synthesized by combining tonal syllable labels with nondiscriminant speech neural activity. Compared to commonly used baseline models, our proposed models achieved higher performance with modest training data and computational costs. These findings suggest a potential strategy for approaching tonal language speech restoration.
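The parallel-stream idea can be illustrated schematically: decode tone and base syllable with two independent models, then combine their outputs into a tonal-syllable label. The sketch below uses synthetic features and off-the-shelf logistic regression rather than the authors' neural network modules; the feature split, tone/syllable inventories, and dimensions are invented for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

tones = ["1", "2", "3", "4"]      # four Mandarin lexical tones
syllables = ["ba", "ma", "da"]    # toy base-syllable inventory

# Synthetic "neural" features: dims 0-7 stand in for tone-related
# (laryngeal) channels, dims 8-15 for syllable-related channels.
n = 600
tone_id = rng.integers(0, len(tones), n)
syll_id = rng.integers(0, len(syllables), n)
X = rng.standard_normal((n, 16)) * 0.3
X[np.arange(n), tone_id] += 2.0        # tone cue in dims 0-3
X[np.arange(n), 8 + syll_id] += 2.0    # syllable cue in dims 8-10

X_tr, X_te = X[:500], X[500:]

# Two parallel decoding streams, trained independently.
tone_clf = LogisticRegression(max_iter=1000).fit(X_tr[:, :8], tone_id[:500])
syll_clf = LogisticRegression(max_iter=1000).fit(X_tr[:, 8:], syll_id[:500])

# Combine the streams into tonal-syllable labels, e.g. "ma3".
pred = [syllables[s] + tones[t]
        for s, t in zip(syll_clf.predict(X_te[:, 8:]),
                        tone_clf.predict(X_te[:, :8]))]
truth = [syllables[s] + tones[t]
         for s, t in zip(syll_id[500:], tone_id[500:])]
acc = float(np.mean([p == g for p, g in zip(pred, truth)]))
print(f"tonal-syllable accuracy: {acc:.2f}")
```

The design point is the factorization itself: decoding tone and syllable separately shrinks each stream's label space (here 4 + 3 classes instead of 12 joint classes), which is one way modest training data can stretch further.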
Affiliation(s)
- Yan Liu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Zehao Zhao
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Minpeng Xu
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300041, China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300041, China
- Haiqing Yu
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300041, China
- Yanming Zhu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Jie Zhang
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Linghao Bu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Department of Neurosurgery, First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou 310000, China
- Xiaoluo Zhang
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- Junfeng Lu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
- MOE Frontiers Center for Brain Science, Huashan Hospital, Fudan University, Shanghai 200040, China
- Yuanning Li
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Dong Ming
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300041, China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300041, China
- Jinsong Wu
- Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai 200040, China
- National Center for Neurological Disorders, Shanghai 200052, China
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai 200040, China
- Neurosurgical Institute of Fudan University, Shanghai 200052, China
24
Grinschgl S, Berdnik AL, Stehling E, Hofer G, Neubauer AC. Who Wants to Enhance Their Cognitive Abilities? Potential Predictors of the Acceptance of Cognitive Enhancement. J Intell 2023; 11:109. PMID: 37367511; DOI: 10.3390/jintelligence11060109.
Abstract
With advances in new technologies, cognitive enhancement has been at the center of public debate in recent years. Various enhancement methods (e.g., brain stimulation, smart drugs, or working memory training) promise improvements in cognitive abilities such as intelligence and memory. Although these methods have been rather ineffective so far, they are largely available to the general public and can be applied individually. As applying enhancement might carry certain risks, it is important to understand which individuals seek to enhance themselves. For instance, individuals' intelligence, personality, and interests might predict their willingness to get enhanced. Thus, in a preregistered study, we asked 257 participants about their acceptance of various enhancement methods and tested predictors thereof, such as participants' psychometrically measured and self-estimated intelligence. While measured and self-estimated intelligence, as well as participants' implicit beliefs about intelligence, did not predict acceptance of enhancement, a younger age, higher interest in science fiction, and (partially) higher openness as well as lower conscientiousness did. Thus, certain interests and personality traits might contribute to the willingness to enhance one's cognition. Finally, we discuss the need for replication and argue for testing other potential predictors of the acceptance of cognitive enhancement.
Affiliation(s)
- Gabriela Hofer
- Institute of Psychology, University of Graz, 8010 Graz, Austria
25
Wu M, Yao K, Huang N, Li H, Zhou J, Shi R, Li J, Huang X, Li J, Jia H, Gao Z, Wong TH, Li D, Hou S, Liu Y, Zhang S, Song E, Yu J, Yu X. Ultrathin, Soft, Bioresorbable Organic Electrochemical Transistors for Transient Spatiotemporal Mapping of Brain Activity. Adv Sci (Weinh) 2023; 10:e2300504. PMID: 36825679; DOI: 10.1002/advs.202300504.
Abstract
A critical challenge lies in developing next-generation neural interfaces that are mechanically tissue-compatible, offer accurate, transient recording of electrophysiological (EP) information, and degrade autonomously after a period of stable operation. Here, an ultrathin, lightweight, soft, multichannel neural interface is presented based on an organic electrochemical transistor (OECT) network, capable of continuous high-fidelity mapping of neural signals and of biosafe, active degradation after performing its functions. The platform yields a high spatiotemporal resolution of 1.42 ms and 20 µm, with a signal-to-noise ratio of up to ≈37 dB. The implantable OECT arrays establish stable, functional neural interfaces and are designed as fully biodegradable electronic platforms in vivo. Demonstrated applications of these OECT implants include real-time monitoring of electrical activity from the cortical surface of rats under various conditions (e.g., narcosis, epileptic seizure, and electric stimuli) and electrocorticography mapping from 100 channels. This technology offers general applicability in neural interfaces, with great potential utility in the treatment and diagnosis of neurological disorders.
Affiliation(s)
- Mengge Wu
- State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, 610054, P. R. China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Shanghai Frontiers Science Research Base of Intelligent Optoelectronics and Perception, Institute of Optoelectronics, Fudan University, Shanghai, 200433, P. R. China
- Kuanming Yao
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Ningge Huang
- Shanghai Frontiers Science Research Base of Intelligent Optoelectronics and Perception, Institute of Optoelectronics, Fudan University, Shanghai, 200433, P. R. China
- Hu Li
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Jingkun Zhou
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
- Rui Shi
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Jiyu Li
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
- Xingcan Huang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Jian Li
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
- Huiling Jia
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
- Zhan Gao
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Tsz Hung Wong
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Dengfeng Li
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
- Sihui Hou
- State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, 610054, P. R. China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Yiming Liu
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Shiming Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, SAR, P. R. China
- Enming Song
- Shanghai Frontiers Science Research Base of Intelligent Optoelectronics and Perception, Institute of Optoelectronics, Fudan University, Shanghai, 200433, P. R. China
- Junsheng Yu
- State Key Laboratory of Electronic Thin Films and Integrated Devices, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, 610054, P. R. China
- Xinge Yu
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, P. R. China
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering, Hong Kong Science Park, New Territories, Hong Kong, P. R. China
26
Soroush PZ, Herff C, Ries SK, Shih JJ, Schultz T, Krusienski DJ. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. Neuroimage 2023; 269:119913. PMID: 36731812; DOI: 10.1016/j.neuroimage.2023.119913.
Abstract
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels for the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech decoding counterparts.
27
Lavazza A, Giorgi R. Philosophical foundation of the right to mental integrity in the age of neurotechnologies. Neuroethics 2023. DOI: 10.1007/s12152-023-09517-2.
Abstract
Neurotechnologies broadly understood are tools that have the capability to read, record and modify our mental activity by acting on its brain correlates. The emergence of increasingly powerful and sophisticated techniques has given rise to the proposal to introduce new rights specifically directed to protect mental privacy, freedom of thought, and mental integrity. These rights, also proposed as basic human rights, are conceived in direct relation to tools that threaten mental privacy, freedom of thought, mental integrity, and personal identity. In this paper, our goal is to give a philosophical foundation to a specific right that we will call the right to mental integrity. It encapsulates both the classical concepts of privacy and non-interference in our mind/brain. Such a philosophical foundation refers to certain features of the mind that hitherto could not be reached directly from the outside: intentionality, first-person perspective, personal autonomy in moral choices and in the construction of one's narrative, and relational identity. A variety of neurotechnologies or other tools, including artificial intelligence, alone or in combination can, by their very availability, threaten our mental integrity. Therefore, it is necessary to posit a specific right and provide it with a theoretical foundation and justification. It will be up to a subsequent treatment to define the moral and legal boundaries of such a right and its application.
28
Branco MP, Geukes SH, Aarnoutse EJ, Ramsey NF, Vansteensel MJ. Nine decades of electrocorticography: A comparison between epidural and subdural recordings. Eur J Neurosci 2023; 57:1260-1288. PMID: 36843389; DOI: 10.1111/ejn.15941.
Abstract
In recent years, electrocorticography (ECoG) has arisen as a neural signal recording tool in the development of clinically viable neural interfaces. ECoG electrodes are generally placed below the dura mater (subdural) but can also be placed on top of the dura (epidural). In deciding which of these modalities best suits long-term implants, complications and signal quality are important considerations. Conceptually, epidural placement may present a lower risk of complications, as the dura is left intact, but also a lower signal quality, due to the dura acting as a signal attenuator. The extent to which complications and signal quality are affected by the dura, however, has been a matter of debate. To improve our understanding of the effects of the dura on complications and signal quality, we conducted a literature review. We inventoried the effect of the dura on signal quality, decodability and longevity of acute and chronic ECoG recordings in humans and non-human primates, and compared the incidence and nature of serious complications in studies that employed epidural and subdural ECoG. Overall, we found that, even though epidural recordings exhibit attenuated signal amplitude compared with subdural recordings, particularly for high-density grids, the decodability of epidurally recorded signals does not seem to be markedly affected. Additionally, the nature of serious complications was comparable between epidural and subdural recordings. These results indicate that both epidural and subdural ECoG may be suited for long-term neural signal recordings, at least for current generations of clinical and high-density ECoG grids.
Affiliation(s)
- Mariana P Branco
- Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Simon H Geukes
- Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Erik J Aarnoutse
- Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Nick F Ramsey
- Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Mariska J Vansteensel
- Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
29
Vansteensel MJ, Klein E, van Thiel G, Gaytant M, Simmons Z, Wolpaw JR, Vaughan TM. Towards clinical application of implantable brain-computer interfaces for people with late-stage ALS: medical and ethical considerations. J Neurol 2023; 270:1323-1336. PMID: 36450968; DOI: 10.1007/s00415-022-11464-6.
Abstract
Individuals with amyotrophic lateral sclerosis (ALS) frequently develop speech and communication problems in the course of their disease. Currently available augmentative and alternative communication technologies do not present a solution for many people with advanced ALS, because these devices depend on residual and reliable motor activity. Brain-computer interfaces (BCIs) use neural signals for computer control and may allow people with late-stage ALS to communicate even when conventional technology falls short. Recent years have witnessed fast progression in the development and validation of implanted BCIs, which place neural signal recording electrodes in or on the cortex. Eventual widespread clinical application of implanted BCIs as an assistive communication technology for people with ALS will have significant consequences for their daily life, as well as for the clinical management of the disease, in part because of the potential interaction between the BCI and other procedures people with ALS undergo, such as tracheostomy. This article aims to facilitate responsible real-world implementation of implanted BCIs. We review the state of the art of research on implanted BCIs for communication, as well as the medical and ethical implications of the clinical application of this technology. We conclude that the contribution of all BCI stakeholders, including clinicians of the various ALS-related disciplines, will be needed to develop procedures for, and shape the process of, the responsible clinical application of implanted BCIs.
30
Olson JA, Cyr M, Artenie DZ, Strandberg T, Hall L, Tompkins ML, Raz A, Johansson P. Emulating future neurotechnology using magic. Conscious Cogn 2023; 107:103450. [PMID: 36566673 DOI: 10.1016/j.concog.2022.103450]
Abstract
Recent developments in neuroscience and artificial intelligence have allowed machines to decode mental processes with growing accuracy. Neuroethicists have speculated that perfecting these technologies may result in reactions ranging from an invasion of privacy to an increase in self-understanding. Yet, evaluating these predictions is difficult given that people are poor at forecasting their reactions. To address this, we developed a paradigm using elements of performance magic to emulate future neurotechnologies. We led 59 participants to believe that a (sham) neurotechnological machine could infer their preferences, detect their errors, and reveal their deep-seated attitudes. The machine gave participants randomly assigned positive or negative feedback about their brain's supposed attitudes towards charity. Around 80% of participants in both groups provided rationalisations for this feedback, which shifted their attitudes in the manipulated direction but did not influence donation behaviour. Our paradigm reveals how people may respond to prospective neurotechnologies, which may inform neuroethical frameworks.
Affiliation(s)
- Jay A Olson, Department of Psychology, McGill University, 2001 McGill College Ave., Montreal, QC H3A 1G1, Canada
- Mariève Cyr, Faculty of Medicine and Health Sciences, McGill University, 3605 De la Montagne St., Montreal, QC H3G 2M1, Canada
- Despina Z Artenie, Department of Psychology, Université du Québec à Montréal, 100 Sherbrooke St. W., Montreal, QC H2X 3P2, Canada
- Thomas Strandberg, Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Lars Hall, Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Matthew L Tompkins, Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
- Amir Raz, Institute for Interdisciplinary Behavioral and Brain Sciences, Chapman University, 9401 Jeronimo Road, Irvine, CA 92618, USA
- Petter Johansson, Lund University Cognitive Science, Lund University, Box 192, S-221 00, Lund, Sweden
31
Metzger SL, Liu JR, Moses DA, Dougherty ME, Seaton MP, Littlejohn KT, Chartier J, Anumanchipalli GK, Tu-Chan A, Ganguly K, Chang EF. Generalizable spelling using a speech neuroprosthesis in an individual with severe limb and vocal paralysis. Nat Commun 2022; 13:6510. [PMID: 36347863 PMCID: PMC9643551 DOI: 10.1038/s41467-022-33611-3]
Abstract
Neuroprostheses have the potential to restore communication to people who cannot speak or type due to paralysis. However, it is unclear if silent attempts to speak can be used to control a communication neuroprosthesis. Here, we translated direct cortical signals in a clinical-trial participant (ClinicalTrials.gov; NCT03698149) with severe limb and vocal-tract paralysis into single letters to spell out full sentences in real time. We used deep-learning and language-modeling techniques to decode letter sequences as the participant attempted to silently spell using code words that represented the 26 English letters (e.g. "alpha" for "a"). We leveraged broad electrode coverage beyond speech-motor cortex to include supplemental control signals from hand cortex and complementary information from low- and high-frequency signal components to improve decoding accuracy. We decoded sentences using words from a 1,152-word vocabulary at a median character error rate of 6.13% and speed of 29.4 characters per minute. In offline simulations, we showed that our approach generalized to large vocabularies containing over 9,000 words (median character error rate of 8.23%). These results illustrate the clinical viability of a silently controlled speech neuroprosthesis to generate sentences from a large vocabulary through a spelling-based approach, complementing previous demonstrations of direct full-word decoding.
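The character error rates quoted above are conventionally computed as the Levenshtein (edit) distance between the decoded sentence and the reference sentence, normalized by the reference length. A minimal sketch of that convention (the function name is illustrative; this does not reproduce the study's own evaluation code):

```python
def char_error_rate(reference: str, decoded: str) -> float:
    """Levenshtein edit distance between strings, normalized by reference length."""
    m, n = len(reference), len(decoded)
    # prev[j] holds the edit distance between reference[:i-1] and decoded[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == decoded[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else 0.0
```

A rate of 6.13% thus means roughly six character-level edits per hundred reference characters.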
Affiliation(s)
- Sean L. Metzger, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- Jessie R. Liu, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
- David A. Moses, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA
- Maximilian E. Dougherty, Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Margaret P. Seaton, Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Kaylo T. Littlejohn, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
- Josh Chartier, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA
- Gopala K. Anumanchipalli, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
- Adelyn Tu-Chan, Department of Neurology, University of California, San Francisco, CA, USA
- Karunesh Ganguly, Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, CA, USA
- Edward F. Chang, Department of Neurological Surgery, University of California, San Francisco, CA, USA; Weill Institute for Neuroscience, University of California, San Francisco, CA, USA; University of California, Berkeley - University of California, San Francisco Graduate Program in Bioengineering, Berkeley, CA, USA
32
Abdelnaby R, Amer SA, Mekky J, Mohamed K, Dardeer K, Hassan W, Alafandi B, Elsayed M. Brain Chip Implant: Public's Knowledge, Attitude, and Determinants. A Multi-Country Study, 2021. Open Access Maced J Med Sci 2022. [DOI: 10.3889/oamjms.2022.9982]
Abstract
Background: In August 2020, a chip designed for implantation in the human brain was announced, aimed at boosting brain activity without significant side effects.
The aim of this work was to examine the level of knowledge, awareness, and public concerns about the use of brain chip implants.
Methods: An online cross-sectional survey targeted 326 adults from more than five countries in the Middle East and North Africa between May 2021 and July 2021. Data were collected through a validated self-administered questionnaire composed of five sections, then coded and analyzed using suitable tests and methods.
Results: 54.6% of the study participants reported that they had heard about the brain chip implant, while only 6.1% stated that they knew its importance. The most commonly reported indication for the brain chip implant was improving memory, followed by treatment of epilepsy and improving mental function. Safety appeared to be the most common public concern, as most participants were hesitant about using the implant and had concerns regarding its safety.
Conclusion: Medical personnel seem to be the most concerned about the use of the brain chip implant. Safety measures, confidentiality, and security procedures, respectively, are the major issues that might limit its broad use.
33
Wei W, Hao M, Zhou K, Wang Y, Lu Q, Zhang H, Wu Y, Zhang T, Liu Y. In situ multimodal transparent electrophysiological hydrogel for in vivo miniature two-photon neuroimaging and electrocorticogram analysis. Acta Biomater 2022; 152:86-99. [PMID: 36041650 DOI: 10.1016/j.actbio.2022.08.053]
Abstract
Hydrogels are widely used in nerve tissue repair and show good histocompatibility. Challenges remain, however, in applying hydrogels to neural signal recording, which requires tissue-like biomechanical properties, high optical transmission, and low impedance. Here, we describe a transparent hydrogel that is highly biocompatible and has a low Young's modulus (0.15 MPa). Additionally, it functions well as an implantable electrode: it conformably adheres to brain tissue, causes minimal inflammation, and has a low impedance of 150 Ω at 1 kHz. Its high transmittance of 93.35% across wavelengths of 300 nm to 1100 nm supports its application in two-photon imaging. Consistent with these properties, this flexible multimodal transparent electrophysiological hydrogel (MTEHy) electrode was able to record neuronal Ca2+ activity using miniature two-photon microscopy. It was also used to monitor electrocorticogram (ECoG) activity in real time in freely moving mice. Moreover, its compatibility with magnetic resonance imaging (MRI) indicates that MTEHy is a new tool for studying activity in the cerebral cortex. STATEMENT OF SIGNIFICANCE: Future brain science research requires better-performing implantable electrodes to detect neuronal signaling in the brain. In this study, we developed a new hydrogel material, MTEHy-3, that shows high biocompatibility, high optical transmittance (93.35%), and a low Young's modulus (0.15 MPa). Used as a highly biocompatible, metal-free hydrogel electrode, MTEHy-3 can be implanted for extended periods to study the cerebral cortex, synchronously recording the Ca2+ signaling activity of individual neurons and monitoring electrocorticogram activity through ionic conduction in freely moving mice. The non-metallic MTEHy-3 is also suitable for magnetic resonance imaging. Thus, MTEHy-3 provides an in situ multimodal tool to detect neuronal signaling in the brain with both high spatial and high temporal resolution.
Affiliation(s)
- Wei Wei, Jiangsu Key Laboratory of Neuropsychiatric Diseases and Institute of Neuroscience, Soochow University; Clinical Research Center of Neurological Disease, The Second Affiliated Hospital of Soochow University, Suzhou 215123, China
- Mingming Hao, School of Nano Technology and Nano Bionics, University of Science and Technology of China, Hefei 230026, China; i-Lab, Key Laboratory of Multifunctional Nanomaterials and Smart Systems, Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123, China; Lihuili Hospital Affiliated to Ningbo University, Ningbo 315211, China
- Kai Zhou, Jiangsu Key Laboratory of Neuropsychiatric Diseases and Institute of Neuroscience, Soochow University; Clinical Research Center of Neurological Disease, The Second Affiliated Hospital of Soochow University, Suzhou 215123, China
- Yongfeng Wang, i-Lab, Key Laboratory of Multifunctional Nanomaterials and Smart Systems, Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123, China
- Qifeng Lu, School of CHIPS, XJTLU Entrepreneur College (Taicang), Xi'an Jiaotong-Liverpool University, Suzhou 215123, China
- Hui Zhang, Jiangsu Key Laboratory of Neuropsychiatric Diseases and Institute of Neuroscience, Soochow University; Clinical Research Center of Neurological Disease, The Second Affiliated Hospital of Soochow University, Suzhou 215123, China
- Yue Wu, i-Lab, Key Laboratory of Multifunctional Nanomaterials and Smart Systems, Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123, China
- Ting Zhang, School of Nano Technology and Nano Bionics, University of Science and Technology of China, Hefei 230026, China; i-Lab, Key Laboratory of Multifunctional Nanomaterials and Smart Systems, Suzhou Institute of Nano-tech and Nano-bionics, Chinese Academy of Sciences, Suzhou 215123, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Yaobo Liu, Jiangsu Key Laboratory of Neuropsychiatric Diseases and Institute of Neuroscience, Soochow University; Clinical Research Center of Neurological Disease, The Second Affiliated Hospital of Soochow University, Suzhou 215123, China; Co-innovation Center of Neuroregeneration, Nantong University, Nantong 226001, China; Department of Orthopedics, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai 200072, China
34
Niso G, Krol LR, Combrisson E, Dubarry AS, Elliott MA, François C, Héjja-Brichard Y, Herbst SK, Jerbi K, Kovic V, Lehongre K, Luck SJ, Mercier M, Mosher JC, Pavlov YG, Puce A, Schettino A, Schön D, Sinnott-Armstrong W, Somon B, Šoškić A, Styles SJ, Tibon R, Vilas MG, van Vliet M, Chaumon M. Good scientific practice in EEG and MEG research: Progress and perspectives. Neuroimage 2022; 257:119056. [PMID: 35283287 DOI: 10.1016/j.neuroimage.2022.119056]
Abstract
Good scientific practice (GSP) refers to both explicit and implicit rules, recommendations, and guidelines that help scientists to produce work that is of the highest quality at any given time, and to efficiently share that work with the community for further scrutiny or utilization. For experimental research using magneto- and electroencephalography (MEEG), GSP includes specific standards and guidelines for technical competence, which are periodically updated and adapted to new findings. However, GSP also needs to be regularly revisited in a broader light. At the LiveMEEG 2020 conference, a reflection on GSP was fostered that included explicitly documented guidelines and technical advances, but also emphasized intangible GSP: a general awareness of personal, organizational, and societal realities and how they can influence MEEG research. This article provides an extensive report on most of the LiveMEEG contributions and new literature, with the additional aim of synthesizing ongoing cultural changes in GSP. It first covers GSP with respect to cognitive biases and logical fallacies, pre-registration as a tool to avoid those and other early pitfalls, and a number of resources to enable collaborative and reproducible research as a general approach to minimize misconceptions. Second, it covers GSP with respect to data acquisition, analysis, reporting, and sharing, including new tools and frameworks to support collaborative work. Finally, GSP is considered in light of ethical implications of MEEG research and the resulting responsibility that scientists have to engage with societal challenges. Considering, among other things, the benefits of peer review and open access at all stages, the need to coordinate larger international projects, the complexity of MEEG subject matter, and today's prioritization of fairness, privacy, and the environment, we find that current GSP tends to favor collective and cooperative work, for both scientific and societal reasons.
35
Cooney C, Folli R, Coyle D. Opportunities, pitfalls and trade-offs in designing protocols for measuring the neural correlates of speech. Neurosci Biobehav Rev 2022; 140:104783. [PMID: 35907491 DOI: 10.1016/j.neubiorev.2022.104783]
Abstract
Research on decoding speech and speech-related processes directly from the human brain has intensified in recent years, as such a decoder has the potential to positively impact people with limited communication capacity due to disease or injury. Additionally, it can enable entirely new forms of human-computer interaction and human-machine communication in general, and facilitate better neuroscientific understanding of speech processes. Here, we synthesize the literature on how neural speech decoding experiments have been conducted, coalescing around the necessity for thoughtful experimental design aimed at specific research goals and for robust procedures for evaluating speech decoding paradigms. We examine the use of different modalities for presenting stimuli to participants, methods for constructing paradigms (including timings and speech rhythms), and possible linguistic considerations. In addition, novel methods for eliciting naturalistic speech and validating imagined speech task performance in experimental settings are presented based on recent research. We also describe the multitude of terms used to instruct participants on how to produce imagined speech during experiments, and propose methods for investigating the effect of these terms on imagined speech decoding. We demonstrate that the range of experimental procedures used in neural speech decoding studies can have unintended consequences that affect the validity of the knowledge obtained. The review delineates the strengths and weaknesses of present approaches and proposes methodological advances that we anticipate will enhance experimental design and progress toward the optimal design of movement-independent direct speech brain-computer interfaces.
Affiliation(s)
- Ciaran Cooney, Intelligent Systems Research Centre, Ulster University, Derry, UK
- Raffaella Folli, Institute for Research in Social Sciences, Ulster University, Jordanstown, UK
- Damien Coyle, Intelligent Systems Research Centre, Ulster University, Derry, UK
36
Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 International brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. [PMID: 35937679 PMCID: PMC9354666 DOI: 10.3389/fnhum.2022.898300]
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, openly accessible neuroscientific data that could be used to evaluate the current state of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ear-EEG) detection in an ambulatory environment. Not only did scientists from the BCI field compete, but scholars with a broad variety of backgrounds and nationalities participated in the competition to address these challenges. Each dataset was prepared and split into three parts, released to competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating several trends of interest to BCI researchers.
Affiliation(s)
- Ji-Hoon Jeong, School of Computer Science, Chungbuk National University, Cheongju, South Korea
- Jeong-Hyun Cho, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Eun Lee, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seo-Hyun Lee, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Gi-Hwan Shin, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Seok Kweon, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- José del R. Millán, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Klaus-Robert Müller, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany; Max Planck Institute for Informatics, Saarbrücken, Germany; Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Seong-Whan Lee, Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Department of Artificial Intelligence, Korea University, Seoul, South Korea
37
Soroush PZ, Herff C, Ries S, Shih JJ, Schultz T, Krusienski DJ. Contributions of Stereotactic EEG Electrodes in Grey and White Matter to Speech Activity Detection. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4789-4792. [PMID: 36086071 DOI: 10.1109/embc48229.2022.9871464]
Abstract
Recent studies have shown it is possible to decode and synthesize speech directly using brain activity recorded from implanted electrodes. While this activity has been extensively examined using electrocorticographic (ECoG) recordings from cortical surface grey matter, stereotactic electroencephalography (sEEG) provides comparatively broader coverage and access to deeper brain structures, including both grey and white matter. The present study examines the relative and joint contributions of grey and white matter electrodes for speech activity detection in a brain-computer interface.
38
Anastasopoulou I, van Lieshout P, Cheyne DO, Johnson BW. Speech Kinematics and Coordination Measured With an MEG-Compatible Speech Tracking System. Front Neurol 2022; 13:828237. [PMID: 35837226 PMCID: PMC9273948 DOI: 10.3389/fneur.2022.828237]
Abstract
Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until recently, however, it has generally not been possible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which we used to derive kinematic profiles of oro-facial movements during speech. MASK was used to characterize speech kinematics in two healthy adults, and the results were compared to measurements from a separate participant with a conventional Electromagnetic Articulography (EMA) system. Analyses targeted the gestural landmarks of reiterated utterances /ipa/, /api/ and /pataka/. The results demonstrate that MASK reliably characterizes key kinematic and movement coordination parameters of speech motor control. Since these parameters are intrinsically registered in time with concurrent magnetoencephalographic (MEG) measurements of neuromotor brain activity, this methodology paves the way for innovative cross-disciplinary studies of the neuromotor control of human speech production, speech development, and speech motor disorders.
Affiliation(s)
- Ioanna Anastasopoulou, School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
- Pascal van Lieshout, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Douglas O. Cheyne, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada; Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Blake W. Johnson, School of Psychological Sciences, Macquarie University, Sydney, NSW, Australia
39
Bono D, Belyk M, Longo MR, Dick F. Beyond language: The unspoken sensory-motor representation of the tongue in non-primates, non-human and human primates. Neurosci Biobehav Rev 2022; 139:104730. [PMID: 35691470 DOI: 10.1016/j.neubiorev.2022.104730]
Abstract
The English idiom "on the tip of my tongue" commonly acknowledges that something is known but cannot be immediately brought to mind. This phrase aptly describes the sensorimotor functions of the tongue, which are fundamental for many tongue-related behaviors (e.g., speech) but often neglected by scientific research. Here, we review a wide range of studies conducted on non-primates, non-human primates, and humans, with the aim of providing a comprehensive description of the cortical representation of the tongue's somatosensory inputs and motor outputs across different phylogenetic domains. First, we summarize how the properties of passive non-noxious mechanical stimuli are encoded in the putative somatosensory tongue area, which has a conserved location in the ventral portion of the somatosensory cortex across mammals. Second, we review how complex self-generated actions involving the tongue are represented in more anterior regions of the putative somato-motor tongue area. Finally, we describe the multisensory response properties of the primate and non-primate tongue area, also defining how the cytoarchitecture of this area is affected by experience and deafferentation.
Affiliation(s)
- Davide Bono, Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK
- Michel Belyk, Department of Speech, Hearing, and Phonetic Sciences, UCL Division of Psychology and Language Sciences, 2 Wakefield Street, London WC1N 1PJ, UK
- Matthew R Longo, Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
- Frederic Dick, Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK; Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
40
Liu D, Xu X, Li D, Li J, Yu X, Ling Z, Hong B. Intracranial brain-computer interface spelling using localized visual motion response. Neuroimage 2022; 258:119363. [PMID: 35688315 DOI: 10.1016/j.neuroimage.2022.119363]
Abstract
Intracranial brain-computer interfaces (BCIs) can assist severely disabled persons with text communication and environmental control at high precision and speed. Nevertheless, sustainable BCI implants require minimal invasiveness. One implantation strategy is to drive BCI communication with localized, robust cortical activity and to plan the surgery precisely in advance. The visual motion response is a good candidate for this strategy because of its focal activity over the middle temporal visual area (MT). Here, we developed an intracranial BCI for spelling that uses only three electrodes over the MT area. The best recording electrodes were determined by preoperative functional magnetic resonance imaging (fMRI) localization of the MT, and local neural activity was further enhanced by differential re-referencing of these electrodes. The BCI spelling system was validated both offline and online in five epilepsy patients, achieving a top speed of 62 bits/min, i.e., 12 characters/min. Moreover, the response patterns to dual-directional visual motion stimuli provided an additional dimension of BCI target encoding and paved the way for a higher information transfer rate in intracranial BCI spelling.
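Information transfer rates such as the 62 bits/min figure above are commonly reported in the BCI literature using the standard Wolpaw formula, which combines the number of selectable targets, the selection accuracy, and the selection rate. A minimal sketch of that convention (the function name is illustrative; the abstract does not state which exact convention the authors used):

```python
import math

def wolpaw_itr_bits_per_min(n_targets: int, accuracy: float,
                            selections_per_min: float) -> float:
    """Wolpaw ITR: bits per selection multiplied by the selection rate.

    bits/selection = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    where N is the number of targets and P the selection accuracy.
    """
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0.0 < p < 1.0:  # the entropy terms vanish at P = 1 and are undefined at P = 0
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min
```

For example, perfectly accurate selection among 32 targets at 12 selections/min yields 5 bits per selection, or 60 bits/min, close to the speed reported above.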
Affiliation(s)
- Dingkun Liu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xin Xu
- Department of Neurosurgery, Chinese PLA General Hospital, Beijing 100853, China
- Dongyang Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Jie Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Xinguang Yu
- Department of Neurosurgery, Chinese PLA General Hospital, Beijing 100853, China
- Zhipei Ling
- Department of Neurosurgery, Chinese PLA General Hospital, Beijing 100853, China
- Bo Hong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China; McGovern Institute for Brain Research, Tsinghua University, Beijing 100084, China.
41
Ienca M, Fins JJ, Jox RJ, Jotterand F, Voeneky S, Andorno R, Ball T, Castelluccia C, Chavarriaga R, Chneiweiss H, Ferretti A, Friedrich O, Hurst S, Merkel G, Molnár-Gábor F, Rickli JM, Scheibner J, Vayena E, Yuste R, Kellmeyer P. Towards a Governance Framework for Brain Data. Neuroethics 2022. [DOI: 10.1007/s12152-022-09498-8]
Abstract
The increasing availability of brain data within and outside the biomedical field, combined with the application of artificial intelligence (AI) to brain data analysis, poses a challenge for ethics and governance. We identify distinctive ethical implications of brain data acquisition and processing, and outline a multi-level governance framework. This framework is aimed at maximizing the benefits of facilitated brain data collection and further processing for science and medicine whilst minimizing risks and preventing harmful use. The framework consists of four primary areas of regulatory intervention: binding regulation, ethics and soft law, responsible innovation, and human rights.
42
Wilson BS, Tucci DL, Moses DA, Chang EF, Young NM, Zeng FG, Lesica NA, Bur AM, Kavookjian H, Mussatto C, Penn J, Goodwin S, Kraft S, Wang G, Cohen JM, Ginsburg GS, Dawson G, Francis HW. Harnessing the Power of Artificial Intelligence in Otolaryngology and the Communication Sciences. J Assoc Res Otolaryngol 2022; 23:319-349. [PMID: 35441936] [PMCID: PMC9086071] [DOI: 10.1007/s10162-022-00846-2]
Abstract
Use of artificial intelligence (AI) is a burgeoning field in otolaryngology and the communication sciences. A virtual symposium on the topic was convened from Duke University on October 26, 2020, and was attended by more than 170 participants worldwide. This review presents summaries of all but one of the talks presented during the symposium; recordings of all the talks, along with their discussions, are available at https://www.youtube.com/watch?v=ktfewrXvEFg and https://www.youtube.com/watch?v=-gQ5qX2v3rg. Each summary is about 2,500 words in length and includes two figures. This level of detail far exceeds the brief summaries presented in traditional reviews and thus provides a more informed glimpse into the power and diversity of current AI applications in otolaryngology and the communication sciences, and how to harness that power for future applications.
Affiliation(s)
- Blake S. Wilson
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA; Duke Hearing Center, Duke University School of Medicine, Durham, NC 27710, USA; Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708, USA; Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; Department of Otolaryngology – Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Debara L. Tucci
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA; National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD 20892, USA
- David A. Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117, USA
- Edward F. Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA 94117, USA
- Nancy M. Young
- Division of Otolaryngology, Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, IL 60611, USA; Department of Otolaryngology - Head and Neck Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL 60611, USA; Department of Communication, Knowles Hearing Center, Northwestern University, Evanston, IL 60208, USA
- Fan-Gang Zeng
- Center for Hearing Research, University of California, Irvine, Irvine, CA 92697, USA; Department of Anatomy and Neurobiology, University of California, Irvine, Irvine, CA 92697, USA; Department of Biomedical Engineering, University of California, Irvine, Irvine, CA 92697, USA; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA; Department of Otolaryngology – Head and Neck Surgery, University of California, Irvine, CA 92697, USA
- Nicholas A. Lesica
- UCL Ear Institute, University College London, London WC1X 8EE, UK
- Andrés M. Bur
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Hannah Kavookjian
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Caroline Mussatto
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Joseph Penn
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Sara Goodwin
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Shannon Kraft
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Guanghui Wang
- Department of Computer Science, Ryerson University, Toronto, ON M5B 2K3, Canada
- Jonathan M. Cohen
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA; ENT Department, Kaplan Medical Center, 7661041 Rehovot, Israel
- Geoffrey S. Ginsburg
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; MEDx (Medicine & Engineering at Duke), Duke University, Durham, NC 27708, USA; Center for Applied Genomics & Precision Medicine, Duke University School of Medicine, Durham, NC 27710, USA; Department of Medicine, Duke University School of Medicine, Durham, NC 27710, USA; Department of Pathology, Duke University School of Medicine, Durham, NC 27710, USA; Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC 27710, USA
- Geraldine Dawson
- Duke Institute for Brain Sciences, Duke University, Durham, NC 27710, USA; Duke Center for Autism and Brain Development, Duke University School of Medicine and the Duke Institute for Brain Sciences, NIH Autism Center of Excellence, Durham, NC 27705, USA; Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, NC 27701, USA
- Howard W. Francis
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
43
Chandler JA, Van der Loos KI, Boehnke S, Beaudry JS, Buchman DZ, Illes J. Brain Computer Interfaces and Communication Disabilities: Ethical, Legal, and Social Aspects of Decoding Speech From the Brain. Front Hum Neurosci 2022; 16:841035. [PMID: 35529778] [PMCID: PMC9069963] [DOI: 10.3389/fnhum.2022.841035]
Abstract
A brain-computer interface (BCI) technology that can decode the neural signals associated with attempted but unarticulated speech could offer a future efficient means of communication for people with severe motor impairments. Recent demonstrations have validated this approach. Here we assume that it will be possible in the future to decode imagined (i.e., attempted but unarticulated) speech in people with severe motor impairments, and we consider the characteristics that could maximize the social utility of a BCI for communication. As a social interaction, communication involves the needs and goals of both speaker and listener, particularly in contexts that have significant potential consequences. We explore three high-consequence legal situations in which neurally decoded speech could have implications: testimony, where decoded speech is used as evidence; consent and capacity, where it may be used as a means of agency and participation, such as consent to medical treatment; and harm, where such communications may be networked or may cause harm to others. We then illustrate how design choices might impact the social and legal acceptability of these technologies.
Affiliation(s)
- Jennifer A. Chandler
- Bertram Loeb Research Chair, Faculty of Law, University of Ottawa, Ottawa, ON, Canada
- Susan Boehnke
- Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada
- Jonas S. Beaudry
- Institute for Health and Social Policy (IHSP) and Faculty of Law, McGill University, Montreal, QC, Canada
- Daniel Z. Buchman
- Centre for Addiction and Mental Health, Dalla Lana School of Public Health, Krembil Research Institute, University of Toronto Joint Centre for Bioethics, Toronto, ON, Canada
- Judy Illes
- Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
44
Ye H, Fan Z, Li G, Wu Z, Hu J, Sheng X, Chen L, Zhu X. Spontaneous State Detection Using Time-Frequency and Time-Domain Features Extracted From Stereo-Electroencephalography Traces. Front Neurosci 2022; 16:818214. [PMID: 35368269] [PMCID: PMC8968069] [DOI: 10.3389/fnins.2022.818214]
Abstract
As a minimally invasive recording technique, stereo-electroencephalography (SEEG) measures intracranial signals directly by inserting depth electrode shafts into the human brain, and thus can capture neural activities in both cortical layers and subcortical structures. Despite a gradually increasing number of SEEG-based brain-computer interface (BCI) studies, the features utilized have usually been confined to the amplitude of the event-related potential (ERP) or band power, and the decoding capabilities of other time-frequency and time-domain features have not yet been demonstrated for SEEG recordings. In this study, we aimed to verify the validity of time-domain and time-frequency features of SEEG, with classification performance serving as the evaluating indicator. To do this, using SEEG signals recorded under intermittent auditory stimuli, we extracted features including the average amplitude, root mean square, slope of linear regression, and line-length from the ERP trace and three traces of band power activities (high-gamma, beta, and alpha). These features were used to detect the active state (including activations to two types of names) against the idle state. Results suggested that valid time-domain and time-frequency features were distributed across multiple regions, including the temporal lobe, parietal lobe, and deeper structures such as the insula. Among all feature types, the average amplitude, root mean square, and line-length extracted from high-gamma (60–140 Hz) power and the line-length extracted from the ERP were the most informative. Using a hidden Markov model (HMM), we could precisely detect the onset and end of the active state with a sensitivity of 95.7 ± 1.3% and a precision of 91.7 ± 1.6%. The valid features derived from high-gamma power and ERP in this work provide new insights into the feature selection procedure for further SEEG-based BCI applications.
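The time-domain features named in this abstract are simple to compute. A minimal sketch of their usual definitions over a single 1-D window of a band-power or ERP trace — window lengths, band edges, and any normalization used in the study are not given here, so the details below are illustrative:

```python
import numpy as np

def time_domain_features(trace: np.ndarray) -> dict:
    """Window-level features named in the abstract, computed on a 1-D
    trace (windowing and per-channel normalization are omitted)."""
    t = np.arange(trace.size)
    slope = np.polyfit(t, trace, 1)[0]  # slope of a linear regression over the window
    return {
        "average_amplitude": float(np.mean(trace)),
        "rms": float(np.sqrt(np.mean(trace ** 2))),
        "slope": float(slope),
        # line-length: total absolute sample-to-sample variation
        "line_length": float(np.sum(np.abs(np.diff(trace)))),
    }

# On a constant trace the line-length is 0 and the RMS equals the amplitude.
feats = time_domain_features(np.full(200, 2.0))
```
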
Affiliation(s)
- Huanpeng Ye
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhen Fan
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai, China
- Guangye Li
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zehan Wu
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai, China
- Jie Hu
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai, China
- Xinjun Sheng
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Liang Chen
- Department of Neurosurgery of Huashan Hospital, Fudan University, Shanghai, China
- Xiangyang Zhu
- State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
45
Castelhano J, Duarte I, Bernardino I, Pelle F, Francione S, Sales F, Castelo-Branco M. Intracranial recordings in humans reveal specific hippocampal spectral and dorsal vs. ventral connectivity signatures during visual, attention and memory tasks. Sci Rep 2022; 12:3488. [PMID: 35241722] [PMCID: PMC8894428] [DOI: 10.1038/s41598-022-07225-0]
Abstract
Invasive brain recordings using many electrodes across a wide range of tasks provide a unique opportunity to study the role of oscillatory patterning and functional connectivity. We used large-scale recordings (stereo EEG) within and beyond the human hippocampus to investigate the role of distinct frequency oscillations during real-time execution of visual, attention and memory tasks in eight epileptic patients. We found that activity patterns in the hippocampus showed task- and frequency-dependent properties. Importantly, we found distinct connectivity signatures, in particular concerning parietal-hippocampal connectivity, thus revealing large-scale synchronization of networks involved in memory tasks. Comparing the power per frequency band across tasks and hippocampal regions (anterior/posterior), we confirmed a main effect of frequency band (p = 0.002). Gamma band activity was higher for visuo-spatial memory tasks in the anterior hippocampus. Further, we found that alpha and beta band activity in the posterior hippocampus showed larger modulation for high-memory-load visual tasks (p = 0.004). Three task-related functional connectivity networks were identified: (dorsal) parietal-hippocampal (visual attention and memory), ventral stream-hippocampal, and hippocampal-frontal connections (mainly tasks involving face recognition or object-based search). These findings support the critical role of oscillatory patterning in the hippocampus during visual and memory tasks and suggest the presence of task-related spectral and functional connectivity signatures. These results show that the use of large-scale human intracranial recordings can validate the role of oscillatory and functional connectivity patterns across a broad range of cognitive domains.
Affiliation(s)
- João Castelhano
- ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Isabel Duarte
- ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal
- Inês Bernardino
- ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Federica Pelle
- Claudio Munari Epilepsy Surgery Center, Niguarda Hospital, Milan, Italy
- Stefano Francione
- Claudio Munari Epilepsy Surgery Center, Niguarda Hospital, Milan, Italy
- Miguel Castelo-Branco
- ICNAS, University of Coimbra, Polo 3, Azinhaga de Santa Comba, Celas, 3000-548, Coimbra, Portugal; CIBIT, Faculty of Medicine, University of Coimbra, Coimbra, Portugal
46
Huggins JE, Krusienski D, Vansteensel MJ, Valeriani D, Thelen A, Stavisky S, Norton JJS, Nijholt A, Müller-Putz G, Kosmyna N, Korczowski L, Kapeller C, Herff C, Halder S, Guger C, Grosse-Wentrup M, Gaunt R, Dusang AN, Clisson P, Chavarriaga R, Anderson CW, Allison BZ, Aksenova T, Aarnoutse E. Workshops of the Eighth International Brain-Computer Interface Meeting: BCIs: The Next Frontier. Brain Comput Interfaces (Abingdon) 2022; 9:69-101. [PMID: 36908334] [PMCID: PMC9997957] [DOI: 10.1080/2326263x.2021.2009654]
Abstract
The Eighth International Brain-Computer Interface (BCI) Meeting was held June 7-9, 2021 in a virtual format. The conference continued the BCI Meeting series' interactive nature with 21 workshops covering topics in BCI (also called brain-machine interface) research. As in the past, the workshops covered the breadth of topics in BCI. Some workshops provided detailed examinations of specific methods, hardware, or processes; others focused on specific BCI applications or user groups. Several workshops continued consensus-building efforts designed to create BCI standards and increase the ease of comparison between studies and the potential for meta-analysis and large multi-site clinical trials. Ethical and translational considerations were the primary topic of some workshops and an important secondary consideration in others. The range of BCI applications continues to expand, with more workshops focusing on approaches that can extend beyond the needs of those with physical impairments. This paper summarizes each workshop, provides background information and references for further study, presents an overview of the discussion topics, and describes the conclusions, challenges, or initiatives that resulted from the interactions and discussion at each workshop.
Affiliation(s)
- Jane E Huggins
- Department of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, Neuroscience Graduate Program, University of Michigan, 325 East Eisenhower, Room 3017, Ann Arbor, MI 48108-5744, USA
- Dean Krusienski
- Department of Biomedical Engineering, Virginia Commonwealth University, Richmond, VA 23219, USA
- Mariska J Vansteensel
- UMC Utrecht Brain Center, Dept. of Neurosurgery, University Medical Center Utrecht, The Netherlands
- Antonia Thelen
- eemagine Medical Imaging Solutions GmbH, Berlin, Germany
- James J S Norton
- National Center for Adaptive Neurotechnologies, US Department of Veterans Affairs, 113 Holland Ave, Albany, NY 12208, USA
- Anton Nijholt
- Faculty EEMCS, University of Twente, Enschede, The Netherlands
- Gernot Müller-Putz
- Institute of Neural Engineering, GrazBCI Lab, Graz University of Technology, Stremayrgasse 16/4, 8010 Graz, Austria
- Nataliya Kosmyna
- Massachusetts Institute of Technology (MIT), Media Lab, E14-548, Cambridge, MA 02139, USA
- Christian Herff
- School of Mental Health and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Christoph Guger
- g.tec medical engineering GmbH/Guger Technologies OG, Sierningstrasse 14, 4521 Schiedlberg, Austria
- Moritz Grosse-Wentrup
- Research Group Neuroinformatics, Faculty of Computer Science, Vienna Cognitive Science Hub, Data Science @ Uni Vienna, University of Vienna, Austria
- Robert Gaunt
- Rehab Neural Engineering Labs, Department of Physical Medicine and Rehabilitation, Center for the Neural Basis of Cognition, University of Pittsburgh, 3520 5th Ave, Suite 300, Pittsburgh, PA 15213, USA
- Aliceson Nicole Dusang
- Department of Electrical and Computer Engineering, School of Engineering, and Carney Institute for Brain Science, Brown University, Providence, RI, USA
- Department of Veterans Affairs Medical Center, Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Providence, RI, USA
- Center for Neurotechnology and Neurorecovery, Neurology, Massachusetts General Hospital, Boston, MA, USA
- Ricardo Chavarriaga
- IEEE Standards Association Industry Connections group on neurotechnologies for brain-machine interface; Center for Artificial Intelligence, School of Engineering, ZHAW-Zurich University of Applied Sciences, Switzerland
- Charles W Anderson
- Department of Computer Science, Molecular, Cellular and Integrative Neuroscience Program, Colorado State University, Fort Collins, CO 80523, USA
- Brendan Z Allison
- Dept. of Cognitive Science, Mail Code 0515, University of California at San Diego, La Jolla, CA, USA
- Tetiana Aksenova
- University Grenoble Alpes, CEA, LETI, Clinatec, Grenoble 38000, France
- Erik Aarnoutse
- UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands
47
Sun X, Li M, Li Q, Yin H, Jiang X, Li H, Sun Z, Yang T, Jiang L. Poststroke Cognitive Impairment Research Progress on Application of Brain-Computer Interface. Biomed Res Int 2022; 2022:1-16. [PMID: 35252458] [PMCID: PMC8896931] [DOI: 10.1155/2022/9935192]
Abstract
Brain-computer interfaces (BCIs), a new type of rehabilitation technology, pick up nerve cell signals, identify and classify their activities, and convert them into computer-recognized instructions. This technique has been widely used in the rehabilitation of stroke patients in recent years and appears to promote motor function recovery after stroke. At present, the application of BCI in poststroke cognitive impairment is increasing, which is a common complication that also affects the rehabilitation process. This paper reviews the promise and potential drawbacks of using BCI to treat poststroke cognitive impairment, providing a solid theoretical basis for the application of BCI in this area.
48
Abstract
Damage or degeneration of motor pathways necessary for speech and other movements, as in brainstem strokes or amyotrophic lateral sclerosis (ALS), can interfere with efficient communication without affecting brain structures responsible for language or cognition. In the worst-case scenario, this can result in the locked-in syndrome (LIS), a condition in which individuals cannot initiate communication and can only express themselves by answering yes/no questions with eye blinks or other rudimentary movements. Existing augmentative and alternative communication (AAC) devices that rely on eye tracking can improve the quality of life for people with this condition, but brain-computer interfaces (BCIs) are also increasingly being investigated as AAC devices, particularly when eye tracking is too slow or unreliable. Moreover, with recent and ongoing advances in machine learning and neural recording technologies, BCIs may offer the only means to go beyond cursor control and text generation on a computer, allowing real-time synthesis of speech, which would arguably offer the most efficient and expressive channel for communication. The potential for BCI speech synthesis has only recently been realized because of seminal studies of the neuroanatomical and neurophysiological underpinnings of speech production using intracranial electrocorticographic (ECoG) recordings in patients undergoing epilepsy surgery. These studies have shown that cortical areas responsible for vocalization and articulation are distributed over a large area of ventral sensorimotor cortex, and that it is possible to decode speech and reconstruct its acoustics from ECoG if these areas are recorded with sufficiently dense and comprehensive electrode arrays. In this article, we review these advances, including the latest neural decoding strategies that range from deep learning models to the direct concatenation of speech units. We also discuss state-of-the-art vocoders that are integral in constructing natural-sounding audio waveforms for speech BCIs. Finally, this review outlines some of the challenges ahead in directly synthesizing speech for patients with LIS.
Affiliation(s)
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
49
Abstract
Competing demands for attention are present in our daily lives, and the identification of neural processes in EEG signals associated with specific attentional demands can be useful for an individual's interactions in virtual environments. Since EEG-based devices can be portable and non-invasive and offer high temporal resolution for recording neural signals, interpretations of a virtual-system user's attention, fatigue, and cognitive load based on parameters extracted from the EEG signal are relevant for several purposes, such as games, rehabilitation, and therapies. However, despite the large number of studies on this subject, methodological approaches differ; in this work we highlight and relate them to virtual environments and to applications involving attentional demand, workload, and fatigue. In our summary, we discuss controversies, current research gaps, and future directions, alongside the background and concluding sections.
Affiliation(s)
- Rhaíra Helena Caetano E Souza
- Assistive Technology Laboratory, Electrical Engineering Faculty, Federal University of Uberlândia, Uberlândia, Brazil; Federal Institute of Education, Science and Technology of Brasília, Brasília, Brazil
- Eduardo Lázaro Martins Naves
- Assistive Technology Laboratory, Electrical Engineering Faculty, Federal University of Uberlândia, Uberlândia, Brazil
50
Abstract
OBJECTIVE: Brain-computer interface (BCI) studies are increasingly leveraging different attributes of multiple signal modalities simultaneously. Bimodal data acquisition protocols combining the temporal resolution of electroencephalography (EEG) with the spatial resolution of functional near-infrared spectroscopy (fNIRS) require novel approaches to decoding. METHODS: We present an EEG-fNIRS hybrid BCI that employs a new bimodal deep neural network architecture consisting of two convolutional sub-networks (subnets) to decode overt and imagined speech. Features from each subnet are fused before further feature extraction and classification. Nineteen participants performed overt and imagined speech in a novel cue-based paradigm enabling investigation of stimulus and linguistic effects on decoding. RESULTS: Using the hybrid approach, classification accuracies (46.31% and 34.29% for overt and imagined speech, respectively; chance: 25%) indicated a significant improvement over EEG used independently for imagined speech (p=0.020), while tending towards significance for overt speech (p=0.098). In comparison with fNIRS, significant improvements for both speech types were achieved with bimodal decoding (p<0.001). There was a mean difference of ~12.02% between overt and imagined speech, with accuracies as high as 87.18% and 53%. Deeper subnets enhanced performance, while the stimulus affected overt and imagined speech in significantly different ways. CONCLUSION: The bimodal approach was a significant improvement on unimodal results for several tasks. The results indicate the potential of multi-modal deep learning for enhancing neural signal decoding. SIGNIFICANCE: This novel architecture can be used to enhance speech decoding from bimodal neural signals.
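The fusion step this abstract describes — modality-specific subnets whose features are concatenated before a shared classification head — can be sketched abstractly. The layer sizes, feature counts, and dense (rather than convolutional) subnets below are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Stand-in for one convolutional subnet: a dense layer with ReLU."""
    return np.maximum(0.0, x @ w)

# Hypothetical per-trial feature counts: 64 for EEG, 36 for fNIRS.
w_eeg = rng.standard_normal((64, 16))
w_fnirs = rng.standard_normal((36, 16))
w_out = rng.standard_normal((16 + 16, 4))  # 4 classes (chance = 25%)

def fused_logits(x_eeg: np.ndarray, x_fnirs: np.ndarray) -> np.ndarray:
    """Concatenate the two modality-specific feature vectors, then
    apply a shared linear classification head on the fused features."""
    fused = np.concatenate([subnet(x_eeg, w_eeg), subnet(x_fnirs, w_fnirs)])
    return fused @ w_out

logits = fused_logits(rng.standard_normal(64), rng.standard_normal(36))
```

Fusing after per-modality feature extraction (rather than stacking raw EEG and fNIRS channels) lets each subnet learn filters matched to its modality's temporal and spatial statistics before the shared head sees both.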