1. Blanpain LT, Cole ER, Chen E, Park JK, Walelign MY, Gross RE, Cabaniss BT, Willie JT, Singer AC. Multisensory flicker modulates widespread brain networks and reduces interictal epileptiform discharges. Nat Commun 2024; 15:3156. [PMID: 38605017 PMCID: PMC11009358 DOI: 10.1038/s41467-024-47263-y]
Abstract
Modulating brain oscillations has strong therapeutic potential. Interventions that both non-invasively modulate deep brain structures and are practical for chronic daily home use are desirable for a variety of therapeutic applications. Repetitive audio-visual stimulation, or sensory flicker, is an accessible approach that modulates the hippocampus in mice, but its effects in humans are poorly defined. We therefore quantified the neurophysiological effects of flicker with high spatiotemporal resolution in patients with focal epilepsy who underwent intracranial seizure monitoring. In this interventional trial (NCT04188834) with a cross-over design, subjects underwent different frequencies of flicker stimulation in the same recording session, with the effects of sensory flicker exposure on local field potential (LFP) power and interictal epileptiform discharges (IEDs) as primary and secondary outcomes, respectively. Flicker focally modulated local field potentials in expected canonical sensory cortices but also in the medial temporal lobe and prefrontal cortex, likely via resonance of stimulated long-range circuits. Moreover, flicker decreased interictal epileptiform discharges, a pathological biomarker of epilepsy and degenerative diseases, most strongly in regions where potentials were flicker-modulated, especially the visual cortex and medial temporal lobe. This trial met the scientific goal and is now closed. Our findings reveal how multi-sensory stimulation may modulate cortical structures to mitigate pathological activity in humans.
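Not part of the paper: a minimal sketch of the kind of analysis the abstract describes, quantifying power at the flicker frequency relative to baseline. All signals, names, and parameters here are hypothetical, and the synthetic "LFP" stands in for real recordings.

```python
import numpy as np
from scipy.signal import welch

def power_at(signal, fs, freq, nperseg=1024):
    """Welch PSD evaluated at the frequency bin nearest `freq`."""
    f, pxx = welch(signal, fs=fs, nperseg=nperseg)
    return pxx[np.argmin(np.abs(f - freq))]

# Synthetic "LFP": white noise, with a 40 Hz component added during flicker.
fs, flicker_hz = 1000, 40
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
baseline = rng.standard_normal(t.size)
during_flicker = baseline + 0.5 * np.sin(2 * np.pi * flicker_hz * t)

# Fold change in power at the stimulation frequency, flicker vs baseline.
fold_change = power_at(during_flicker, fs, flicker_hz) / power_at(baseline, fs, flicker_hz)
```

A narrowband increase at the stimulation frequency (large `fold_change`), absent at neighboring frequencies, is the usual signature of flicker modulation.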
Affiliation(s)
- Lou T Blanpain
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Neuroscience Graduate Program, Graduate Division of Biological and Biomedical Sciences, Emory University, Atlanta, GA, USA
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
- Eric R Cole
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
- Emily Chen
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- James K Park
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Michael Y Walelign
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Robert E Gross
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Departments of Neurosurgery and Neuroscience and Cell Biology, Rutgers Robert Wood Johnson Medical School, New Brunswick and New Jersey Medical School, Newark, NJ, USA
- Brian T Cabaniss
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Jon T Willie
- Departments of Neurological Surgery, Neurology, Psychiatry, and Biomedical Engineering, Washington University, St. Louis, MO, USA
- Annabelle C Singer
- Neuroscience Graduate Program, Graduate Division of Biological and Biomedical Sciences, Emory University, Atlanta, GA, USA
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
2. Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. [PMID: 38151889 DOI: 10.1111/ejn.16221]
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity that follows the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints shared by neural and speech dynamics, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
3. Silva Pereira S, Özer EE, Sebastian-Galles N. Complexity of STG signals and linguistic rhythm: a methodological study for EEG data. Cereb Cortex 2024; 34:bhad549. [PMID: 38236741 DOI: 10.1093/cercor/bhad549]
Abstract
The superior temporal and Heschl's gyri of the human brain play a fundamental role in speech processing. Neurons synchronize their activity to the amplitude envelope of the speech signal to extract acoustic and linguistic features, a process known as neural tracking/entrainment. Electroencephalography has been extensively used in language-related research due to its high temporal resolution and reduced cost, but it does not allow for precise source localization. Motivated by the lack of a unified methodology for the interpretation of source-reconstructed signals, we propose a method based on modularity and signal complexity. The procedure was tested on data from an experiment in which we investigated the impact of native language on tracking of linguistic rhythms in two groups: English natives and Spanish natives. In the experiment, we found no effect of native language but an effect of language rhythm. Here, we compare source-projected signals in the auditory areas of both hemispheres for the different conditions using nonparametric permutation tests, modularity, and a dynamical complexity measure. We found increasing values of complexity for decreased regularity in the stimuli, allowing us to conclude that languages with less complex rhythms are easier for the auditory cortex to track.
Affiliation(s)
- Silvana Silva Pereira
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Ege Ekin Özer
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Nuria Sebastian-Galles
- Center for Brain and Cognition, Department of Information and Communications Technologies, Universitat Pompeu Fabra, 08005 Barcelona, Spain
4. Cabral-Calderin Y, van Hinsberg D, Thielscher A, Henry MJ. Behavioral entrainment to rhythmic auditory stimulation can be modulated by tACS depending on the electrical stimulation field properties. eLife 2024; 12:RP87820. [PMID: 38289225 PMCID: PMC10945705 DOI: 10.7554/elife.87820]
Abstract
Synchronization between auditory stimuli and brain rhythms is beneficial for perception. In principle, auditory perception could be improved by facilitating neural entrainment to sounds via brain stimulation. However, high inter-individual variability of brain stimulation effects calls the usefulness of this approach into question. Here we aimed to modulate auditory perception by modulating neural entrainment to frequency-modulated (FM) sounds using transcranial alternating current stimulation (tACS). In addition, we evaluated the advantage of using tACS montages spatially optimized for each individual's anatomy and functional data compared to a standard montage applied to all participants. Across two different sessions, 2 Hz tACS was applied targeting auditory brain regions. Concurrent with tACS, participants listened to FM stimuli with a modulation rate matching the tACS frequency but with different phase lags relative to the tACS, and detected silent gaps embedded in the FM sound. We observed that tACS modulated the strength of behavioral entrainment to the FM sound in a phase-lag-specific manner. Both the optimal tACS lag and the magnitude of the tACS effect were variable across participants and sessions. Inter-individual variability of tACS effects was best explained by the strength of the inward electric field, which depends on the field focality and proximity to the target brain region. Although additional evidence is necessary, our results provide suggestive evidence that spatially optimizing the electrode montage could be a promising tool to reduce inter-individual variability of tACS effects. This work demonstrates that tACS effectively modulates entrainment to sounds depending on the optimality of the electric field. However, the unreliability of optimal tACS lags across sessions calls for caution when planning tACS experiments based on separate sessions.
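Not from the paper: a minimal sketch of how a phase-lag-specific effect like the one described can be quantified, by fitting a cosine to performance as a function of tACS phase lag. The function name and the accuracy values are hypothetical.

```python
import numpy as np

def fit_phase_lag_effect(phase_lags, performance):
    """Least-squares fit of performance ~ a*cos(lag) + b*sin(lag) + c.

    Returns (depth, optimal_lag): the amplitude of the phasic modulation
    and the lag (radians) at which the fitted cosine peaks.
    """
    design = np.column_stack([np.cos(phase_lags), np.sin(phase_lags),
                              np.ones_like(phase_lags)])
    a, b, c = np.linalg.lstsq(design, performance, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

# Hypothetical gap-detection accuracy at six equally spaced tACS phase lags,
# modulated around an optimal lag of 1.0 rad with depth 0.1.
lags = np.linspace(0, 2 * np.pi, 6, endpoint=False)
accuracy = 0.7 + 0.1 * np.cos(lags - 1.0)
depth, optimal_lag = fit_phase_lag_effect(lags, accuracy)
```

Fitting the cosine and sine terms jointly avoids a non-linear search over the optimal lag; the recovered depth is one common summary of the "strength of behavioral entrainment" per participant.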
Affiliation(s)
- Axel Thielscher
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Copenhagen, Denmark
- Section for Magnetic Resonance, DTU Health Tech, Technical University of Denmark, Copenhagen, Denmark
- Molly J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Toronto Metropolitan University, Toronto, Canada
5. Batterink LJ, Mulgrew J, Gibbings A. Rhythmically Modulating Neural Entrainment during Exposure to Regularities Influences Statistical Learning. J Cogn Neurosci 2024; 36:107-127. [PMID: 37902580 DOI: 10.1162/jocn_a_02079]
Abstract
The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants' neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a "congruent" rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. 
These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.
6. Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023; 19:e1011595. [PMID: 37934766 PMCID: PMC10655987 DOI: 10.1371/journal.pcbi.1011595]
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
Affiliation(s)
- Sevada Hovsepyan
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, France
7. Guilleminot P, Graef C, Butters E, Reichenbach T. Audiotactile Stimulation Can Improve Syllable Discrimination through Multisensory Integration in the Theta Frequency Band. J Cogn Neurosci 2023; 35:1760-1772. [PMID: 37677062 DOI: 10.1162/jocn_a_02045]
Abstract
Syllables are an essential building block of speech. We recently showed that tactile stimuli linked to the perceptual centers of syllables in continuous speech can improve speech comprehension. The rate of syllables lies in the theta frequency range, between 4 and 8 Hz, and the behavioral effect appears linked to multisensory integration in this frequency band. Because this neural activity may be oscillatory, we hypothesized that a behavioral effect may occur not only while but also after this activity has been evoked or entrained through vibrotactile pulses. Here, we show that audiotactile integration in the perception of single syllables, on both the neural and the behavioral level, is consistent with this hypothesis. We first stimulated participants with a series of vibrotactile pulses and then presented them with a syllable in background noise. We show that, at a delay of 200 msec after the last vibrotactile pulse, audiotactile integration still occurred in the theta band and syllable discrimination was enhanced. Moreover, the dependence of both the neural multisensory integration and the behavioral discrimination on the delay of the audio signal with respect to the last tactile pulse was consistent with a damped oscillation. In addition, the multisensory gain was correlated with the syllable discrimination score. Our results therefore demonstrate the role of the theta band in audiotactile integration and provide evidence that these effects may involve oscillatory activity that persists after the tactile stimulation.
8. L'Hermite S, Zoefel B. Rhythmic Entrainment Echoes in Auditory Perception. J Neurosci 2023; 43:6667-6678. [PMID: 37604689 PMCID: PMC10538584 DOI: 10.1523/jneurosci.0051-23.2023]
Abstract
Rhythmic entrainment echoes, rhythmic brain responses that outlast rhythmic stimulation, can demonstrate endogenous neural oscillations entrained by the stimulus rhythm. Here, we tested for such echoes in auditory perception. Participants detected a pure tone target, presented at a variable delay after another pure tone that was rhythmically modulated in amplitude. In four experiments involving 154 human (female and male) participants, we tested (1) which stimulus rate produces the strongest entrainment echo and, inspired by the tonotopic organization of the auditory system and findings in nonhuman primates, (2) whether these echoes are organized according to sound frequency. We found the strongest entrainment echoes after 6 and 8 Hz stimulation, respectively. The best moments for target detection (in phase or antiphase with the preceding rhythm) depended on whether the sound frequencies of the entraining and target stimuli matched, in line with a tonotopic organization. However, for the same experimental condition, best moments were not always consistent across experiments. We provide a speculative explanation for these differences that relies on the notion that neural entrainment and repetition-related adaptation might exert competing, opposite influences on perception. Together, we find rhythmic echoes in auditory perception that seem more complex than those predicted by initial theories of neural entrainment.
SIGNIFICANCE STATEMENT: Rhythmic entrainment echoes are rhythmic brain responses that are produced by a rhythmic stimulus and persist after its offset. These echoes play an important role in the identification of endogenous brain oscillations entrained by rhythmic stimulation, and give us insight into whether and how participants predict the timing of events. In four independent experiments involving >150 participants, we examined entrainment echoes in auditory perception. We found that entrainment echoes have a preferred rate (between 6 and 8 Hz) and seem to follow the tonotopic organization of the auditory system. Although speculative, we also found evidence that several, potentially competing processes might interact to produce such echoes, a notion that might need to be considered in future experimental design.
Affiliation(s)
- Benedikt Zoefel
- Université de Toulouse III-Paul Sabatier, 31062 Toulouse, France
- Centre National de la Recherche Scientifique, Centre de Recherche Cerveau et Cognition, Centre Hospitalier Universitaire Purpan, 31052 Toulouse, France
9. Su Z, Zhou X, Wang L. Dissociated amplitude and phase effects of alpha oscillation in a nested structure of rhythm- and sequence-based temporal expectation. Cereb Cortex 2023; 33:9741-9755. [PMID: 37415070 DOI: 10.1093/cercor/bhad240]
Abstract
The human brain can utilize various sources of information to form temporal expectations and optimize perceptual performance. Here we show dissociated amplitude and phase effects of prestimulus alpha oscillation in a nested structure of rhythm- and sequence-based expectation. A visual stream of rhythmic stimuli was presented in a fixed sequence such that their temporal positions could be predicted by the low-frequency rhythm, the sequence, or their combination. Behavioral modeling indicated that rhythmic and sequence information additively increased the accumulation speed of sensory evidence and lowered the threshold for perceptual discrimination of the expected stimulus. The electroencephalographic results showed that the alpha amplitude was modulated mainly by rhythmic information, with the amplitude fluctuating with the phase of the low-frequency rhythm (i.e., phase-amplitude coupling). The alpha phase, however, was affected by both rhythmic and sequence information. Importantly, rhythm-based expectation improved perceptual performance by decreasing the alpha amplitude, whereas sequence-based expectation did not further decrease the amplitude on top of rhythm-based expectation. Moreover, rhythm-based and sequence-based expectations collaboratively improved perceptual performance by biasing the alpha oscillation toward the optimal phase. Our findings suggest flexible coordination of multiscale brain oscillations in dealing with a complex environment.
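Not from the paper: a minimal sketch of the phase-amplitude coupling measure the abstract invokes, using the mean-vector-length approach on a synthetic signal. The band limits and signal parameters are hypothetical choices for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(x, fs, phase_band=(1, 4), amp_band=(6, 14)):
    """Mean-vector-length estimate of phase-amplitude coupling.

    Near 0 when the amplitude in `amp_band` is unrelated to the phase of
    the slower rhythm in `phase_band`; larger (up to 1) when coupled.
    """
    def bandpass(sig, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, sig)

    phase = np.angle(hilbert(bandpass(x, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, *amp_band)))
    # Normalize by the mean amplitude so the index is scale-free.
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic signal: 10 Hz amplitude driven by the phase of a 2 Hz rhythm.
fs = 500
t = np.arange(0, 20, 1 / fs)
slow = np.sin(2 * np.pi * 2 * t)
coupled = (1 + 0.8 * slow) * np.sin(2 * np.pi * 10 * t) + slow
uncoupled = np.sin(2 * np.pi * 10 * t) + slow

mi_coupled = modulation_index(coupled, fs)
mi_uncoupled = modulation_index(uncoupled, fs)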
Affiliation(s)
- Zhongbin Su
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
- Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai 200030, China
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Xiaolin Zhou
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Lihui Wang
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, China
- Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, Shanghai 201602, China
10. Ní Choisdealbha Á, Attaheri A, Rocha S, Mead N, Olawole-Scott H, Brusini P, Gibbon S, Boutris P, Grey C, Hines D, Williams I, Flanagan SA, Goswami U. Neural phase angle from two months when tracking speech and non-speech rhythm linked to language performance from 12 to 24 months. Brain Lang 2023; 243:105301. [PMID: 37399686 DOI: 10.1016/j.bandl.2023.105301]
Abstract
Atypical phase alignment of low-frequency neural oscillations to speech rhythm has been implicated in phonological deficits in developmental dyslexia. Atypical phase alignment to rhythm could thus also characterize infants at risk for later language difficulties. Here, we investigate phase-language mechanisms in a neurotypical infant sample. 122 two-, six- and nine-month-old infants were played speech and non-speech rhythms while EEG was recorded in a longitudinal design. The phase of infants' neural oscillations aligned consistently to the stimuli, with group-level convergence towards a common phase. Individual low-frequency phase alignment related to subsequent measures of language acquisition up to 24 months of age. Accordingly, individual differences in language acquisition are related to the phase alignment of cortical tracking of auditory and audiovisual rhythms in infancy, an automatic neural mechanism. Automatic rhythmic phase-language mechanisms could eventually serve as biomarkers, identifying at-risk infants and enabling intervention at the earliest stages of development.
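Not from the paper: a minimal sketch of how phase alignment across stimulus-locked trials is commonly quantified (inter-trial phase coherence at the stimulation rate). The epoch parameters and signals are hypothetical.

```python
import numpy as np

def inter_trial_coherence(epochs, fs, freq):
    """Phase alignment across trials at a single frequency.

    epochs: (n_trials, n_samples) array of stimulus-locked epochs.
    Returns a value in [0, 1]: near 1 when the phase at `freq` is
    consistent across trials, near 0 when phases are random.
    """
    n = epochs.shape[1]
    t = np.arange(n) / fs
    # Project each epoch onto a complex sinusoid and keep only the phase.
    coeffs = epochs @ np.exp(-2j * np.pi * freq * t)
    unit_phasors = coeffs / np.abs(coeffs)
    return np.abs(unit_phasors.mean())

# Synthetic epochs at a 6 Hz stimulation rate: phase-locked vs random phase.
rng = np.random.default_rng(0)
fs, freq, n_trials = 250, 6.0, 40
t = np.arange(0, 2, 1 / fs)
locked = np.stack([np.sin(2 * np.pi * freq * t + 0.3)
                   + 0.5 * rng.standard_normal(t.size) for _ in range(n_trials)])
jittered = np.stack([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + 0.5 * rng.standard_normal(t.size) for _ in range(n_trials)])

itc_locked = inter_trial_coherence(locked, fs, freq)
itc_jittered = inter_trial_coherence(jittered, fs, freq)
```

The angle of the mean phasor (rather than its magnitude) gives the group-level preferred phase, the "neural phase angle" the title refers to.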
Affiliation(s)
- Adam Attaheri
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Sinead Rocha
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Natasha Mead
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Helen Olawole-Scott
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Perrine Brusini
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Samuel Gibbon
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Panagiotis Boutris
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Christina Grey
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Declan Hines
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Isabel Williams
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Sheila A Flanagan
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
- Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, United Kingdom
11. Zioga I, Weissbart H, Lewis AG, Haegens S, Martin AE. Naturalistic Spoken Language Comprehension Is Supported by Alpha and Beta Oscillations. J Neurosci 2023; 43:3718-3732. [PMID: 37059462 PMCID: PMC10198453 DOI: 10.1523/jneurosci.1500-22.2023]
Abstract
Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional roles of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in the α band, while frontal and parietal higher-order language regions, as well as motor regions, are involved in the β band. Critically, α- and β-band dynamics seem to subserve language comprehension, tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α and β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes.
SIGNIFICANCE STATEMENT: It remains unclear whether the proposed functional roles of α and β oscillations in perceptual and motor function generalize to higher-level cognitive processes, such as spoken language comprehension. We found that syntactic features predict α and β power in language-related regions beyond low-level linguistic features when listening to naturalistic speech in a known language. We offer experimental findings that integrate a neuroscientific framework on the role of brain oscillations as "building blocks" with spoken language comprehension. This supports the view of a domain-general role of oscillations across the hierarchy of cognitive functions, from low-level sensory operations to abstract linguistic processes.
Affiliation(s)
- Ioanna Zioga
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Hugo Weissbart
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Ashley G Lewis
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
- Saskia Haegens
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Department of Psychiatry, Columbia University, New York, New York 10032
- Division of Systems Neuroscience, New York State Psychiatric Institute, New York, New York 10032
- Andrea E Martin
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, 6525 EN, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD, The Netherlands
12. Blanpain LT, Chen E, Park J, Walelign MY, Gross RE, Cabaniss BT, Willie JT, Singer AC. Multisensory Flicker Modulates Widespread Brain Networks and Reduces Interictal Epileptiform Discharges in Humans. medRxiv 2023:2023.03.14.23286691. [PMID: 36993248 PMCID: PMC10055448 DOI: 10.1101/2023.03.14.23286691]
Abstract
Modulating brain oscillations has strong therapeutic potential. However, commonly used non-invasive interventions such as transcranial magnetic or direct current stimulation have limited effects on deeper cortical structures like the medial temporal lobe. Repetitive audio-visual stimulation, or sensory flicker, modulates such structures in mice, but little is known about its effects in humans. Using high spatiotemporal resolution, we mapped and quantified the neurophysiological effects of sensory flicker in human subjects undergoing presurgical intracranial seizure monitoring. We found that flicker modulates both local field potentials and single-neuron activity in higher cognitive regions, including the medial temporal lobe and prefrontal cortex, and that local field potential modulation is likely mediated via resonance of involved circuits. We then assessed how flicker affects pathological neural activity, specifically interictal epileptiform discharges, a biomarker of epilepsy also implicated in Alzheimer's and other diseases. In our patient population with focal seizure onsets, sensory flicker decreased the rate of interictal epileptiform discharges. Our findings support the use of sensory flicker to modulate deeper cortical structures and mitigate pathological activity in humans.
Affiliation(s)
- Lou T. Blanpain
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Neuroscience Graduate Program, Graduate Division of Biological and Biomedical Sciences, Emory University, Atlanta, GA, USA
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
- Emily Chen
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- James Park
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Michael Y. Walelign
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Robert E. Gross
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, USA
- Brian T. Cabaniss
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Jon T. Willie
- Department of Neurosurgery, Washington University, St. Louis, MO, USA
- Annabelle C. Singer
- Neuroscience Graduate Program, Graduate Division of Biological and Biomedical Sciences, Emory University, Atlanta, GA, USA
- Coulter Department of Biomedical Engineering, Georgia Institute of Technology & Emory University, Atlanta, GA, USA
13
Murphy E. ROSE: A Neurocomputational Architecture for Syntax. ARXIV 2023:arXiv:2303.08877v1. [PMID: 36994166 PMCID: PMC10055479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
A comprehensive model of natural language processing in the brain must accommodate four components: representations, operations, structures and encoding. It further requires a principled account of how these different components mechanistically and causally relate to one another. While previous models have isolated regions of interest for structure-building and lexical access, and have utilized specific neural recording measures to expose possible signatures of syntax, many gaps remain with respect to bridging distinct scales of analysis that map onto these four components. By expanding existing accounts of how neural oscillations can index various linguistic processes, this article proposes a neurocomputational architecture for syntax, termed the ROSE model (Representation, Operation, Structure, Encoding). Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level. Elementary computations (O) that transform these units into manipulable objects accessible to subsequent structure-building levels are coded via high frequency broadband γ activity. Low frequency synchronization and cross-frequency coupling code for recursive categorial inferences (S). Distinct forms of low frequency coupling and phase-amplitude coupling (δ-θ coupling via pSTS-IFG; θ-γ coupling via IFG to conceptual hubs in lateral and ventral temporal cortex) then encode these structures onto distinct workspaces (E). Causally connecting R to O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling; connecting S to E is a system of frontotemporal traveling oscillations; connecting E back to lower levels is low-frequency phase resetting of spike-LFP coupling. This compositional neural code has important implications for algorithmic accounts, since it makes concrete predictions for the appropriate level of study for psycholinguistic parsing models.
ROSE is reliant on neurophysiologically plausible mechanisms, is supported at all four levels by a range of recent empirical research, and provides an anatomically precise and falsifiable grounding for the basic property of natural language syntax: hierarchical, recursive structure-building.
Affiliation(s)
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, UTHealth, Houston, TX, USA
- Texas Institute for Restorative Neurotechnologies, UTHealth, Houston, TX, USA
14
Wischnewski M, Alekseichuk I, Opitz A. Neurocognitive, physiological, and biophysical effects of transcranial alternating current stimulation. Trends Cogn Sci 2023; 27:189-205. [PMID: 36543610 PMCID: PMC9852081 DOI: 10.1016/j.tics.2022.11.013] [Citation(s) in RCA: 32] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 11/23/2022] [Accepted: 11/26/2022] [Indexed: 12/23/2022]
Abstract
Transcranial alternating current stimulation (tACS) can modulate human neural activity and behavior. Accordingly, tACS has vast potential for cognitive research and brain disorder therapies. The stimulation generates oscillating electric fields in the brain that can bias neural spike timing, causing changes in local neural oscillatory power and cross-frequency and cross-area coherence. tACS affects cognitive performance by modulating underlying single or nested brain rhythms, local or distal synchronization, and metabolic activity. Clinically, stimulation tailored to abnormal neural oscillations shows promising results in alleviating psychiatric and neurological symptoms. We summarize findings on tACS mechanisms, its use in cognitive applications, and novel developments for personalized stimulation.
Affiliation(s)
- Miles Wischnewski
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, USA
- Ivan Alekseichuk
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, USA
- Alexander Opitz
- Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN, USA
15
Pérez A, Davis MH. Speaking and listening to inter-brain relationships. Cortex 2023; 159:54-63. [PMID: 36608420 DOI: 10.1016/j.cortex.2022.12.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/11/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022]
Abstract
Studies of inter-brain relationships thrive, yet the scientific community has raised many reservations regarding the scope and interpretation of these phenomena. It is thus essential to establish common ground on methodological and conceptual definitions related to this topic and to open debate about any remaining points of uncertainty. Here we offer insights to improve the conceptual clarity and empirical standards of social neuroscience studies of inter-personal interaction using hyperscanning, with a particular focus on verbal communication.
Affiliation(s)
- Alejandro Pérez
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK.
- Matthew H Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
16
Zoefel B, Gilbert RA, Davis MH. Intelligibility improves perception of timing changes in speech. PLoS One 2023; 18:e0279024. [PMID: 36634109 PMCID: PMC9836318 DOI: 10.1371/journal.pone.0279024] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 11/28/2022] [Indexed: 01/13/2023] Open
Abstract
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or if it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically-matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally-rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (for naïve listeners) and, subsequent to training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.
Affiliation(s)
- Benedikt Zoefel
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Centre National de la Recherche Scientifique (CNRS), Centre de Recherche Cerveau et Cognition (CerCo), Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Rebecca A. Gilbert
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Matthew H. Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
17
Forward entrainment: Psychophysics, neural correlates, and function. Psychon Bull Rev 2022:10.3758/s13423-022-02220-y. [DOI: 10.3758/s13423-022-02220-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/13/2022] [Indexed: 12/04/2022]
Abstract
We define forward entrainment as that part of behavioral or neural entrainment that outlasts the entraining stimulus. In this review, we examine conditions under which one may optimally observe forward entrainment. In Part 1, we review and evaluate studies that have observed forward entrainment using a variety of psychophysical methods (detection, discrimination, and reaction times), different target stimuli (tones, noise, and gaps), different entraining sequences (sinusoidal, rectangular, or sawtooth waveforms), a variety of physiological measures (MEG, EEG, ECoG, CSD), in different modalities (auditory and visual), across modalities (audiovisual and auditory-motor), and in different species. In Part 2, we describe those experimental conditions that place constraints on the magnitude of forward entrainment, including an evaluation of the effects of signal uncertainty and attention, temporal envelope complexity, signal-to-noise ratio (SNR), rhythmic rate, prior experience, and intersubject variability. In Part 3 we theorize on potential mechanisms and propose that forward entrainment may instantiate a dynamic auditory afterimage that lasts a fraction of a second to minimize prediction error in signal processing.
18
Encoding speech rate in challenging listening conditions: White noise and reverberation. Atten Percept Psychophys 2022; 84:2303-2318. [PMID: 35996057 PMCID: PMC9481500 DOI: 10.3758/s13414-022-02554-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/08/2022] [Indexed: 11/08/2022]
Abstract
Temporal contrasts in speech are perceived relative to the speech rate of the surrounding context. That is, following a fast context sentence, listeners interpret a given target sound as longer than following a slow context, and vice versa. This rate effect, often referred to as "rate-dependent speech perception," has been suggested to be the result of a robust, low-level perceptual process, typically examined in quiet laboratory settings. However, speech perception often occurs in more challenging listening conditions. Therefore, we asked whether rate-dependent perception would be (partially) compromised by signal degradation relative to a clear listening condition. Specifically, we tested effects of white noise and reverberation, with the latter specifically distorting temporal information. We hypothesized that signal degradation would reduce the precision of encoding the speech rate in the context and thereby reduce the rate effect relative to a clear context. This prediction was borne out for both types of degradation in Experiment 1, where the context sentences but not the subsequent target words were degraded. However, in Experiment 2, which compared rate effects when contexts and targets were coherent in terms of signal quality, no reduction of the rate effect was found. This suggests that, when confronted with coherently degraded signals, listeners adapt to challenging listening situations, eliminating the difference between rate-dependent perception in clear and degraded conditions. Overall, the present study contributes towards understanding the consequences of different types of listening environments on the functioning of low-level perceptual processes that listeners use during speech perception.
19
Otero M, Lea-Carnall C, Prado P, Escobar MJ, El-Deredy W. Modelling neural entrainment and its persistence: influence of frequency of stimulation and phase at the stimulus offset. Biomed Phys Eng Express 2022; 8. [PMID: 35320793 DOI: 10.1088/2057-1976/ac605a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 03/23/2022] [Indexed: 11/12/2022]
Abstract
Neural entrainment, the synchronization of brain oscillations to the frequency of an external stimulus, is a key mechanism that shapes perceptual and cognitive processes. Objective. Using simulations, we investigated the dynamics of neural entrainment, particularly the period following the end of the stimulation, since the persistence (reverberation) of neural entrainment may condition future sensory representations based on predictions about stimulus rhythmicity. Methods. Neural entrainment was assessed using a modified Jansen-Rit neural mass model (NMM) of coupled cortical columns, in which the spectral features of the output resembled those of the electroencephalogram (EEG). We evaluated spectro-temporal features of entrainment as a function of the stimulation frequency, the resonant frequency of the neural populations comprising the NMM, and the coupling strength between cortical columns. Furthermore, we tested whether the persistence of entrainment depended on the phase of the EEG-like oscillation at the time the stimulus ended. Main Results. The entrainment of the column that received the stimulation was maximal when the frequency of the entrainer was within a narrow range around the resonant frequency of the column. When this occurred, entrainment persisted for several cycles after the stimulus terminated, and the propagation of the entrainment to other columns was facilitated. Propagation also depended on the resonant frequency of the second column and the coupling strength between columns.
The duration of the persistence of entrainment depended on the phase of the neural oscillation at the time the entrainer terminated, such that falling phases (from π/2 to 3π/2 in a sine function) led to longer persistence than rising phases (from 0 to π/2 and from 3π/2 to 2π). Significance. The study bridges models of neural oscillations and empirical electrophysiology, providing insights into the mechanisms underlying neural entrainment and the use of rhythmic sensory stimulation for neuroenhancement.
Affiliation(s)
- Mónica Otero
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile; Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile
- Caroline Lea-Carnall
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Pavel Prado
- Latin-American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Chile
- Wael El-Deredy
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile; Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile; Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
20
Keitel C, Ruzzoli M, Dugué L, Busch NA, Benwell CSY. Rhythms in cognition: The evidence revisited. Eur J Neurosci 2022; 55:2991-3009. [PMID: 35696729 PMCID: PMC9544967 DOI: 10.1111/ejn.15740] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2022] [Revised: 05/27/2022] [Accepted: 05/30/2022] [Indexed: 12/27/2022]
Affiliation(s)
- Manuela Ruzzoli
- Basque Center on Cognition, Brain and Language (BCBL), Donostia/San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Laura Dugué
- Université Paris Cité, INCC UMR 8002, CNRS, Paris, France; Institut Universitaire de France (IUF), Paris, France
- Niko A Busch
- Institute for Psychology, University of Münster, Münster, Germany
21
Distracting Linguistic Information Impairs Neural Tracking of Attended Speech. Curr Res Neurobiol 2022; 3:100043. [DOI: 10.1016/j.crneur.2022.100043] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 04/27/2022] [Accepted: 05/24/2022] [Indexed: 11/20/2022] Open
22
Mandke K, Flanagan S, Macfarlane A, Gabrielczyk F, Wilson A, Gross J, Goswami U. Neural sampling of the speech signal at different timescales by children with dyslexia. Neuroimage 2022; 253:119077. [PMID: 35278708 DOI: 10.1016/j.neuroimage.2022.119077] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 01/15/2022] [Accepted: 03/07/2022] [Indexed: 01/08/2023] Open
Abstract
Phonological difficulties characterize individuals with dyslexia across languages. Currently debated is whether these difficulties arise from atypical neural sampling of (or entrainment to) auditory information in speech at slow rates (<10 Hz, related to speech rhythm), faster rates, or neither. MEG studies with adults suggest that atypical sampling in dyslexia affects faster modulations in the neurophysiological gamma band, related to phoneme-level representation. However, dyslexic adults have had years of reduced experience in converting graphemes to phonemes, which could itself cause atypical gamma-band activity. The present study was designed to identify specific linguistic timescales at which English children with dyslexia may show atypical entrainment. Adopting a developmental focus, we hypothesized that children with dyslexia would show atypical entrainment to the prosodic and syllable-level information that is exaggerated in infant-directed speech and carried primarily by amplitude modulations <10 Hz. MEG was recorded in a naturalistic story-listening paradigm. The modulation bands related to different types of linguistic information were derived directly from the speech materials, and lagged coherence at multiple temporal rates spanning 0.9-40 Hz was computed. Group differences in lagged speech-brain coherence between children with dyslexia and control children were most marked in neurophysiological bands corresponding to stress and syllable-level information (<5 Hz in our materials), and phoneme-level information (12-40 Hz). Functional connectivity analyses showed network differences between groups in both hemispheres, with dyslexic children showing significantly reduced global network efficiency. Global network efficiency correlated with dyslexic children's oral language development and with control children's reading development. 
These developmental data suggest that dyslexia is characterized by atypical neural sampling of auditory information at slower rates. They also throw new light on the nature of the gamma band temporal sampling differences reported in MEG dyslexia studies with adults.
Affiliation(s)
- Kanad Mandke
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom.
- Sheila Flanagan
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Annabel Macfarlane
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Fiona Gabrielczyk
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Angela Wilson
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Usha Goswami
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
23
Wass S, Perapoch Amadó M, Ives J. How the ghost learns to drive the machine? Oscillatory entrainment to our early social or physical environment and the emergence of volitional control. Dev Cogn Neurosci 2022; 54:101102. [PMID: 35398645 PMCID: PMC9010552 DOI: 10.1016/j.dcn.2022.101102] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 02/18/2022] [Accepted: 03/23/2022] [Indexed: 01/08/2023] Open
Abstract
An individual’s early interactions with their environment are thought to be largely passive; through the early years, the capacity for volitional control develops. Here, we consider: how is the emergence of volitional control characterised by changes in the entrainment observed between internal activity (behaviour, physiology and brain activity) and the sights and sounds in our everyday environment (physical and social)? We differentiate between contingent responsiveness (entrainment driven by evoked responses to external events) and oscillatory entrainment (driven by internal oscillators becoming temporally aligned with external oscillators). We conclude that ample evidence suggests that children show behavioural, physiological and neural entrainment to their physical and social environment, irrespective of volitional attention control; however, evidence for oscillatory entrainment beyond contingent responsiveness is currently lacking. Evidence for how oscillatory entrainment changes over developmental time is also lacking. Finally, we suggest a mechanism through which periodic environmental rhythms might facilitate both sensory processing and the development of volitional control even in the absence of oscillatory entrainment.
24
Fiene M, Radecke JO, Misselhorn J, Sengelmann M, Herrmann CS, Schneider TR, Schwab BC, Engel AK. tACS phase-specifically biases brightness perception of flickering light. Brain Stimul 2022; 15:244-253. [PMID: 34990876 DOI: 10.1016/j.brs.2022.01.001] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 12/08/2021] [Accepted: 01/01/2022] [Indexed: 11/02/2022] Open
Abstract
BACKGROUND Visual phenomena like brightness illusions impressively demonstrate the highly constructive nature of perception. In addition to physical illumination, the subjective experience of brightness is related to temporal neural dynamics in visual cortex. OBJECTIVE Here, we asked whether biasing the temporal pattern of neural excitability in visual cortex by transcranial alternating current stimulation (tACS) modulates brightness perception of concurrent rhythmic visual stimuli. METHODS Participants performed a brightness discrimination task of two flickering lights, one of which was targeted by same-frequency electrical stimulation at varying phase shifts. tACS was applied with an occipital and a periorbital active control montage, based on simulations of electrical currents using finite element head models. RESULTS Experimental results reveal that flicker brightness perception is modulated dependent on the phase shift between sensory and electrical stimulation, solely under occipital tACS. Phase-specific modulatory effects by tACS were dependent on flicker-evoked neural phase stability at the tACS-targeted frequency, recorded prior to electrical stimulation. Further, the optimal timing of tACS application leading to enhanced brightness perception was correlated with the neural phase delay of the cortical flicker response. CONCLUSIONS Our results corroborate the role of temporally coordinated neural activity in visual cortex for brightness perception of rhythmic visual input in humans. Phase-specific behavioral modulations by tACS emphasize its efficacy to transfer perceptually relevant temporal information to the cortex. These findings provide an important step towards understanding the basis of visual perception and further confirm electrical stimulation as a tool for advancing controlled modulations of neural activity and related behavior.
Affiliation(s)
- Marina Fiene
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany.
- Jan-Ole Radecke
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Malte Sengelmann
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Christoph S Herrmann
- Experimental Psychology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University Oldenburg, Oldenburg, 26129, Germany; Research Center Neurosensory Science, Carl von Ossietzky University Oldenburg, Oldenburg, 26129, Germany
- Till R Schneider
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Bettina C Schwab
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
25
Gnanateja GN, Devaraju DS, Heyne M, Quique YM, Sitek KR, Tardif MC, Tessmer R, Dial HR. On the Role of Neural Oscillations Across Timescales in Speech and Music Processing. Front Comput Neurosci 2022; 16:872093. [PMID: 35814348 PMCID: PMC9260496 DOI: 10.3389/fncom.2022.872093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Accepted: 05/24/2022] [Indexed: 11/25/2022] Open
Abstract
This mini review is aimed at a clinician-scientist seeking to understand the role of oscillations in neural processing and their functional relevance in speech and music perception. We present an overview of neural oscillations, methods used to study them, and their functional relevance with respect to music processing, aging, hearing loss, and disorders affecting speech and language. We first review the oscillatory frequency bands and their associations with speech and music processing. Next we describe commonly used metrics for quantifying neural oscillations, briefly touching upon the still-debated mechanisms underpinning oscillatory alignment. Following this, we highlight key findings from research on neural oscillations in speech and music perception, as well as contributions of this work to our understanding of disordered perception in clinical populations. Finally, we conclude with a look toward the future of oscillatory research in speech and music perception, including promising methods and potential avenues for future work. We note that the intention of this mini review is not to systematically review all literature on cortical tracking of speech and music. Rather, we seek to provide the clinician-scientist with foundational information that can be used to evaluate and design research studies targeting the functional role of oscillations in speech and music processing in typical and clinical populations.
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Dhatri S Devaraju
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Matthias Heyne
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Yina M Quique
- Center for Education in Health Sciences, Northwestern University, Chicago, IL, United States
| | - Kevin R Sitek
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Monique C Tardif
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
| | - Rachel Tessmer
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
| | - Heather R Dial
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States.,Department of Communication Sciences and Disorders, University of Houston, Houston, TX, United States
| |
Collapse
26
Liu B, Yan X, Chen X, Wang Y, Gao X. tACS facilitates flickering driving by boosting steady-state visual evoked potentials. J Neural Eng 2021; 18. [PMID: 34962233 DOI: 10.1088/1741-2552/ac3ef3]
Abstract
Objective. Interest in transcranial alternating current stimulation (tACS) has grown steadily since its inception nearly a decade ago. The use of tACS to modulate brain state is an active area of research, and it has been demonstrated effective in various neuropsychological and clinical domains. In the visual domain, much effort has been dedicated to brain rhythms and rhythmic stimulation, i.e., tACS. However, less is known about the interplay between rhythmic stimulation and visual stimulation. Approach. Here, we used the steady-state visual evoked potential (SSVEP), induced by flickering driving as a widely used technique for frequency tagging, to investigate the aftereffect of tACS in healthy human subjects. Seven blocks of 64-channel electroencephalogram were recorded before and after the administration of 20 min of 10 Hz tACS, while subjects performed several blocks of SSVEP tasks. We characterized the physiological properties of the tACS aftereffect by comparing and validating the temporal, spatial, spatiotemporal, and signal-to-noise ratio (SNR) patterns between and within blocks under real tACS and sham tACS. Main results. Our results revealed that tACS significantly boosted the 10 Hz SSVEP. Moreover, the aftereffect on the SSVEP diminished with time and lasted up to 5 min. Significance. Our results demonstrate the feasibility of facilitating flickering driving by external rhythmic stimulation and open a new possibility of steering brain state in a given direction by noninvasive transcranial brain stimulation.
Affiliation(s)
- Bingchuan Liu, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China
- Xinyi Yan, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China
- Xiaogang Chen, Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin, People's Republic of China
- Yijun Wang, State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, People's Republic of China
- Xiaorong Gao, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, People's Republic of China
27
Neural oscillations track natural but not artificial fast speech: Novel insights from speech-brain coupling using MEG. Neuroimage 2021; 244:118577. [PMID: 34525395 DOI: 10.1016/j.neuroimage.2021.118577]
Abstract
Neural oscillations contribute to speech parsing via cortical tracking of hierarchical linguistic structures, including syllable rate. While the properties of neural entrainment have been largely probed with speech stimuli at either normal or artificially accelerated rates, the important case of natural fast speech has been largely overlooked. Using magnetoencephalography, we found that listening to naturally-produced speech was associated with cortico-acoustic coupling, both at normal (∼6 syllables/s) and fast (∼9 syllables/s) rates, with a corresponding shift in peak entrainment frequency. Interestingly, time-compressed sentences did not yield such coupling, despite being generated at the same rate as the natural fast sentences. Additionally, neural activity in right motor cortex exhibited stronger tuning to natural fast rather than to artificially accelerated speech, and showed evidence for stronger phase-coupling with left temporo-parietal and motor areas. These findings are highly relevant for our understanding of the role played by auditory and motor cortex oscillations in the perception of naturally produced speech.
28
van Bree S, Alamia A, Zoefel B. Oscillation or not - Why we can and need to know (commentary on Doelling and Assaneo, 2021). Eur J Neurosci 2021; 55:201-204. [PMID: 34817088 DOI: 10.1111/ejn.15542]
Affiliation(s)
- Sander van Bree, Centre for Cognitive Neuroimaging, Institute for Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Centre for Human Brain Health, University of Birmingham, Birmingham, UK
- Andrea Alamia, Centre de Recherche Cerveau et Cognition, CNRS, Toulouse, France; CerCo, Université Toulouse III Paul Sabatier, Toulouse, France
- Benedikt Zoefel, Centre de Recherche Cerveau et Cognition, CNRS, Toulouse, France; CerCo, Université Toulouse III Paul Sabatier, Toulouse, France
29
Fiveash A, Bedoin N, Gordon RL, Tillmann B. Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology 2021; 35:771-791. [PMID: 34435803 PMCID: PMC8595576 DOI: 10.1037/neu0000766]
Abstract
OBJECTIVE Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated. METHOD In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders. RESULTS We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: Precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech. CONCLUSION The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits which can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework.
Affiliation(s)
- Anna Fiveash, Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France
- Nathalie Bedoin, Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France; University of Lyon 2, CNRS, UMR5596, Lyon, F-69000, France
- Reyna L. Gordon, Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee; Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee
- Barbara Tillmann, Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France; University Lyon 1, Lyon, France
30
Geffen A, Bland N, Sale MV. Effects of Slow Oscillatory Transcranial Alternating Current Stimulation on Motor Cortical Excitability Assessed by Transcranial Magnetic Stimulation. Front Hum Neurosci 2021; 15:726604. [PMID: 34588969 PMCID: PMC8473706 DOI: 10.3389/fnhum.2021.726604]
Abstract
Converging evidence suggests that transcranial alternating current stimulation (tACS) may entrain endogenous neural oscillations to match the frequency and phase of the exogenously applied current and this entrainment may outlast the stimulation (although only for a few oscillatory cycles following the cessation of stimulation). However, observing entrainment in the electroencephalograph (EEG) during stimulation is extremely difficult due to the presence of complex tACS artifacts. The present study assessed entrainment to slow oscillatory (SO) tACS by measuring motor cortical excitability across different oscillatory phases during (i.e., online) and outlasting (i.e., offline) stimulation. 30 healthy participants received 60 trials of intermittent SO tACS (0.75 Hz; 16 s on/off interleaved) at an intensity of 2 mA peak-to-peak. Motor cortical excitability was assessed using transcranial magnetic stimulation (TMS) of the hand region of the primary motor cortex (M1HAND) to induce motor evoked potentials (MEPs) in the contralateral thumb. MEPs were acquired at four time-points within each trial – early online, late online, early offline, and late offline – as well as at the start and end of the overall stimulation period (to probe longer-lasting aftereffects of tACS). A significant increase in MEP amplitude was observed from pre- to post-tACS (paired-sample t-test; t29 = 2.64, P = 0.013, d = 0.48) and from the first to the last tACS block (t29 = −2.93, P = 0.02, d = 0.54). However, no phase-dependent modulation of excitability was observed. Therefore, although SO tACS had a facilitatory effect on motor cortical excitability that outlasted stimulation, there was no evidence supporting entrainment of endogenous oscillations as the underlying mechanism.
Affiliation(s)
- Asher Geffen, School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, Australia
- Nicholas Bland, School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, Australia; Queensland Brain Institute, The University of Queensland, St Lucia, QLD, Australia; School of Human Movement and Nutrition Sciences, The University of Queensland, St Lucia, QLD, Australia
- Martin V Sale, School of Health and Rehabilitation Sciences, The University of Queensland, St Lucia, QLD, Australia; Queensland Brain Institute, The University of Queensland, St Lucia, QLD, Australia
31
Pesnot Lerousseau J, Trébuchon A, Morillon B, Schön D. Frequency Selectivity of Persistent Cortical Oscillatory Responses to Auditory Rhythmic Stimulation. J Neurosci 2021; 41:7991-8006. [PMID: 34301825 PMCID: PMC8460151 DOI: 10.1523/jneurosci.0213-21.2021]
Abstract
Cortical oscillations have been proposed to play a functional role in speech and music perception, attentional selection, and working memory, via the mechanism of neural entrainment. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. We tested the existence of this phenomenon by studying cortical neural oscillations during and after presentation of melodic stimuli in a passive perception paradigm. Melodies were composed of ∼60 and ∼80 Hz tones embedded in a 2.5 Hz stream. Using intracranial and surface recordings in male and female humans, we reveal persistent oscillatory activity in the high-γ band in response to the tones throughout the cortex, well beyond auditory regions. By contrast, in response to the 2.5 Hz stream, no persistent activity in any frequency band was observed. We further show that our data are well captured by a model of damped harmonic oscillator and can be classified into three classes of neural dynamics, with distinct damping properties and eigenfrequencies. This model provides a mechanistic and quantitative explanation of the frequency selectivity of auditory neural entrainment in the human cortex.
SIGNIFICANCE STATEMENT It has been proposed that the functional role of cortical oscillations is subtended by a mechanism of entrainment, the synchronization in phase or amplitude of neural oscillations to a periodic stimulation. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. Using intracranial and surface recordings of humans passively listening to rhythmic auditory stimuli, we reveal consistent oscillatory responses throughout the cortex, with persistent activity of high-γ oscillations. On the contrary, neural oscillations do not outlast low-frequency acoustic dynamics. We interpret our results as reflecting harmonic oscillator properties, a model ubiquitous in physics but rarely used in neuroscience.
Affiliation(s)
- Agnès Trébuchon, Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France; APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille 13005, France
- Benjamin Morillon, Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Daniele Schön, Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
32
Doelling KB, Assaneo MF. Neural oscillations are a start toward understanding brain activity rather than the end. PLoS Biol 2021; 19:e3001234. [PMID: 33945528 PMCID: PMC8121326 DOI: 10.1371/journal.pbio.3001234]
Abstract
Does rhythmic neural activity merely echo the rhythmic features of the environment, or does it reflect a fundamental computational mechanism of the brain? This debate has generated a series of clever experimental studies attempting to find an answer. Here, we argue that the field has been obstructed by predictions of oscillators that are based more on intuition than on biophysical models compatible with the observed phenomena. What follows is a series of cautionary examples that serve as reminders to ground our hypotheses in well-developed theories of oscillatory behavior put forth by theoretical study of dynamical systems. Ultimately, our hope is that this exercise will push the field to concern itself less with the vague question of "oscillation or not" and more with specific biophysical models that can be readily tested.
Affiliation(s)
- M. Florencia Assaneo, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Santiago de Querétaro, México
33
Visual speech cues recruit neural oscillations to optimise auditory perception: Ways forward for research on human communication. Curr Res Neurobiol 2021; 2:100015. [PMID: 36246513 PMCID: PMC9559900 DOI: 10.1016/j.crneur.2021.100015]