1
Ahn E, Majumdar A, Lee T, Brang D. Evidence for a Causal Dissociation of the McGurk Effect and Congruent Audiovisual Speech Perception via TMS. bioRxiv 2023:2023.11.27.568892. PMID: 38077093; PMCID: PMC10705272; DOI: 10.1101/2023.11.27.568892.
Abstract
Congruent visual speech improves speech perception accuracy, particularly in noisy environments. Conversely, mismatched visual speech can alter what is heard, leading to an illusory percept known as the McGurk effect. This illusion has been widely used to study audiovisual speech integration, illustrating that auditory and visual cues are combined in the brain to generate a single coherent percept. While prior transcranial magnetic stimulation (TMS) and neuroimaging studies have identified the left posterior superior temporal sulcus (pSTS) as a causal region involved in the generation of the McGurk effect, it remains unclear whether this region is critical only for this illusion or also for the more general benefits of congruent visual speech (e.g., increased accuracy and faster reaction times). Indeed, recent correlative research suggests that the benefits of congruent visual speech and the McGurk effect reflect largely independent mechanisms. To better understand how these different features of audiovisual integration are causally generated by the left pSTS, we used single-pulse TMS to temporarily impair processing while subjects were presented with either incongruent (McGurk) or congruent audiovisual combinations. Consistent with past research, we observed that TMS to the left pSTS significantly reduced the strength of the McGurk effect. Importantly, however, left pSTS stimulation did not affect the positive benefits of congruent audiovisual speech (increased accuracy and faster reaction times), demonstrating a causal dissociation between the two processes. Our results are consistent with models proposing that the pSTS is but one of multiple critical areas supporting audiovisual speech interactions. Moreover, these data add to a growing body of evidence suggesting that the McGurk effect is an imperfect surrogate measure for more general and ecologically valid audiovisual speech behaviors.
Affiliation(s)
- EunSeon Ahn, Department of Psychology, University of Michigan, Ann Arbor, MI 48109
- Areti Majumdar, Department of Psychology, University of Michigan, Ann Arbor, MI 48109
- Taraz Lee, Department of Psychology, University of Michigan, Ann Arbor, MI 48109
- David Brang, Department of Psychology, University of Michigan, Ann Arbor, MI 48109
2
Gori M, Bertonati G, Campus C, Amadeo MB. Multisensory representations of space and time in sensory cortices. Hum Brain Mapp 2022;44:656-667. PMID: 36169038; PMCID: PMC9842891; DOI: 10.1002/hbm.26090.
Abstract
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to judge whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both tasks elicited an early ERP response (50-90 ms time window) in visual and auditory regions. However, this early ERP component was stronger in occipital areas during the spatial bisection task and in temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that it also selectively modulates cortical activity in response to multisensory stimuli.
Affiliation(s)
- Monica Gori, Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Giorgia Bertonati, Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), Università degli Studi di Genova, Genoa, Italy
- Claudio Campus, Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Maria Bianca Amadeo, Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
3
van Ackooij M, Paul JM, van der Zwaag W, van der Stoep N, Harvey BM. Auditory timing-tuned neural responses in the human auditory cortices. Neuroimage 2022;258:119366. PMID: 35690255; DOI: 10.1016/j.neuroimage.2022.119366.
Abstract
Perception of sub-second auditory event timing supports multisensory integration, and speech and music perception and production. Neural populations tuned for the timing (duration and rate) of visual events were recently described in several human extrastriate visual areas. Here we ask whether the brain also contains neural populations tuned for auditory event timing, and whether these are shared with visual timing. Using 7T fMRI, we measured responses to white noise bursts of changing duration and rate. We analyzed these responses using neural response models describing different parametric relationships between event timing and neural response amplitude. This revealed auditory timing-tuned responses in the primary auditory cortex, and auditory association areas of the belt, parabelt and premotor cortex. While these areas also showed tonotopic tuning for auditory pitch, pitch and timing preferences were not consistently correlated. Auditory timing-tuned response functions differed between these areas, though without clear hierarchical integration of responses. The similarity of auditory and visual timing tuned responses, together with the lack of overlap between the areas showing these responses for each modality, suggests modality-specific responses to event timing are computed similarly but from different sensory inputs, and then transformed differently to suit the needs of each modality.
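The parametric neural response models described here can be illustrated with a toy sketch (illustrative only, not the authors' analysis code): a hypothetical Gaussian tuning function relating event duration to response amplitude, fit to simulated data so that the preferred duration can be recovered.

```python
import numpy as np
from scipy.optimize import curve_fit

def timing_tuning(duration, pref, width, amp):
    """Gaussian tuning: response peaks at the preferred event duration."""
    return amp * np.exp(-0.5 * ((duration - pref) / width) ** 2)

# Simulated responses of one timing-tuned population (durations in seconds)
durations = np.linspace(0.05, 1.0, 20)
rng = np.random.default_rng(0)
responses = timing_tuning(durations, pref=0.4, width=0.15, amp=1.0)
responses += rng.normal(0, 0.02, durations.size)

# Recover the preferred duration from the noisy responses
params, _ = curve_fit(timing_tuning, durations, responses, p0=[0.5, 0.2, 1.0])
print(params[0])  # close to the true preferred duration of 0.4 s
```

In the actual study such models were fit per voxel to fMRI responses and compared against alternative parametric relationships; the sketch only shows the model-fitting idea.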
Affiliation(s)
- Martijn van Ackooij, Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Jacob M Paul, Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands; Melbourne School of Psychological Sciences, University of Melbourne, Redmond Barry Building, Parkville 3010, Victoria, Australia
- Nathan van der Stoep, Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
- Ben M Harvey, Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, Utrecht 3584 CS, the Netherlands
4
Michail G, Senkowski D, Holtkamp M, Wächter B, Keil J. Early beta oscillations in multisensory association areas underlie crossmodal performance enhancement. Neuroimage 2022;257:119307. PMID: 35577024; DOI: 10.1016/j.neuroimage.2022.119307.
Abstract
The combination of signals from different sensory modalities can enhance perception and facilitate behavioral responses. While previous research described crossmodal influences in a wide range of tasks, it remains unclear how such influences drive performance enhancements. In particular, the neural mechanisms underlying performance-relevant crossmodal influences, as well as the latency and spatial profile of such influences are not well understood. Here, we examined data from high-density electroencephalography (N = 30) recordings to characterize the oscillatory signatures of crossmodal facilitation of response speed, as manifested in the speeding of visual responses by concurrent task-irrelevant auditory information. Using a data-driven analysis approach, we found that individual gains in response speed correlated with larger beta power difference (13-25 Hz) between the audiovisual and the visual condition, starting within 80 ms after stimulus onset in the secondary visual cortex and in multisensory association areas in the parietal cortex. In addition, we examined data from electrocorticography (ECoG) recordings in four epileptic patients in a comparable paradigm. These ECoG data revealed reduced beta power in audiovisual compared with visual trials in the superior temporal gyrus (STG). Collectively, our data suggest that the crossmodal facilitation of response speed is associated with reduced early beta power in multisensory association and secondary visual areas. The reduced early beta power may reflect an auditory-driven feedback signal to improve visual processing through attentional gating. These findings improve our understanding of the neural mechanisms underlying crossmodal response speed facilitation and highlight the critical role of beta oscillations in mediating behaviorally relevant multisensory processing.
Affiliation(s)
- Georgios Michail, Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Daniel Senkowski, Department of Psychiatry and Psychotherapy, Charité Campus Mitte (CCM), Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, Berlin 10117, Germany
- Martin Holtkamp, Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany; Department of Neurology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charité Campus Mitte (CCM), Charitéplatz 1, Berlin 10117, Germany
- Bettina Wächter, Epilepsy-Center Berlin-Brandenburg, Institute for Diagnostics of Epilepsy, Berlin 10365, Germany
- Julian Keil, Biological Psychology, Christian-Albrechts-University Kiel, Kiel 24118, Germany
5
Noguchi Y. Individual differences in beta frequency correlate with the audio-visual fusion illusion. Psychophysiology 2022;59:e14041. PMID: 35274314; DOI: 10.1111/psyp.14041.
Abstract
Presenting one flash with two beeps induces a perception of two flashes (audio-visual [AV] fission illusion), while presenting two flashes with one beep induces a perception of one flash (fusion illusion). Although previous studies showed a relationship between the frequency of the alpha rhythm (alpha cycle) and one's susceptibility to the fission illusion, the relationship between neural oscillations and the fusion illusion is unknown. Using electroencephalography, here I investigated the frequency of oscillatory signals in the pre-stimulus period and found a significant correlation between the beta rhythm and the fusion illusion; specifically, participants with a lower beta frequency showed a larger fusion illusion. These data indicate two separate time windows of AV integration in the human brain, one defined by the alpha cycle (fission) and another defined by the beta cycle (fusion).
Affiliation(s)
- Yasuki Noguchi, Department of Psychology, Graduate School of Humanities, Kobe University, Kobe, Japan
6
Karthik G, Plass J, Beltz AM, Liu Z, Grabowecky M, Suzuki S, Stacey WC, Wasade VS, Towle VL, Tao JX, Wu S, Issa NP, Brang D. Visual speech differentially modulates beta, theta, and high gamma bands in auditory cortex. Eur J Neurosci 2021;54:7301-7317. PMID: 34587350; DOI: 10.1111/ejn.15482.
Abstract
Speech perception is a central component of social communication. Although principally an auditory process, accurate speech perception in everyday settings is supported by meaningful information extracted from visual cues. Visual speech modulates activity in cortical areas subserving auditory speech perception including the superior temporal gyrus (STG). However, it is unknown whether visual modulation of auditory processing is a unitary phenomenon or, rather, consists of multiple functionally distinct processes. To explore this question, we examined neural responses to audiovisual speech measured from intracranially implanted electrodes in 21 patients with epilepsy. We found that visual speech modulated auditory processes in the STG in multiple ways, eliciting temporally and spatially distinct patterns of activity that differed across frequency bands. In the theta band, visual speech suppressed the auditory response from before auditory speech onset to after auditory speech onset (-93 to 500 ms) most strongly in the posterior STG. In the beta band, suppression was seen in the anterior STG from -311 to -195 ms before auditory speech onset and in the middle STG from -195 to 235 ms after speech onset. In high gamma, visual speech enhanced the auditory response from -45 to 24 ms only in the posterior STG. We interpret the visual-induced changes prior to speech onset as reflecting crossmodal prediction of speech signals. In contrast, modulations after sound onset may reflect a decrease in sustained feedforward auditory activity. These results are consistent with models that posit multiple distinct mechanisms supporting audiovisual speech perception.
Affiliation(s)
- G Karthik, Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- John Plass, Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- Adriene M Beltz, Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- Zhongming Liu, Department of Biomedical Engineering and Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
- Marcia Grabowecky, Department of Psychology, Northwestern University, Evanston, Illinois, USA
- Satoru Suzuki, Department of Psychology, Northwestern University, Evanston, Illinois, USA
- William C Stacey, Department of Neurology and Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, USA
- Vibhangini S Wasade, Department of Neurology, Henry Ford Hospital, Detroit, Michigan, USA; Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Vernon L Towle, Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- James X Tao, Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- Shasha Wu, Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- Naoum P Issa, Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- David Brang, Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
7
O'Sullivan AE, Crosse MJ, Di Liberto GM, de Cheveigné A, Lalor EC. Neurophysiological Indices of Audiovisual Speech Processing Reveal a Hierarchy of Multisensory Integration Effects. J Neurosci 2021;41:4991-5003. PMID: 33824190; PMCID: PMC8197638; DOI: 10.1523/jneurosci.0906-20.2021.
Abstract
Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight into these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy. SIGNIFICANCE STATEMENT During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.
Affiliation(s)
- Aisling E O'Sullivan, School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland
- Michael J Crosse, X, The Moonshot Factory, Mountain View, CA, and Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Giovanni M Di Liberto, Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France
- Alain de Cheveigné, Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France; University College London Ear Institute, University College London, London WC1X 8EE, United Kingdom
- Edmund C Lalor, School of Engineering, Trinity Centre for Biomedical Engineering and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin 2, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, Rochester, New York 14627
8
Kumar VG, Dutta S, Talwar S, Roy D, Banerjee A. Biophysical mechanisms governing large-scale brain network dynamics underlying individual-specific variability of perception. Eur J Neurosci 2020;52:3746-3762. PMID: 32304122; DOI: 10.1111/ejn.14747.
Abstract
Perception necessitates interaction among neuronal ensembles, the dynamics of which can be conceptualized as the emergent behavior of coupled dynamical systems. Here, we propose a detailed, neurobiologically realistic model that captures the neural mechanisms of the inter-individual variability observed in cross-modal speech perception. From raw EEG signals recorded from human participants presented with speech vocalizations of McGurk-incongruent and congruent audio-visual (AV) stimuli, we computed the global coherence metric to capture the neural variability of large-scale networks. We identified that participants' McGurk susceptibility was negatively correlated with their alpha-band global coherence. The proposed biophysical model conceptualized the global coherence dynamics as emerging from coupling between interacting neural masses representing the sensory-specific auditory/visual areas and modality-nonspecific associative/integrative regions. Subsequently, we could predict that an extremely weak direct AV coupling results in a decrease in alpha-band global coherence, mimicking the cortical dynamics of participants with higher McGurk susceptibility. Source connectivity analysis also showed decreased connectivity between sensory-specific regions in participants more susceptible to the McGurk effect, providing an empirical validation of the prediction. Overall, our study provides an outline for linking variability in structural and functional connectivity metrics to variability of performance, which can be useful for several perception and action task paradigms.
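Global coherence is commonly defined as the fraction of cross-spectral power captured by the largest eigenvalue of the sensor cross-spectral density (CSD) matrix at a given frequency; when one rhythm dominates many sensors, the ratio approaches 1. A minimal sketch on simulated data (assuming this standard definition, not the authors' exact implementation):

```python
import numpy as np

def global_coherence(csd):
    """Fraction of cross-spectral power in the largest eigenvalue
    of the (Hermitian) sensor CSD matrix at one frequency."""
    eigvals = np.linalg.eigvalsh(csd)  # real, ascending
    return eigvals[-1] / eigvals.sum()

rng = np.random.default_rng(2)
n_trials, n_sensors = 200, 16

# Toy Fourier coefficients at one frequency: a rhythm shared across
# sensors (coupled) versus independent activity (uncoupled)
shared = rng.normal(size=n_trials) * np.exp(1j * 0.1 * rng.normal(size=n_trials))
coupled = shared[:, None] * np.ones(n_sensors) + 0.5 * (
    rng.normal(size=(n_trials, n_sensors))
    + 1j * rng.normal(size=(n_trials, n_sensors)))
uncoupled = (rng.normal(size=(n_trials, n_sensors))
             + 1j * rng.normal(size=(n_trials, n_sensors)))

csd_coupled = coupled.conj().T @ coupled / n_trials
csd_uncoupled = uncoupled.conj().T @ uncoupled / n_trials
print(global_coherence(csd_coupled) > global_coherence(csd_uncoupled))  # True
```

Computed per frequency, this yields the coherence spectrum whose alpha-band value was compared with McGurk susceptibility in the study.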
Affiliation(s)
- Vinodh G Kumar, Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon, India
- Shrey Dutta, Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon, India
- Siddharth Talwar, Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon, India
- Dipanjan Roy, Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon, India
- Arpan Banerjee, Cognitive Brain Dynamics Lab, National Brain Research Centre, Gurgaon, India
9
Improving audio-visual temporal perception through training enhances beta-band activity. Neuroimage 2020;206:116312. DOI: 10.1016/j.neuroimage.2019.116312.
10
Drijvers L, van der Plas M, Özyürek A, Jensen O. Native and non-native listeners show similar yet distinct oscillatory dynamics when using gestures to access speech in noise. Neuroimage 2019;194:55-67. DOI: 10.1016/j.neuroimage.2019.03.032.
11
Bobilev AM, Hudgens-Haney ME, Hamm JP, Oliver WT, McDowell JE, Lauderdale JD, Clementz BA. Early and late auditory information processing show opposing deviations in aniridia. Brain Res 2019;1720:146307. PMID: 31247203; DOI: 10.1016/j.brainres.2019.146307.
Abstract
Aniridia is a congenital disorder, predominantly caused by heterozygous mutations of the PAX6 gene. While ocular defects have been extensively characterized in this population, brain-related anatomical and functional abnormalities are emerging as a prominent feature of the disorder. Individuals with aniridia frequently exhibit auditory processing deficits despite normal audiograms. While previous studies have reported hypoplasia of the anterior commissure and corpus callosum in some of these individuals, the neurophysiological basis of these impairments remains unexplored. This study provides direct assessment of neural activity related to auditory processing in aniridia. Participants were presented with tones designed to elicit an auditory steady-state response (ASSR) at 22 Hz, 40 Hz, and 84 Hz, and infrequent broadband target tones to maintain attention during electroencephalography (EEG) recording. Persons with aniridia showed increased early cortical responses (P50 AEP) in response to all tones, and increased high-frequency oscillatory entrainment (84 Hz ASSR). In contrast, this group showed a decreased cortical integration response (P300 AEP to target tones) and reduced neural entrainment to cortical beta-band stimuli (22 Hz ASSR). Collectively, our results suggest that subcortical and early cortical auditory processing is augmented in aniridia, while functional cortical integration of auditory information is deficient in this population.
Affiliation(s)
- Anastasia M Bobilev, Department of Cellular Biology, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States; Department of Psychiatry, UT Southwestern Medical Center, Dallas, TX, United States
- Matthew E Hudgens-Haney, Department of Psychiatry, UT Southwestern Medical Center, Dallas, TX, United States; Departments of Psychology and Neuroscience, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States
- Jordan P Hamm, Departments of Psychology and Neuroscience, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States; Neuroscience Institute, Georgia State University, Petit Science Center, Atlanta, GA, United States; Center for Neuroinflammation and Cardiometabolic Diseases, Georgia State University, Petit Science Center, Atlanta, GA, United States
- William T Oliver, Departments of Psychology and Neuroscience, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States
- Jennifer E McDowell, Departments of Psychology and Neuroscience, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States
- James D Lauderdale, Department of Cellular Biology, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States
- Brett A Clementz, Departments of Psychology and Neuroscience, Bio-Imaging Research Center, University of Georgia, Athens, GA, United States
12
Long-range functional coupling predicts performance: Oscillatory EEG networks in multisensory processing. Neuroimage 2019;196:114-125. PMID: 30959196; DOI: 10.1016/j.neuroimage.2019.04.001.
Abstract
The integration of sensory signals from different modalities requires flexible interaction of remote brain areas. One candidate mechanism to establish communication in the brain is transient synchronization of oscillatory neural signals. Although there is abundant evidence for the involvement of cortical oscillations in brain functions based on the analysis of local power, assessment of the phase dynamics among spatially distributed neuronal populations and their relevance for behavior is still sparse. In the present study, we investigated the interaction between remote brain areas by analyzing high-density electroencephalogram (EEG) data obtained from human participants engaged in a visuotactile pattern matching task. We deployed an approach for purely data-driven clustering of neuronal phase coupling in source space, which allowed imaging of large-scale functional networks in space, time and frequency without defining a priori constraints. Based on the phase coupling results, we further explored how brain areas interacted across frequencies by computing phase-amplitude coupling. Several networks of interacting sources were identified with our approach, synchronizing their activity within and across the theta (∼5 Hz), alpha (∼10 Hz), and beta (∼20 Hz) frequency bands and involving multiple brain areas that have previously been associated with attention and motor control. We demonstrate the functional relevance of these networks by showing that phase delays - in contrast to spectral power - were predictive of task performance. The data-driven analysis approach employed in the current study allowed an unbiased examination of functional brain networks based on EEG source level connectivity data. Showcased for multisensory processing, our results provide evidence that large-scale neuronal coupling is vital to long-range communication in the human brain and relevant for the behavioral outcome in a cognitive task.
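The cross-frequency (phase-amplitude coupling) analysis mentioned above can be sketched with the mean-vector-length measure on a constructed signal in which a slow (theta) phase modulates a faster (beta) amplitude. This is an illustration of the general method, not the study's code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(3)

# Toy signal: 20 Hz amplitude is modulated by 5 Hz (theta) phase
theta = np.sin(2 * np.pi * 5 * t)
beta = (1 + theta) * np.sin(2 * np.pi * 20 * t)
signal = theta + 0.5 * beta + 0.2 * rng.normal(size=t.size)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Mean-vector-length coupling: beta amplitude weighted by theta phase
phase = np.angle(hilbert(bandpass(signal, 4, 6)))
amp = np.abs(hilbert(bandpass(signal, 15, 25)))
pac = np.abs(np.mean(amp * np.exp(1j * phase)))
print(pac > 0.1)  # strong coupling in this constructed signal
```

A null distribution (e.g., from phase-shuffled surrogates) would normally be used to assess significance of the coupling value.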
13
Spüler M, López-Larraz E, Ramos-Murguialday A. On the design of EEG-based movement decoders for completely paralyzed stroke patients. J Neuroeng Rehabil 2018;15:110. PMID: 30458838; PMCID: PMC6247630; DOI: 10.1186/s12984-018-0438-z.
Abstract
Background: Brain machine interface (BMI) technology has demonstrated its efficacy for the rehabilitation of paralyzed chronic stroke patients. The critical component in BMI training is the associative connection (contingency) between the intention and the feedback provided. However, the relationship between the BMI design and its performance in stroke patients is still an open question.
Methods: In this study we compare different methodologies for designing a BMI for rehabilitation and evaluate their effects on movement-intention decoding performance. We analyze the data of 37 chronic stroke patients who underwent 4 weeks of BMI intervention with different types of association between their brain activity and the proprioceptive feedback. We simulate the pseudo-online performance that a BMI would have under different conditions, varying: (1) the cortical source of activity (i.e., ipsilesional, contralesional, bihemispheric); (2) the type of spatial filter applied; (3) the EEG frequency band; (4) the type of classifier. We also evaluated the use of residual EMG activity to decode movement intentions.
Results: We observed a significant influence of the different BMI designs on the obtained performances. Our results revealed that using bihemispheric beta activity with a common average reference and an adaptive support vector machine led to the best classification results. Furthermore, the decoding results based on brain activity were significantly higher than those based on muscle activity.
Conclusions: This paper underscores the relevance of the different parameters used to decode movement from EEG in severely paralyzed stroke patients. We demonstrated significant differences in performance between the designs, which supports further research to elucidate whether the approaches leading to higher accuracies also induce greater motor recovery in paralyzed stroke patients.
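The best-performing design (common average reference plus an SVM on beta-band features) can be sketched in miniature on simulated band-power features. The class labels, effect sizes, and channel layout below are hypothetical, and for brevity the common average reference (CAR) is applied directly to the feature matrix, whereas in practice it is applied to the raw EEG before features are computed:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_trials, n_channels = 200, 16

# Toy trials: beta-band log-power per channel; movement intention
# (class 1) desynchronizes (lowers power) over a few "motor" channels
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, n_trials)
X[y == 1, :4] -= 1.0

# Common average reference: subtract the across-channel mean per trial
X_car = X - X.mean(axis=1, keepdims=True)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X_car, y, cv=5).mean()
print(round(acc, 2))  # well above the ~0.5 chance level
```

The study's adaptive SVM additionally updates the classifier over sessions; a static linear SVM is used here only to keep the sketch self-contained.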
Affiliation(s)
- Martin Spüler
- Department of Computer Engineering, Wilhelm-Schickard-Institute, University of Tübingen, Sand 14, 72076, Tübingen, Germany
- Eduardo López-Larraz
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Silcherstr. 5, 72076, Tübingen, Germany
- Ander Ramos-Murguialday
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Silcherstr. 5, 72076, Tübingen, Germany; TECNALIA, Health Technologies, Neural Engineering Laboratory, Mikeletegi Pasalekua 1, 20009, San Sebastian, Spain.
14
Drijvers L, Özyürek A, Jensen O. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech. J Cogn Neurosci 2018; 30:1086-1097. [DOI: 10.1162/jocn_a_01301] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
Affiliation(s)
- Asli Özyürek
- Radboud University
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
15
Romero-Rivas C, Vera-Constán F, Rodríguez-Cuadrado S, Puigcerver L, Fernández-Prieto I, Navarra J. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli. Neuropsychologia 2018; 117:67-74. [PMID: 29753020 DOI: 10.1016/j.neuropsychologia.2018.05.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Revised: 05/07/2018] [Accepted: 05/08/2018] [Indexed: 11/19/2022]
Abstract
Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well known, the mechanisms underlying its mental representation remain elusive. We provide evidence that previous experience with melodies is important for crossmodal interactions to emerge, and we examined the impact of these crossmodal interactions on other perceptual and attentional processes. Melodies including two tones of different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events.
Affiliation(s)
- Fátima Vera-Constán
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Departamento de Metodología y Psicología Básica, Universidad de Murcia, Murcia, Spain
- Sara Rodríguez-Cuadrado
- Department of Psychology, Edge Hill University, Ormskirk, UK; Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Laura Puigcerver
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Irune Fernández-Prieto
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Neuropsychology & Cognition Group, Department of Psychology and Research Institute for Health Sciences (iUNICS), University of Balearic Islands, Palma, Spain
- Jordi Navarra
- Fundació Sant Joan de Déu, Psychiatry and Psychology Service, Hospital Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain.
16
Brauchli C, Elmer S, Rogenmoser L, Burkhard A, Jäncke L. Top-down signal transmission and global hyperconnectivity in auditory-visual synesthesia: Evidence from a functional EEG resting-state study. Hum Brain Mapp 2017; 39:522-531. [PMID: 29086468 DOI: 10.1002/hbm.23861] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2017] [Revised: 10/12/2017] [Accepted: 10/15/2017] [Indexed: 11/10/2022] Open
Abstract
Auditory-visual (AV) synesthesia is a rare phenomenon in which an auditory stimulus induces a "concurrent" color sensation. Current neurophysiological models of synesthesia mainly hypothesize "hyperconnected" and "hyperactivated" brains, but differ in the directionality of signal transmission. The two-stage model proposes bottom-up signal transmission from inducer- to concurrent- to higher-order brain areas, whereas the disinhibited feedback model postulates top-down signal transmission from inducer- to higher-order- to concurrent brain areas. To test the different models of synesthesia, we estimated local current density, directed and undirected connectivity patterns in the intracranial space during 2 min of resting-state (RS) EEG in 11 AV synesthetes and 11 nonsynesthetes. AV synesthetes demonstrated increased parietal theta, alpha, and lower beta current density compared to nonsynesthetes. Furthermore, AV synesthetes were characterized by increased top-down signal transmission from the superior parietal lobe to the left color processing area V4 in the upper beta frequency band. Analyses of undirected connectivity revealed a global, synesthesia-specific hyperconnectivity in the alpha frequency band. The involvement of the superior parietal lobe even during rest is a strong indicator for its key role in AV synesthesia. By demonstrating top-down signal transmission in AV synesthetes, we provide direct support for the disinhibited feedback model of synesthesia. Finally, we suggest that synesthesia is a consequence of global hyperconnectivity. Hum Brain Mapp 39:522-531, 2018. © 2017 Wiley Periodicals, Inc.
Affiliation(s)
- Christian Brauchli
- Department of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Stefan Elmer
- Department of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Lars Rogenmoser
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC; Neuroimaging and Stroke Recovery Laboratory, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts
- Anja Burkhard
- Department of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Lutz Jäncke
- Department of Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, Zurich, Switzerland; University Research Priority Program (URPP) "Dynamic of Healthy Aging", University of Zurich, Zurich, Switzerland; Department of Special Education, King Abdulaziz University, Jeddah, Saudi Arabia
17
Genetic influences on functional connectivity associated with feedback processing and prediction error: Phase coupling of theta-band oscillations in twins. Int J Psychophysiol 2016; 115:133-141. [PMID: 28043892 DOI: 10.1016/j.ijpsycho.2016.12.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Revised: 11/25/2016] [Accepted: 12/28/2016] [Indexed: 02/04/2023]
Abstract
Detection and evaluation of the mismatch between the intended and actually obtained result of an action (reward prediction error) is an integral component of adaptive self-regulation of behavior. Extensive human and animal research has shown that evaluation of action outcome is supported by a distributed network of brain regions in which the anterior cingulate cortex (ACC) plays a central role, and that the integration of distant brain regions into a unified feedback-processing network is enabled by long-range phase synchronization of cortical oscillations in the theta band. Neural correlates of feedback processing are associated with individual differences in normal and abnormal behavior; however, little is known about the role of genetic factors in the cerebral mechanisms of feedback processing. Here we examined genetic influences on functional cortical connectivity related to prediction error in young adult twins (age 18, n=399) using event-related EEG phase coherence analysis in a monetary gambling task. To identify a prediction error-specific connectivity pattern, we compared responses to loss and gain feedback. Monetary loss produced a significant increase of theta-band synchronization between the frontal midline region and widespread areas of the scalp, particularly parietal areas, whereas gain resulted in increased synchrony primarily within the posterior regions. Genetic analyses showed significant heritability of frontoparietal theta phase synchronization (24 to 46%), suggesting that individual differences in large-scale network dynamics are under substantial genetic control. We conclude that theta-band synchronization of brain oscillations related to negative feedback reflects genetically transmitted differences in the neural mechanisms of feedback processing. To our knowledge, this is the first evidence for genetic influences on task-related functional brain connectivity assessed using direct real-time measures of neuronal synchronization.
18
Oscillatory brain activity during multisensory attention reflects activation, disinhibition, and cognitive control. Sci Rep 2016; 6:32775. [PMID: 27604647 PMCID: PMC5015072 DOI: 10.1038/srep32775] [Citation(s) in RCA: 56] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Accepted: 07/28/2016] [Indexed: 11/25/2022] Open
Abstract
In this study, we used a novel multisensory attention paradigm to investigate attention-modulated cortical oscillations over a wide range of frequencies using magnetoencephalography in healthy human participants. By employing a task that required the evaluation of the congruence of audio-visual stimuli, we promoted the formation of widespread cortical networks including early sensory cortices as well as regions associated with cognitive control. We found that attention led to increased high-frequency gamma-band activity and decreased lower-frequency theta-, alpha-, and beta-band activity in early sensory cortex areas. Moreover, alpha-band coherence decreased in visual cortex. Frontal cortex was found to exert attentional control through increased low-frequency phase synchronisation. Crossmodal congruence modulated beta-band coherence in mid-cingulate and superior temporal cortex. Together, these results offer an integrative view of the concurrence of oscillations at different frequencies during multisensory attention.
19
Benefit of interleaved practice of motor skills is associated with changes in functional brain network topology that differ between younger and older adults. Neurobiol Aging 2016; 42:189-98. [DOI: 10.1016/j.neurobiolaging.2016.03.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Revised: 12/11/2015] [Accepted: 03/13/2016] [Indexed: 11/20/2022]
20
Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions. J Neurosci 2015; 35:14195-204. [PMID: 26490860 DOI: 10.1523/jneurosci.1829-15.2015] [Citation(s) in RCA: 102] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually.
Significance Statement: Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information about what the speaker is saying, but also, importantly, when the speaker is saying it.
Studying how the brain uses this timing relationship to combine information from continuous auditory and visual speech has traditionally been methodologically difficult. Here we introduce a new approach for doing this using relatively inexpensive and noninvasive scalp recordings. Specifically, we show that the brain's representation of auditory speech is enhanced when the accompanying visual speech signal shares the same timing. Furthermore, we show that this enhancement is most pronounced at a time scale that corresponds to mean syllable length.
21
Balz J, Keil J, Roa Romero Y, Mekle R, Schubert F, Aydin S, Ittermann B, Gallinat J, Senkowski D. GABA concentration in superior temporal sulcus predicts gamma power and perception in the sound-induced flash illusion. Neuroimage 2016; 125:724-730. [DOI: 10.1016/j.neuroimage.2015.10.087] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2015] [Revised: 10/30/2015] [Accepted: 10/31/2015] [Indexed: 10/22/2022] Open
22
Keller CJ, Honey CJ, Mégevand P, Entz L, Ulbert I, Mehta AD. Mapping human brain networks with cortico-cortical evoked potentials. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130528. [PMID: 25180306 DOI: 10.1098/rstb.2013.0528] [Citation(s) in RCA: 120] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023] Open
Abstract
The cerebral cortex forms a sheet of neurons organized into a network of interconnected modules that is highly expanded in humans and presumably enables our most refined sensory and cognitive abilities. The links of this network form a fundamental aspect of its organization, and a great deal of research is focusing on understanding how information flows within and between different regions. However, an often-overlooked element of this connectivity regards a causal, hierarchical structure of regions, whereby certain nodes of the cortical network may exert greater influence over the others. While this is difficult to ascertain non-invasively, patients undergoing invasive electrode monitoring for epilepsy provide a unique window into this aspect of cortical organization. In this review, we highlight the potential for cortico-cortical evoked potential (CCEP) mapping to directly measure neuronal propagation across large-scale brain networks with spatio-temporal resolution that is superior to traditional neuroimaging methods. We first introduce effective connectivity and discuss the mechanisms underlying CCEP generation. Next, we highlight how CCEP mapping has begun to provide insight into the neural basis of non-invasive imaging signals. Finally, we present a novel approach to perturbing and measuring brain network function during cognitive processing. The direct measurement of CCEPs in response to electrical stimulation represents a potentially powerful clinical and basic science tool for probing the large-scale networks of the human cerebral cortex.
Affiliation(s)
- Corey J Keller
- Department of Neurosurgery, Hofstra North Shore LIJ School of Medicine, and Feinstein Institute for Medical Research, Manhasset, NY, USA; Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Christopher J Honey
- Department of Psychology, Princeton University, Princeton, NJ, USA; Department of Psychology, University of Toronto, Toronto, Ontario M5S 3G3, Canada
- Pierre Mégevand
- Department of Neurosurgery, Hofstra North Shore LIJ School of Medicine, and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Laszlo Entz
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Functional Neurosurgery, National Institute of Clinical Neuroscience, Budapest, Hungary; Peter Pazmany Catholic University, Faculty of Information Technology and Bionics, Budapest, Hungary
- Istvan Ulbert
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Peter Pazmany Catholic University, Faculty of Information Technology and Bionics, Budapest, Hungary
- Ashesh D Mehta
- Department of Neurosurgery, Hofstra North Shore LIJ School of Medicine, and Feinstein Institute for Medical Research, Manhasset, NY, USA
23
Dissociated roles of the inferior frontal gyrus and superior temporal sulcus in audiovisual processing: top-down and bottom-up mismatch detection. PLoS One 2015; 10:e0122580. [PMID: 25822912 PMCID: PMC4379108 DOI: 10.1371/journal.pone.0122580] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2014] [Accepted: 02/18/2015] [Indexed: 11/21/2022] Open
Abstract
Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous and incongruent with auditory information. However, whereas the integration of audiovisual inputs has been intensively researched, the neural basis of this selection of auditory information from audiovisual input is unknown. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA when the patient was presented with large audiovisual incongruence than with small incongruence, especially when the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients misperceived it due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.
24
Ramos-Murguialday A, Birbaumer N. Brain oscillatory signatures of motor tasks. J Neurophysiol 2015; 113:3663-82. [PMID: 25810484 DOI: 10.1152/jn.00467.2013] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2013] [Accepted: 03/12/2015] [Indexed: 11/22/2022] Open
Abstract
Noninvasive brain-computer-interfaces (BCI) coupled with prosthetic devices were recently introduced in the rehabilitation of chronic stroke and other disorders of the motor system. These BCI systems and motor rehabilitation in general involve several motor tasks for training. This study investigates the neurophysiological bases of an EEG-oscillation-driven BCI combined with a neuroprosthetic device to define the specific oscillatory signature of the BCI task. Controlling movements of a hand robotic orthosis with motor imagery of the same movement generates sensorimotor rhythm oscillation changes and involves three elements of tasks also used in stroke motor rehabilitation: passive and active movement, motor imagery, and motor intention. We recorded EEG while nine healthy participants performed five different motor tasks consisting of closing and opening of the hand as follows: 1) motor imagery without any external feedback and without overt hand movement, 2) motor imagery that moves the orthosis proportional to the produced brain oscillation change with online proprioceptive and visual feedback of the hand moving through a neuroprosthetic device (BCI condition), 3) passive and 4) active movement of the hand with feedback (seeing and feeling the hand moving), and 5) rest. During the BCI condition, participants received contingent online feedback of the decrease of power of the sensorimotor rhythm, which induced orthosis movement and therefore proprioceptive and visual information from the moving hand. We analyzed brain activity during the five conditions using time-frequency domain bootstrap-based statistical comparisons and Morlet transforms. Activity during rest was used as a reference. 
Significant contralateral and ipsilateral event-related desynchronization of sensorimotor rhythm was present during all motor tasks, largest in contralateral-postcentral, medio-central, and ipsilateral-precentral areas identifying the ipsilateral precentral cortex as an integral part of motor regulation. Changes in task-specific frequency power compared with rest were similar between motor tasks, and only significant differences in the time course and some narrow specific frequency bands were observed between motor tasks. We identified EEG features representing active and passive proprioception (with and without muscle contraction) and active intention and passive involvement (with and without voluntary effort) differentiating brain oscillations during motor tasks that could substantially support the design of novel motor BCI-based rehabilitation therapies. The BCI task induced significantly different brain activity compared with the other motor tasks, indicating neural processes unique to the use of body actuators control in a BCI context.
Affiliation(s)
- Ander Ramos-Murguialday
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany; TECNALIA, San Sebastian, Spain
- Niels Birbaumer
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany; Ospedale San Camillo, Istituto di Ricovero e Cura a Carattere Scientifico, Lido di Venezia, Italy
25
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers. Brain Cogn 2014; 91:35-44. [PMID: 25222292 DOI: 10.1016/j.bandc.2014.08.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Revised: 06/20/2014] [Accepted: 08/10/2014] [Indexed: 11/21/2022]
Abstract
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms.
26
ten Oever S, Schroeder CE, Poeppel D, van Atteveldt N, Zion-Golumbic E. Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia 2014; 63:43-50. [PMID: 25128589 DOI: 10.1016/j.neuropsychologia.2014.08.008] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2014] [Revised: 07/14/2014] [Accepted: 08/06/2014] [Indexed: 11/26/2022]
Abstract
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
Affiliation(s)
- Sanne ten Oever
- Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands
- Charles E Schroeder
- Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY 10032, USA; The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
| | - David Poeppel
- Department of Psychology, New York University, New York, NY 10003, USA
| | - Nienke van Atteveldt
- Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology and Education and Institute Learn, VU University Amsterdam, The Netherlands
| | - Elana Zion-Golumbic
- Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY 10032, USA; The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel.
| |
Collapse
|
27
|
Schepers IM, Yoshor D, Beauchamp MS. Electrocorticography Reveals Enhanced Visual Cortex Responses to Visual Speech. Cereb Cortex 2014;25:4103-10. PMID: 24904069. DOI: 10.1093/cercor/bhu127.
Abstract
Human speech contains both auditory and visual components, processed by their respective sensory cortices. We test a simple model in which task-relevant speech information is enhanced during cortical processing. Visual speech is most important when the auditory component is uninformative. Therefore, the model predicts that visual cortex responses should be enhanced to visual-only (V) speech compared with audiovisual (AV) speech. We recorded neuronal activity as patients perceived auditory-only (A), V, and AV speech. Visual cortex showed strong increases in high-gamma band power and strong decreases in alpha-band power to V and AV speech. Consistent with the model prediction, gamma-band increases and alpha-band decreases were stronger for V speech. The model predicts that the uninformative nature of the auditory component (not simply its absence) is the critical factor, a prediction we tested in a second experiment in which visual speech was paired with auditory white noise. As predicted, visual speech with auditory noise showed enhanced visual cortex responses relative to AV speech. An examination of the anatomical locus of the effects showed that all visual areas, including primary visual cortex, showed enhanced responses. Visual cortex responses to speech are enhanced under circumstances when visual information is most important for comprehension.
Affiliation(s)
- Inga M Schepers: Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA. Current address: Department of Psychology, Oldenburg University, Oldenburg, Germany
- Daniel Yoshor: Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Michael S Beauchamp: Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston, TX, USA

28
Crossmodal shaping of pain: a multisensory approach to nociception. Trends Cogn Sci 2014;18:319-27. PMID: 24751359. DOI: 10.1016/j.tics.2014.03.005.
Abstract
Noxious stimuli in our environment are often accompanied by input from other sensory modalities that can affect the processing of these stimuli and the perception of pain. Stimuli from these other modalities may distract us from pain and reduce its perceived strength. Alternatively, they can enhance the saliency of the painful input, leading to an increased pain experience. We discuss factors that influence the crossmodal shaping of pain and highlight the important role of innocuous stimuli in peripersonal space. We propose that frequency-specific modulations in local oscillatory power and in long-range functional connectivity may serve as neural mechanisms underlying the crossmodal shaping of pain. Finally, we provide an outlook on future directions and clinical implications of this promising research field.
29
Dynamic faces speed up the onset of auditory cortical spiking responses during vocal detection. Proc Natl Acad Sci U S A 2013;110:E4668-77. PMID: 24218574. DOI: 10.1073/pnas.1312518110.
Abstract
How low-level sensory areas help mediate the detection and discrimination advantages of integrating faces and voices is the subject of intense debate. To gain insights, we investigated the role of the auditory cortex in face/voice integration in macaque monkeys performing a vocal-detection task. Behaviorally, subjects were slower to detect vocalizations as the signal-to-noise ratio decreased, but seeing mouth movements associated with vocalizations sped up detection. Paralleling this behavioral relationship, as the signal-to-noise ratio decreased, the onset of spiking responses was delayed and magnitudes were decreased. However, when mouth motion accompanied the vocalization, these responses were uniformly faster. Conversely, and at odds with previous assumptions regarding the neural basis of face/voice integration, changes in the magnitude of neural responses were not related consistently to audiovisual behavior. Taken together, our data reveal that facilitation of spike latency is a means by which the auditory cortex partially mediates the reaction time benefits of combining faces and voices.
30
van Wassenhove V. Speech through ears and eyes: interfacing the senses with the supramodal brain. Front Psychol 2013;4:388. PMID: 23874309. PMCID: PMC3709159. DOI: 10.3389/fpsyg.2013.00388.
Abstract
The comprehension of auditory-visual (AV) speech integration has greatly benefited from recent advances in neurosciences and multisensory research. AV speech integration raises numerous questions relevant to the computational rules needed for binding information (within and across sensory modalities), the representational format in which speech information is encoded in the brain (e.g., auditory vs. articulatory), or how AV speech ultimately interfaces with the linguistic system. The following non-exhaustive review provides a set of empirical findings and theoretical questions that have fed the original proposal for predictive coding in AV speech processing. More recently, predictive coding has pervaded many fields of inquiry and positively reinforced the need to refine the notion of internal models in the brain together with their implications for the interpretation of neural activity recorded with various neuroimaging techniques. However, it is argued here that the strength of predictive coding frameworks resides in the specificity of the generative internal models, not in their generality; specifically, internal models come with a set of rules applied on particular representational formats themselves depending on the levels and the network structure at which predictive operations occur. As such, predictive coding in AV speech needs to specify the level(s) and the kinds of internal predictions that are necessary to account for the perceptual benefits or illusions observed in the field. Among those specifications, the actual content of a prediction comes first and foremost, followed by the representational granularity of that prediction in time. This review specifically presents a focused discussion on these issues.
Affiliation(s)
- Virginie van Wassenhove: Cognitive Neuroimaging Unit, Brain Dynamics, INSERM U992, Gif/Yvette, France; NeuroSpin Center, CEA, DSV/I2BM, Gif/Yvette, France; Cognitive Neuroimaging Unit, University Paris-Sud, Gif/Yvette, France

31
Schepers IM, Schneider TR, Hipp JF, Engel AK, Senkowski D. Noise alters beta-band activity in superior temporal cortex during audiovisual speech processing. Neuroimage 2012;70:101-12. PMID: 23274182. DOI: 10.1016/j.neuroimage.2012.11.066.
Abstract
Speech recognition is improved when complementary visual information is available, especially under noisy acoustic conditions. Functional neuroimaging studies have suggested that the superior temporal sulcus (STS) plays an important role for this improvement. The spectrotemporal dynamics underlying audiovisual speech processing in the STS, and how these dynamics are affected by auditory noise, are not well understood. Using electroencephalography, we investigated how auditory noise affects audiovisual speech processing in event-related potentials (ERPs) and oscillatory activity. Spoken syllables were presented in audiovisual (AV) and auditory only (A) trials at three different auditory noise levels (no, low, and high). Responses to A stimuli were subtracted from responses to AV stimuli, separately for each noise level, and these responses were subjected to the statistical analysis. Central ERPs differed between the no noise and the two noise conditions from 130 to 150 ms and 170 to 210 ms after auditory stimulus onset. Source localization using the local autoregressive average procedure revealed an involvement of the lateral temporal lobe, encompassing the superior and middle temporal gyrus. Neuronal activity in the beta-band (16 to 32 Hz) was suppressed at central channels around 100 to 400 ms after auditory stimulus onset in the averaged AV minus A signal over the three noise levels. This suppression was smaller in the high noise compared to the no noise and low noise condition, possibly reflecting disturbed recognition or altered processing of multisensory speech stimuli. Source analysis of the beta-band effect using linear beamforming demonstrated an involvement of the STS. Our study shows that auditory noise alters audiovisual speech processing in ERPs localized to lateral temporal lobe and provides evidence that beta-band activity in the STS plays a role for audiovisual speech processing under regular and noisy acoustic conditions.
Affiliation(s)
- Inga M Schepers: Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany

32
Chicharro D, Ledberg A. Framework to study dynamic dependencies in networks of interacting processes. Phys Rev E Stat Nonlin Soft Matter Phys 2012;86:041901. PMID: 23214609. DOI: 10.1103/physreve.86.041901.
Abstract
The analysis of dynamic dependencies in complex systems such as the brain helps to understand how emerging properties arise from interactions. Here we propose an information-theoretic framework to analyze the dynamic dependencies in multivariate time-evolving systems. This framework constitutes a fully multivariate extension and unification of previous approaches based on bivariate or conditional mutual information and Granger causality or transfer entropy. We define multi-information measures that allow us to study the global statistical structure of the system as a whole, the total dependence between subsystems, and the temporal statistical structure of each subsystem. We develop a stationary and a nonstationary formulation of the framework. We then examine different decompositions of these multi-information measures. The transfer entropy naturally appears as a term in some of these decompositions. This allows us to examine its properties not as an isolated measure of interdependence but in the context of the complete framework. More generally we use causal graphs to study the specificity and sensitivity of all the measures appearing in these decompositions to different sources of statistical dependence arising from the causal connections between the subsystems. We illustrate that there is no straightforward relation between the strength of specific connections and specific terms in the decompositions. Furthermore, causal and noncausal statistical dependencies are not separable. In particular, the transfer entropy can be nonmonotonic in dependence on the connectivity strength between subsystems and is also sensitive to internal changes of the subsystems, so it should not be interpreted as a measure of connectivity strength. Altogether, in comparison to an analysis based on single isolated measures of interdependence, this framework is more powerful to analyze emergent properties in multivariate systems and to characterize functionally relevant changes in the dynamics.
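The transfer entropy that appears as a term in these decompositions has a simple plug-in form for discrete data. As a concrete anchor, here is a minimal estimator for binary sequences with history length 1; this is an illustrative sketch of the standard transfer-entropy definition, not of the paper's full multivariate framework, and the function name and history length are choices of this example:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y (history length 1, bits).

    TE = sum over (y_next, y_past, x_past) of
         p(joint) * log2[ p(y_next | y_past, x_past) / p(y_next | y_past) ].
    Illustrative estimator for short symbolic sequences only.
    """
    # Joint symbols: (y_next, y_past, x_past)
    triples = list(zip(y[1:], y[:-1], x[:-1]))
    n = len(triples)
    count_xyz = Counter(triples)
    count_yz = Counter((yn, yp) for yn, yp, _ in triples)   # (y_next, y_past)
    count_z = Counter(yp for _, yp, _ in triples)           # y_past
    count_zx = Counter((yp, xp) for _, yp, xp in triples)   # (y_past, x_past)
    te = 0.0
    for (yn, yp, xp), c in count_xyz.items():
        p_joint = c / n
        num = c / count_zx[(yp, xp)]          # p(y_next | y_past, x_past)
        den = count_yz[(yn, yp)] / count_z[yp]  # p(y_next | y_past)
        te += p_joint * log2(num / den)
    return te
```

For a sequence y that simply copies x with a one-step lag, the estimate approaches 1 bit, while it is exactly zero for constant series; as the abstract cautions, such values should not be read as connectivity strengths.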
Affiliation(s)
- Daniel Chicharro: Center for Neuroscience and Cognitive Systems@UniTn, Istituto Italiano di Tecnologia, Via Bettini 31, 38068 Rovereto (TN), Italy

33
Chicharro D, Ledberg A. When two become one: the limits of causality analysis of brain dynamics. PLoS One 2012;7:e32466. PMID: 22438878. PMCID: PMC3306364. DOI: 10.1371/journal.pone.0032466.
Abstract
Biological systems often consist of multiple interacting subsystems, the brain being a prominent example. To understand the functions of such systems it is important to analyze if and how the subsystems interact and to describe the effect of these interactions. In this work we investigate the extent to which the cause-and-effect framework is applicable to such interacting subsystems. We base our work on a standard notion of causal effects and define a new concept called natural causal effect. This new concept takes into account that when studying interactions in biological systems, one is often not interested in the effect of perturbations that alter the dynamics. The interest is instead in how the causal connections participate in the generation of the observed natural dynamics. We identify the constraints on the structure of the causal connections that determine the existence of natural causal effects. In particular, we show that the influence of the causal connections on the natural dynamics of the system often cannot be analyzed in terms of the causal effect of one subsystem on another. Only when the causing subsystem is autonomous with respect to the rest can this interpretation be made. We note that subsystems in the brain are often bidirectionally connected, which means that interactions rarely should be quantified in terms of cause-and-effect. We furthermore introduce a framework for how natural causal effects can be characterized when they exist. Our work also has important consequences for the interpretation of other approaches commonly applied to study causality in the brain. Specifically, we discuss how the notion of natural causal effects can be combined with Granger causality and Dynamic Causal Modeling (DCM). Our results are generic and the concept of natural causal effects is relevant in all areas where the effects of interactions between subsystems are of interest.
Affiliation(s)
- Daniel Chicharro: Center of Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Anders Ledberg: Center of Brain and Cognition, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain

34
Abstract
Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.
35
Emotional facial expressions modulate pain-induced beta and gamma oscillations in sensorimotor cortex. J Neurosci 2011;31:14542-50. PMID: 21994371. DOI: 10.1523/jneurosci.6002-10.2011.
Abstract
Painful events in our environment are often accompanied by stimuli from other sensory modalities. These stimuli may influence the perception and processing of acute pain, in particular when they comprise emotional cues, like facial expressions of people surrounding us. In this whole-head magnetoencephalography (MEG) study, we examined the neuronal mechanisms underlying the influence of emotional (fearful, angry, or happy) compared to neutral facial expressions on the processing of pain in humans. Independent of their valence, subjective pain ratings for intracutaneous inputs were higher when pain stimuli were presented together with emotional facial expressions than when they were presented with a neutral facial expression. Source reconstruction using linear beamforming revealed pain-induced early (70-270 ms) oscillatory beta-band activity (BBA; 15-25 Hz) and gamma-band activity (GBA; 60-80 Hz) in the sensorimotor cortex. The presentation of faces with emotional expressions compared to faces with neutral expressions led to a stronger bilateral suppression of the pain-induced BBA, possibly reflecting enhanced response readiness of the sensorimotor system. Moreover, pain-induced GBA in the sensorimotor cortex was larger for faces expressing fear than for faces expressing anger, which might reflect the facilitation of avoidance-motivated behavior triggered by the concurrent presentation of faces with fearful expressions and painful stimuli. Thus, the presence of emotional cues, like facial expressions from people surrounding us, while receiving acute pain may facilitate neuronal processes involved in the preparation and execution of adequate protective motor responses.
36
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. DOI: 10.1201/9781439812174-10.
37
Ghazanfar A. Unity of the Senses for Primate Vocal Communication. Front Neurosci 2011. DOI: 10.1201/b11092-41.
38
Ghazanfar A. Unity of the Senses for Primate Vocal Communication. Front Neurosci 2011. DOI: 10.1201/9781439812174-41.
39
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. DOI: 10.1201/9781439812174-8.
40
Cappe C, Rouiller E, Barone P. Cortical and Thalamic Pathways for Multisensory and Sensorimotor Interplay. Front Neurosci 2011. DOI: 10.1201/9781439812174-4.
41
Engel A, Senkowski D, Schneider T. Multisensory Integration through Neural Coherence. Front Neurosci 2011. DOI: 10.1201/b11092-10.
42
Cappe C, Rouiller E, Barone P. Cortical and Thalamic Pathways for Multisensory and Sensorimotor Interplay. Front Neurosci 2011. DOI: 10.1201/b11092-4.
43
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. DOI: 10.1201/b11092-8.
44
45
Franciotti R, Brancucci A, Della Penna S, Onofrj M, Tommasi L. Neuromagnetic responses reveal the cortical timing of audiovisual synchrony. Neuroscience 2011;193:182-92. PMID: 21787844. DOI: 10.1016/j.neuroscience.2011.07.018.
Abstract
Multisensory processing involving visual and auditory inputs is modulated by their relative temporal offsets. In order to assess whether multisensory integration alters the activation timing of primary visual and auditory cortices as a function of the temporal offsets between auditory and visual stimuli, a task was designed in which subjects had to judge the perceptual simultaneity of the onset of visual stimuli and brief acoustic tones. These were presented repeatedly with three different inter-stimulus intervals that were chosen to meet three perceptual conditions: (1) physical synchrony perceived as synchrony by subjects (SYNC); (2) physical asynchrony perceived as asynchrony (ASYNC); (3) physical asynchrony perceived ambiguously (AMB, i.e. 50% perceived as synchrony, 50% as asynchrony). Magnetoencephalographic activity was recorded during crossmodal sessions and unimodal control sessions. The activation of primary visual and auditory cortices peaked at a longer latency for the crossmodal conditions as compared to the unimodal conditions. Moreover, the latency in the auditory cortex was longer in the SYNC than in the ASYNC condition, whereas in the visual cortex the latency in the AMB condition was longer than in the ASYNC condition. These findings suggest that multisensory processing already affects temporal dynamics in primary cortices, that such activity can differ regionally, and that it can be sensitive to the temporal offsets of multisensory inputs. In addition, in the AMB condition the conscious awareness of asynchrony might be associated with a later activation of the primary auditory cortex.
Affiliation(s)
- R Franciotti: Department of Neuroscience and Imaging, G. d'Annunzio University, Chieti, Italy

46
Joly O, Ramus F, Pressnitzer D, Vanduffel W, Orban GA. Interhemispheric Differences in Auditory Processing Revealed by fMRI in Awake Rhesus Monkeys. Cereb Cortex 2011;22:838-53. DOI: 10.1093/cercor/bhr150.
47
Arnal LH, Wyart V, Giraud AL. Transitions in neural oscillations reflect prediction errors generated in audiovisual speech. Nat Neurosci 2011;14:797-801. PMID: 21552273. DOI: 10.1038/nn.2810.
Abstract
According to the predictive coding theory, top-down predictions are conveyed by backward connections and prediction errors are propagated forward across the cortical hierarchy. Using MEG in humans, we show that violating multisensory predictions causes a fundamental and qualitative change in both the frequency and spatial distribution of cortical activity. When visual speech input correctly predicted auditory speech signals, a slow delta regime (3-4 Hz) developed in higher-order speech areas. In contrast, when auditory signals invalidated predictions inferred from vision, a low-beta (14-15 Hz) / high-gamma (60-80 Hz) coupling regime appeared locally in a multisensory area (area STS). This frequency shift in oscillatory responses scaled with the degree of audio-visual congruence and was accompanied by increased gamma activity in lower sensory regions. These findings are consistent with the notion that bottom-up prediction errors are communicated in predominantly high (gamma) frequency ranges, whereas top-down predictions are mediated by slower (beta) frequencies.
Affiliation(s)
- Luc H Arnal: Inserm U960, École Normale Supérieure, Paris, France

48
Weisz N, Lecaignard F, Müller N, Bertrand O. The modulatory influence of a predictive cue on the auditory steady-state response. Hum Brain Mapp 2011;33:1417-30. PMID: 21538704. DOI: 10.1002/hbm.21294.
Abstract
Whether attention exerts its impact already on primary sensory levels is still a matter of debate. Particularly in the auditory domain the amount of empirical evidence is scarce. Recently, noninvasive and invasive studies have shown attentional modulations of the auditory Steady-State Response (aSSR). This evoked oscillatory brain response is of importance to the issue, because the main generators have been shown to be located in primary auditory cortex. So far, the issue of whether the aSSR is sensitive to the predictive value of a cue preceding a target has not been investigated. Participants in the present study had to indicate on which ear the faster amplitude modulated (AM) sound of a compound sound (42 and 19 Hz AM frequencies) was presented. A preceding auditory cue was either informative (75%) or uninformative (50%) with regard to the location of the target. Behaviorally we could confirm that typical attentional modulations of performance were present in case of a preceding informative cue. With regard to the aSSR, we found differences between the informative and uninformative condition only when the cue/target combination was presented to the right ear. Source analysis indicated this difference to be generated by a reduced 42 Hz aSSR in right primary auditory cortex. Our data and previous data from others show a default tendency of "40 Hz" AM sounds to be processed by the right auditory cortex. We interpret our results as active suppression of this automatic response pattern, when attention needs to be allocated to right ear input.
Affiliation(s)
- Nathan Weisz: Department of Psychology, University of Konstanz, Konstanz, Germany

49
Verhoef BE, Vogels R, Janssen P. Synchronization between the end stages of the dorsal and the ventral visual stream. J Neurophysiol 2011;105:2030-42. PMID: 21325682. DOI: 10.1152/jn.00924.2010.
Abstract
The end stage areas of the ventral (IT) and the dorsal (AIP) visual streams encode the shape of disparity-defined three-dimensional (3D) surfaces. Recent anatomical tracer studies have found direct reciprocal connections between the 3D-shape selective areas in IT and AIP. Whether these anatomical connections are used to facilitate 3D-shape perception is still unknown. We simultaneously recorded multi-unit activity (MUA) and local field potentials in IT and AIP while monkeys discriminated between concave and convex 3D shapes and measured the degree to which the activity in IT and AIP synchronized during the task. We observed strong beta-band synchronization between IT and AIP preceding stimulus onset that decreased shortly after stimulus onset and became modulated by stereo-signal strength and stimulus contrast during the later portion of the stimulus period. The beta-coherence modulation was unrelated to task-difficulty, regionally specific, and dependent on the MUA selectivity of the pairs of sites under study. The beta-spike-field coherence in AIP predicted the upcoming choice of the monkey. Several convergent lines of evidence suggested AIP as the primary source of the AIP-IT synchronized activity. The synchronized beta activity seemed to occur during perceptual anticipation and when the system has stabilized to a particular perceptual state but not during active visual processing. Our findings demonstrate for the first time that synchronized activity exists between the end stages of the dorsal and ventral stream during 3D-shape discrimination.
Affiliation(s)
- Bram-Ernst Verhoef: Laboratorium voor Neurologie en Psychofysiologie, Campus Gasthuisberg, Leuven, Belgium

50
Cappe C, Murray MM, Barone P, Rouiller EM. Multisensory facilitation of behavior in monkeys: effects of stimulus intensity. J Cogn Neurosci 2010;22:2850-63. PMID: 20044892. DOI: 10.1162/jocn.2010.21423.
Abstract
Multisensory stimuli can improve performance, facilitating RTs on sensorimotor tasks. This benefit is referred to as the redundant signals effect (RSE) and can exceed predictions on the basis of probability summation, indicative of integrative processes. Although an RSE exceeding probability summation has been repeatedly observed in humans and nonprimate animals, there are scant and inconsistent data from nonhuman primates performing similar protocols. Rather, existing paradigms have instead focused on saccadic eye movements. Moreover, the extant results in monkeys leave unresolved how stimulus synchronicity and intensity impact performance. Two trained monkeys performed a simple detection task involving arm movements to auditory, visual, or synchronous auditory-visual multisensory pairs. RSEs in excess of predictions on the basis of probability summation were observed and thus forcibly follow from neural response interactions. Parametric variation of auditory stimulus intensity revealed that in both animals, RT facilitation was limited to situations where the auditory stimulus intensity was below or up to 20 dB above perceptual threshold, despite the visual stimulus always being suprathreshold. No RT facilitation or even behavioral costs were obtained with auditory intensities 30-40 dB above threshold. The present study demonstrates the feasibility and the suitability of behaving monkeys for investigating links between psychophysical and neurophysiologic instantiations of multisensory interactions.
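The probability-summation benchmark invoked here is standardly tested with Miller's race-model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). A minimal sketch of that test on empirical reaction-time distributions follows; the function name and time grid are choices of this example, not the study's analysis code:

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Maximum violation of Miller's race-model inequality.

    Compares the empirical CDF of audiovisual (redundant) RTs against
    the bound min(1, P(RT_A <= t) + P(RT_V <= t)) on each grid point.
    A positive return value indicates facilitation beyond probability
    summation, i.e. evidence for integrative processing.
    """
    def ecdf(rts, t):
        # P(RT <= t) at each grid point, estimated from raw RT samples
        return np.mean(np.asarray(rts)[:, None] <= np.asarray(t), axis=0)

    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return float(np.max(ecdf(rt_av, t_grid) - bound))
```

Feeding in per-condition RT samples and a grid spanning the fastest responses reproduces the logic of the comparison described in the abstract: only violations of the bound force an explanation in terms of neural response interactions.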
Affiliation(s)
- Céline Cappe: Neuropsychology and Neurorehabilitation Service and Radiology Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland