1. Carrier-frequency specific omission-related neural activity in ordered sound sequences is independent of omission-predictability. Eur J Neurosci 2024. PMID: 38711271. DOI: 10.1111/ejn.16381.
Abstract
Regularities in our surroundings lead to predictions about upcoming events. Previous research has shown that sounds omitted from otherwise regular tone sequences elicit frequency-specific neural activity related to the upcoming but omitted tone. We tested whether this neural response depends on the unpredictability of the omission. To that end, we recorded magnetoencephalography (MEG) data while participants listened to ordered or random tone sequences in which omissions occurred either in an ordered or in a random fashion. Multivariate pattern analysis shows that the frequency-specific neural pattern during omissions within ordered tone sequences occurs independently of the regularity of the omissions. These results suggest that auditory predictions based on sensory experience are not immediately updated by violations of those expectations.
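The time-resolved decoding logic behind such a multivariate pattern analysis can be sketched on simulated data. The nearest-centroid classifier, data shapes, and effect window below are illustrative assumptions, not the authors' pipeline:

```python
# Hedged sketch: time-resolved MVPA on simulated MEG-like data.
# Shapes, labels, and the injected pattern are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 80, 20, 30
y = rng.integers(0, 2, n_trials)                # two omission conditions
X = rng.standard_normal((n_trials, n_channels, n_times))
# Inject a condition-specific spatial pattern in a late time window.
pattern = rng.standard_normal(n_channels)
X[y == 1, :, 15:25] += pattern[:, None]

def nearest_centroid_cv(X_t, y, n_folds=5):
    """Mean held-out accuracy of a nearest-centroid classifier."""
    idx = np.arange(len(y))
    accs = []
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        c0 = X_t[train][y[train] == 0].mean(axis=0)
        c1 = X_t[train][y[train] == 1].mean(axis=0)
        d0 = np.linalg.norm(X_t[test] - c0, axis=1)
        d1 = np.linalg.norm(X_t[test] - c1, axis=1)
        accs.append(np.mean((d1 < d0) == (y[test] == 1)))
    return float(np.mean(accs))

# Decode the condition separately at every time point.
scores = np.array([nearest_centroid_cv(X[:, :, t], y) for t in range(n_times)])
```

Accuracy stays near chance outside the injected window and rises inside it, which is the signature such analyses look for.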

2. Eye movements track prioritized auditory features in selective attention to natural speech. Nat Commun 2024; 15:3692. PMID: 38693186. PMCID: PMC11063150. DOI: 10.1038/s41467-024-48126-2.
Abstract
Over the last decades, cognitive neuroscience has identified a distributed set of brain regions that are critical for attention. Strong anatomical overlap with brain regions critical for oculomotor processes suggests a joint network for attention and eye movements. However, the role of this shared network in complex, naturalistic environments remains understudied. Here, we investigated eye movements in relation to (un)attended sentences of natural speech. Combining simultaneously recorded eye tracking and magnetoencephalographic data with temporal response functions, we show that gaze tracks attended speech, a phenomenon we termed ocular speech tracking. Ocular speech tracking even differentiates a target from a distractor in a multi-speaker context and is further related to intelligibility. Moreover, we provide evidence for its contribution to neural differences in speech processing, emphasizing the necessity to consider oculomotor activity in future research and in the interpretation of neural differences in auditory cognition.
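A minimal way to quantify whether gaze follows a speech feature is a lagged cross-correlation between the two signals; the surrogate signals and the 20-ms lag below are illustrative assumptions (the study itself used temporal response functions):

```python
# Hedged sketch: "ocular speech tracking" as the lagged cross-correlation
# between a speech envelope and a gaze signal. Simulated data only.
import numpy as np

rng = np.random.default_rng(1)
fs = 100                                    # Hz
env = rng.standard_normal(fs * 60)          # surrogate speech envelope
lag_true = 2                                # gaze trails speech by 20 ms
gaze = 0.5 * np.roll(env, lag_true) + rng.standard_normal(env.size)

def lagged_corr(x, y, max_lag):
    """Pearson r between x(t) and y(t + lag); positive lag: y trails x."""
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([np.corrcoef(x, np.roll(y, -k))[0, 1] for k in lags])
    return lags, r

lags, r = lagged_corr(env, gaze, max_lag=10)
best = lags[np.argmax(r)]                   # lag of strongest tracking
```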

3. Individual prediction tendencies do not generalize across modalities. Psychophysiology 2024; 61:e14435. PMID: 37691098. PMCID: PMC10909557. DOI: 10.1111/psyp.14435.
Abstract
Predictive processing theories, which model the brain as a "prediction machine", explain a wide range of cognitive functions, including learning, perception and action. Furthermore, it is increasingly accepted that aberrant prediction tendencies play a crucial role in psychiatric disorders. Given this explanatory value for clinical psychiatry, prediction tendencies are often implicitly conceptualized as individual traits or as tendencies that generalize across situations. As this has not yet been shown explicitly, in the current study we quantify to what extent the individual tendency to anticipate sensory features of high probability generalizes across modalities. Using magnetoencephalography (MEG), we recorded brain activity while participants were presented with a sequence of four different (either visual or auditory) stimuli, which changed according to predefined transitional probabilities of two entropy levels: ordered vs. random. Our results show that, at the group level, under conditions of low entropy, stimulus features of high probability are preactivated in the auditory but not in the visual modality. Crucially, the magnitude of the individual tendency to predict sensory events does not appear to correlate between the two modalities. Furthermore, reliability statistics indicate poor internal consistency, suggesting that the measures from the different modalities are unlikely to reflect a single, common cognitive process. In sum, our findings suggest that quantification and interpretation of individual prediction tendencies cannot be generalized across modalities.
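The ordered vs. random entropy manipulation can be made concrete with first-order transition matrices; the specific probabilities below are illustrative assumptions:

```python
# Hedged sketch: four-stimulus sequences drawn from transition matrices
# at two entropy levels (ordered vs. random). Probabilities are
# illustrative, not the study's exact design.
import numpy as np

rng = np.random.default_rng(2)
n_stim = 4
# Ordered: each stimulus is followed by its "successor" with p = .75.
ordered = np.full((n_stim, n_stim), 0.25 / 3)
for i in range(n_stim):
    ordered[i, (i + 1) % n_stim] = 0.75
random_T = np.full((n_stim, n_stim), 0.25)     # uniform transitions

def entropy(T):
    """Mean transition entropy (bits) of a row-stochastic matrix."""
    return float(-(T * np.log2(T)).sum(axis=1).mean())

def simulate(T, n):
    """Draw a stimulus sequence of length n from transition matrix T."""
    seq = [0]
    for _ in range(n - 1):
        seq.append(int(rng.choice(n_stim, p=T[seq[-1]])))
    return np.array(seq)

low, high = entropy(ordered), entropy(random_T)   # low < high = 2 bits
```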

4. Neural Speech Tracking Highlights the Importance of Visual Speech in Multi-speaker Situations. J Cogn Neurosci 2024; 36:128-142. PMID: 37977156. DOI: 10.1162/jocn_a_02059.
Abstract
Visual speech plays a powerful role in facilitating auditory speech processing and became a topic of public interest with the widespread use of face masks during the COVID-19 pandemic. In a previous magnetoencephalography study, we showed that occluding the mouth area significantly impairs neural speech tracking. To rule out the possibility that this deterioration is due to degraded sound quality, in the present follow-up study we presented participants with audiovisual (AV) and audio-only (A) speech, and further independently manipulated the trials by adding a face mask and a distractor speaker. Our results clearly show that face masks affect speech tracking only in AV conditions, not in A conditions, indicating that face masks primarily impact speech processing by blocking visual speech rather than by acoustic degradation. We further characterize how the spectrogram, lip movements and lexical units are tracked at the sensor level, and find visual benefits for tracking the spectrogram, especially in the multi-speaker condition. Whereas lip movements show an additional improvement and visual benefit over tracking of the spectrogram only in clear-speech conditions, lexical units (phonemes and word onsets) show no visual enhancement at all. We hypothesize that in young, normal-hearing individuals, visual input is used less for specific feature extraction and acts more as a general resource for guiding attention.

5. Eavesdropping on Tinnitus Using MEG: Lessons Learned and Future Perspectives. J Assoc Res Otolaryngol 2023; 24:531-547. PMID: 38015287. PMCID: PMC10752863. DOI: 10.1007/s10162-023-00916-z.
Abstract
Tinnitus has been widely investigated in order to draw conclusions about the underlying causes and altered neural activity in various brain regions. Existing studies have based their work on different tinnitus frameworks, ranging from a more local perspective on the auditory cortex to the inclusion of broader networks and various approaches towards tinnitus perception and distress. Magnetoencephalography (MEG) provides a powerful tool for efficiently investigating tinnitus and aberrant neural activity both spatially and temporally. However, results are inconclusive, and studies are rarely mapped to theoretical frameworks. The purpose of this review was to firstly introduce MEG to interested researchers and secondly provide a synopsis of the current state. We divided recent tinnitus research in MEG into study designs using resting state measurements and studies implementing tone stimulation paradigms. The studies were categorized based on their theoretical foundation, and we outlined shortcomings as well as inconsistencies within the different approaches. Finally, we provided future perspectives on how to benefit more efficiently from the enormous potential of MEG. We suggested novel approaches from a theoretical, conceptual, and methodological point of view to allow future research to obtain a more comprehensive understanding of tinnitus and its underlying processes.

6. Neural speech tracking shifts from the syllabic to the modulation rate of speech as intelligibility decreases. Psychophysiology 2023; 60:e14362. PMID: 37350379. PMCID: PMC10909526. DOI: 10.1111/psyp.14362.
Abstract
The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
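The amplitude envelope referred to above is commonly operationalized as the magnitude of the analytic signal; here is a hedged sketch on a synthetic amplitude-modulated tone (the carrier/modulator construction is an illustrative assumption, not speech):

```python
# Hedged sketch: extracting an amplitude envelope with the Hilbert
# transform. A 4-Hz (syllable-rate-like) modulation is imposed on a
# 100-Hz carrier; real speech would replace this toy signal.
import numpy as np
from scipy.signal import hilbert

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
modulator = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)   # intensity modulation
signal = modulator * np.sin(2 * np.pi * 100 * t)  # modulated carrier

envelope = np.abs(hilbert(signal))                # analytic-signal magnitude
```

Away from the edges, the recovered envelope closely matches the imposed intensity modulation.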

7. Involuntary shifts of spatial attention contribute to distraction-Evidence from oscillatory alpha power and reaction time data. Psychophysiology 2023; 60:e14353. PMID: 37246813. DOI: 10.1111/psyp.14353.
Abstract
Imagine you are focusing on the traffic on a busy street to ride your bike safely when suddenly you hear the siren of an ambulance. This unexpected sound involuntarily captures your attention and interferes with ongoing performance. We tested whether this type of distraction involves a spatial shift of attention. We measured behavioral data and magnetoencephalographic alpha power during a cross-modal paradigm that combined an exogenous cueing task and a distraction task. In each trial, a task-irrelevant sound preceded a visual target (left or right). The sound was usually the same animal sound (i.e., standard sound). Rarely, it was replaced by an unexpected environmental sound (i.e., deviant sound). Fifty percent of the deviants occurred on the same side as the target, and 50% occurred on the opposite side. Participants responded to the location of the target. As expected, responses were slower to targets that followed a deviant compared to a standard. Crucially, this distraction effect was mitigated by the spatial relationship between the targets and the deviants: responses were faster when targets followed deviants on the same versus different side, indexing a spatial shift of attention. This was further corroborated by a posterior alpha power modulation that was higher in the hemisphere ipsilateral (vs. contralateral) to the location of the attention-capturing deviant. We suggest that this alpha power lateralization reflects a spatial attention bias. Overall, our data support the contention that spatial shifts of attention contribute to deviant distraction.
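The ipsilateral-versus-contralateral alpha power comparison can be summarized with a normalized lateralization index; the sensor signals and the 10-Hz rhythm below are simulated illustrations, not the study's data:

```python
# Hedged sketch: alpha (8-14 Hz) band power from two simulated posterior
# sensors and a lateralization index. A stronger 10-Hz rhythm is placed
# ipsilateral to the attention-capturing deviant, as the findings suggest.
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 250, 2.0
t = np.arange(0, dur, 1 / fs)

def alpha_power(x, fs):
    """Mean power in the 8-14 Hz band of the FFT power spectrum."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= 8) & (freqs <= 14)
    return float(pxx[band].mean())

ipsi = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
contra = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

p_ipsi, p_contra = alpha_power(ipsi, fs), alpha_power(contra, fs)
# Index in [-1, 1]; values > 0 mean higher ipsilateral alpha power.
ali = (p_ipsi - p_contra) / (p_ipsi + p_contra)
```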

8. Distinguishing Fine Structure and Summary Representation of Sound Textures from Neural Activity. eNeuro 2023; 10:ENEURO.0026-23.2023. PMID: 37775312. PMCID: PMC10576259. DOI: 10.1523/eneuro.0026-23.2023.
Abstract
The auditory system relies on both local and summary representations; acoustic local features exceeding system constraints are compacted into a set of summary statistics. Such compression is pivotal for sound-object recognition. Here, we assessed whether computations underlying local and statistical representations of sounds could be distinguished at the neural level. A computational auditory model was employed to extract auditory statistics from natural sound textures (e.g., fire, rain) and to generate synthetic exemplars in which local and statistical properties were controlled. Twenty-four human participants were passively exposed to auditory streams while electroencephalography (EEG) was recorded. Each stream could consist of short, medium, or long sounds to vary the amount of acoustic information. Short and long sounds were expected to engage local or summary statistics representations, respectively. The data revealed a clear dissociation: auditory-evoked responses based on local information were selectively greater in magnitude for short sounds than summary-based responses, and the opposite pattern emerged for longer sounds. Neural oscillations revealed that local features and summary statistics rely on neural activity occurring at different temporal scales, faster (beta) or slower (theta-alpha). These dissociations emerged automatically, without explicit engagement in a discrimination task. Overall, this study demonstrates that the auditory system has developed distinct coding mechanisms to discriminate changes in the acoustic environment based on fine structure and summary representations.

9. Network topology in brain tumor patients with and without structural epilepsy: a prospective MEG study. Ther Adv Neurol Disord 2023; 16:17562864231190298. PMID: 37655227. PMCID: PMC10467269. DOI: 10.1177/17562864231190298.
Abstract
Background: It has been proposed that network topology is altered in brain tumor patients. However, there is no consensus on the pattern of these changes, and evidence on potential drivers is lacking.
Objectives: We aimed to characterize neurooncological patients' network topology by analyzing glial brain tumors (GBTs) and brain metastases (BMs) with respect to the presence of structural epilepsy.
Methods: Network topology derived from resting-state magnetoencephalography was compared between (1) patients and controls, (2) GBTs and BMs, and (3) patients with (PSEs) and without structural epilepsy (PNSEs). Eligible patients were investigated from February 2019 to March 2021. We calculated whole-brain (WB) connectivity in six frequency bands and network topological parameters (node degree, average shortest path length, local clustering coefficient), and performed a stratification in which differences in power were identified. For data analysis, we used the FieldTrip and Brain Connectivity toolboxes for MATLAB, and in-house scripts.
Results: We included 41 patients (21 men) with a mean age of 60.1 years (range 23-82): GBTs (n = 23), BMs (n = 14), and other histologies (n = 4). Statistical analysis revealed a significantly decreased WB node degree in patients versus controls in every frequency range at the corrected level (p(1-30 Hz) = 0.002, p(γ) = 0.002, p(β) = 0.002, p(α) = 0.002, p(θ) = 0.024, p(δ) = 0.002). At the descriptive level, we found a significant increase in the WB local clustering coefficient (p(1-30 Hz) = 0.031, p(δ) = 0.013) in patients compared to controls, which did not survive the false discovery rate correction. No network differences between GBTs and BMs were identified. However, we found a significant increase in the WB local clustering coefficient (p(θ) = 0.048) and a decrease in WB node degree (p(α) = 0.039) in PSEs versus PNSEs at the uncorrected level.
Conclusion: Our data suggest that network topology is altered in brain tumor patients. Histology per se might not influence the brain's functional network, whereas tumor-related epilepsy appears to do so. Longitudinal studies and analysis of possible confounders are required to substantiate these findings.
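Two of the topological parameters named above, node degree and the local clustering coefficient, can be computed from a thresholded connectivity matrix; the random matrix and threshold below are illustrative (the study used the Brain Connectivity Toolbox):

```python
# Hedged sketch: node degree and local clustering coefficient from a
# binarized connectivity matrix. The random weights and 0.5 threshold
# stand in for real MEG connectivity estimates.
import numpy as np

rng = np.random.default_rng(4)
n = 10
w = rng.random((n, n))
w = (w + w.T) / 2                 # symmetric "connectivity"
np.fill_diagonal(w, 0)
adj = (w > 0.5).astype(int)       # binary adjacency matrix

degree = adj.sum(axis=1)          # node degree: number of links per node

def clustering(adj):
    """Local clustering: realized / possible links among each node's neighbors."""
    n = adj.shape[0]
    cc = np.zeros(n)
    for i in range(n):
        nb = np.flatnonzero(adj[i])
        k = nb.size
        if k >= 2:
            links = adj[np.ix_(nb, nb)].sum() / 2
            cc[i] = 2 * links / (k * (k - 1))
    return cc

cc = clustering(adj)
```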

10. Ageing as risk factor for tinnitus and its complex interplay with hearing loss-evidence from online and NHANES data. BMC Med 2023; 21:283. PMID: 37533027. PMCID: PMC10394883. DOI: 10.1186/s12916-023-02998-1.
Abstract
Background: Tinnitus affects 10-15% of the population, but its underlying causes are not yet fully understood. Hearing loss has been established as the most important risk factor. Prevalence is also known to increase with age; however, this risk is normally viewed in the context of (age-related) hearing loss, and whether ageing per se is a risk factor has not yet been established. We therefore focused on the effect of ageing and on the relationship between age, hearing loss, and tinnitus.
Methods: We used two samples for our analyses. The first, exploratory analysis comprised 2,249 Austrian individuals. The second included data from 16,008 people, drawn from the publicly available NHANES dataset. We used logistic regressions to investigate the effect of age on tinnitus.
Results: In both samples, ageing per se was a significant predictor of tinnitus. In the more decisive NHANES sample, there was an additional interaction between age and hearing loss: odds ratio analyses show that, per unit increase in hearing loss, the odds of reporting tinnitus are higher in older people (1.06 vs. 1.03).
Conclusions: Expanding previous findings of hearing loss as the main risk factor for tinnitus, we established ageing as a risk factor in its own right. The underlying mechanisms remain unclear, and this work calls for urgent research efforts to link biological ageing processes, hearing loss, and tinnitus. We therefore suggest a novel working hypothesis that integrates these aspects from the viewpoint of an ageing brain.
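The reported odds ratios (1.06 vs. 1.03) show how an age × hearing-loss interaction term in a logistic regression changes the per-unit effect; the coefficients below are reverse-engineered from those two numbers purely for illustration, not the fitted NHANES model:

```python
# Hedged sketch: translating a logistic-regression interaction into
# group-specific odds ratios per unit of hearing loss. Coefficients are
# illustrative reconstructions from the abstract's 1.03 / 1.06 figures.
import numpy as np

b_hl = np.log(1.03)             # hearing-loss slope (younger group)
b_int = np.log(1.06 / 1.03)     # extra slope added by the age interaction

# Odds ratio per unit hearing loss at each age level: exp(slope).
or_young = np.exp(b_hl)         # ≈ 1.03
or_old = np.exp(b_hl + b_int)   # ≈ 1.06
```

The interaction term thus multiplies the per-unit odds ratio rather than adding to it, which is why it is reported on the odds scale.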

11. Cochlear Theta Activity Oscillates in Phase Opposition during Interaural Attention. J Cogn Neurosci 2023; 35:588-602. PMID: 36626349. DOI: 10.1162/jocn_a_01959.
Abstract
It is widely established that sensory perception is a rhythmic process as opposed to a continuous one. In the context of auditory perception, this effect is established only at the cortical and behavioral levels. Yet, the unique architecture of the auditory sensory system allows its primary sensory cortex to modulate the processes of its sensory receptors at the cochlear level. Previously, we demonstrated the existence of a genuine cochlear theta (∼6-Hz) rhythm that is modulated in amplitude by intermodal selective attention. As that study's paradigm was not suited to assess attentional effects on the oscillatory phase of cochlear activity, the question of whether attention can also affect the temporal organization of the cochlea's ongoing activity remained open. The present study uses an interaural attention paradigm to investigate ongoing otoacoustic activity during a stimulus-free cue-target interval and an omission period of the auditory target in humans. We replicated the existence of the cochlear theta rhythm. Importantly, we found significant phase opposition between the two ears and attention conditions, both in anticipatory oscillatory activity and in cochlear activity during target presentation. Yet, the amplitude was unaffected by interaural attention. These results are the first to demonstrate that intermodal and interaural attention deploy different aspects of excitation and inhibition at the first level of auditory processing. Whereas intermodal attention modulates the level of cochlear activity, interaural attention modulates its timing.

12. Brain areas associated with visual spatial attention display topographic organization during auditory spatial attention. Cereb Cortex 2023; 33:3478-3489. PMID: 35972419. PMCID: PMC10068281. DOI: 10.1093/cercor/bhac285.
Abstract
Spatially selective modulation of alpha power (8-14 Hz) is a robust finding in electrophysiological studies of visual attention and has recently been generalized to auditory spatial attention. This modulation pattern is interpreted as reflecting a top-down mechanism for suppressing distracting input from unattended directions of sound origin. The present study on auditory spatial attention extends this interpretation by demonstrating that alpha power modulation is closely linked to oculomotor action. We designed an auditory paradigm in which participants were required to attend to upcoming sounds from one of 24 loudspeakers arranged in a circular array around the head. Maintaining the location of an auditory cue was associated with a topographically modulated distribution of posterior alpha power resembling the findings known from visual attention. Multivariate analyses allowed the prediction of the sound location in the horizontal plane. Importantly, this prediction was also possible when derived from signals capturing saccadic activity. A control experiment on auditory spatial attention confirmed that, in the absence of any visual/auditory input, lateralization of alpha power is linked to the lateralized direction of gaze. Attending to an auditory target engages oculomotor and visual cortical areas in a topographic manner akin to the retinotopic organization associated with visual attention.

13. Speech intelligibility changes the temporal evolution of neural speech tracking. Neuroimage 2023; 268:119894. PMID: 36693596. DOI: 10.1016/j.neuroimage.2023.119894.
Abstract
Listening to speech with poor signal quality is challenging. Neural speech tracking of degraded speech has been used to advance our understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility are not clear. In the present MEG study, we exploited temporal response functions (TRFs), which have been used to describe the time course of speech tracking, on a gradient from intelligible to unintelligible degraded speech. In addition, we used inter-related facets of neural speech tracking (e.g., speech envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded markedly differential temporal effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduced intelligibility was accompanied by large increases in the early peak response M50TRF but strongly reduced responses in M200TRF. In the late response M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with further reductions in intelligibility. Furthermore, we related the TRF components to our other neural "tracking" measures and found that M50TRF and M200TRF play differential roles in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra, and provides a better understanding of degraded speech processing.
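A TRF of the kind exploited here is typically estimated by ridge-regressing lagged copies of the stimulus envelope onto the neural signal; the simulated data, lag range, and penalty below are illustrative assumptions, not the study's pipeline:

```python
# Hedged sketch: forward TRF estimation via ridge regression on
# simulated data with a known response at lag 5 (50 ms at 100 Hz).
import numpy as np

rng = np.random.default_rng(5)
n = 3000                                       # samples (e.g., at 100 Hz)
stim = rng.standard_normal(n)                  # stimulus envelope
true_trf = np.zeros(20)
true_trf[5] = 1.0                              # neural response at lag 5
meg = np.convolve(stim, true_trf)[:n] + 0.5 * rng.standard_normal(n)

# Design matrix of lagged stimulus copies (lags 0..19 samples).
n_lags = 20
X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
X[:n_lags] = 0                                 # discard wrap-around rows

lam = 1.0                                      # ridge penalty
trf = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ meg)
```

The estimated TRF peaks at the true lag; with real data, the ridge penalty is usually chosen by cross-validation.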

14. Cortical speech tracking is related to individual prediction tendencies. Cereb Cortex 2023:6975346. PMID: 36617790. DOI: 10.1093/cercor/bhac528.
Abstract
Listening can be conceptualized as a process of active inference, in which the brain forms internal models to integrate auditory information in a complex interaction of bottom-up and top-down processes. We propose that individuals vary in their "prediction tendency" and that this variation contributes to experiential differences in everyday listening situations and shapes the cortical processing of acoustic input such as speech. Here, we presented tone sequences of varying entropy levels to independently quantify auditory prediction tendency (the tendency to anticipate low-level acoustic features) for each individual. This measure was then used to predict cortical speech tracking in a multi-speaker listening task, in which participants listened to audiobooks narrated by a target speaker, either in isolation or interfered with by one or two distractor speakers. Furthermore, semantic violations were introduced into the story to examine effects of word surprisal during speech processing. Our results show that cortical speech tracking is related to prediction tendency. In addition, we find interactions between prediction tendency and background noise as well as word surprisal in disparate brain regions. Our findings suggest that individual prediction tendencies generalize across different listening situations and may serve as a valuable element in explaining interindividual differences in natural listening situations.

15. Cortical tracking of formant modulations derived from silently presented lip movements and its decline with age. Cereb Cortex 2022; 32:4818-4833. PMID: 35062025. PMCID: PMC9627034. DOI: 10.1093/cercor/bhab518.
Abstract
The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent from the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example, about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these more fine-grained acoustic details and assessed how it changes as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while the participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a mere visual to a phonological representation. Aging especially affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.

16. Degradation levels of continuous speech affect neural speech tracking and alpha power differently. Eur J Neurosci 2022; 55:3288-3302. PMID: 32687616. PMCID: PMC9540197. DOI: 10.1111/ejn.14912.
Abstract
Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined with declining clarity, but speech was still intelligible to some extent even at the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship, with the strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, speech-brain synchronization showed a pattern similar to study 1, but additionally revealed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels, reaching a floor effect at 5-channel vocoding. Models predicting subjective intelligibility from both measures combined outperformed models based on either measure alone. Our findings underline that speech tracking and alpha power are modulated differently by the degree of degradation of continuous speech but together contribute to subjective speech understanding.
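Channel vocoding as used above can be sketched as: band-limit the signal, extract each band's envelope, and re-impose it on band-limited noise; the filter settings and test signal below are illustrative assumptions:

```python
# Hedged sketch: a minimal n-channel noise vocoder. Band edges, filter
# order, and the synthetic "speech-like" input are illustrative choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(x, fs, n_channels, f_lo=100, f_hi=4000):
    """Replace spectral fine structure with noise, keep band envelopes."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced bands
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                    # analysis band
        env = np.abs(hilbert(band))                   # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(x.size))
        out += env * carrier                          # envelope on noise
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speech_like = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
voc3 = vocode(speech_like, fs, n_channels=3)          # 3-channel version
```

Fewer channels preserve less spectral detail, which is how the studies graded intelligibility.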
Collapse
|
17
|
P 21 Functional connectivity and network topology in brain tumors: A prospective pilot MEG study. Clin Neurophysiol 2022. [DOI: 10.1016/j.clinph.2022.01.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
18
|
Recommendations and publication guidelines for studies using frequency domain and time-frequency domain analyses of neural time series. Psychophysiology 2022; 59:e14052. [PMID: 35398913 PMCID: PMC9717489 DOI: 10.1111/psyp.14052] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 03/08/2022] [Indexed: 01/29/2023]
Abstract
Since its beginnings in the early 20th century, the psychophysiological study of human brain function has included research into the spectral properties of electrical and magnetic brain signals. Now, dramatic advances in digital signal processing, biophysics, and computer science have enabled increasingly sophisticated methodology for neural time series analysis. Innovations in hardware and recording techniques have further expanded the range of tools available to researchers interested in measuring, quantifying, modeling, and altering the spectral properties of neural time series. These tools are increasingly used in the field, by a growing number of researchers who vary in their training, background, and research interests. Implementation and reporting standards also vary greatly in the published literature, causing challenges for authors, readers, reviewers, and editors alike. The present report addresses this issue by providing recommendations for the use of these methods, with a focus on foundational aspects of frequency domain and time-frequency analyses. It also provides publication guidelines, which aim to (1) foster replication and scientific rigor, (2) assist new researchers who wish to enter the field of brain oscillations, and (3) facilitate communication among authors, reviewers, and editors.
Collapse
|
19
|
Introduction to the special issue of human oscillatory brain activity: Methods, models, and mechanisms. Psychophysiology 2022; 59:e14038. [DOI: 10.1111/psyp.14038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Accepted: 02/17/2022] [Indexed: 11/29/2022]
|
20
|
Corrigendum to 'Differential attention-dependent adjustment of frequency, power and phase in primary sensory and frontoparietal areas' [Cortex (2021) 179-193]. Cortex 2022; 150:47. [PMID: 35339786 DOI: 10.1016/j.cortex.2022.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
21
|
Masking of the mouth area impairs reconstruction of acoustic speech features and higher-level segmentational features in the presence of a distractor speaker. Neuroimage 2022; 252:119044. [PMID: 35240298 DOI: 10.1016/j.neuroimage.2022.119044] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 02/26/2022] [Accepted: 02/27/2022] [Indexed: 11/29/2022] Open
Abstract
Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual inputs promote comprehension. When this input is masked, speech comprehension consequently becomes more difficult. But it still remains inconclusive which levels of speech processing are affected under which circumstances by occluding the mouth area. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask, while we measured the brain activity of normal hearing participants via magnetoencephalography (MEG). We additionally added a distractor speaker in half of the trials in order to create an ecologically difficult listening situation. A decoding model on the clear AV speech was trained and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e. pitch and formant frequencies), while reconstruction of higher level features of speech segmentation (phoneme and word onsets) were especially impaired through masks in difficult listening situations. As we used surgical face masks in our study, which only show mild effects on speech acoustics, we interpret our findings as the result of the missing visual input. Our findings extend previous behavioural results, by demonstrating the complex contextual effects of occluding relevant visual information on speech processing.
Collapse
|
22
|
Predisposition to domain-wide maladaptive changes in predictive coding in auditory phantom perception. Neuroimage 2021; 248:118813. [PMID: 34923130 DOI: 10.1016/j.neuroimage.2021.118813] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 06/30/2021] [Accepted: 12/13/2021] [Indexed: 01/22/2023] Open
Abstract
Tinnitus is hypothesised to be a predictive coding problem. Previous research indicates lower sensitivity to prediction errors (PEs) in tinnitus patients while processing auditory deviants corresponding to tinnitus-specific stimuli. However, based on research on patients with hallucinations but no psychosis, we hypothesise that tinnitus patients may be more sensitive to PEs produced by auditory stimuli that are not related to tinnitus characteristics. Specifically, in patients with minimal to no hearing loss, we hypothesise a more top-down subtype of tinnitus that may be driven by maladaptive changes in an auditory predictive coding network. To test this, we use an auditory oddball paradigm with omission of global and local deviants, a measure previously shown to empirically characterise hierarchical PEs. We observe: (1) increased predictions, characterised by increased pre-stimulus response and increased alpha connectivity between the parahippocampus, dorsal anterior cingulate cortex and parahippocampus, pregenual anterior cingulate cortex and posterior cingulate cortex; (2) increased PEs, characterised by increased P300 amplitude and gamma activity and increased theta connectivity between auditory cortices, parahippocampus and dorsal anterior cingulate cortex in the tinnitus group; (3) increased overall feed-forward connectivity in theta from the auditory cortex and parahippocampus to the dorsal anterior cingulate cortex; (4) correlations of pre-stimulus theta activity with tinnitus loudness and of alpha activity with tinnitus distress. These results provide empirical evidence of maladaptive changes in a hierarchical predictive coding network in a subgroup of tinnitus patients with minimal to no hearing loss. The changes in pre-stimulus activity and connectivity to non-tinnitus-specific stimuli suggest that tinnitus patients not only produce strong predictions about upcoming stimuli but may also be predisposed to stimulus-unspecific PEs in the auditory domain. Correlations with tinnitus-related characteristics may serve as a biomarker for maladaptive changes in auditory predictive coding.
Collapse
|
23
|
Gender differentiates effects of acoustic stimulation in patients with tinnitus. PROGRESS IN BRAIN RESEARCH 2021; 263:25-57. [PMID: 34243890 DOI: 10.1016/bs.pbr.2021.04.010] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Gender constitutes a major factor to consider when tailoring subtype-based therapies for tinnitus. Previous reports showed important differences between men and women concerning basic perceptual tinnitus characteristics (i.e., laterality, frequency, tinnitus loudness) as well as psychological reactions linked to this condition. Therapeutic approaches based on acoustic stimulation involve processes beyond a pure masking effect and consist of sound presentation temporarily altering or alleviating tinnitus perception via residual and/or lateral inhibition mechanisms. Presented stimuli may include pure tones, noise, and music adjusted to, or modulated to filter out, the tinnitus pitch, thereby triggering reparative functional and structural changes in the auditory system. Furthermore, recent findings suggest that in tonal tinnitus, the presentation of pitch-adjusted sounds amplitude-modulated at 10 Hz was more efficient than unmodulated stimulation. In this paper, we investigate sex differences in the outcome of different variants of acoustic stimulation, looking for factors with predictive value for the efficacy of tinnitus relief.
Collapse
|
24
|
Cochlear activity in silent cue-target intervals shows a theta-rhythmic pattern and is correlated to attentional alpha and theta modulations. BMC Biol 2021; 19:48. [PMID: 33726746 PMCID: PMC7968255 DOI: 10.1186/s12915-021-00992-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2020] [Accepted: 02/24/2021] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND A long-standing debate concerns where in the processing hierarchy of the central nervous system (CNS) selective attention takes effect. In the auditory system, cochlear processes can be influenced via direct and mediated (by the inferior colliculus) projections from the auditory cortex to the superior olivary complex (SOC). Studies illustrating attentional modulations of cochlear responses have so far been limited to sound-evoked responses. The aim of the present study is to investigate intermodal (audiovisual) selective attention in humans simultaneously at the cortical and cochlear level during a stimulus-free cue-target interval. RESULTS We found that cochlear activity in the silent cue-target intervals was modulated by a theta-rhythmic pattern (~6 Hz). While this pattern was present independently of attentional focus, cochlear theta activity was clearly enhanced when attending to the upcoming auditory input. On a cortical level, classical posterior alpha and beta power enhancements were found during auditory selective attention. Interestingly, participants with a stronger release of inhibition in auditory brain regions showed a stronger attentional modulation of cochlear theta activity. CONCLUSIONS These results hint at a putative theta-rhythmic sampling of auditory input at the cochlear level. Furthermore, our results point to an interindividually variable engagement of efferent pathways in an attentional context, linked to processes within and beyond auditory cortical regions.
Collapse
|
25
|
Pre-stimulus alpha-band power and phase fluctuations originate from different neural sources and exert distinct impact on stimulus-evoked responses. Eur J Neurosci 2021; 55:3178-3190. [PMID: 33539589 DOI: 10.1111/ejn.15138] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 01/22/2021] [Accepted: 01/31/2021] [Indexed: 11/28/2022]
Abstract
Ongoing oscillatory neural activity before stimulus onset influences subsequent visual perception. Specifically, both the power and the phase of oscillations in the alpha-frequency band (9-13 Hz) have been reported to predict the detection of visual stimuli. Up to now, the functional mechanisms underlying pre-stimulus power and phase effects on upcoming visual percepts have been debated. Here, we used magnetoencephalography recordings together with a near-threshold visual detection task to investigate the neural generators of pre-stimulus power and phase and their impact on subsequent visual-evoked responses. Pre-stimulus alpha-band power and phase opposition effects were found, consistent with previous reports. Source localization suggested clearly distinct neural generators for these pre-stimulus effects: power effects were mainly found in occipital-temporal regions, whereas phase effects also involved prefrontal areas. In order to be functionally relevant, the pre-stimulus correlates should influence post-stimulus processing. Using a trial-sorting approach, we observed that only pre-stimulus power modulated the Hits versus Misses difference in the evoked response, a well-established post-stimulus neural correlate of near-threshold perception, such that trials with a stronger pre-stimulus power effect showed a greater post-stimulus difference. By contrast, no influence of pre-stimulus phase effects was found. In sum, our study shows distinct generators for two pre-stimulus neural patterns predicting visual perception, and that only alpha power impacts the post-stimulus correlate of conscious access. This underlines the functional relevance of pre-stimulus alpha power for perceptual awareness, while questioning the role of alpha phase.
Collapse
|
26
|
Abstract
The Psychophysics Toolbox (PTB) is one of the most popular toolboxes for the development of experimental paradigms. It is a very powerful library, providing low-level, platform-independent access to the devices used in an experiment, such as the graphics and the sound card. While this low-level design results in a high degree of flexibility and power, writing paradigms that interface the PTB directly might lead to code that is hard to read, maintain, reuse, and debug. Running an experiment in different facilities or organizations further requires the code to work with various setups that differ in the availability of specialized hardware for response collection, triggering, and presentation of auditory stimuli. The Objective Psychophysics Toolbox (o_ptb) provides an intuitive, unified, and clear interface, built on top of the PTB, that enables researchers to write readable, clean, and concise code. In addition to presenting the architecture of the o_ptb, the results of a timing accuracy test are presented. Exactly the same MATLAB code was run on two different systems, one of them using the VPixx system. Both systems showed sub-millisecond accuracy.
Collapse
|
27
|
A backward encoding approach to recover subcortical auditory activity. Neuroimage 2020; 218:116961. [PMID: 32439538 DOI: 10.1016/j.neuroimage.2020.116961] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Accepted: 05/14/2020] [Indexed: 11/16/2022] Open
Abstract
Several subcortical nuclei along the auditory pathway are involved in the processing of sounds. One of the most commonly used methods of measuring the activity of these nuclei is the auditory brainstem response (ABR). Due to its low signal-to-noise ratio, ABRs have to be derived by averaging over activity generated by thousands of artificial sounds such as clicks or tone bursts. This approach cannot be easily applied to natural listening situations (e.g. speech, music), which largely limits auditory cognitive neuroscience to the study of cortical processes. We propose that by individually training backward encoding models to reconstruct evoked ABRs from high-density electrophysiological data, spatial filters can be tuned to auditory brainstem activity. Since these individualized filters can be applied (i.e. generalized) to any other data set using the same spatial coverage, this could allow for the estimation of auditory brainstem activity from any continuous sensor-level data. In this study, we established a proof-of-concept by using backward encoding models generated using a click stimulation rate of 30 Hz to predict ABR activity recorded using EEG from an independent measurement using a stimulation rate of 9 Hz. We show that individually predicted and measured ABRs are highly correlated (r ~ 0.7). Importantly, these predictions are stable even when applying the trained backward encoding model to a low number of trials, mimicking a situation with an unfavorable signal-to-noise ratio. Overall, this work lays the necessary foundation to use this approach in more interesting listening situations.
Collapse
|
28
|
Auditory cortical alpha/beta desynchronization prioritizes the representation of memory items during a retention period. eLife 2020; 9:55508. [PMID: 32378513 PMCID: PMC7242024 DOI: 10.7554/elife.55508] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Accepted: 05/05/2020] [Indexed: 12/11/2022] Open
Abstract
To-be-memorized information in working memory could be protected against distracting influences by processes of functional inhibition or prioritization. Modulations of oscillations in the alpha to beta range in task-relevant sensory regions have been suggested to play an important role in both mechanisms. We adapted a Sternberg task variant to the auditory modality, with a strong or a weak distracting sound presented at a predictable time during the retention period. Using a time-generalized decoding approach, relatively decreased strength of memorized information was found prior to strong distractors, paralleled by decreased pre-distractor alpha/beta power in the left superior temporal gyrus (lSTG). Over the entire group, reduced beta power in lSTG was associated with relatively increased strength of memorized information. The extent of alpha power modulations within participants was negatively correlated with strength of memorized information. Overall, our results are compatible with a prioritization account, but point to nuanced differences between alpha and beta oscillations.
Collapse
|
29
|
Head magnetomyography (hMMG): A novel approach to monitor face and whole head muscular activity. Psychophysiology 2019; 57:e13507. [PMID: 31763700 PMCID: PMC7027552 DOI: 10.1111/psyp.13507] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 10/04/2019] [Accepted: 10/07/2019] [Indexed: 11/28/2022]
Abstract
Muscular activity recording is of high basic science and clinical relevance and is typically achieved using electromyography (EMG). While providing detailed information about the state of a specific muscle, this technique has limitations, such as the need for a priori assumptions about electrode placement and difficulty with recording muscular activity patterns from extended body areas at once. For head and face muscle activity, the present work aimed to overcome these restrictions by exploiting magnetoencephalography (MEG) as a whole-head myographic recorder (head magnetomyography, hMMG). This is in contrast to common MEG studies, which treat muscular activity as an artifact in electromagnetic brain activity. In a first proof-of-concept step, participants imitated emotional facial expressions performed by a model. Exploiting source projection algorithms, we were able to reconstruct muscular activity, showing spatial activation patterns in accord with the hypothesized muscular contractions. Going one step further, participants passively observed affective pictures with negative, neutral, or positive valence. Applying multivariate pattern analysis to the reconstructed hMMG signal, we were able to decode above chance the valence category of the presented pictures. Underlining the potential of hMMG, a searchlight analysis revealed that generally neglected neck muscles exhibit information on stimulus valence. Results confirm the utility of hMMG as a whole-head electromyographic recorder to quantify muscular activation patterns, including muscular regions that are typically not recorded with EMG. This key advantage beyond conventional EMG has substantial scientific and clinical potential. In summary, we present an innovative method, head magnetomyography (hMMG), which exploits MEG as a whole-head EMG recorder. Unlike typical EMG recordings, which require an a priori selection of electrode placement, hMMG is able to detect muscular activity from many regions of the face and head simultaneously, including typically overlooked muscles. Our data show that hMMG can readily serve researchers in the emotion field and holds further scientific as well as clinical promise.
Collapse
|
30
|
Auditory cortical generators of the Frequency Following Response are modulated by intermodal attention. Neuroimage 2019; 203:116185. [PMID: 31520743 DOI: 10.1016/j.neuroimage.2019.116185] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2019] [Revised: 09/03/2019] [Accepted: 09/10/2019] [Indexed: 11/20/2022] Open
Abstract
The efferent auditory system suggests that brainstem auditory regions could also be sensitive to top-down processes. In electrophysiology, the Frequency Following Response (FFR) to speech stimuli has been used extensively to study brainstem areas. Although the FFR seems a straightforward means of addressing attentional modulations of brainstem regions, the existing results are inconsistent. Moreover, the notion that the FFR exclusively represents subcortical generators has been challenged. We aimed to gain a more differentiated perspective on how the generators of the FFR are modulated by attending to either the visual or the auditory input while neural activity was recorded using magnetoencephalography (MEG). As a first step, our results confirm a strong contribution of cortical regions to the FFR. Interestingly, of all regions exhibiting a measurable FFR, only the right primary auditory cortex was significantly affected by intermodal attention. By showing a clear cortical contribution to the attentional FFR effect, our work significantly extends previous reports that focus on surface-level recordings only. It underlines the importance of making a greater effort to disentangle the different contributing sources of the FFR and cautions against simplistically interpreting the FFR as a brainstem response.
Collapse
|
31
|
Pre-stimulus connectivity patterns predict perception at binocular rivalry onset. J Vis 2019. [DOI: 10.1167/19.10.62a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
|
32
|
Alpha bursts in inferior parietal cortex underlie object individuation in dynamic scenes. J Vis 2019. [DOI: 10.1167/19.10.113c] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
|
33
|
Abstract
Ongoing fluctuations in neural excitability and in networkwide activity patterns before stimulus onset have been proposed to underlie variability in near-threshold stimulus detection paradigms-that is, whether or not an object is perceived. Here, we investigated the impact of prestimulus neural fluctuations on the content of perception-that is, whether one or another object is perceived. We recorded neural activity with magnetoencephalography (MEG) before and while participants briefly viewed an ambiguous image, the Rubin face/vase illusion, and required them to report their perceived interpretation in each trial. Using multivariate pattern analysis, we showed robust decoding of the perceptual report during the poststimulus period. Applying source localization to the classifier weights suggested early recruitment of primary visual cortex (V1) and ∼160-ms recruitment of the category-sensitive fusiform face area (FFA). These poststimulus effects were accompanied by stronger oscillatory power in the gamma frequency band for face vs. vase reports. In prestimulus intervals, we found no differences in oscillatory power between face vs. vase reports in V1 or in FFA, indicating similar levels of neural excitability. Despite this, we found stronger connectivity between V1 and FFA before face reports for low-frequency oscillations. Specifically, the strength of prestimulus feedback connectivity (i.e., Granger causality) from FFA to V1 predicted not only the category of the upcoming percept but also the strength of poststimulus neural activity associated with the percept. Our work shows that prestimulus network states can help shape future processing in category-sensitive brain regions and in this way bias the content of visual experiences.
Collapse
|
34
|
Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. [PMID: 31371713 PMCID: PMC6672009 DOI: 10.1038/s41467-019-11440-1] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 07/11/2019] [Indexed: 12/04/2022] Open
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopically specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates. After listening to a predictable sequence of sounds, we can anticipate and predict the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Collapse
|
35
|
Detecting Pre-Stimulus Source-Level Effects on Object Perception with Magnetoencephalography. J Vis Exp 2019. [PMID: 31403630 DOI: 10.3791/60120] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
Pre-stimulus oscillatory brain activity influences upcoming perception. The characteristics of this pre-stimulus activity can predict whether a near-threshold stimulus will be perceived or not perceived, but can they also predict which one of two competing stimuli with different perceptual contents is perceived? Ambiguous visual stimuli, which can be seen in one of two possible ways at a time, are ideally suited to investigate this question. Magnetoencephalography (MEG) is a neurophysiological measurement technique that records magnetic signals emitted as a result of brain activity. The millisecond temporal resolution of MEG allows for a characterization of oscillatory brain states from as little as 1 second of recorded data. Presenting an empty screen around 1 second prior to the ambiguous stimulus onset therefore provides a time window in which one can investigate whether pre-stimulus oscillatory activity biases the content of upcoming perception, as indicated by participants' reports. The spatial resolution of MEG is not excellent, but sufficient to localise sources of brain activity at the centimetre scale. Source reconstruction of MEG activity then allows for testing hypotheses about the oscillatory activity of specific regions of interest, as well as the time- and frequency-resolved connectivity between regions of interest. The described protocol enables a better understanding of the influence of spontaneous, ongoing brain activity on visual perception.
Collapse
|
36
|
Local Network-Level Integration Mediates Effects of Transcranial Alternating Current Stimulation. Brain Connect 2018; 8:212-219. [DOI: 10.1089/brain.2017.0564] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
|
37
|
A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements. Curr Biol 2018; 28:1453-1459.e3. [PMID: 29681475 PMCID: PMC5956463 DOI: 10.1016/j.cub.2018.03.044] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2018] [Revised: 02/25/2018] [Accepted: 03/20/2018] [Indexed: 11/26/2022]
Abstract
Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex-that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3-5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements.
Collapse
|
38
|
The Role of Working Memory in the Probabilistic Inference of Future Sensory Events. Cereb Cortex 2018; 27:2955-2969. [PMID: 27226445 DOI: 10.1093/cercor/bhw138] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
The ability to represent the emerging regularity of sensory information from the external environment has been thought to allow one to probabilistically infer future sensory occurrences and thus optimize behavior. However, the underlying neural implementation of this process is still not comprehensively understood. Through a convergence of behavioral and neurophysiological evidence, we establish that the probabilistic inference of future events is critically linked to people's ability to maintain the recent past in working memory. Magnetoencephalography recordings demonstrated that when visual stimuli occurring over an extended time series had a greater statistical regularity, individuals with higher working-memory capacity (WMC) displayed enhanced slow-wave neural oscillations in the θ frequency band (4-8 Hz) prior to, but not during stimulus appearance. This prestimulus neural activity was specifically linked to contexts where information could be anticipated and influenced the preferential sensory processing for this visual information after its appearance. A separate behavioral study demonstrated that this process intrinsically emerges during continuous perception and underpins a realistic advantage for efficient behavioral responses. In this way, WMC optimizes the anticipation of higher level semantic concepts expected to occur in the near future.
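The prestimulus theta (4-8 Hz) power measure described above is typically computed with a band-pass filter followed by the Hilbert envelope. A minimal sketch on synthetic data, assuming a simulated epoch with stimulus onset at 2 s (the signal and windows are hypothetical):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Hypothetical sketch: estimate theta (4-8 Hz) power with a band-pass filter
# and the Hilbert envelope, then compare pre- vs post-stimulus windows.
rng = np.random.default_rng(1)
fs = 250.0
t = np.arange(0, 4, 1 / fs)          # 4 s epoch, stimulus onset at t = 2 s
signal = np.sin(2 * np.pi * 6 * t) * (t < 2) + rng.standard_normal(t.size)

b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, signal)       # zero-phase band-pass filtering
power = np.abs(hilbert(theta)) ** 2  # instantaneous theta power

pre = power[t < 2].mean()            # pre-stimulus window
post = power[t >= 2].mean()          # post-stimulus window
print(pre > post)
```

Here the simulated 6 Hz component is present only before stimulus onset, so pre-stimulus theta power exceeds post-stimulus power, mirroring the "prior to, but not during" pattern reported.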
|
39
|
Innovations in Doctoral Training and Research on Tinnitus: The European School on Interdisciplinary Tinnitus Research (ESIT) Perspective. Front Aging Neurosci 2018; 9:447. [PMID: 29375369 PMCID: PMC5770576 DOI: 10.3389/fnagi.2017.00447] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 12/29/2017] [Indexed: 12/23/2022] Open
Abstract
Tinnitus is a common medical condition which interfaces with many different disciplines, yet it is not a priority for any individual discipline. A change in its scientific understanding and clinical management requires a shift toward multidisciplinary cooperation, not only in research but also in training. The European School for Interdisciplinary Tinnitus Research (ESIT) brings together a unique multidisciplinary consortium of clinical practitioners, academic researchers, commercial partners, patient organizations, and public health experts to conduct innovative research and train the next generation of tinnitus researchers. ESIT supports fundamental science and clinical research projects in order to: (1) advance new treatment solutions for tinnitus, (2) improve existing treatment paradigms, (3) develop innovative research methods, (4) perform genetic studies, (5) collect epidemiological data to create new knowledge about prevalence and risk factors, and (6) establish a pan-European data resource. All research projects involve inter-sectoral partnerships through practical training, quite unlike anything that can be offered by any single university alone. Likewise, the postgraduate training curriculum fosters deep knowledge about tinnitus whilst nurturing transferable competencies: the personal qualities and approaches needed to be an effective researcher, knowledge of the standards, requirements, and professionalism needed to do research, and the skills to work with others and to ensure the wider impact of research. ESIT is the seed for future generations of creative, entrepreneurial, and innovative researchers, trained to master the upcoming challenges in the tinnitus field, to implement sustained changes in the prevention and clinical management of tinnitus, and to shape doctoral education in tinnitus for the future.
|
40
|
P166 Evidence for state dependent direct effects of alpha band transcranial alternating current stimulation. Clin Neurophysiol 2017. [DOI: 10.1016/j.clinph.2016.10.287] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
41
|
Faith and oscillations recovered: On analyzing EEG/MEG signals during tACS. Neuroimage 2017; 147:960-963. [DOI: 10.1016/j.neuroimage.2016.11.022] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2016] [Revised: 10/18/2016] [Accepted: 11/05/2016] [Indexed: 10/20/2022] Open
|
42
|
Interpretability of Multivariate Brain Maps in Linear Brain Decoding: Definition, and Heuristic Quantification in Multivariate Analysis of MEG Time-Locked Effects. Front Neurosci 2017; 10:619. [PMID: 28167896 PMCID: PMC5253369 DOI: 10.3389/fnins.2016.00619] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2016] [Accepted: 12/27/2016] [Indexed: 01/18/2023] Open
Abstract
Brain decoding is a popular multivariate approach for hypothesis testing in neuroimaging. Linear classifiers are widely employed in the brain decoding paradigm to discriminate among experimental conditions. Then, the derived linear weights are visualized in the form of multivariate brain maps to further study spatio-temporal patterns of underlying neural activities. It is well known that the brain maps derived from weights of linear classifiers are hard to interpret because of high correlations between predictors, low signal-to-noise ratios, and the high dimensionality of neuroimaging data. Therefore, improving the interpretability of brain decoding approaches is of primary interest in many neuroimaging studies. Despite extensive studies of this type, at present, there is no formal definition for interpretability of multivariate brain maps. As a consequence, there is no quantitative measure for evaluating the interpretability of different brain decoding methods. In this paper, first, we present a theoretical definition of interpretability in brain decoding; we show that the interpretability of multivariate brain maps can be decomposed into their reproducibility and representativeness. Second, as an application of the proposed definition, we exemplify a heuristic for approximating the interpretability in multivariate analysis of evoked magnetoencephalography (MEG) responses. Third, we propose to combine the approximated interpretability and the generalization performance of the brain decoding into a new multi-objective criterion for model selection. Our results, for the simulated and real MEG data, show that optimizing the hyper-parameters of the regularized linear classifier based on the proposed criterion results in more informative multivariate brain maps. More importantly, the presented definition provides the theoretical background for quantitative evaluation of interpretability, and hence, facilitates the development of more effective brain decoding algorithms in the future.
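The core idea of such a multi-objective criterion can be sketched in a few lines: score a decoder not only by its accuracy but also by how reproducible its weight maps are across data splits. The scalar combination and the names `reproducibility` and `selection_score` below are our own illustration, not the authors' exact criterion:

```python
import numpy as np

# Illustrative sketch: trade off generalization performance against the
# reproducibility of linear-decoder weight maps across data splits.
def reproducibility(weight_maps):
    """Mean pairwise correlation of unit-normalized weight maps."""
    maps = [w / np.linalg.norm(w) for w in weight_maps]
    corrs = [float(np.dot(maps[i], maps[j]))
             for i in range(len(maps)) for j in range(i + 1, len(maps))]
    return float(np.mean(corrs))

def selection_score(accuracy, weight_maps, alpha=0.5):
    """Convex combination of accuracy and map reproducibility (0 <= alpha <= 1)."""
    return alpha * accuracy + (1 - alpha) * reproducibility(weight_maps)

# Stable maps agree across splits; unstable maps point in unrelated directions.
stable = [np.array([1.0, 0.9, 0.0]), np.array([0.9, 1.0, 0.1])]
unstable = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(selection_score(0.80, stable) > selection_score(0.82, unstable))
```

Under this criterion, a slightly less accurate but far more reproducible model is preferred, which is the kind of trade-off the abstract describes for hyper-parameter selection.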
|
43
|
Spatially resolved time-frequency analysis of odour coding in the insect antennal lobe. Eur J Neurosci 2016; 44:2387-95. [PMID: 27452956 DOI: 10.1111/ejn.13344] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Revised: 06/15/2016] [Accepted: 07/18/2016] [Indexed: 11/28/2022]
Abstract
Antennal lobes constitute the first neuropils in the insect brain involved in coding and processing of olfactory information. With their stereotyped functional and anatomical organization, they provide an accessible model with which to investigate information processing of an external stimulus in a neural network in vivo. Here, by combining functional calcium imaging with time-frequency analysis, we have been able to monitor the oscillatory components of neural activity upon olfactory stimulation. The aim of this study is to investigate the presence of stimulus-induced oscillatory patterns in the honeybee antennal lobe, and to analyse the distribution of those patterns across the antennal lobe glomeruli. Fast two-photon calcium imaging reveals the presence of low-frequency oscillations, the intensity of which is perturbed by an incoming stimulus. Moreover, analysis of the spatial arrangement of this activity indicates that it is not homogeneous throughout the antennal lobe. On the contrary, each glomerulus displays an odorant-specific time-frequency profile, and acts as a functional unit of the oscillatory activity. The presented approach allows simultaneous recording of complex activity patterns across several nodes of the antennal lobe, providing the means to better understand the network dynamics regulating olfactory coding and leading to perception.
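A time-frequency profile of the kind described can be obtained with a short-time Fourier spectrogram. Below is a minimal sketch on a simulated trace; the 50 Hz frame rate, the 2 Hz "glomerular" oscillation, and the noise level are hypothetical stand-ins for calcium-imaging data:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative time-frequency decomposition of a simulated slow oscillation,
# the kind of analysis used to detect stimulus-perturbed low-frequency activity.
rng = np.random.default_rng(2)
fs = 50.0                                # imaging frame rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)
trace = np.sin(2 * np.pi * 2 * t) + 0.2 * rng.standard_normal(t.size)

f, times, Sxx = spectrogram(trace, fs=fs, nperseg=128, noverlap=96)

# The dominant spectral bin should sit near 2 Hz throughout the recording.
peak_freqs = f[np.argmax(Sxx, axis=0)]
print(np.allclose(peak_freqs, 2.0, atol=0.5))
```

Applying the same decomposition per glomerulus (one trace per node) yields the odorant-specific time-frequency profiles the abstract refers to.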
|
44
|
Cross-modal distractors modulate oscillatory alpha power: the neural basis of impaired task performance. Psychophysiology 2016; 53:1651-1659. [DOI: 10.1111/psyp.12733] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2015] [Accepted: 07/10/2016] [Indexed: 12/22/2022]
|
45
|
Limbic areas are functionally decoupled and visual cortex takes a more central role during fear conditioning in humans. Sci Rep 2016; 6:29220. [PMID: 27381479 PMCID: PMC4933895 DOI: 10.1038/srep29220] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Accepted: 06/10/2016] [Indexed: 11/09/2022] Open
Abstract
Going beyond the focus on isolated brain regions (e.g. the amygdala), recent neuroimaging studies on fear conditioning point to the relevance of a network of mutually interacting brain regions. In the present MEG study we used graph theory to uncover changes in the architecture of the brain functional network shaped by fear conditioning. Firstly, induced power analysis revealed differences in local cortical excitability (lower alpha and beta power) between CS+ and CS- localized to somatosensory cortex and insula. What is more striking, however, is that the graph theoretical measures unveiled a re-organization of brain functional connections that was not evident using conventional power analysis. Subcortical fear-related structures exhibited reduced connectivity with temporal and frontal areas, rendering the overall brain functional network more sparse during fear conditioning. At the same time, the calcarine cortex took on a more central role in the network. Interestingly, the more the connectivity of limbic areas is reduced, the more central the role of the occipital cortex becomes. We speculate that both the reduced coupling in some regions and the emerging centrality of others contribute to the efficient processing of fear-relevant information during fear learning.
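The graph-theoretic logic (weakened limbic connections raising the relative centrality of a visual node) can be illustrated with a toy weighted network. Node strength, the sum of a node's connection weights, serves as a simple centrality measure here; the 4-node network, its weights, and its labels are hypothetical, not the study's measures:

```python
import numpy as np

# Toy sketch: when limbic connections weaken, the relative centrality of a
# visual node ("calcarine") can rise even though its own edges are unchanged.
labels = ["amygdala", "temporal", "frontal", "calcarine"]

def strength_centrality(adj):
    """Node strength normalized by total network strength."""
    return adj.sum(axis=1) / adj.sum()

baseline = np.array([[0.0, 0.8, 0.8, 0.2],
                     [0.8, 0.0, 0.5, 0.3],
                     [0.8, 0.5, 0.0, 0.3],
                     [0.2, 0.3, 0.3, 0.0]])

conditioning = baseline.copy()
conditioning[0, 1:3] = conditioning[1:3, 0] = 0.2   # limbic decoupling

base_c = strength_centrality(baseline)
cond_c = strength_centrality(conditioning)
calcarine = labels.index("calcarine")
print(cond_c[calcarine] > base_c[calcarine])
```

The visual node's absolute strength is identical in both matrices; only the sparser network around it makes it more central, which is the relationship the abstract highlights.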
|
46
|
Temporal Integration Windows in Neural Processing and Perception Aligned to Saccadic Eye Movements. Curr Biol 2016; 26:1659-1668. [PMID: 27291050 PMCID: PMC4942674 DOI: 10.1016/j.cub.2016.04.070] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2016] [Revised: 03/31/2016] [Accepted: 04/26/2016] [Indexed: 11/17/2022]
Abstract
When processing dynamic input, the brain balances the opposing needs of temporal integration and sensitivity to change. We hypothesized that the visual system might resolve this challenge by aligning integration windows to the onset of newly arriving sensory samples. In a series of experiments, human participants observed the same sequence of two displays separated by a brief blank delay when performing either an integration or segregation task. First, using magneto-encephalography (MEG), we found a shift in the stimulus-evoked time courses by a 150-ms time window between task signals. After stimulus onset, multivariate pattern analysis (MVPA) decoding of task in occipital-parietal sources remained above chance for almost 1 s, and the task-decoding pattern interacted with task outcome. In the pre-stimulus period, the oscillatory phase in the theta frequency band was informative about both task processing and behavioral outcome for each task separately, suggesting that the post-stimulus effects were caused by a theta-band phase shift. Second, when aligning stimulus presentation to the onset of eye fixations, there was a similar phase shift in behavioral performance according to task demands. In both MEG and behavioral measures, task processing was optimal first for segregation and then integration, with opposite phase in the theta frequency range (3-5 Hz). The best fit to neurophysiological and behavioral data was given by a dampened 3-Hz oscillation from stimulus or eye fixation onset. The alignment of temporal integration windows to input changes found here may serve to actively organize the temporal processing of continuous sensory input.
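The "dampened 3-Hz oscillation" model mentioned at the end can be fitted with nonlinear least squares. A hedged sketch on synthetic data follows; the parameter values, noise level, and time axis are illustrative, not the authors' fits:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative fit of a dampened oscillation to a time course locked to
# stimulus or fixation onset, as in the model class the abstract describes.
def damped_osc(t, amp, freq, decay, phase, offset):
    return amp * np.exp(-decay * t) * np.cos(2 * np.pi * freq * t + phase) + offset

rng = np.random.default_rng(3)
t = np.linspace(0, 1.5, 200)                       # time from onset (s)
y = damped_osc(t, 1.0, 3.0, 1.2, 0.4, 0.5) + 0.05 * rng.standard_normal(t.size)

p0 = [1.0, 3.0, 1.0, 0.0, 0.0]                     # initial guess near 3 Hz
params, _ = curve_fit(damped_osc, t, y, p0=p0)
print(params[1])                                   # recovered frequency, ~3 Hz
```

Comparing such fits across candidate frequencies (e.g., within the 3-5 Hz theta range mentioned above) is one way to establish that a 3 Hz dampened oscillation describes the data best.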
|
47
|
Eyes wide shut: Transcranial alternating current stimulation drives alpha rhythm in a state dependent manner. Sci Rep 2016; 6:27138. [PMID: 27252047 PMCID: PMC4890046 DOI: 10.1038/srep27138] [Citation(s) in RCA: 89] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2015] [Accepted: 05/13/2016] [Indexed: 11/09/2022] Open
Abstract
Transcranial alternating current stimulation (tACS) is used to modulate brain oscillations in order to measure changes in cognitive function. Only recently has it become possible to investigate brain activity in human subjects during tACS. The present study aims to investigate the phase relationship between the external tACS signal and concurrent brain activity. Subjects were stimulated with tACS at their individual alpha frequency during eyes-open and eyes-closed resting states. Electrodes were placed at Cz and Oz, which should affect parieto-occipital areas most strongly. Source-space magnetoencephalography (MEG) data were used to estimate phase coherence between tACS and brain activity. Phase coherence was significantly increased in areas in the occipital pole in the eyes-open resting state only. The lag between tACS and brain responses showed considerable inter-individual variability. In conclusion, tACS at the individual alpha frequency entrains brain activity in visual cortices. Interestingly, this effect is state dependent and is clearly observed with eyes open but only to a lesser extent with eyes closed.
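Phase coherence between an external drive and a brain signal is often computed as the phase-locking value (PLV) over Hilbert-derived instantaneous phases. A minimal sketch on simulated signals, assuming a 10 Hz "individual alpha frequency" (the signals, lag, and noise level are hypothetical):

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative phase-locking value (PLV) between a tACS waveform and a
# simulated brain signal; a PLV near 1 indicates a stable phase relationship.
rng = np.random.default_rng(4)
fs = 500.0
t = np.arange(0, 20, 1 / fs)
tacs = np.sin(2 * np.pi * 10 * t)

# Entrained signal: same frequency with a fixed lag plus noise.
entrained = np.sin(2 * np.pi * 10 * t - 0.8) + 0.5 * rng.standard_normal(t.size)
unrelated = rng.standard_normal(t.size)

def plv(x, y):
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

print(plv(tacs, entrained) > plv(tacs, unrelated))
```

Note that the PLV is insensitive to the size of the lag itself, only to its stability, which matches the abstract's observation of consistent entrainment despite considerable inter-individual variability in lag.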
|
48
|
Flicker-Driven Responses in Visual Cortex Change during Matched-Frequency Transcranial Alternating Current Stimulation. Front Hum Neurosci 2016; 10:184. [PMID: 27199707 PMCID: PMC4844646 DOI: 10.3389/fnhum.2016.00184] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2015] [Accepted: 04/11/2016] [Indexed: 01/23/2023] Open
Abstract
We tested a novel combination of two neuro-stimulation techniques, transcranial alternating current stimulation (tACS) and frequency tagging, that promises powerful paradigms to study the causal role of rhythmic brain activity in perception and cognition. Participants viewed a stimulus flickering at 7 or 11 Hz that elicited periodic brain activity, termed steady-state responses (SSRs), at the same temporal frequency and its higher order harmonics. Further, they received simultaneous tACS at 7 or 11 Hz that either matched or differed from the flicker frequency. Sham tACS served as a control condition. Recent advances in reconstructing cortical sources of oscillatory activity allowed us to measure SSRs during concurrent tACS, which is known to impose strong artifacts in magnetoencephalographic (MEG) recordings. For the first time, we were thus able to demonstrate immediate effects of tACS on SSR-indexed early visual processing. Our data suggest that tACS effects are largely frequency-specific and reveal a characteristic pattern of differential influences on the harmonic constituents of SSRs.
|
49
|
Alpha suppression and connectivity modulations in left temporal and parietal cortices index partial awareness of words. Neuroimage 2016; 133:279-287. [PMID: 27001501 PMCID: PMC4907686 DOI: 10.1016/j.neuroimage.2016.03.025] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2016] [Revised: 02/25/2016] [Accepted: 03/12/2016] [Indexed: 11/25/2022] Open
Abstract
The partial awareness hypothesis is a theoretical proposal that recently provided a reconciling solution to graded and dichotomous accounts of consciousness. It suggests that we can become conscious of distinct properties of an object independently, ranging from low-level features to complex forms of representation. We investigated this hypothesis using classic visual word masking adapted to a near-threshold paradigm. The masking intensity was adjusted to the individual perception threshold, at which individual alphabetical letters, but not words, could be perceived in approximately half of the trials. We confined perception to a pre-lexical stage of word processing that corresponded to a clear condition of partial awareness. At this level of representation, the stimulus properties began to emerge within consciousness, yet they did not escalate to full stimulus awareness. In other words, participants were able to perceive individual letters, while remaining unaware of the whole letter strings presented. Cortical activity measured with MEG was compared between physically identical trials that differed in perception (perceived, not perceived). We found that compared to no awareness, partial awareness of words was characterized by suppression of oscillatory alpha power in left temporal and parietal cortices. The analysis of functional connectivity with seeds based on the power effect in these two regions revealed sparse connections for the parietal seed, and strong connections between the temporal seed and other regions of the language network. We suggest that the engagement of language regions indexed by alpha power suppression is responsible for establishing and maintaining conscious representations of individual pre-lexical units.
|
50
|
The Tactile Window to Consciousness is Characterized by Frequency-Specific Integration and Segregation of the Primary Somatosensory Cortex. Sci Rep 2016; 6:20805. [PMID: 26864304 PMCID: PMC4749972 DOI: 10.1038/srep20805] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2015] [Accepted: 01/11/2016] [Indexed: 02/08/2023] Open
Abstract
We recently proposed that, besides levels of local cortical excitability, distinct pre-stimulus network states (windows to consciousness) also determine whether a near-threshold stimulus will be consciously perceived. In the present magnetoencephalography study, we scrutinised these pre-stimulus network states with a focus on the primary somatosensory cortex. For this purpose, participants performed a simple near-threshold tactile detection task. Confirming previous studies, we found reduced alpha and beta power in the somatosensory region contralateral to stimulation prior to correct stimulus detection as compared to undetected stimuli, and stronger event-related responses following successful stimulus detection. As expected, using graph theoretical measures, we also observed modulated pre-stimulus network-level integration. Specifically, the right primary somatosensory cortex contralateral to stimulation showed increased integration in the theta band and decreased integration in the beta band. Overall, these results underline the importance of network states for enabling conscious perception. Moreover, they indicate that a reduction of irrelevant functional connections also contributes to the window to consciousness by tuning pre-stimulus pathways of information flow.
|