1
Kulasingham JP, Brodbeck C, Presacco A, Kuchinsky SE, Anderson S, Simon JZ. High gamma cortical processing of continuous speech in younger and older listeners. Neuroimage 2020; 222:117291. [PMID: 32835821 PMCID: PMC7736126 DOI: 10.1016/j.neuroimage.2020.117291]
Abstract
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case, cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin with ∼40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
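The reverse-correlation analysis described above amounts to a regularized linear mapping from time-lagged stimulus features to a neural time course. The sketch below illustrates that idea with a simple ridge-regression TRF on synthetic data; it is only a minimal illustration of the general technique, not the paper's source-localized pipeline, and all function and variable names are assumptions.

```python
# Minimal TRF sketch via regularized reverse correlation, assuming a 1-D
# stimulus feature (e.g., a high-gamma envelope regressor) and a single
# MEG channel or source time course, both sampled at `fs` Hz.
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=-0.01, tmax=0.1, alpha=1.0):
    """Ridge-regression TRF relating the stimulus feature to the response."""
    lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
    n = len(stimulus)
    # Lagged design matrix: column j holds the stimulus shifted by lags[j].
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: (X'X + alpha*I)^-1 X'y
    XtX = X.T @ X + alpha * np.eye(len(lags))
    trf = np.linalg.solve(XtX, X.T @ response)
    return lags / fs, trf

# Synthetic example: a response that lags the stimulus by 40 ms.
fs = 1000
rng = np.random.default_rng(0)
stim = rng.standard_normal(fs * 10)
resp = np.roll(stim, int(0.04 * fs)) + 0.5 * rng.standard_normal(fs * 10)
times, trf = estimate_trf(stim, resp, fs)
print("peak latency (s):", times[np.argmax(np.abs(trf))])
```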
Affiliation(s)
- Joshua P Kulasingham
- (a) Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States.
- Christian Brodbeck
- (b) Institute for Systems Research, University of Maryland, College Park, Maryland, United States.
- Alessandro Presacco
- (b) Institute for Systems Research, University of Maryland, College Park, Maryland, United States.
- Stefanie E Kuchinsky
- (c) Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States.
- Samira Anderson
- (d) Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States.
- Jonathan Z Simon
- (a) Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States; (b) Institute for Systems Research, University of Maryland, College Park, Maryland, United States; (e) Department of Biology, University of Maryland, College Park, Maryland, United States.
2
Shen G, Meltzoff AN, Marshall PJ. Body representations as indexed by oscillatory EEG activities in the context of tactile novelty processing. Neuropsychologia 2019; 132:107144. [PMID: 31319120 DOI: 10.1016/j.neuropsychologia.2019.107144]
Abstract
Neural oscillatory activities in different frequency bands are known to reflect different cognitive functions. The current study investigates neural oscillations involved in tactile novelty processing, in particular how physically different digits of the hand may be categorized as being more or less similar to one another. Time-frequency analyses were conducted on EEG responses recorded from a somatosensory mismatch protocol involving stimulation of the 1st, 3rd, and 5th digits. The pattern of tactile stimulation leveraged a functional category boundary between the 1st digit (thumb) and the other fingers. This functional category has been hypothesized to derive, in part, from the way that the hand is used to grasp and haptically explore objects. EEG responses to standard stimuli (the 3rd digit, probability of 80%) and two deviant stimuli (1st digit as across-boundary deviant and 5th digit as within-boundary deviant, probability of 10% each) were examined. Analyses of EEG responses examined changes in power as well as phase information. Deviant tactile stimuli evoked significantly greater theta event-related synchronization and greater phase-locking values compared to the corresponding control stimuli. The increase in theta power evoked by the contrast of the 3rd digit and the 1st digit was significantly larger than for the contrast between the 3rd and 5th digits. Desynchronization in the alpha and beta bands was greater for deviant than control stimuli, which may reflect increased local cortical excitation to novel stimuli, modulated by top-down feedback processes as part of a hierarchical novelty detection mechanism. The results are discussed in the context of the growing literature on neural processes involved in the generation and maintenance of body representations.
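Two of the measures named above, event-related changes in band power and the inter-trial phase-locking value, can be illustrated with a short sketch. The code below is a generic example (band-pass filter plus Hilbert transform on synthetic single-channel epochs), not the study's actual analysis; the helper name and parameter choices are assumptions.

```python
# Sketch of theta event-related synchronization (percent power change) and
# the phase-locking value (PLV) across trials, assuming `epochs` is an
# (n_trials, n_times) array for one EEG channel.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def band_power_and_plv(epochs, fs, band=(4.0, 7.0), baseline=(0, 100)):
    b, a = butter(4, band, btype="bandpass", fs=fs)
    analytic = hilbert(filtfilt(b, a, epochs, axis=1), axis=1)
    power = np.abs(analytic) ** 2
    # Percent power change relative to the pre-stimulus baseline window.
    base = power[:, baseline[0]:baseline[1]].mean(axis=1, keepdims=True)
    ers = 100.0 * (power - base) / base
    # PLV: length of the mean phase vector across trials, per time point.
    plv = np.abs(np.mean(np.exp(1j * np.angle(analytic)), axis=0))
    return ers.mean(axis=0), plv

fs = 500
rng = np.random.default_rng(1)
epochs = rng.standard_normal((40, fs))                       # 40 trials, 1 s each
t = np.arange(fs) / fs
epochs[:, fs // 2:] += np.sin(2 * np.pi * 5 * t[fs // 2:])   # phase-locked 5 Hz burst
ers, plv = band_power_and_plv(epochs, fs)
print("max theta power change (%):", ers.max().round(1))
print("max PLV:", plv.max().round(2))
```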
Affiliation(s)
- Guannan Shen
- Department of Psychology, Temple University, 1701 N. 13th Street, Philadelphia, PA, 19122, USA.
- Andrew N Meltzoff
- Institute for Learning & Brain Sciences, University of Washington, USA.
- Peter J Marshall
- Department of Psychology, Temple University, 1701 N. 13th Street, Philadelphia, PA, 19122, USA.
3
Goossens T, Vercammen C, Wouters J, van Wieringen A. Aging Affects Neural Synchronization to Speech-Related Acoustic Modulations. Front Aging Neurosci 2016; 8:133. [PMID: 27378906 PMCID: PMC4908923 DOI: 10.3389/fnagi.2016.00133]
Abstract
As people age, speech perception problems become highly prevalent, especially in noisy situations. In addition to peripheral hearing and cognition, temporal processing plays a key role in speech perception. Temporal processing of speech features is mediated by synchronized activity of neural oscillations in the central auditory system. Previous studies indicate that both the degree and hemispheric lateralization of synchronized neural activity relate to speech perception performance. Based on these results, we hypothesize that impaired speech perception in older persons may, in part, originate from deviances in neural synchronization. In this study, auditory steady-state responses that reflect synchronized activity of theta, beta, low and high gamma oscillations (i.e., 4, 20, 40, and 80 Hz ASSR, respectively) were recorded in young, middle-aged, and older persons. As all participants had normal audiometric thresholds and were screened for (mild) cognitive impairment, differences in synchronized neural activity across the three age groups were likely to be attributed to age. Our data yield novel findings regarding theta and high gamma oscillations in the aging auditory system. At an older age, synchronized activity of theta oscillations is increased, whereas high gamma synchronization is decreased. In contrast to young persons who exhibit a right hemispheric dominance for processing of high gamma range modulations, older adults show a symmetrical processing pattern. These age-related changes in neural synchronization may very well underlie the speech perception problems in aging persons.
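An ASSR to an amplitude-modulated sound is commonly quantified as the spectral amplitude at the modulation frequency of the trial-averaged response, compared against neighboring frequency bins. The sketch below shows this generic approach on synthetic 40 Hz data; it is an assumption-laden illustration, not the recording or analysis protocol used in the study.

```python
# Sketch of ASSR quantification: amplitude of the FFT bin at the modulation
# frequency of the averaged response, plus an SNR against neighboring bins.
import numpy as np

def assr_snr(epochs, fs, mod_freq, n_neighbors=10):
    """epochs: (n_trials, n_times). Averaging before the FFT keeps only
    activity phase-locked to the modulation."""
    evoked = epochs.mean(axis=0)
    spectrum = np.abs(np.fft.rfft(evoked)) / len(evoked)
    freqs = np.fft.rfftfreq(len(evoked), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - mod_freq))
    neighbors = np.r_[spectrum[k - n_neighbors:k], spectrum[k + 1:k + 1 + n_neighbors]]
    return spectrum[k], spectrum[k] / neighbors.mean()

fs, mod_freq = 1000, 40.0
rng = np.random.default_rng(2)
t = np.arange(fs) / fs                      # 1-s epochs -> 1 Hz frequency resolution
epochs = 0.2 * np.sin(2 * np.pi * mod_freq * t) + rng.standard_normal((50, fs))
amp, snr = assr_snr(epochs, fs, mod_freq)
print(f"40 Hz amplitude: {amp:.3f}, SNR: {snr:.1f}")
```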
Affiliation(s)
- Tine Goossens
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium.
- Charlotte Vercammen
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium.
- Jan Wouters
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium.
- Astrid van Wieringen
- Research Group Experimental Oto-rhino-laryngology (ExpORL), Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium.
4
Hertrich I, Dietrich S, Ackermann H. How can audiovisual pathways enhance the temporal resolution of time-compressed speech in blind subjects? Front Psychol 2013; 4:530. [PMID: 23966968 PMCID: PMC3745084 DOI: 10.3389/fpsyg.2013.00530]
Abstract
In blind people, the visual channel cannot assist face-to-face communication via lipreading or visual prosody. Nevertheless, the visual system may enhance the evaluation of auditory information due to its cross-links to (1) the auditory system, (2) supramodal representations, and (3) frontal action-related areas. Apart from feedback or top-down support of, for example, the processing of spatial or phonological representations, experimental data have shown that the visual system can impact auditory perception at more basic computational stages such as temporal signal resolution. For example, blind as compared to sighted subjects are more resistant to backward masking, and this ability appears to be associated with activity in visual cortex. Regarding the comprehension of continuous speech, blind subjects can learn to use accelerated text-to-speech systems for "reading" texts at ultra-fast speaking rates (>16 syllables/s), far exceeding the normal rate of about 6 syllables/s. A functional magnetic resonance imaging study has shown that this ability significantly covaries with BOLD responses in several brain regions, including bilateral pulvinar, right visual cortex, and left supplementary motor area. Furthermore, magnetoencephalographic measurements revealed a particular component in right occipital cortex phase-locked to the syllable onsets of accelerated speech. In sighted people, the "bottleneck" for understanding time-compressed speech seems related to higher demands for buffering phonological material and is, presumably, linked to frontal brain structures. On the other hand, the neurophysiological correlates of functions overcoming this bottleneck seem to depend upon early visual cortex activity. The present Hypothesis and Theory paper outlines a model that aims at binding these data together, based on early cross-modal pathways that are already known from various audiovisual experiments on cross-modal adjustments during space, time, and object recognition.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.
5
Hertrich I, Dietrich S, Ackermann H. Tracking the speech signal--time-locked MEG signals during perception of ultra-fast and moderately fast speech in blind and in sighted listeners. Brain Lang 2013; 124:9-21. [PMID: 23332808 DOI: 10.1016/j.bandl.2012.10.006]
Abstract
Blind people can learn to understand speech at ultra-high syllable rates (ca. 20 syllables/s), a capability associated with hemodynamic activation of the central-visual system. To further elucidate the neural mechanisms underlying this skill, magnetoencephalographic (MEG) measurements during listening to sentence utterances were cross-correlated with time courses derived from the speech signal (envelope, syllable onsets and pitch periodicity) to capture phase-locked MEG components (14 blind, 12 sighted subjects; speech rate=8 or 16 syllables/s, pre-defined source regions: auditory and visual cortex, inferior frontal gyrus). Blind individuals showed stronger phase locking in auditory cortex than sighted controls, and right-hemisphere visual cortex activity correlated with syllable onsets in case of ultra-fast speech. Furthermore, inferior-frontal MEG components time-locked to pitch periodicity displayed opposite lateralization effects in sighted (towards right hemisphere) and blind subjects (left). Thus, ultra-fast speech comprehension in blind individuals appears associated with changes in early signal-related processing mechanisms both within and outside the central-auditory terrain.
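The core computation, cross-correlating an MEG channel with a stimulus-derived time course such as the envelope, can be sketched generically as a lagged Pearson correlation. The example below uses synthetic data and illustrative names; it is not the authors' pipeline or source model.

```python
# Sketch of lagged correlation between a speech-derived time course (here a
# broadband envelope obtained via the Hilbert transform) and an MEG channel.
import numpy as np
from scipy.signal import hilbert

def lagged_correlation(feature, meg, fs, max_lag_s=0.3):
    """Pearson correlation for lags 0..max_lag_s, with the MEG signal
    shifted later in time relative to the stimulus feature."""
    max_lag = int(max_lag_s * fs)
    f = (feature - feature.mean()) / feature.std()
    m = (meg - meg.mean()) / meg.std()
    lags = np.arange(max_lag + 1)
    r = np.array([np.mean(f[: len(f) - lag] * m[lag:]) for lag in lags])
    return lags / fs, r

fs = 200
rng = np.random.default_rng(3)
carrier = rng.standard_normal(fs * 20)
envelope = np.abs(hilbert(carrier))                      # stand-in for a speech envelope
meg = np.roll(envelope, int(0.1 * fs)) + rng.standard_normal(len(envelope))
lag_times, r = lagged_correlation(envelope, meg, fs)
print("best lag (ms):", 1000 * lag_times[np.argmax(r)])
```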
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
6
Tomaschek F, Truckenbrodt H, Hertrich I. Neural processing of acoustic duration and phonological German vowel length: time courses of evoked fields in response to speech and nonspeech signals. Brain Lang 2013; 124:117-131. [PMID: 23314420 DOI: 10.1016/j.bandl.2012.11.011]
Abstract
Recent experiments showed that the perception of vowel length by German listeners exhibits the characteristics of categorical perception. The present study sought to find the neural activity reflecting categorical vowel length and the short-long boundary by examining the processing of non-contrastive durations and categorical length using MEG. Using disyllabic words with varying /a/-durations and temporally-matched nonspeech stimuli, we found that each syllable elicited an M50/M100-complex. The M50 amplitude to the second syllable varied along the durational continuum, possibly reflecting the mapping of duration onto a rhythm representation. Categorical length was reflected by an additional response elicited when vowel duration exceeded the short-long boundary. This was interpreted to reflect the integration of an additional timing unit for long in contrast to short vowels. Unlike for speech, responses to short nonspeech durations lacked an M100 to the first syllable and an M50 to the second syllable, indicating different integration windows for speech and nonspeech signals.
Affiliation(s)
- Fabian Tomaschek
- Hertie Institute for Clinical Brain Research, Department of General Neurology, University of Tübingen, Hoppe-Seyler-Straße 3, 72076 Tübingen, Germany.
7
Hertrich I, Dietrich S, Trouvain J, Moos A, Ackermann H. Magnetic brain activity phase-locked to the envelope, the syllable onsets, and the fundamental frequency of a perceived speech signal. Psychophysiology 2011; 49:322-34. [PMID: 22175821 DOI: 10.1111/j.1469-8986.2011.01314.x]
Abstract
During speech perception, acoustic correlates of syllable structure and pitch periodicity are directly reflected in electrophysiological brain activity. Magnetoencephalography (MEG) recordings were made while 10 participants listened to natural or formant-synthesized speech at moderately fast or ultrafast rate. Cross-correlation analysis was applied to show brain activity time-locked to the speech envelope, to an acoustic marker of syllable onsets, and to pitch periodicity. The envelope yielded a right-lateralized M100-like response, syllable onsets gave rise to M50/M100-like fields with an additional anterior M50 component, and pitch (ca. 100 Hz) elicited a neural resonance bound to a central auditory source at a latency of 30 ms. The strength of these MEG components showed differential effects of syllable rate and natural versus synthetic speech. Presumably, such phase-locking mechanisms serve as neuronal triggers for the extraction of information-bearing elements.
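A sketch of how the three reference time courses named above (envelope, a syllable-onset marker, and pitch periodicity) might be derived from an audio signal is given below. The specific filters, cutoffs, and function names are assumptions for illustration only, not the parameters used in the study.

```python
# Sketch of three speech-derived regressors: broadband envelope, a
# syllable-onset marker (positive rate of envelope change), and a
# pitch-periodicity band around the fundamental (~100 Hz).
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_regressors(audio, fs, f0_band=(80.0, 120.0)):
    envelope = np.abs(hilbert(audio))
    # Smooth the envelope (4 Hz low-pass) before differentiating.
    b, a = butter(2, 4.0, btype="lowpass", fs=fs)
    slow_env = filtfilt(b, a, envelope)
    onsets = np.clip(np.gradient(slow_env) * fs, 0, None)   # half-wave rectified rate of rise
    b, a = butter(2, f0_band, btype="bandpass", fs=fs)
    periodicity = filtfilt(b, a, audio)                      # carries the ~100 Hz pitch band
    return envelope, onsets, periodicity

fs = 8000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 100 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # 100 Hz carrier, 4 Hz "syllables"
env, ons, per = speech_regressors(audio, fs)
print(env.shape, ons.shape, per.shape)
```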
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany.
8
Langner G, Dinse HR, Godde B. A map of periodicity orthogonal to frequency representation in the cat auditory cortex. Front Integr Neurosci 2009; 3:27. [PMID: 19949464 PMCID: PMC2784045 DOI: 10.3389/neuro.07.027.2009]
Abstract
Harmonic sounds, such as voiced speech sounds and many animal communication signals, are characterized by a pitch related to the periodicity of their envelopes. While frequency information is extracted by mechanical filtering of the cochlea, periodicity information is analyzed by temporal filter mechanisms in the brainstem. In the mammalian auditory midbrain, envelope periodicity is represented in maps orthogonal to the representation of sound frequency. However, how periodicity is represented across the cortical surface of primary auditory cortex (AI) remains controversial. Using optical recording of intrinsic signals, we here demonstrate that a periodicity map exists in primary AI of the cat. While pure tone stimulation confirmed the well-known frequency gradient along the rostro-caudal axis of AI, stimulation with harmonic sounds revealed segregated bands of activation, indicating spatially localized preferences to specific periodicities along a dorso-ventral axis, nearly orthogonal to the tonotopic gradient. Analysis of the response locations revealed an average gradient of -100° ± 10° for the periodotopic and -12° ± 18° for the tonotopic map, resulting in a mean angle difference of 88°. The gradients were 0.65 ± 0.08 mm/octave for periodotopy and 1.07 ± 0.16 mm/octave for tonotopy, indicating that more cortical territory is devoted to the representation of an octave along the tonotopic than along the periodotopic gradient. Our results suggest that the fundamental importance of pitch, as evident in human perception, is also reflected in the layout of cortical maps and that the orthogonal spatial organization of frequency and periodicity might be a more general cortical organization principle.
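The gradient comparison reported above can be illustrated with a generic plane fit: regress best frequency and best periodicity (in octaves) on cortical position, then compare the two gradient vectors. The code below does this on synthetic data; positions, noise levels, and names are purely illustrative assumptions, not the study's optical-imaging analysis.

```python
# Sketch of map-gradient estimation and the angle between two cortical maps.
import numpy as np

def map_gradient(x_mm, y_mm, octaves):
    """Least-squares plane fit: octaves ~ a*x + b*y + c.
    Returns the gradient vector (a, b) in octaves/mm."""
    A = np.column_stack([x_mm, y_mm, np.ones_like(x_mm)])
    coef, *_ = np.linalg.lstsq(A, octaves, rcond=None)
    return coef[:2]

rng = np.random.default_rng(4)
x, y = rng.uniform(0, 4, 200), rng.uniform(0, 4, 200)        # recording sites in mm
tono = (1 / 1.07) * x + 0.05 * rng.standard_normal(200)       # ~1.07 mm/octave along x
perio = (1 / 0.65) * y + 0.05 * rng.standard_normal(200)      # ~0.65 mm/octave along y
g_t, g_p = map_gradient(x, y, tono), map_gradient(x, y, perio)
angle = np.degrees(np.arccos(np.dot(g_t, g_p) / (np.linalg.norm(g_t) * np.linalg.norm(g_p))))
spacing_t, spacing_p = 1 / np.linalg.norm(g_t), 1 / np.linalg.norm(g_p)
print(f"angle between gradients: {angle:.1f} deg")
print(f"tonotopic: {spacing_t:.2f} mm/octave, periodotopic: {spacing_p:.2f} mm/octave")
```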
Affiliation(s)
- Gerald Langner
- Neuroacoustics, Darmstadt University of Technology, Darmstadt, Germany.
- Hubert R. Dinse
- Institute for Neuroinformatics, Department of Theoretical Biology, Neural Plasticity Laboratory, Ruhr-University Bochum, Bochum, Germany.
- Ben Godde
- Neuroscience and Human Performance, Jacobs Center on Lifelong Learning, Jacobs University, Bremen, Germany.
9
Hertrich I, Mathiak K, Lutzenberger W, Ackermann H. Time course of early audiovisual interactions during speech and nonspeech central auditory processing: a magnetoencephalography study. J Cogn Neurosci 2009; 21:259-74. [PMID: 18510440 DOI: 10.1162/jocn.2008.21019]
Abstract
Cross-modal fusion phenomena suggest specific interactions of auditory and visual sensory information both within the speech and nonspeech domains. Using whole-head magnetoencephalography, this study recorded M50 and M100 fields evoked by ambiguous acoustic stimuli that were visually disambiguated to perceived /ta/ or /pa/ syllables. As in natural speech, visual motion onset preceded the acoustic signal by 150 msec. Control conditions included visual and acoustic nonspeech signals as well as visual-only and acoustic-only stimuli. (a) Both speech and nonspeech motion yielded a consistent attenuation of the auditory M50 field, suggesting a visually induced "preparatory baseline shift" at the level of the auditory cortex. (b) Within the temporal domain of the auditory M100 field, visual speech and nonspeech motion gave rise to different response patterns (nonspeech: M100 attenuation; visual /pa/: left-hemisphere M100 enhancement; /ta/: no effect). (c) These interactions could be further decomposed using a six-dipole model. One of these three pairs of dipoles (V270) was fitted to motion-induced activity at a latency of 270 msec after motion onset, that is, the time domain of the auditory M100 field, and could be attributed to the posterior insula. This dipole source responded to nonspeech motion and visual /pa/, but was found suppressed in the case of visual /ta/. Such a nonlinear interaction might reflect the operation of a binary distinction between the marked phonological feature "labial" versus its underspecified competitor "coronal." Thus, visual processing seems to be shaped by linguistic data structures even prior to its fusion with the auditory information channel.
10
Hsiao FJ, Wu ZA, Ho LT, Lin YY. Theta oscillation during auditory change detection: An MEG study. Biol Psychol 2009; 81:58-66. [DOI: 10.1016/j.biopsycho.2009.01.007]
11
Neiman AB, Russell DF, Yakusheva TA, DiLullo A, Tass PA. Response clustering in transient stochastic synchronization and desynchronization of coupled neuronal bursters. Phys Rev E Stat Nonlin Soft Matter Phys 2007; 76:021908. [PMID: 17930066 DOI: 10.1103/physreve.76.021908]
Abstract
We studied the transient dynamics of synchronized coupled neuronal bursters subjected to repeatedly applied stimuli, using a hybrid neuroelectronic system of paddlefish electroreceptors. We show experimentally that the system characteristically undergoes poststimulus transients, in which the relative phases of the oscillators may be grouped in several clusters, traversing alternate phase trajectories. These signature transient dynamics can be detected and characterized quantitatively using specific statistical measures based on a stochastic approach to transient oscillator responses.
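The clustering of relative phases referred to above is typically quantified with circular statistics. The sketch below shows one generic measure, the mean resultant length of the relative-phase distribution between two oscillatory signals; it is an illustrative assumption, not the specific transient-response statistics developed in the paper.

```python
# Sketch of relative-phase extraction and a circular clustering measure.
import numpy as np
from scipy.signal import hilbert

def relative_phase_stats(x, y):
    """Relative phase between two oscillatory signals and the mean
    resultant length (1 = perfectly clustered, 0 = uniform)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    dphi = np.angle(np.exp(1j * dphi))            # wrap to (-pi, pi]
    resultant = np.abs(np.mean(np.exp(1j * dphi)))
    return dphi, resultant

fs = 1000
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(5)
# Oscillator with slow phase diffusion versus a fixed reference oscillator.
x = np.sin(2 * np.pi * 10 * t + 0.3 * np.cumsum(rng.standard_normal(len(t))) / np.sqrt(fs))
y = np.sin(2 * np.pi * 10 * t)
dphi, R = relative_phase_stats(x, y)
print("mean resultant length:", round(R, 2))
```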
Affiliation(s)
- Alexander B Neiman
- Department of Physics and Astronomy and Quantitative Biology Institute, Ohio University, Athens, Ohio 45701, USA.
12
Wilson TW, Hernandez OO, Asherin RM, Teale PD, Reite ML, Rojas DC. Cortical gamma generators suggest abnormal auditory circuitry in early-onset psychosis. Cereb Cortex 2007; 18:371-8. [PMID: 17557901 PMCID: PMC2648842 DOI: 10.1093/cercor/bhm062]
Abstract
Neurobiological theories of schizophrenia and related psychoses have increasingly emphasized impaired neuronal coordination (i.e., dysfunctional connectivity) as central to the pathophysiology. Although neuroimaging evidence has mostly corroborated these accounts, the basic mechanism(s) of reduced functional connectivity remains elusive. In this study, we examine the developmental trajectory and underlying mechanism(s) of dysfunctional connectivity by using gamma oscillatory power as an index of local and long-range circuit integrity. An early-onset psychosis group and a matched cohort of typically developing adolescents listened to monaurally presented click-trains, as whole-head magnetoencephalography data were acquired. Consistent with previous work, gamma-band power was significantly higher in right auditory cortices across groups and conditions. However, patients exhibited significantly reduced overall gamma power relative to controls, and showed a reduced ear-of-stimulation effect indicating that ipsi- versus contralateral presentation had less impact on hemispheric power. Gamma-frequency oscillations are thought to depend on gamma-aminobutyric acidergic interneuronal networks; thus, these patients' impairment in generating and/or maintaining such activity may indicate that local circuit integrity is at least partially compromised early in the disease process. In addition, patients also showed abnormality in long-range networks (i.e., ear-of-stimulation effects), potentially suggesting that multiple stages along auditory pathways contribute to connectivity aberrations found in patients with psychosis.
Affiliation(s)
- Tony W Wilson
- Magnetoencephalography Laboratory, Department of Neurology, Wake Forest University Health Sciences, Winston-Salem, NC 27103, USA.
13
Kuriki S, Ohta K, Koyama S. Persistent responsiveness of long-latency auditory cortical activities in response to repeated stimuli of musical timbre and vowel sounds. Cereb Cortex 2007; 17:2725-32. [PMID: 17289776 DOI: 10.1093/cercor/bhl182]
Abstract
Long-latency auditory-evoked magnetic fields and potentials show strong attenuation of N1m/N1 responses when an identical stimulus is presented repeatedly, due to adaptation of auditory cortical neurons. This adaptation is weak in the subsequently occurring P2m/P2 responses, being weaker for piano chords than single piano notes. The adaptation of P2m is more suppressed in musicians having long-term musical training than in nonmusicians, whereas the amplitude of P2 is enhanced preferentially in musicians as the spectral complexity of musical tones increases. To address the key issue of whether such high responsiveness of P2m/P2 responses to complex sounds is intrinsic and common to nonmusical sounds, we conducted a magnetoencephalographic study on participants who had no experience of musical training, using consecutive trains of piano and vowel sounds. The dipole moment of the P2m sources located in the auditory cortex indicated significantly suppressed adaptation in the right hemisphere both to piano and vowel sounds. Thus, the persistent responsiveness of the P2m activity may be inherent, not induced by intensive training, and common to spectrally complex sounds. The right hemisphere dominance of the responsiveness to musical and speech sounds suggests that the analysis of acoustic features of object sounds is a significant function of P2m activity.
Affiliation(s)
- Shinya Kuriki
- Research Institute for Electronic Science, Hokkaido University, Sapporo, Japan.
14
Hertrich I, Mathiak K, Lutzenberger W, Menning H, Ackermann H. Sequential audiovisual interactions during speech perception: A whole-head MEG study. Neuropsychologia 2007; 45:1342-54. [PMID: 17067640 DOI: 10.1016/j.neuropsychologia.2006.09.019]
Abstract
Using whole-head magnetoencephalography (MEG), audiovisual (AV) interactions during speech perception (/ta/- and /pa/-syllables) were investigated in 20 subjects. Congruent AV events served as the 'standards' of an oddball design. The deviants encompassed incongruent /ta/-/pa/ configurations differing from the standards either in the acoustic or the visual domain. As an auditory non-speech control condition, the same video signals were synchronized with one of two complex tones. As in natural speech, visual movement onset preceded acoustic signals by about 150 ms. First, the impact of visual information on auditorily evoked fields to non-speech sounds was determined. Larger facial movements (/pa/ versus /ta/) yielded enhanced early responses such as the M100 component, indicating, presumably, anticipatory pre-activation of auditory cortex by visual motion cues. As a second step of analysis, mismatch fields (MMF) were calculated. Acoustic deviants elicited a typical MMF, peaking ca. 180 ms after stimulus onset, whereas visual deviants gave rise to later responses (220 ms) of a more posterior-medial source location. Finally, a late (275 ms), left-lateralized visually-induced MMF component, resembling the acoustic mismatch response, emerged during the speech condition, presumably reflecting phonetic/linguistic operations. There is mounting functional imaging evidence for an early impact of visual information on auditory cortical regions during speech perception. The present study suggests at least two successive AV interactions in association with syllable recognition tasks: early activation of auditory areas depending upon visual motion cues and a later speech-specific left-lateralized response mediated, conceivably, by backward projections from multisensory areas.
Affiliation(s)
- Ingo Hertrich
- Department of General Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
15
Boonstra TW, Daffertshofer A, Peper CE, Beek PJ. Amplitude and phase dynamics associated with acoustically paced finger tapping. Brain Res 2006; 1109:60-9. [PMID: 16860292 DOI: 10.1016/j.brainres.2006.06.039]
Abstract
To gain insight into the brain activity associated with the performance of an acoustically paced synchronization task, we analyzed the amplitude and phase dynamics inherent in magnetoencephalographic (MEG) signals across frequency bands in order to discriminate between evoked and induced responses. MEG signals were averaged with respect to motor and auditory events (tap and tone onsets). Principal component analysis was used to compare amplitude and phase changes during listening and during paced and unpaced tapping, allowing brain activity related to motor and auditory processes to be separated. Motor performance was accompanied by phasic amplitude changes and increased phase locking in the beta band. Auditory processing of acoustic stimuli resulted in a simultaneous increase of amplitude and phase locking in the theta and alpha band. The temporal overlap of auditory-related amplitude changes and phase locking indicated an evoked response, in accordance with previous studies on auditory perception. The temporal difference of movement-related amplitude and phase dynamics in the beta band, on the other hand, suggested a change in ongoing brain activity, i.e., an induced response supporting previous results on motor-related brain dynamics in the beta band.
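The evoked/induced distinction that motivates this analysis can be sketched as follows: phase-locked (evoked) band power survives averaging across trials before the amplitude is computed, whereas induced power only appears when amplitude is computed per trial and then averaged. The example below demonstrates this on synthetic beta-band epochs; it is not the study's PCA-based pipeline, and all names are assumptions.

```python
# Sketch of evoked versus induced band power from single-trial epochs.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def evoked_and_induced_power(epochs, fs, band=(15.0, 30.0)):
    b, a = butter(4, band, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, epochs, axis=1)
    evoked = np.abs(hilbert(filtered.mean(axis=0))) ** 2            # average first
    total = (np.abs(hilbert(filtered, axis=1)) ** 2).mean(axis=0)   # average last
    return evoked, total - evoked                                    # induced ~ total minus evoked

fs = 500
rng = np.random.default_rng(6)
t = np.arange(fs) / fs
phases = rng.uniform(0, 2 * np.pi, (40, 1))                         # random phase per trial
epochs = np.sin(2 * np.pi * 20 * t + phases) + 0.5 * rng.standard_normal((40, fs))
evoked, induced = evoked_and_induced_power(epochs, fs)
print("mean evoked power:", evoked.mean().round(3),
      "mean induced power:", induced.mean().round(3))
```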
Affiliation(s)
- T W Boonstra
- Institute for Fundamental and Clinical Human Movement Sciences, Faculty of Human Movement Sciences, Vrije Universiteit, Van der Boechorststraat 9, 1081 BT Amsterdam, The Netherlands.
16
Ross B, Herdman AT, Pantev C. Right Hemispheric Laterality of Human 40 Hz Auditory Steady-state Responses. Cereb Cortex 2005; 15:2029-39. [PMID: 15772375 DOI: 10.1093/cercor/bhi078]
Abstract
Hemispheric asymmetries during auditory sensory processing were examined using whole-head magnetoencephalographic recordings of auditory evoked responses to monaurally and binaurally presented amplitude-modulated sounds. Laterality indices were calculated for the transient onset responses (P1m and N1m), the transient gamma-band response, the sustained field (SF) and the 40 Hz auditory steady-state response (ASSR). All response components showed laterality toward the hemisphere contralateral to the stimulated ear. In addition, the SF and ASSR showed right hemispheric (RH) dominance. Thus, laterality of sustained response components (SF and ASSR) was distinct from that of transient responses. ASSR and SF are sensitive to stimulus periodicity. Consequently, ASSR and SF likely reflect periodic stimulus attributes and might be relevant for pitch processing based on temporal stimulus regularities. In summary, the results of the present studies demonstrate that asymmetric organization in the cerebral auditory cortex is already established on the level of sensory processing.
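Laterality indices of the kind reported above are commonly defined as a normalized right-minus-left contrast; whether the paper used exactly this form is not stated here, so the sketch below should be read as a generic, assumed definition rather than the study's formula.

```python
# Sketch of a laterality index: LI = (R - L) / (R + L), so that
# LI > 0 indicates right-hemispheric dominance of response amplitude.
import numpy as np

def laterality_index(right_amp, left_amp):
    right_amp = np.asarray(right_amp, dtype=float)
    left_amp = np.asarray(left_amp, dtype=float)
    return (right_amp - left_amp) / (right_amp + left_amp)

# Hypothetical single-subject ASSR source amplitudes (arbitrary units).
li = laterality_index(right_amp=[12.0, 9.5, 11.2], left_amp=[8.0, 8.5, 7.9])
print(np.round(li, 2))   # positive values -> right-hemisphere dominance
```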
Affiliation(s)
- B Ross
- The Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada, and Institute for Biomagnetism and Biosignalanalysis, Münster University Hospital, Germany.
17
Abstract
The functional significance of the M50 and M100 auditory evoked fields remains unclear. Here we report auditory evoked field data from three different studies employing wide-band noise stimuli. We find that, for the same stimuli, both the strength and the lateralization of the M100 are task-modulated. The M50, in contrast, shows three properties: it is dramatically more pronounced for noise stimuli than for pure tones, does not seem to be task dependent, and is significantly stronger in the left hemisphere in all task conditions. These contrasting patterns of activation shed light on the properties of the response-generating mechanisms and suggest roles in the process of auditory figure-ground segregation.
Affiliation(s)
- Maria Chait
- Neuroscience and Cognitive Science Program, University of Maryland, 1401 Marie Mount Hall, College Park, MD 20742, USA.