1
Shestopalova LB, Petropavlovskaia EA, Salikova DA, Semenova VV. Temporal integration of sound motion: Motion-onset response and perception. Hear Res 2024; 441:108922. PMID: 38043403; DOI: 10.1016/j.heares.2023.108922.
Abstract
The purpose of our study was to estimate the time interval required for integrating the acoustical changes related to sound motion, using both psychophysical and EEG measures. Healthy listeners performed direction identification tasks under dichotic conditions in the delayed-motion paradigm. Minimal audible movement angle (MAMA) was measured over a range of velocities from 60 to 360 deg/s. We also measured the minimal duration of motion at which listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening, and motion-onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity. The minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of the delayed motion, we found 2- to 3-fold better spatial resolution than that previously reported for motion starting at the sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction identification time obtained in the behavioral task and the cN1 latency behaved like hyperbolic functions of the sound's velocity. Direction identification time decreased asymptotically to 8 ms, which was considered the minimal integration time for instantaneous shift detection. Peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration. Based on these findings, we may assume that no measurable MOR would be evoked by slowly moving stimuli, as they would reach their MAMAs in a time longer than the optimal integration time.
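The velocity dependencies reported above lend themselves to simple closed-form models. A minimal sketch (not the authors' fitting code): the 2-deg MAA intercept and the 137-ms cN1 asymptote are taken from the abstract, the 0.034-s slope assumes the linear MAMA-velocity fit can be read as MAA plus velocity times the ~34-ms integration time, and the hyperbolic constant k is a hypothetical placeholder.

```python
import numpy as np

def mama_deg(velocity, maa=2.0, slope=0.034):
    """Linear model of minimal audible movement angle (deg) vs. velocity (deg/s).
    maa = 2 deg from the abstract; slope (s) assumes MAMA = MAA + v * T_int."""
    return maa + slope * velocity

def cn1_latency_ms(velocity, asymptote=137.0, k=3000.0):
    """Hyperbolic decrease of cN1 peak latency (ms) toward its asymptote.
    asymptote = 137 ms from the abstract; k is a hypothetical constant."""
    return asymptote + k / velocity

velocities = np.array([60.0, 120.0, 240.0, 360.0])  # range used in the study
print(mama_deg(velocities))        # MAMA grows linearly with velocity
print(cn1_latency_ms(velocities))  # latency approaches 137 ms at high velocity
```

Under this reading, slow motion reaches its MAMA only after the optimal integration window has closed, which is the abstract's argument for why slowly moving stimuli should evoke no measurable MOR.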
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia.
- Diana A Salikova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
2
İlhan B, Kurt S, Ungan P. Auditory cortical responses to abrupt lateralization shifts do not reflect the activity of hemifield-specific units involved in opponent coding of auditory space. Neuropsychologia 2023; 188:108629. PMID: 37356539; DOI: 10.1016/j.neuropsychologia.2023.108629.
Abstract
Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD) based spatial coding in humans. Instead, a population-code model called the "opponent channels model" (OCM) has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Event-related potentials (ERPs) to ITD changes have been used in some studies to test the predictions of this model by treating the sounds before and after the change as adaptor and probe stimuli, respectively. These studies assume that the former stimulus causes adaptation of the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of the neurons selective to the side of the probe sound. However, these ERP components are known to form a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. This complex probably does not genuinely reflect the activity of stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which means a violation of the crucial assumption of an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first, we recorded ERPs to abrupt lateralization shifts of click trains having various pre- and post-shift ITDs within the physiological range of -600 μs to +600 μs. Magnitudes of the ERP components P1, N1, and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed in an adaptor-probe paradigm, casting doubt on the accuracy of testing sensory coding models using ERPs to abrupt lateralization changes. Findings of the second experiment, involving ERPs to conjoint outwards/transverse shift stimuli, also supported this conclusion.
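The opponent-channel model these experiments probe can be sketched in a few lines: two mirror-symmetric sigmoidal hemifield channels whose activity difference yields a lateralization estimate. The sigmoid slope and midpoint below are hypothetical illustration values, not parameters fitted in this study.

```python
import math

def channel_activity(itd_us, midpoint_us=0.0, slope=1.0 / 200.0):
    """Sigmoidal tuning of the right-hemifield channel; activity in (0, 1).
    Slope and midpoint are hypothetical, chosen only for illustration."""
    return 1.0 / (1.0 + math.exp(-slope * (itd_us - midpoint_us)))

def lateralization(itd_us):
    """Opponent readout: right-channel minus mirror-symmetric left-channel."""
    return channel_activity(itd_us) - channel_activity(-itd_us)

# Sweep the physiological ITD range used in the first experiment.
for itd in (-600, -300, 0, 300, 600):
    print(itd, round(lateralization(itd), 3))
```

In an adaptor-probe reading of this model, the pre-shift sound would adapt one channel and the post-shift response would sum the partial probe responses of both; the study's point is that the measured N1-P2 did not behave this additively.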
Affiliation(s)
- Barkın İlhan
- Department of Biophysics, Necmettin Erbakan University Meram Medical Faculty, Konya, Türkiye.
- Saliha Kurt
- Department of Audiometry, Selçuk University Vocational School of Health Services, Konya, Türkiye.
3
Zeng T, Wang Z, Lin Y, Cheng Y, Shan X, Tao Y, Zhao X, Xu H, Liu Y. Doppler Frequency-Shift Information Processing in WOx-Based Memristive Synapse for Auditory Motion Perception. Adv Sci (Weinh) 2023; 10:e2300030. PMID: 36862024; PMCID: PMC10161103; DOI: 10.1002/advs.202300030.
Abstract
Auditory motion perception is a crucial capability for decoding and discriminating spatiotemporal information in neuromorphic auditory systems. The Doppler frequency-shift feature and the interaural time difference (ITD) are two fundamental cues in auditory information processing. In this work, azimuth detection and velocity detection, two typical functions of auditory motion perception, are demonstrated in a WOx-based memristive synapse. The WOx memristor presents both a volatile mode (M1) and a semi-nonvolatile mode (M2), which are capable of implementing high-pass filtering and processing spike trains with relative timing and frequency shifts. In particular, Doppler frequency-shift information processing for velocity detection is emulated in the WOx memristor-based auditory system for the first time, relying on a triplet spike-timing-dependent plasticity scheme in the memristor. These results provide new opportunities for mimicking auditory motion perception and enable the auditory sensory system to be applied in future neuromorphic sensing.
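The triplet spike-timing-dependent plasticity invoked here can be illustrated with a minimal rate-sensitive update rule in the Pfister-Gerstner style. This is a generic sketch of the plasticity rule, not the device model: all time constants and amplitudes below are hypothetical, not WOx memristor parameters.

```python
import math

def triplet_ltp(dt_pair_ms, dt_post_ms, tau_plus=16.8, tau_y=114.0,
                a2=0.01, a3=0.006):
    """Weight increase at a post-synaptic spike: a pair term decaying with the
    pre->post interval dt_pair_ms, plus a triplet term that also decays with
    the interval dt_post_ms since the previous post-synaptic spike.
    All constants are hypothetical illustration values."""
    pair = a2 * math.exp(-dt_pair_ms / tau_plus)
    triplet = a3 * math.exp(-dt_pair_ms / tau_plus) * math.exp(-dt_post_ms / tau_y)
    return pair + triplet

# The triplet term makes potentiation grow with post-spike rate: a shorter
# post-post interval (higher frequency) yields a larger update for the same
# pre->post timing, which is what lets such a rule encode frequency shifts.
low_rate = triplet_ltp(dt_pair_ms=10.0, dt_post_ms=110.0)
high_rate = triplet_ltp(dt_pair_ms=10.0, dt_post_ms=20.0)
print(high_rate > low_rate)
```

This frequency sensitivity is the property that allows a Doppler-shifted spike train (higher or lower pulse rate) to be mapped onto a distinguishable synaptic weight change.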
Affiliation(s)
- Tao Zeng, Zhongqiang Wang, Ya Lin, YanKun Cheng, Xuanyu Shan, Ye Tao, Xiaoning Zhao, Haiyang Xu, Yichun Liu
- Key Laboratory for UV Light-Emitting Materials and Technology (Northeast Normal University), Ministry of Education, 5268 Renmin Street, Changchun, 130024, P. R. China
4
Zhang H, Xie J, Xiao Y, Cui G, Xu G, Tao Q, Gebrekidan YY, Yang Y, Ren Z, Li M. Steady-state auditory motion based potentials evoked by intermittent periodic virtual sound source and the effect of auditory noise on EEG enhancement. Hear Res 2023; 428:108670. PMID: 36563411; DOI: 10.1016/j.heares.2022.108670.
Abstract
Hearing is one of the most important forms of human perception, and humans can track the movement of sound in complex environments. On the basis of this phenomenon, this study explored the possibility of eliciting a steady-state brain response with an intermittent, periodically moving sound source. A novel stimulation paradigm with discrete, continuous and orderly changes of sound source position was designed based on virtual sound using head-related transfer functions (HRTFs). Auditory motion stimulation paradigms with different noise levels were then designed by changing the signal-to-noise ratio (SNR). The characteristics of the brain response and the effects of different noise levels on it were studied by analyzing electroencephalogram (EEG) signals evoked by the proposed stimulation. Experimental results showed that the proposed paradigm could elicit a novel steady-state auditory evoked potential (AEP), i.e., the steady-state motion auditory evoked potential (SSMAEP). Moreover, moderate noise enhanced SSMAEP amplitude and corresponding brain connectivity. This study enriches the types of AEPs and provides insights into how the brain processes moving sound sources and how noise affects that processing.
Affiliation(s)
- Huanqing Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jun Xie
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China; School of Mechanical Engineering, Xinjiang University, Urumqi, China.
- Yi Xiao
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China.
- Guiling Cui
- National Key Laboratory of Human Factors Engineering, China Astronauts Research and Training Center, Beijing, China
- Guanghua Xu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Urumqi, China
- Yuzhe Yang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Zhiyuan Ren
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Min Li
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
5
Ozmeral EJ, Menon KN. Selective auditory attention modulates cortical responses to sound location change for speech in quiet and in babble. PLoS One 2023; 18:e0268932. PMID: 36638116; PMCID: PMC9838839; DOI: 10.1371/journal.pone.0268932.
Abstract
Listeners use the spatial location or change in spatial location of coherent acoustic cues to aid in auditory object formation. From stimulus-evoked onset responses in normal-hearing listeners using electroencephalography (EEG), we have previously shown measurable tuning to stimuli changing location in quiet, revealing a potential window into the cortical representations of auditory scene analysis. These earlier studies used non-fluctuating, spectrally narrow stimuli, so it was still unknown whether previous observations would translate to speech stimuli, and whether responses would be preserved for stimuli in the presence of background maskers. To examine the effects that selective auditory attention and interferers have on object formation, we measured cortical responses to speech changing location in the free field with and without background babble (+6 dB SNR) during both passive and active conditions. Active conditions required listeners to respond to the onset of the speech stream when it occurred at a new location, explicitly indicating 'yes' or 'no' to whether the stimulus occurred at a block-specific location either 30 degrees to the left or right of midline. In the aggregate, results show similar evoked responses to speech stimuli changing location in quiet compared to babble background. However, the effect of the two background environments diverges somewhat when considering the magnitude and direction of the location change and where the subject was attending. In quiet, attention to the right hemifield appeared to evoke a stronger response than attention to the left hemifield when speech shifted in the rightward direction. No such difference was found in babble conditions. Therefore, consistent with challenges associated with cocktail party listening, directed spatial attention could be compromised in the presence of stimulus noise and likely leads to poorer use of spatial cues in auditory streaming.
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States of America
- Katherine N Menon
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
6
Ozmeral EJ, Eddins DA, Eddins AC. Selective auditory attention modulates cortical responses to sound location change in younger and older adults. J Neurophysiol 2021; 126:803-815. PMID: 34288759; DOI: 10.1152/jn.00609.2020.
Abstract
The present study measured scalp potentials in response to low-frequency, narrowband noise bursts changing location in the front azimuthal plane. At issue was whether selective auditory attention has a modulatory effect on the cortical encoding of spatial change and whether older listeners with normal-hearing thresholds would show depressed cortical representation of spatial changes relative to younger listeners. Young and older normal-hearing listeners were instructed to either passively listen to the stimulus presentation or actively attend to a single location (either 30° left or right of midline) and detect when a noise stream moved to the attended location. Prominent peaks of the electroencephalographic scalp waveforms were compared across groups, locations, and attention conditions. In addition, an opponent-channel model of spatial coding was applied to capture the effect of attention on spatial-change tuning. Younger listeners showed not only larger responses overall but a greater dynamic range in their response to location changes. Results suggest that younger listeners were acquiring and encoding key spatial cues at early cortical processing areas. On the other hand, each group exhibited modulatory effects of attention on spatial-change tuning, indicating that both younger and older listeners selectively attend to space in a manner that amplifies the available signal. NEW & NOTEWORTHY In complex acoustic scenes, listeners take advantage of spatial cues to selectively attend to sounds that are deemed immediately relevant. At the neural level, selective attention amplifies electrical responses to spatial changes. We tested whether older and younger listeners have comparable modulatory effects of attention to stimuli moving in the free field. Results indicate that although older listeners do have depressed overall responses, selective attention enhances spatial-change tuning in younger and older listeners alike.
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Ann Clock Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
7
Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Lateralization of brain responses to auditory motion: A study using single-trial analysis. Neurosci Res 2020; 162:31-44. PMID: 32001322; DOI: 10.1016/j.neures.2020.01.007.
Abstract
The present study investigates hemispheric asymmetry of the ERPs and low-frequency oscillatory responses evoked in both hemispheres by sound stimuli with delayed onset of motion. EEG was recorded for three patterns of sound motion produced by changes in interaural time differences. Event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) were computed from the time-frequency decomposition of the EEG signals. The participants either read books of their choice (passive listening) or indicated the perceived sound trajectories using a graphic tablet (active listening). Our goal was to find out whether the lateralization of the motion-onset response (MOR) and of the oscillatory responses to sound motion was more consistent with the right-hemispheric dominance, contralateral, or neglect model of interhemispheric asymmetry. Apparent dominance of the right hemisphere was found only in the ERSP responses. Stronger contralaterality of the left hemisphere, corresponding to the "neglect model" of asymmetry, was shown by the MOR components and by the phase coherence of the delta-alpha oscillations. Velocity and attention did not consistently change the interhemispheric asymmetry of either the MOR or the oscillatory responses. Our findings demonstrate how the lateralization pattern shown by the MOR potential was interrelated with that of the motion-related single-trial measures.
Affiliation(s)
- L B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- E A Petropavlovskaia
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- V V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
- N I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences 199034, Makarova emb., 6, St. Petersburg, Russia.
8
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. PMID: 30651333; DOI: 10.1523/jneurosci.2289-18.2018.
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for efficiently interacting with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in the bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location. SIGNIFICANCE STATEMENT Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex, by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
9
Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018; 181:683-691. PMID: 30053517; DOI: 10.1016/j.neuroimage.2018.07.054.
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp-recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones which was spectro-temporally modified to be perceived as moving randomly along a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by 1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and 2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that we can decode the sound trajectory with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and the power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse-train stimuli containing only interaural level and time differences (ILDs and ITDs, respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification, supporting the notion of integrated processing of ILD and ITD at the cortical level. These results are particularly relevant for applications such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
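The decoding pipeline described above, a linear regression mapping EEG features to the time-varying azimuth, trained on one portion of the data and evaluated on held-out data, can be sketched on synthetic signals. Everything below is a toy stand-in: the dimensions, random-walk trajectory, noise level, and ridge regularizer are arbitrary choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 32

# Synthetic "ground truth": a random-walk azimuth trajectory, linearly mixed
# into 32 EEG channels with additive noise (a toy stand-in for real data).
azimuth = np.cumsum(rng.normal(0.0, 1.0, n_samples))
mixing = rng.normal(0.0, 1.0, n_channels)
eeg = np.outer(azimuth, mixing) + rng.normal(0.0, 5.0, (n_samples, n_channels))

# Train a linear decoder on the first 1500 samples via the closed-form ridge
# solution w = (X'X + aI)^-1 X'y, then reconstruct the held-out trajectory.
X_tr, X_te = eeg[:1500], eeg[1500:]
y_tr, y_te = azimuth[:1500], azimuth[1500:]
alpha = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_channels), X_tr.T @ y_tr)

y_hat = X_te @ w
r = np.corrcoef(y_hat, y_te)[0, 1]  # reconstruction accuracy (correlation)
print(f"decoded-vs-true azimuth correlation: {r:.2f}")
```

In practice, chance level would be estimated by decoding against shuffled or surrogate trajectories, which is what "significantly above chance" refers to in the abstract.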
10
Sound frequency affects the auditory motion-onset response in humans. Exp Brain Res 2018; 236:2713-2726. PMID: 29998350; DOI: 10.1007/s00221-018-5329-9.
Abstract
The current study examines the modulation of the motion-onset response by the frequency range of sound stimuli. Delayed motion-onset and stationary stimuli were presented in a free field by sequentially activating loudspeakers on an azimuthal plane, preserving the natural percept of externalized sound presentation. The sounds were presented in low- or high-frequency ranges and had different motion directions within each hemifield. Difference waves were calculated by contrasting the moving and stationary sounds to isolate the motion-onset responses. Analyses of the peak amplitudes and latencies of the difference waves showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli in the high-frequency range. A subsequent post hoc analysis of the normalized amplitude of the motion response confirmed this finding by excluding the possibility that the frequency range had an overall effect on the waveform, showing instead that the effect was limited to the motion response. These results support the idea of a modular organization of the motion-onset response, with the processing of primary sound motion characteristics reflected in its early part. The article also highlights the importance of specificity in auditory stimulus design.
11
Motion processing after sight restoration: No competition between visual recovery and auditory compensation. Neuroimage 2018; 167:284-296. DOI: 10.1016/j.neuroimage.2017.11.050.
12
Event-Related Potentials to Sound Stimuli with Delayed Onset of Motion in Conditions of Active and Passive Listening. Neurosci Behav Physiol 2017. DOI: 10.1007/s11055-017-0536-6.
13
Altmann CF, Ueda R, Bucher B, Furukawa S, Ono K, Kashino M, Mima T, Fukuyama H. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography. Neuroimage 2017; 159:185-194. DOI: 10.1016/j.neuroimage.2017.07.055.
14
Asymmetries in behavioral and neural responses to spectral cues demonstrate the generality of auditory looming bias. Proc Natl Acad Sci U S A 2017; 114:9743-9748. PMID: 28827336; DOI: 10.1073/pnas.1703247114.
Abstract
Studies of auditory looming bias have shown that sources increasing in intensity are more salient than sources decreasing in intensity. Researchers have argued that listeners are more sensitive to approaching sounds compared with receding sounds, reflecting an evolutionary pressure. However, these studies only manipulated overall sound intensity; therefore, it is unclear whether looming bias is truly a perceptual bias for changes in source distance, or only in sound intensity. Here we demonstrate both behavioral and neural correlates of looming bias without manipulating overall sound intensity. In natural environments, the pinnae induce spectral cues that give rise to a sense of externalization; when spectral cues are unnatural, sounds are perceived as closer to the listener. We manipulated the contrast of individually tailored spectral cues to create sounds of similar intensity but different naturalness. We confirmed that sounds were perceived as approaching when spectral contrast decreased, and perceived as receding when spectral contrast increased. We measured behavior and electroencephalography while listeners judged motion direction. Behavioral responses showed a looming bias in that responses were more consistent for sounds perceived as approaching than for sounds perceived as receding. In a control experiment, looming bias disappeared when spectral contrast changes were discontinuous, suggesting that perceived motion in distance and not distance itself was driving the bias. Neurally, looming bias was reflected in an asymmetry of late event-related potentials associated with motion evaluation. Hence, both our behavioral and neural findings support a generalization of the auditory looming bias, representing a perceptual preference for approaching auditory objects.
15
Ozmeral EJ, Eddins DA, Eddins AC. Reduced temporal processing in older, normal-hearing listeners evident from electrophysiological responses to shifts in interaural time difference. J Neurophysiol 2016; 116:2720-2729. PMID: 27683889; PMCID: PMC5133308; DOI: 10.1152/jn.00560.2016.
Abstract
Previous electrophysiological studies of interaural time difference (ITD) processing have demonstrated that ITDs are represented by a nontopographic population rate code. Rather than being narrowly tuned to ITDs, neural channels are broadly tuned to ITDs in either the left or right auditory hemifield, and the relative activity between the channels determines the perceived lateralization of the sound. With advancing age, spatial perception weakens, and poor temporal processing contributes to declining spatial acuity. At present, it is unclear whether age-related temporal processing deficits are due to poor inhibitory control in the auditory system or to degraded neural synchrony at the periphery. Cortical processing of spatial cues based on a hemifield code is susceptible to potential age-related physiological changes. We consider two distinct predictions of age-related changes to ITD sensitivity: declines in inhibitory mechanisms would lead to increased excitation and medial shifts of rate-azimuth functions, whereas a general reduction in neural synchrony would lead to reduced excitation and shallower slopes of the rate-azimuth function. The current study tested these possibilities by measuring an evoked response to ITD shifts in a narrowband noise. Results were more in line with the latter outcome, both from the measured latencies and amplitudes of the global field potentials and from source-localized waveforms in the left and right auditory cortices. The measured responses of older listeners also tended to show a reduced asymmetric distribution of activity in response to ITD shifts, which is consistent with other sensory and cognitive processing models of aging.
Affiliation(s)
- Erol J Ozmeral: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Ann C Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
16
Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011]
17
A selective impairment of perception of sound motion direction in peripheral space: A case study. Neuropsychologia 2015; 80:79-89. [PMID: 26586155] [DOI: 10.1016/j.neuropsychologia.2015.11.008]
Abstract
It is still an open question whether the auditory system, like the visual system, processes auditory motion independently of other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of the two testing sessions) who suffered a bilateral occipital infarction over 12 years earlier and has extensive bilateral damage in the occipital lobe, extending bilaterally into inferior posterior temporal cortex and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in central (straight ahead) and in right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, her perception of the direction of auditory motion in central space was unimpaired. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space.
18
Freeman TCA, Leung J, Wufong E, Orchard-Mills E, Carlile S, Alais D. Discrimination contours for moving sounds reveal duration and distance cues dominate auditory speed perception. PLoS One 2014; 9:e102864. [PMID: 25076211] [PMCID: PMC4116163] [DOI: 10.1371/journal.pone.0102864]
Abstract
Evidence that the auditory system contains specialised motion detectors is mixed. Many psychophysical studies confound speed cues with distance and duration cues, and present sound sources that do not appear to move in external space. Here we use the 'discrimination contours' technique to probe the probabilistic combination of speed, distance and duration for stimuli moving in a horizontal arc around the listener in virtual auditory space. The technique produces a set of motion discrimination thresholds that define a contour in the distance-duration plane for different combinations of the three cues, based on a 3-interval oddity task. The orientation of the contour (typically elliptical in shape) reveals which cue or combination of cues dominates. If the auditory system contained specialised motion detectors, stimuli moving over different distances and durations but defining the same speed should be more difficult to discriminate, and the resulting discrimination contours should therefore be oriented obliquely along iso-speed lines within the distance-duration plane. However, we found that over a wide range of speeds, distances and durations, the ellipses aligned with the distance-duration axes and were stretched vertically, suggesting that listeners were most sensitive to duration. A second experiment showed that listeners were able to make speed judgements when distance and duration cues were degraded by noise, but that performance was worse. Our results therefore suggest that speed is not a primary cue to motion in the auditory system, although listeners can use speed to make discrimination judgements when distance and duration cues are unreliable.
Affiliation(s)
- Johahn Leung: Auditory Neuroscience Laboratory, Department of Physiology and Bosch Institute, School of Medicine, University of Sydney, Sydney, New South Wales, Australia
- Ella Wufong: School of Psychology, University of Sydney, Sydney, New South Wales, Australia
- Emily Orchard-Mills: School of Psychology, University of Sydney, Sydney, New South Wales, Australia
- Simon Carlile: Auditory Neuroscience Laboratory, Department of Physiology and Bosch Institute, School of Medicine, University of Sydney, Sydney, New South Wales, Australia
- David Alais: School of Psychology, University of Sydney, Sydney, New South Wales, Australia
19
Abstract
Neurophysiological findings suggest that auditory and visual motion information is integrated at an early stage of auditory cortical processing, beginning in primary auditory cortex. Here, the effect of visual motion on the processing of auditory motion was investigated using electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in terms of both the timing and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found as early as around 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Affiliation(s)
- Stephan Getzmann: Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald: Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
20
Wang Q, Bao M, Chen L. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display. Exp Brain Res 2013; 232:273-82. [PMID: 24141518] [DOI: 10.1007/s00221-013-3738-3]
Abstract
Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether such perceptual groupings (spatiotemporal grouping vs. frequency grouping) also apply to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, the intervals between A and B and between B and C), seven 'inter-frame intervals' (IFIs, the intervals between AB and BC) and two different speaker layouts (inter-speaker distance: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues of Experiment 1. Listeners made two-alternative forced choices (2AFC) to report the percept of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (operationalized by the perceptual decisions on the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results can be accounted for by the 'peripheral channeling' theory.
Affiliation(s)
- Qingcui Wang: Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, 100190, China
21
Lewald J, Getzmann S. Ventral and dorsal visual pathways support auditory motion processing in the blind: evidence from electrical neuroimaging. Eur J Neurosci 2013; 38:3201-9. [DOI: 10.1111/ejn.12306]
Affiliation(s)
- Jörg Lewald: Ruhr University Bochum, Faculty of Psychology, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann: Ruhr University Bochum, Faculty of Psychology, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
22
Grzeschik R, Böckmann-Barthel M, Mühler R, Verhey JL, Hoffmann MB. Direction-specific adaptation of motion-onset auditory evoked potentials. Eur J Neurosci 2013; 38:2557-65. [PMID: 23725339] [DOI: 10.1111/ejn.12264]
Abstract
Auditory evoked potentials (AEPs) to motion onset in humans are dominated by a fronto-central complex, with a change-negative deflection 1 (cN1) and a change-positive deflection 2 (cP2) component. Here the contribution of veridical motion detectors to motion-onset AEPs was investigated with the hypothesis that direction-specific adaptation effects would indicate the contribution of such motion detectors. AEPs were recorded from 33 electroencephalographic channels to the test stimulus, i.e. motion onset of horizontal virtual auditory motion (60° per s) from straight ahead to the left. AEPs were compared in two experiments for three conditions, which differed in their history prior to the motion-onset test stimulus: (i) without motion history (Baseline), (ii) with motion history in the same direction as the test stimulus (Adaptation Same), and (iii) a reference condition with auditory history. For Experiment 1, condition (iii) comprised motion in the opposite direction (Adaptation Opposite). For Experiment 2, a noise in the absence of coherent motion (Matched Noise) was used as the reference condition. In Experiment 1, the amplitude difference cP2 - cN1 obtained for Adaptation Same was significantly smaller than for Baseline and Adaptation Opposite. In Experiment 2, it was significantly smaller than for Matched Noise. Adaptation effects were absent for cN1 and cP2 latencies. These findings demonstrate direction-specific adaptation of the motion-onset AEP. This suggests that veridical auditory motion detectors contribute to the motion-onset AEP.
Affiliation(s)
- Ramona Grzeschik: Department of Ophthalmology, Visual Processing Laboratory, Otto von Guericke University Magdeburg, Leipziger Strasse 44, 39120 Magdeburg, Germany
23
Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. [PMID: 23499852] [DOI: 10.1016/j.neuropsychologia.2013.03.001]
Abstract
Differences in the processing of moving and static sounds in the human cortex were studied by electroencephalography while subjects performed an active discrimination task. Sound bursts were presented in the acoustic free-field between 47° to the left and 47° to the right under three different stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli randomly embedded within a stream of frequently occurring non-target events (i.e. 'standards') and rare non-target stimuli (i.e. 'deviants'). The acoustic stimuli were presented in blocks, with each stimulus type serving as target, as non-target, or as standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and by both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a time window of 300-400 ms (the 'd300-400 ms' component in the 'motion-minus-static' difference wave). For motion stimuli, the N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli the N1 amplitudes over both hemispheres were in the same range. Contrary to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with the static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli.
The N1 components documented here comply with the results of previous studies on auditory space processing and suggest a contralateral dominance during the cortical integration of spatial acoustic information. Additionally, the cortical activity in the 'd300-400 ms' time period indicates that, in addition to the motion origin (as reflected by the N1), the direction of motion (leftward/rightward) or rather the motion termination is also cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is not based on a genuine motion-sensitive system, but rather on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, specificities of the present P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion-specific attributes, i.e. motion direction and/or velocity, that are preponderant in the left hemisphere.
Affiliation(s)
- Nicole Richter: University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany
24
Magezi DA, Buetler KA, Chouiter L, Annoni JM, Spierer L. Electrical neuroimaging during auditory motion aftereffects reveals that auditory motion processing is motion sensitive but not direction selective. J Neurophysiol 2012; 109:321-31. [PMID: 23076114] [DOI: 10.1152/jn.00625.2012]
Abstract
Following prolonged exposure to adaptor sounds moving in a single direction, participants may perceive stationary probe sounds as moving in the opposite direction [direction-selective auditory motion aftereffect (aMAE)] and be less sensitive to the motion of probe sounds that are actually moving (motion-sensitive aMAE). The neural mechanisms of aMAEs, and notably whether they are due to adaptation of direction-selective motion detectors, as found in vision, are presently unknown; resolving this would provide critical insight into auditory motion processing. We measured human behavioral responses and auditory evoked potentials to probe sounds following four types of moving adaptor sounds: leftward and rightward unidirectional, bidirectional, and stationary. Behavioral data replicated both direction-selective and motion-sensitive aMAEs. Electrical neuroimaging analyses of auditory evoked potentials to stationary probes revealed no significant difference in either global field power (GFP) or scalp topography between the leftward and rightward conditions, suggesting that aMAEs are not based on adaptation of direction-selective motion detectors. By contrast, the bidirectional and stationary conditions differed significantly in the stationary-probe GFP at 200 ms post-stimulus onset without concomitant topographic modulation, indicative of a difference in response strength between statistically indistinguishable intracranial generators. The magnitude of this GFP difference was positively correlated with the magnitude of the motion-sensitive aMAE, supporting the functional relevance of the neurophysiological measures. Electrical source estimations revealed that the GFP difference followed from a modulation of activity in predominantly right-hemisphere frontal-temporal-parietal brain regions previously implicated in auditory motion processing. Our collective results suggest that auditory motion processing relies on motion-sensitive but, in contrast to vision, non-direction-selective mechanisms.
Affiliation(s)
- David A Magezi: Neurology Unit, Department of Medicine, Faculty of Sciences, University of Fribourg, Fribourg, Switzerland
25
Population-wide bias of surround suppression in auditory spatial receptive fields of the owl's midbrain. J Neurosci 2012; 32:10470-8. [PMID: 22855796] [DOI: 10.1523/jneurosci.0047-12.2012]
Abstract
The physical arrangement of receptive fields (RFs) within neural structures is important for local computations. Nonuniform distribution of tuning within populations of neurons can influence emergent tuning properties, causing bias in local processing. This issue was studied in the auditory system of barn owls. The owl's external nucleus of the inferior colliculus (ICx) contains a map of auditory space in which the frontal region is overrepresented. We measured spatiotemporal RFs of ICx neurons using spatial white noise. We found a population-wide bias in surround suppression such that suppression from frontal space was stronger. This asymmetry increased with laterality in spatial tuning. The bias could be explained by a model of lateral inhibition based on the overrepresentation of frontal space observed in ICx. The model predicted trends in surround suppression across ICx that matched the data. Thus, the uneven distribution of spatial tuning within the map could explain the topography of time-dependent tuning properties. This mechanism may have significant implications for the analysis of natural scenes by sensory systems.
26
Getzmann S, Lewald J. Cortical processing of change in sound location: Smooth motion versus discontinuous displacement. Brain Res 2012; 1466:119-27. [DOI: 10.1016/j.brainres.2012.05.033]
27
When and where of auditory spatial processing in cortex: a novel approach using electrotomography. PLoS One 2011; 6:e25146. [PMID: 21949873] [PMCID: PMC3176323] [DOI: 10.1371/journal.pone.0025146]
Abstract
The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.