1. Shestopalova LB, Petropavlovskaia EA, Salikova DA, Semenova VV. Temporal integration of sound motion: Motion-onset response and perception. Hear Res 2024; 441:108922. PMID: 38043403. DOI: 10.1016/j.heares.2023.108922.
Abstract
The purpose of our study was to estimate the time interval required for integrating the acoustic changes related to sound motion, using both psychophysical and EEG measures. Healthy listeners performed direction identification tasks under dichotic conditions in the delayed-motion paradigm. The minimal audible movement angle (MAMA) was measured over velocities ranging from 60 to 360 deg/s. We also measured the minimal duration of motion at which listeners could identify its direction. EEG was recorded in the same group of subjects during passive listening, and motion-onset responses (MOR) were analyzed. MAMA increased linearly with motion velocity. The minimum audible angle (MAA) calculated from this linear function was about 2 deg. For higher velocities of the delayed motion, we found 2- to 3-fold better spatial resolution than that previously reported for motion starting at the sound onset. The time required for optimal discrimination of motion direction was about 34 ms. The main finding of our study was that both the direction identification time obtained in the behavioral task and the cN1 latency behaved as hyperbolic functions of sound velocity. Direction identification time decreased asymptotically to 8 ms, which was considered the minimal integration time for instantaneous shift detection. The peak latency of cN1 also decreased with increasing velocity and asymptotically approached 137 ms. This limit corresponded to the latency of the response to an instantaneous sound shift and was 37 ms later than the latency of the sound-onset response. The direction discrimination time (34 ms) was of the same magnitude as the additional time required for motion processing to be reflected in the MOR potential. Thus, MOR latency can be viewed as a neurophysiological index of temporal integration.
Based on the findings obtained, we may assume that no measurable MOR would be evoked by slowly moving stimuli as they would reach their MAMAs in a time longer than the optimal integration time.
Affiliation(s)
- Lidia B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
- Diana A Salikova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
- Varvara V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb., 6, 199034, Saint Petersburg, Russia
2. Warnecke M, Litovsky RY. Signal envelope and speech intelligibility differentially impact auditory motion perception. Sci Rep 2021; 11:15117. PMID: 34302032. PMCID: PMC8302594. DOI: 10.1038/s41598-021-94662-y.
Abstract
Our acoustic environment contains a plethora of complex sounds that are often in motion. To gauge approaching danger and communicate effectively, listeners need to localize and identify sounds, which includes determining sound motion. This study addresses which acoustic cues impact listeners' ability to determine sound motion. Signal envelope (ENV) cues are implicated in both sound motion tracking and stimulus intelligibility, suggesting that these processes could be competing for sound processing resources. We created auditory chimaeras from speech and noise stimuli and varied the number of frequency bands, effectively manipulating speech intelligibility. Normal-hearing adults were presented with stationary or moving chimaeras and reported perceived sound motion and content. Results show that sensitivity to sound motion is not affected by speech intelligibility, but differs clearly between the original noise and speech stimuli. Further, acoustic chimaeras with speech-like ENVs and intelligible content induced a strong bias in listeners to report sounds as stationary. Increasing stimulus intelligibility systematically increased that bias, and removing intelligible content reduced it, suggesting that sound content may be prioritized over sound motion. These findings suggest that sound motion processing in the auditory system can be biased by acoustic parameters related to speech intelligibility.
Affiliation(s)
- Michaela Warnecke
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Ave, Madison, WI, 53705, USA.
- Ruth Y Litovsky
- University of Wisconsin-Madison, Waisman Center, 1500 Highland Ave, Madison, WI, 53705, USA
3. Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. PMID: 30651333. DOI: 10.1523/jneurosci.2289-18.2018.
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from that supporting sound source location.
SIGNIFICANCE STATEMENT: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study, therefore, sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes.
Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
4. Sound frequency affects the auditory motion-onset response in humans. Exp Brain Res 2018; 236:2713-2726. PMID: 29998350. DOI: 10.1007/s00221-018-5329-9.
Abstract
The current study examines how the frequency range of sound stimuli modulates the motion-onset response. Delayed motion-onset and stationary stimuli were presented in the free field by sequentially activating loudspeakers on an azimuthal plane, preserving the natural percept of externalized sound presentation. The sounds were presented in low- or high-frequency ranges and differed in motion direction within each hemifield. Difference waves were calculated by contrasting the moving and stationary sounds to isolate the motion-onset responses. Analyses of the peak amplitudes and latencies of the difference waves showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli in the high-frequency range. A subsequent post hoc analysis of the normalized amplitude of the motion response confirmed this finding by excluding the possibility that the frequency range had an overall effect on the waveform, showing instead that the effect was limited to the motion response. These results support the idea of a modular organization of the motion-onset response, with the processing of primary sound motion characteristics being reflected in the early part of the response. The article also highlights the importance of specificity in auditory stimulus design.
5. Rigoulot S, Knoth IS, Lafontaine M, Vannasing P, Major P, Jacquemont S, Michaud JL, Jerbi K, Lippé S. Altered visual repetition suppression in Fragile X Syndrome: New evidence from ERPs and oscillatory activity. Int J Dev Neurosci 2017; 59:52-59. DOI: 10.1016/j.ijdevneu.2017.03.008.
Affiliation(s)
- Simon Rigoulot
- Departement de Psychologie, Université de Montréal, Montreal, Canada
- Neuroscience of Early Development (NED), Montreal, Canada
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montreal, Canada
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, Canada
- Inga S. Knoth
- Neuroscience of Early Development (NED), Montreal, Canada
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montreal, Canada
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Marc‐Philippe Lafontaine
- Departement de Psychologie, Université de Montréal, Montreal, Canada
- Neuroscience of Early Development (NED), Montreal, Canada
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montreal, Canada
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Phetsamone Vannasing
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Philippe Major
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Sébastien Jacquemont
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Jacques L. Michaud
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- Karim Jerbi
- Departement de Psychologie, Université de Montréal, Montreal, Canada
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montreal, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, Canada
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Montréal (CRIUSMM)
- Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal (CRIUGM)
- Sarah Lippé
- Departement de Psychologie, Université de Montréal, Montreal, Canada
- Neuroscience of Early Development (NED), Montreal, Canada
- Centre de Recherche en Neuropsychologie et Cognition (CERNEC), Montreal, Canada
- Research Center of the CHU Ste‐Justine Mother and Child University Hospital Center, Université de Montréal, Quebec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec, Canada
6. Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. DOI: 10.1016/j.neuropsychologia.2016.09.011.
7. Grzeschik R, Lewald J, Verhey JL, Hoffmann MB, Getzmann S. Absence of direction-specific cross-modal visual-auditory adaptation in motion-onset event-related potentials. Eur J Neurosci 2015; 43:66-77. PMID: 26469706. DOI: 10.1111/ejn.13102.
Abstract
Adaptation to visual or auditory motion affects within-modality motion processing, as reflected by visual or auditory free-field motion-onset evoked potentials (VEPs, AEPs). Here, a visual-auditory motion adaptation paradigm was used to investigate the effect of visual motion adaptation on VEPs and AEPs to leftward motion-onset test stimuli. Effects of visual adaptation to (i) scattered light flashes, (ii) motion in the same direction as the test stimulus, and (iii) motion in the opposite direction were compared. For the motion-onset VEPs, i.e. the intra-modal adaptation conditions, direction-specific adaptation was observed: the change-N2 (cN2) and change-P2 (cP2) amplitudes were significantly smaller after motion adaptation in the same than in the opposite direction. For the motion-onset AEPs, i.e. the cross-modal adaptation condition, there was an effect of motion history only in the change-P1 (cP1), and this effect was not direction-specific: cP1 was smaller after scatter than after motion adaptation to either direction. No effects were found for later components of motion-onset AEPs. While the VEP results provided clear evidence for the existence of a direction-specific effect of motion adaptation within the visual modality, the AEP findings suggested merely a motion-related, but not a direction-specific, effect. In conclusion, the adaptation of veridical auditory motion detectors by visual motion is not reflected in the AEPs of the present study.
Affiliation(s)
- Ramona Grzeschik
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, Magdeburg, D-39120, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Auditory Cognitive Neuroscience Laboratory, Ruhr University Bochum, Bochum, Germany
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jesko L Verhey
- Department of Experimental Audiology, Otto-von-Guericke-University Magdeburg, Leipziger Straße 44, Magdeburg, D-39120, Germany
- Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Michael B Hoffmann
- Center for Behavioral Brain Sciences, Magdeburg, Germany
- Department of Ophthalmology, Visual Processing Laboratory, Otto-von-Guericke-University Magdeburg, Magdeburg, Germany
- Stephan Getzmann
- Aging Research Group, Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
8. Freeman TCA, Leung J, Wufong E, Orchard-Mills E, Carlile S, Alais D. Discrimination contours for moving sounds reveal duration and distance cues dominate auditory speed perception. PLoS One 2014; 9:e102864. PMID: 25076211. PMCID: PMC4116163. DOI: 10.1371/journal.pone.0102864.
Abstract
Evidence that the auditory system contains specialised motion detectors is mixed. Many psychophysical studies confound speed cues with distance and duration cues and present sound sources that do not appear to move in external space. Here we use the 'discrimination contours' technique to probe the probabilistic combination of speed, distance and duration for stimuli moving in a horizontal arc around the listener in virtual auditory space. The technique produces a set of motion discrimination thresholds that define a contour in the distance-duration plane for different combinations of the three cues, based on a 3-interval oddity task. The orientation of the contour (typically elliptical in shape) reveals which cue or combination of cues dominates. If the auditory system contains specialised motion detectors, stimuli moving over different distances and durations but defining the same speed should be more difficult to discriminate. The resulting discrimination contours should therefore be oriented obliquely along iso-speed lines within the distance-duration plane. However, we found that over a wide range of speeds, distances and durations, the ellipses aligned with the distance-duration axes and were stretched vertically, suggesting that listeners were most sensitive to duration. A second experiment showed that listeners were able to make speed judgements when distance and duration cues were degraded by noise, but that performance was worse. Our results therefore suggest that speed is not a primary cue to motion in the auditory system, but that listeners are able to use speed to make discrimination judgements when distance and duration cues are unreliable.
Affiliation(s)
- Johahn Leung
- Auditory Neuroscience Laboratory, Department of Physiology and Bosch Institute, School of Medicine, University of Sydney, Sydney, New South Wales, Australia
- Ella Wufong
- School of Psychology, University of Sydney, Sydney, New South Wales, Australia
- Emily Orchard-Mills
- School of Psychology, University of Sydney, Sydney, New South Wales, Australia
- Simon Carlile
- Auditory Neuroscience Laboratory, Department of Physiology and Bosch Institute, School of Medicine, University of Sydney, Sydney, New South Wales, Australia
- David Alais
- School of Psychology, University of Sydney, Sydney, New South Wales, Australia
9.
Abstract
Neurophysiological findings suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, starting already in primary auditory cortex. Here, the effect of visual motion on the processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in both the timing and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found as early as about 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
10. Leung AWS, He Y, Grady CL, Alain C. Age differences in the neuroelectric adaptation to meaningful sounds. PLoS One 2013; 8:e68892. PMID: 23935900. PMCID: PMC3723892. DOI: 10.1371/journal.pone.0068892.
Abstract
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, in young adults the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitudes, and these effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults' difficulty in ignoring task-irrelevant stimuli.
Affiliation(s)
- Ada W. S. Leung
- Department of Occupational Therapy and Centre for Neuroscience, University of Alberta, Edmonton, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Cheryl L. Grady
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Ontario, Canada
11. Ahveninen J, Kopčo N, Jääskeläinen IP. Psychophysics and neuronal bases of sound localization in humans. Hear Res 2013; 307:86-97. PMID: 23886698. DOI: 10.1016/j.heares.2013.07.008.
Abstract
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how the different acoustic features supporting spatial hearing are represented in the central nervous system remains open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
12. Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. PMID: 23499852. DOI: 10.1016/j.neuropsychologia.2013.03.001.
Abstract
Differences in the processing of moving and static sounds in the human cortex were studied by electroencephalography, with subjects performing an active discrimination task. Sound bursts were presented in the acoustic free field between 47° to the left and 47° to the right under three different stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli which were randomly embedded within a stream of frequently occurring non-target events (i.e. 'standards') and rare non-target stimuli (i.e. 'deviants'). The acoustic stimuli were presented in blocks, with each stimulus type serving in one of three roles: as target, as non-target, or as standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a 300-400 ms time window (the 'd300-400 ms' component in the 'motion-minus-static' difference wave). For motion stimuli, the N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli N1 amplitudes over both hemispheres were in the same range. Contrary to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with the static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli.
The presently documented N1 components are consistent with the results of previous studies on auditory space processing and suggest a contralateral dominance during the cortical integration of spatial acoustic information. Additionally, the cortical activity in the 'd300-400 ms' time period indicates that, in addition to the motion origin (as reflected by the N1), the direction of motion (leftward/rightward) or rather the motion termination is cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is not based on a genuine motion-sensitive system, but rather on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specificities of the present P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion attributes such as direction and/or velocity, that are preponderant in the left hemisphere.
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.
13. Magezi DA, Buetler KA, Chouiter L, Annoni JM, Spierer L. Electrical neuroimaging during auditory motion aftereffects reveals that auditory motion processing is motion sensitive but not direction selective. J Neurophysiol 2012; 109:321-31. PMID: 23076114. DOI: 10.1152/jn.00625.2012.
Abstract
Following prolonged exposure to adaptor sounds moving in a single direction, participants may perceive stationary-probe sounds as moving in the opposite direction [direction-selective auditory motion aftereffect (aMAE)] and be less sensitive to motion of any probe sounds that are actually moving (motion-sensitive aMAE). The neural mechanisms of aMAEs, and notably whether they are due to adaptation of direction-selective motion detectors as found in vision, are presently unknown; resolving them would provide critical insight into auditory motion processing. We measured human behavioral responses and auditory evoked potentials to probe sounds following four types of moving-adaptor sounds: leftward and rightward unidirectional, bidirectional, and stationary. Behavioral data replicated both direction-selective and motion-sensitive aMAEs. Electrical neuroimaging analyses of auditory evoked potentials to stationary probes revealed no significant difference in either global field power (GFP) or scalp topography between the leftward and rightward conditions, suggesting that aMAEs are not based on adaptation of direction-selective motion detectors. By contrast, the bidirectional and stationary conditions differed significantly in the stationary-probe GFP at 200 ms poststimulus onset without concomitant topographic modulation, indicative of a difference in response strength between statistically indistinguishable intracranial generators. The magnitude of this GFP difference was positively correlated with the magnitude of the motion-sensitive aMAE, supporting the functional relevance of the neurophysiological measures. Electrical source estimations revealed that the GFP difference followed from a modulation of activity in predominantly right-hemisphere frontal-temporal-parietal brain regions previously implicated in auditory motion processing.
Our collective results suggest that auditory motion processing relies on motion-sensitive, but, in contrast to vision, non-direction-selective mechanisms.
Affiliation(s)
- David A Magezi
- Neurology Unit, Department of Medicine, Faculty of Sciences, University of Fribourg, Fribourg, Switzerland.
14. Discrimination of auditory motion patterns: The mismatch negativity study. Neuropsychologia 2012; 50:2720-2729. DOI: 10.1016/j.neuropsychologia.2012.07.043.
15. Getzmann S, Lewald J. Cortical processing of change in sound location: Smooth motion versus discontinuous displacement. Brain Res 2012; 1466:119-27. DOI: 10.1016/j.brainres.2012.05.033.