1
Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023; 378:20210460. PMID: 36511418; PMCID: PMC9745882; DOI: 10.1098/rstb.2021.0460.
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA.
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA.
2
Blenkmann AO, Collavini S, Lubell J, Llorens A, Funderud I, Ivanovic J, Larsson PG, Meling TR, Bekinschtein T, Kochen S, Endestad T, Knight RT, Solbakk AK. Auditory deviance detection in the human insula: An intracranial EEG study. Cortex 2019; 121:189-200. PMID: 31629197; DOI: 10.1016/j.cortex.2019.09.002.
Abstract
The human insula is known to be involved in auditory processing, but knowledge about its precise functional role and the underlying electrophysiology is limited. To assess its role in automatic auditory deviance detection, we analyzed EEG high frequency activity (HFA; 75-145 Hz) and ERPs from 90 intracranial insular channels across 16 patients undergoing pre-surgical intracranial monitoring for epilepsy treatment. Subjects passively listened to a stream of standard and deviant tones differing in four physical dimensions: intensity, frequency, location, or time. HFA responses to auditory stimuli were found in the short and long gyri, and the anterior, superior, and inferior segments of the circular sulcus of the insular cortex. Only a subset of channels in the inferior segment of the circular sulcus of the insula showed HFA deviance detection responses, i.e., a larger and longer-latency response to specific deviants relative to standards. Auditory deviance processing also occurred later in the insula than in the superior temporal cortex. ERP results were more widespread and supported the HFA insular findings. These results provide evidence that the human insula is engaged during auditory deviance detection.
Affiliation(s)
- Santiago Collavini
- Studies in Neurosciences and Complex Systems, National Scientific and Technical Research Council, El Cruce Hospital, Arturo Jauretche National University, Argentina.
- James Lubell
- Department of Psychology, University of Oslo, Norway.
- Anaïs Llorens
- Department of Psychology, University of Oslo, Norway; Department of Neurosurgery, Oslo University Hospital, Rikshospitalet, Norway.
- Jugoslav Ivanovic
- Department of Neurosurgery, Oslo University Hospital, Rikshospitalet, Norway.
- Pål G Larsson
- Department of Neurosurgery, Oslo University Hospital, Rikshospitalet, Norway.
- Torstein R Meling
- Department of Neurosurgery, Oslo University Hospital, Rikshospitalet, Norway.
- Silvia Kochen
- Studies in Neurosciences and Complex Systems, National Scientific and Technical Research Council, El Cruce Hospital, Arturo Jauretche National University, Argentina.
- Tor Endestad
- Department of Psychology, University of Oslo, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway.
- Robert T Knight
- Helen Wills Neuroscience Institute and Department of Psychology, University of California at Berkeley, USA.
- Anne-Kristin Solbakk
- Department of Psychology, University of Oslo, Norway; Department of Neurosurgery, Oslo University Hospital, Rikshospitalet, Norway; Department of Neuropsychology, Helgeland Hospital, Mosjøen, Norway.
3
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. PMID: 29643021; DOI: 10.1016/j.heares.2018.03.027.
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and of change in position, as well as Category × Initial Position and Category × Change in Position interactions, while right lateral areas showed a main effect of category and a Category × Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent from semantic encoding within the ventral stream and from spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in the lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland.
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland.
4
Rinne T, Muers RS, Salo E, Slater H, Petkov CI. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences? Cereb Cortex 2018; 27:3471-3484. PMID: 28419201; PMCID: PMC5654311; DOI: 10.1093/cercor/bhx092.
Abstract
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them for attending to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than in humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals.
Affiliation(s)
- Teemu Rinne
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland.
- Ross S Muers
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK.
- Emma Salo
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
- Heather Slater
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK.
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK.
5
Meuret S, Ludwig A, Predel D, Staske B, Fuchs M. Localization and Spatial Discrimination in Children and Adolescents with Moderate Sensorineural Hearing Loss Tested without Their Hearing Aids. Audiol Neurootol 2018; 22:326-342. DOI: 10.1159/000485826.
Abstract
The present study investigated two measures of spatial acoustic perception in children and adolescents with sensorineural hearing loss (SNHL), tested without their hearing aids, and compared them to age-matched controls. Auditory localization was quantified by means of a sound source identification task, and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests to separately address spatial auditory processing based on interaural time and intensity differences. In SNHL children, localization (hit accuracy) was significantly reduced compared to normal-hearing children, and intraindividual variability (dispersion) was considerably increased. Given the respective impairments, performance based on interaural time differences (low frequencies) was still better than that based on intensity differences (high frequencies). For MAA, age-matched comparisons yielded not only increased MAA values in SNHL children, but also no decrease with increasing age, in contrast to normal-hearing children. Deficits in MAA were most apparent in the frontal azimuth. Thus, children with SNHL do not seem to benefit from frontal positions of the sound sources as normal-hearing children do. The results indicate that the processing of spatial cues in SNHL children is restricted, which could also imply problems with speech understanding in challenging hearing situations.
6
Altmann CF, Ueda R, Bucher B, Furukawa S, Ono K, Kashino M, Mima T, Fukuyama H. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography. Neuroimage 2017; 159:185-194. DOI: 10.1016/j.neuroimage.2017.07.055.
7
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. PMID: 28472038; PMCID: PMC5417421; DOI: 10.1371/journal.pbio.2001379.
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
- Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom.
8
Ortiz-Rios M, Azevedo FAC, Kuśmierek P, Balla DZ, Munk MH, Keliris GA, Logothetis NK, Rauschecker JP. Widespread and Opponent fMRI Signals Represent Sound Location in Macaque Auditory Cortex. Neuron 2017; 93:971-983.e4. PMID: 28190642; DOI: 10.1016/j.neuron.2017.01.013.
Abstract
In primates, posterior auditory cortical areas are thought to be part of a dorsal auditory pathway that processes spatial information. But how posterior (and other) auditory areas represent acoustic space remains a matter of debate. Here we provide new evidence, based on functional magnetic resonance imaging (fMRI) of the macaque, indicating that space is predominantly represented by a distributed hemifield code rather than by a local spatial topography. Hemifield tuning in cortical and subcortical regions emerges from an opponent hemispheric pattern of activation and deactivation that depends on the availability of interaural delay cues. Importantly, these opponent signals allow responses in posterior regions to segregate space similarly to a hemifield code representation. Taken together, our results reconcile seemingly contradictory views by showing that the representation of space closely follows a hemifield code, and suggest that enhanced posterior-dorsal spatial specificity in primates might emerge from this form of coding.
Affiliation(s)
- Michael Ortiz-Rios
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany; Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA; Institute of Neuroscience, Henry Welcome Building, Medical School, Framlington Place, Newcastle upon Tyne, NE2 4HH, UK.
- Frederico A C Azevedo
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Graduate School of Neural & Behavioural Sciences, International Max Planck Research School (IMPRS), University of Tübingen, Österbergstraße 3, 72074 Tübingen, Germany.
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA.
- Dávid Z Balla
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany.
- Matthias H Munk
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Department of Systems Neurophysiology, Fachbereich Biologie, Technische Universität Darmstadt, Schnittspahnstraße 10, 64287, Darmstadt, Germany.
- Georgios A Keliris
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Bio-Imaging Lab, Department of Biomedical Sciences, University of Antwerp, Wilrijk, 2610, Belgium.
- Nikos K Logothetis
- Department Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Spemannstraße 36, 72072 Tübingen, Germany; Division of Imaging Science and Biomedical Engineering, University of Manchester, Manchester, M13 9PL, UK.
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, 3970 Reservoir Road, N.W. Washington, D.C., 20057, USA; Institute for Advanced Study of Technische Universität München, Lichtenbergstraße 2 a, 85748 Garching, Germany.
9
Abstract
We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
10
Abstract
The final position of a moving visual object usually appears to be displaced in the direction of motion. We investigated this phenomenon, termed representational momentum, in the auditory modality. In a dark anechoic environment, an acoustic target (continuous noise or noise pulses) moved from left to right or from right to left along the frontal horizontal plane. Listeners judged the final position of the target using a hand pointer. Target velocity was 8°/s or 16°/s. Generally, the final target positions were localised as displaced in the direction of motion. With presentation of continuous noise, target velocity had a strong influence on mean displacement: displacements were stronger at the lower velocity. No influence of sound velocity on displacement was found with motion of pulsed noise. Although these findings suggest that the underlying mechanisms may differ between the auditory and visual modalities, the occurrence of displacements indicates that representational-momentum-like effects are not restricted to the visual modality, but may reflect a general phenomenon in judgments of dynamic events.
Affiliation(s)
- Stephan Getzmann
- Kognitions- und Umweltpsychologie, Fakultät für Psychologie, Ruhr-Universität Bochum, D 44780 Bochum, Germany.
11
Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016; 20:2331216516644254. PMID: 27094029; PMCID: PMC4871213; DOI: 10.1177/2331216516644254.
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile
- School of Medical Sciences, University of Sydney, NSW, Australia; Starkey Hearing Research Center, Berkeley, CA, USA.
- Johahn Leung
- School of Medical Sciences, University of Sydney, NSW, Australia.
12
A selective impairment of perception of sound motion direction in peripheral space: A case study. Neuropsychologia 2015; 80:79-89. PMID: 26586155; DOI: 10.1016/j.neuropsychologia.2015.11.008.
Abstract
It is still an open question whether the auditory system, like the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier and has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion, and perceive motion direction in both central (straight ahead) and left and right peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally on a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space.
13
Andreeva IG. The motion aftereffect as a universal phenomenon in sensory systems involved in space orientation: II. Auditory motion aftereffect. J Evol Biochem Physiol 2015. DOI: 10.1134/s0022093015030015.
14
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.1.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception of and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) to search for the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS also processes speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, with offspring emitting a low-level distress call to inquire about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words, and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
15
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.3.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
Collapse
|
16
|
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004.2 DOI: 10.12688/f1000research.6175.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/12/2016] [Indexed: 03/28/2024] Open
Collapse
|
17
|
Golden HL, Nicholas JM, Yong KXX, Downey LE, Schott JM, Mummery CJ, Crutch SJ, Warren JD. Auditory spatial processing in Alzheimer's disease. Brain 2015; 138:189-202. [PMID: 25468732 PMCID: PMC4285196 DOI: 10.1093/brain/awu337] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2014] [Revised: 10/01/2014] [Accepted: 10/10/2014] [Indexed: 11/13/2022] Open
Abstract
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12), in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination, and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in the detection of auditory motion and in stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and for the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease syndromic spectrum.
Collapse
Affiliation(s)
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK; Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, WC1E 7HT, UK
| | - Keir X X Yong
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Jonathan M Schott
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
| |
Collapse
|
18
|
Abstract
The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance.
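The principal horizontal-plane cues described above lend themselves to a simple computational illustration. The following is a hypothetical sketch (not code from the reviewed work) that estimates an ITD from a binaural signal by cross-correlation; the function name and toy stimulus are the author's illustration:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds via cross-correlation.
    A positive value means the left channel lags the right, i.e. the source
    is closer to the right ear. Both channels must have equal length."""
    n = len(left)
    corr = np.correlate(left, right, mode="full")  # lags -(n-1) .. n-1
    lag = int(np.argmax(corr)) - (n - 1)
    return lag / fs

# Toy binaural signal: broadband noise reaching the right ear 20 samples earlier.
fs = 44100
rng = np.random.default_rng(0)
sig = rng.standard_normal(fs // 10)            # 100 ms of noise
delay = 20                                     # ~0.45 ms, within the human ITD range
left = np.concatenate([np.zeros(delay), sig])  # delayed at the left ear
right = np.concatenate([sig, np.zeros(delay)])
itd = estimate_itd(left, right, fs)
```

Real MSO processing is band-limited and phase-sensitive; this broadband cross-correlation is only meant to make the ITD cue concrete.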
Collapse
Affiliation(s)
- John C Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA.
| |
Collapse
|
19
|
|
20
|
Mendonça C. A review on auditory space adaptations to altered head-related cues. Front Neurosci 2014; 8:219. [PMID: 25120422 PMCID: PMC4110508 DOI: 10.3389/fnins.2014.00219] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Accepted: 07/05/2014] [Indexed: 11/23/2022] Open
Abstract
In this article we present a review of current literature on adaptations to altered head-related auditory localization cues. Localization cues can be altered through ear blocks, ear molds, electronic hearing devices, and altered head-related transfer functions (HRTFs). Three main methods have been used to induce auditory space adaptation: sound exposure, training with feedback, and explicit training. Adaptations induced by training, rather than exposure, are consistently faster. Studies on localization with altered head-related cues have reported poor initial localization, but improved accuracy and discriminability with training. Also, studies that displaced the auditory space by altering cue values reported adaptations in perceived source position to compensate for such displacements. Auditory space adaptations can last for a few months even without further contact with the learned cues. In most studies, localization with the subject's own unaltered cues remained intact despite the adaptation to a second set of cues. Generalization is observed from trained to untrained sound source positions, but there is mixed evidence regarding cross-frequency generalization. Multiple brain areas might be involved in auditory space adaptation processes, but the auditory cortex (AC) may play a critical role. Auditory space plasticity may involve context-dependent cue reweighting.
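For concreteness, cue alteration of the kind reviewed here can be mimicked in the crudest possible way by shifting a binaural signal's interaural time and level differences. The helper below is a hypothetical sketch, not code from the review; real manipulations use measured HRTFs or physical ear molds rather than a bare delay and gain:

```python
import numpy as np

def alter_binaural_cues(left, right, fs, itd_shift_s=0.0, ild_shift_db=0.0):
    """Crudely displace the auditory scene: a positive itd_shift_s delays the
    left channel, and a positive ild_shift_db attenuates it. Both shifts bias
    perceived source locations toward the right ear."""
    shift = int(round(itd_shift_s * fs))
    gain = 10.0 ** (-ild_shift_db / 20.0)
    shifted = np.zeros_like(left)
    if shift >= 0:
        shifted[shift:] = left[:len(left) - shift]   # delay the left channel
    else:
        shifted[:shift] = left[-shift:]              # advance the left channel
    return shifted * gain, right

# A click at sample 10, displaced by a 5 ms ITD shift and a 6 dB ILD shift.
fs = 1000
click = np.zeros(100)
click[10] = 1.0
left, right = alter_binaural_cues(click.copy(), click.copy(), fs, 0.005, 6.0)
```

In an adaptation experiment of the kind reviewed, listeners would localize sources rendered through such a transformation until their perceived positions recalibrate.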
Collapse
Affiliation(s)
- Catarina Mendonça
- Department of Signal Processing and Acoustics, School of Electrical Engineering, Aalto University Espoo, Finland
| |
Collapse
|
21
|
Abstract
Neurophysiological findings suggested that auditory and visual motion information is integrated at an early stage of auditory cortical processing, already starting in primary auditory cortex. Here, the effect of visual motion on processing of auditory motion was investigated by employing electrotomography in combination with free-field sound motion. A delayed-motion paradigm was used in which the onset of motion was delayed relative to the onset of an initially stationary stimulus. The results indicated that activity related to the motion-onset response, a neurophysiological correlate of auditory motion processing, interacts with the processing of visual motion at quite early stages of auditory analysis, in the dimensions of both the time and the location of cortical processing. A modulation of auditory motion processing by concurrent visual motion was found already around 170 ms after motion onset (cN1 component) in the regions of primary auditory cortex and posterior superior temporal gyrus: incongruent visual motion enhanced the auditory motion-onset response in auditory regions ipsilateral to the sound motion stimulus, thus reducing the pattern of contralaterality observed with unimodal auditory stimuli. No modulation was found in parietal cortex, nor around 250 ms after motion onset (cP2 component) in any auditory region of interest. These findings may reflect the integration of auditory and visual motion information in low-level areas of the auditory cortical system at relatively early points in time.
Collapse
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| | - Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
| |
Collapse
|
22
|
Ahveninen J, Kopčo N, Jääskeläinen IP. Psychophysics and neuronal bases of sound localization in humans. Hear Res 2013; 307:86-97. [PMID: 23886698 DOI: 10.1016/j.heares.2013.07.008] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Revised: 07/02/2013] [Accepted: 07/10/2013] [Indexed: 10/26/2022]
Abstract
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization and some of the competing models of the representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Collapse
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
| | | | | |
Collapse
|
23
|
Hoffmann S, Warmbold A, Wiegrebe L, Firzlaff U. Spatiotemporal contrast enhancement and feature extraction in the bat auditory midbrain and cortex. J Neurophysiol 2013; 110:1257-68. [PMID: 23785132 DOI: 10.1152/jn.00226.2013] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Navigating on the wing in complete darkness is a challenging task for echolocating bats. It requires the detailed analysis of spatial and temporal information gained through echolocation. Thus neural encoding of spatiotemporal echo information is a major function in the bat auditory system. In this study we presented echoes in virtual acoustic space and used a reverse-correlation technique to investigate the spatiotemporal response characteristics of units in the inferior colliculus (IC) and the auditory cortex (AC) of the bat Phyllostomus discolor. Spatiotemporal response maps (STRMs) of IC units revealed an organization of suppressive and excitatory regions that provided pronounced contrast enhancement along both the time and azimuth axes. Most IC units showed either spatially centralized short-latency excitation spatiotemporally imbedded in strong suppression, or the opposite, i.e., central short-latency suppression imbedded in excitation. This complementary arrangement of excitation and suppression was very rarely seen in AC units. In contrast, STRMs in the AC revealed much less suppression, sharper spatiotemporal tuning, and often a special spatiotemporal arrangement of two excitatory regions. Temporal separation of excitatory regions ranged up to 25 ms and was thus in the range of temporal delays occurring in target ranging in bats in a natural situation. Our data indicate that spatiotemporal processing of echo information in the bat auditory midbrain and cortex serves very different purposes: Whereas the spatiotemporal contrast enhancement provided by the IC contributes to echo-feature extraction, the AC reflects the result of this processing in terms of a high selectivity and task-oriented recombination of the extracted features.
Collapse
Affiliation(s)
- Susanne Hoffmann
- Division of Neurobiology, Department of Biology II, Ludwig-Maximilians-University Munich, Planegg-Martinsried, Germany; and
| | | | | | | |
Collapse
|
24
|
Grzeschik R, Böckmann-Barthel M, Mühler R, Verhey JL, Hoffmann MB. Direction-specific adaptation of motion-onset auditory evoked potentials. Eur J Neurosci 2013; 38:2557-65. [PMID: 23725339 DOI: 10.1111/ejn.12264] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2012] [Revised: 04/12/2013] [Accepted: 04/26/2013] [Indexed: 11/26/2022]
Abstract
Auditory evoked potentials (AEPs) to motion onset in humans are dominated by a fronto-central complex, with a change-negative deflection 1 (cN1) and a change-positive deflection 2 (cP2) component. Here, the contribution of veridical motion detectors to motion-onset AEPs was investigated, with the hypothesis that direction-specific adaptation effects would indicate the involvement of such detectors. AEPs were recorded from 33 electroencephalographic channels to the test stimulus, i.e. motion onset of horizontal virtual auditory motion (60° per s) from straight ahead to the left. AEPs were compared in two experiments for three conditions, which differed in their history prior to the motion-onset test stimulus: (i) without motion history (Baseline), (ii) with motion history in the same direction as the test stimulus (Adaptation Same), and (iii) a reference condition with auditory history. For Experiment 1, condition (iii) comprised motion in the opposite direction (Adaptation Opposite). For Experiment 2, a noise in the absence of coherent motion (Matched Noise) was used as the reference condition. In Experiment 1, the amplitude difference cP2 - cN1 obtained for Adaptation Same was significantly smaller than for Baseline and Adaptation Opposite. In Experiment 2, it was significantly smaller than for Matched Noise. Adaptation effects were absent for cN1 and cP2 latencies. These findings demonstrate direction-specific adaptation of the motion-onset AEP, suggesting that veridical auditory motion detectors contribute to it.
Collapse
Affiliation(s)
- Ramona Grzeschik
- Department of Ophthalmology, Visual Processing Laboratory, Otto von Guericke University Magdeburg, Leipziger Strasse 44, 39120, Magdeburg, Germany
| | | | | | | | | |
Collapse
|
25
|
Richter N, Schröger E, Rübsamen R. Differences in evoked potentials during the active processing of sound location and motion. Neuropsychologia 2013; 51:1204-14. [PMID: 23499852 DOI: 10.1016/j.neuropsychologia.2013.03.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2012] [Revised: 02/25/2013] [Accepted: 03/04/2013] [Indexed: 10/27/2022]
Abstract
Differences in the cortical processing of moving and static sounds were studied by electroencephalography, with subjects performing an active discrimination task. Sound bursts were presented in the acoustic free-field between 47° to the left and 47° to the right under three different stimulus conditions: (i) static, (ii) leftward motion, and (iii) rightward motion. In an active oddball design, subjects were asked to detect target stimuli that were randomly embedded within a stream of frequently occurring non-target events (i.e. 'standards') and rare non-target stimuli (i.e. 'deviants'). The acoustic stimuli were presented in blocks, with each stimulus type serving as either target, non-target, or standard. The analysis focussed on the event-related potentials evoked by the different stimulus types under the standard condition. As in previous studies, all three acoustic stimuli elicited the obligatory P1/N1/P2 complex in the range of 50-200 ms. However, comparisons of ERPs elicited by static stimuli and both kinds of motion stimuli yielded differences as early as ~100 ms after stimulus onset, i.e. at the level of the exogenous N1 and P2 components. Differences in signal amplitudes were also found in a 300-400 ms time window (the 'd300-400 ms' component of the 'motion-minus-static' difference wave). For motion stimuli, N1 amplitudes were larger over the hemisphere contralateral to the origin of motion, while for static stimuli N1 amplitudes over both hemispheres were in the same range. In contrast to the N1 component, the ERP in the 'd300-400 ms' time period showed stronger responses over the hemisphere contralateral to motion termination, with static stimuli again yielding equal bilateral amplitudes. For the P2 component, a motion-specific effect with larger signal amplitudes over the left hemisphere was found compared to static stimuli. The presently documented N1 components comply with the results of previous studies on auditory space processing and suggest a contralateral dominance during the cortical integration of spatial acoustic information. Additionally, cortical activity in the 'd300-400 ms' time period indicates that, in addition to the motion origin (as reflected by the N1), the direction of motion (leftward/rightward) or rather motion termination is cortically encoded. These electrophysiological results are in accordance with the 'snapshot' hypothesis, which assumes that auditory motion processing is based not on a genuine motion-sensitive system but on a comparison of the spatial positions of motion origin (onset) and motion termination (offset). Still, the specificities of the present P2 component provide evidence for additional motion-specific processes, possibly associated with the evaluation of motion-specific attributes, i.e. motion direction and/or velocity, which are preponderant in the left hemisphere.
Collapse
Affiliation(s)
- Nicole Richter
- University of Leipzig, Institute for Biology, Talstr 33, 04103 Leipzig, Germany.
| | | | | |
Collapse
|
26
|
Kühnle S, Ludwig A, Meuret S, Küttner C, Witte C, Scholbach J, Fuchs M, Rübsamen R. Development of Auditory Localization Accuracy and Auditory Spatial Discrimination in Children and Adolescents. ACTA ACUST UNITED AC 2013; 18:48-62. [DOI: 10.1159/000342904] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2012] [Accepted: 08/21/2012] [Indexed: 11/19/2022]
|
27
|
Magezi DA, Buetler KA, Chouiter L, Annoni JM, Spierer L. Electrical neuroimaging during auditory motion aftereffects reveals that auditory motion processing is motion sensitive but not direction selective. J Neurophysiol 2012; 109:321-31. [PMID: 23076114 DOI: 10.1152/jn.00625.2012] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Following prolonged exposure to adaptor sounds moving in a single direction, participants may perceive stationary-probe sounds as moving in the opposite direction [direction-selective auditory motion aftereffect (aMAE)] and be less sensitive to motion of any probe sounds that are actually moving (motion-sensitive aMAE). The neural mechanisms of aMAEs, and notably whether they are due to adaptation of direction-selective motion detectors, as found in vision, is presently unknown and would provide critical insight into auditory motion processing. We measured human behavioral responses and auditory evoked potentials to probe sounds following four types of moving-adaptor sounds: leftward and rightward unidirectional, bidirectional, and stationary. Behavioral data replicated both direction-selective and motion-sensitive aMAEs. Electrical neuroimaging analyses of auditory evoked potentials to stationary probes revealed no significant difference in either global field power (GFP) or scalp topography between leftward and rightward conditions, suggesting that aMAEs are not based on adaptation of direction-selective motion detectors. By contrast, the bidirectional and stationary conditions differed significantly in the stationary-probe GFP at 200 ms poststimulus onset without concomitant topographic modulation, indicative of a difference in the response strength between statistically indistinguishable intracranial generators. The magnitude of this GFP difference was positively correlated with the magnitude of the motion-sensitive aMAE, supporting the functional relevance of the neurophysiological measures. Electrical source estimations revealed that the GFP difference followed from a modulation of activity in predominantly right hemisphere frontal-temporal-parietal brain regions previously implicated in auditory motion processing. Our collective results suggest that auditory motion processing relies on motion-sensitive, but, in contrast to vision, non-direction-selective mechanisms.
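Global field power, a dependent measure in the study above, is simply the standard deviation of the scalp potential across electrodes at each time point. A minimal sketch, with simulated data standing in for real recordings:

```python
import numpy as np

def global_field_power(eeg):
    """Global field power (GFP): the spatial standard deviation across
    electrodes at each time point. `eeg` has shape (n_electrodes, n_samples)."""
    return eeg.std(axis=0)

# Simulated average-referenced data: 32 electrodes, 100 time samples.
rng = np.random.default_rng(1)
eeg = rng.standard_normal((32, 100))
eeg -= eeg.mean(axis=0)              # average reference (GFP assumes this)
gfp = global_field_power(eeg)        # one non-negative value per time point
```

Because the data are average-referenced, the GFP at each sample equals the root-mean-square potential across the montage.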
Collapse
Affiliation(s)
- David A Magezi
- Neurology Unit, Department of Medicine, Faculty of Sciences, University of Fribourg, Fribourg, Switzerland.
| | | | | | | | | |
Collapse
|
28
|
Getzmann S, Lewald J. Cortical processing of change in sound location: Smooth motion versus discontinuous displacement. Brain Res 2012; 1466:119-27. [DOI: 10.1016/j.brainres.2012.05.033] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2011] [Revised: 03/29/2012] [Accepted: 05/17/2012] [Indexed: 10/28/2022]
|
29
|
Duffour-Nikolov C, Tardif E, Maeder P, Thiran AB, Bloch J, Frischknecht R, Clarke S. Auditory spatial deficits following hemispheric lesions: dissociation of explicit and implicit processing. Neuropsychol Rehabil 2012; 22:674-96. [PMID: 22672110 DOI: 10.1080/09602011.2012.686818] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that the explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By systematically assessing patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) dissociations of impaired explicit vs. preserved implicit use occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between the explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to the auditory environment is at stake. Further systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
Collapse
|
30
|
Abstract
Auditory scene analysis requires the listener to parse the incoming flow of acoustic information into perceptual "streams," such as sentences from a single talker in the midst of background noise. Behavioral and neural data show that the formation of streams is not instantaneous; rather, streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we investigated the effect of changes induced by voluntary head motion on streaming. We used a telepresence robot in a virtual reality setup to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent source location, and changes in motor or attentional processes. The results showed that self-motion influenced streaming in at least two ways. Right after the onset of movement, self-motion always induced some resetting of perceptual organization to one stream, even when the acoustic scene itself had not changed. Then, after the motion, the prevalent organization was rapidly biased by the binaural cues discovered through motion. Auditory scene analysis thus appears to be a dynamic process that is affected by the active sensing of the environment.
Collapse
|
31
|
Alink A, Euler F, Kriegeskorte N, Singer W, Kohler A. Auditory motion direction encoding in auditory cortex and high-level visual cortex. Hum Brain Mapp 2011; 33:969-78. [PMID: 21692141 DOI: 10.1002/hbm.21263] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2010] [Revised: 11/23/2010] [Accepted: 12/27/2010] [Indexed: 11/11/2022] Open
Abstract
The aim of this functional magnetic resonance imaging (fMRI) study was to identify human brain areas that are sensitive to the direction of auditory motion. Such directional sensitivity was assessed in a hypothesis-free manner by analyzing fMRI response patterns across the entire brain volume using a spherical-searchlight approach. In addition, we assessed directional sensitivity in three predefined brain areas that have been associated with auditory motion perception in previous neuroimaging studies. These were the primary auditory cortex, the planum temporale and the visual motion complex (hMT/V5+). Our whole-brain analysis revealed that the direction of sound-source movement could be decoded from fMRI response patterns in the right auditory cortex and in a high-level visual area located in the right lateral occipital cortex. Our region-of-interest-based analysis showed that the decoding of the direction of auditory motion was most reliable with activation patterns of the left and right planum temporale. Auditory motion direction could not be decoded from activation patterns in hMT/V5+. These findings provide further evidence for the planum temporale playing a central role in supporting auditory motion perception. In addition, our findings suggest a cross-modal transfer of directional information to high-level visual cortex in healthy humans.
Collapse
Affiliation(s)
- Arjen Alink
- Department of Neurophysiology, Max Planck Institute for Brain Research, D-60528 Frankfurt am Main, Germany.
| | | | | | | | | |
Collapse
|
32
|
Getzmann S. Auditory motion perception: onset position and motion direction are encoded in discrete processing stages. Eur J Neurosci 2011; 33:1339-50. [DOI: 10.1111/j.1460-9568.2011.07617.x] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
33
|
Kreitewolf J, Lewald J, Getzmann S. Effect of attention on cortical processing of sound motion: an EEG study. Neuroimage 2010; 54:2340-9. [PMID: 20965256 DOI: 10.1016/j.neuroimage.2010.10.031] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2010] [Revised: 10/06/2010] [Accepted: 10/11/2010] [Indexed: 12/01/2022] Open
Abstract
The onset of motion in an otherwise continuous sound elicits a prominent auditory evoked potential, the so-called motion onset response (MOR). The MOR has recently been shown to be modulated by stimulus-dependent factors, such as velocity, while the possible role of task-dependent factors has remained unclear. Here, the effect of spatial attention on the MOR was investigated in 19 listeners. In each trial, the subject initially heard a free-field sound, consisting of a stationary period and a subsequent period of motion. Then, two successive stationary test tones were presented that differed in location and pitch. Subjects either judged whether or not the starting and final positions of the preceding motion matched the positions of the two test tones ('motion-focused condition'), or whether or not the test tones were identical in pitch, irrespective of the preceding motion stimulus ('baseline condition'). These two tasks were presented in separate experimental blocks. The performance level in both tasks was similar. However, especially later portions of the MOR were significantly increased in amplitude when auditory motion was task-relevant. Cortical source localization indicated that this extra activation originated in dorsofrontal areas that have been proposed to be part of the dorsal auditory processing stream. These results support the assumption that auditory motion processing is based on a complex interaction of both stimulus-specific and attentional processes.

34

Rauschecker JP. An expanded role for the dorsal auditory pathway in sensorimotor control and integration. Hear Res 2010; 271:16-25. [PMID: 20850511] [DOI: 10.1016/j.heares.2010.09.001]
Abstract
The dual-pathway model of auditory cortical processing assumes that two largely segregated processing streams originating in the lateral belt subserve the two main functions of hearing: identification of auditory "objects", including speech; and localization of sounds in space (Rauschecker and Tian, 2000). Evidence has accumulated, chiefly from work in humans and nonhuman primates, that an antero-ventral pathway supports the former function, whereas a postero-dorsal stream supports the latter, i.e., processing of space and motion-in-space. In addition, the postero-dorsal stream has also been postulated to subserve some functions of speech and language in humans. A recent review (Rauschecker and Scott, 2009) has proposed that both functions of the postero-dorsal pathway can be subsumed under the same structural forward model: an efference copy sent from prefrontal and premotor cortex provides the basis for "optimal state estimation" in the inferior parietal lobe and in sensory areas of the posterior auditory cortex. The current article corroborates this model by adding and discussing recent evidence.
Affiliation(s)
- Josef P Rauschecker
- Department of Physiology and Biophysics, Laboratory of Integrative Neuroscience and Cognition, Georgetown University Medical Center, New Research Building, Room WP19, Washington, DC 20057-1460, USA.

35

Spierer L, De Lucia M, Bernasconi F, Grivel J, Bourquin NMP, Clarke S, Murray MM. Learning-induced plasticity in human audition: objects, time, and space. Hear Res 2010; 271:88-102. [PMID: 20430070] [DOI: 10.1016/j.heares.2010.03.086]
Abstract
The human auditory system comprises specialized but interacting anatomic and functional pathways encoding object, spatial, and temporal information. We review how learning-induced plasticity manifests along these pathways and to what extent there are common mechanisms subserving such plasticity. A first series of experiments establishes a temporal hierarchy along which sounds of objects are discriminated, from basic to fine-grained categorical boundaries and learned representations. A widespread network of temporal and (pre)frontal brain regions contributes to object discrimination via recursive processing. Learning-induced plasticity typically manifested as repetition suppression within a common set of brain regions. A second series considered how the temporal sequence of sound sources is represented. We show that lateralized responsiveness during the initial encoding phase of pairs of auditory spatial stimuli is critical for their accurate ordered perception. Finally, we consider how spatial representations are formed and modified through training-induced learning. A population-based model of spatial processing is supported wherein temporal and parietal structures interact in the encoding of relative and absolute spatial information over the initial ~300 ms post-stimulus onset. Collectively, these data provide insights into the functional organization of human audition and open directions for new developments in targeted diagnostic and neurorehabilitation strategies.
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neuroscience, Vaudois University Hospital Center and University of Lausanne, Switzerland

36

Getzmann S, Lewald J. Shared cortical systems for processing of horizontal and vertical sound motion. J Neurophysiol 2010; 103:1896-904. [DOI: 10.1152/jn.00333.2009]
Abstract
Cortical processing of horizontal and vertical sound motion in free-field space was investigated using high-density electroencephalography in combination with standardized low-resolution brain electromagnetic tomography (sLORETA). Eighteen subjects heard sound stimuli that, after an initial stationary phase in a central position, started to move centrifugally, either to the left, to the right, upward, or downward. The delayed onset of both horizontal and vertical motion elicited a specific motion-onset response (MOR), resulting in widely distributed activations, with prominent maxima in primary and nonprimary auditory cortices, insula, and parietal lobe. The comparison of MORs to horizontal and vertical motion orientations did not indicate any significant differences in latency or topography. Contrasting the sLORETA solutions for the two motion orientations revealed only marginal activation in postcentral gyrus. These data are consistent with the notion that azimuth and elevation components of dynamic auditory spatial information are processed in common, rather than separate, cortical substrates. Furthermore, the findings support the assumption that the MOR originates at a stage of auditory analysis after the different spatial cues (interaural and monaural spectral cues) have been integrated into a unified space code.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany; and
- Jörg Lewald
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany; and
- Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany

37

Grzeschik R, Böckmann-Barthel M, Mühler R, Hoffmann MB. Motion-onset auditory-evoked potentials critically depend on history. Exp Brain Res 2010; 203:159-68. [DOI: 10.1007/s00221-010-2221-7]

38

Eddins DA, Hall JW. Binaural processing and auditory asymmetries. In: The Aging Auditory System. 2010. [DOI: 10.1007/978-1-4419-0993-0_6]

39

Murray MM, Spierer L. Auditory spatio-temporal brain dynamics and their consequences for multisensory interactions in humans. Hear Res 2009; 258:121-33. [DOI: 10.1016/j.heares.2009.04.022]

40

Getzmann S, Lewald J. Effects of natural versus artificial spatial cues on electrophysiological correlates of auditory motion. Hear Res 2009; 259:44-54. [PMID: 19800957] [DOI: 10.1016/j.heares.2009.09.021]
Abstract
The effect of the type of auditory motion stimulus on neural correlates of motion processing was investigated using high-density electroencephalography. Sound motion was implemented by (a) gradual shifts in interaural time difference; (b) gradual shifts in interaural level difference; (c) motion of virtual 3D sound sources; or (d) successive activation of 45 loudspeakers along the horizontal plane. In a subset of trials, listeners (N=20) performed a two-alternative forced-choice motion discrimination task. Each trial began with a stationary phase of the acoustic stimulus in a central position, immediately followed by a motion of the stimulus. The motion onset elicited a specific cortical response that was dominated by large negative and positive deflections, the so-called change-N1 and change-P2. The temporal dynamics of these components depended on the auditory motion cues presented: free-field motion and virtual 3D motion were associated with earlier cortical responses and with shorter reaction times than shifts in interaural time or level. Also, free-field motion elicited much stronger onset responses than simulated motion. These findings suggest that natural-like stimulation using stimuli presented in the free sound field allows more reliable conclusions about neural processing of sound motion, whereas artificial motion stimuli, in particular gradual shifts in interaural time or level, seem less suited to this aim.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, D-44139 Dortmund, Germany.

41

Kawabe T. Sequential stream segregation affects localisation of diotic tones among tones with time-varying interaural time difference. Perception 2009; 38:1377-85. [DOI: 10.1068/p6369]
Abstract
In this study, I examined how sequential stream segregation contributes to the detection of diotic tones among tones with time-varying interaural time differences (ITDs). Target (T) and distractor (D) tones, and a silent duration (–), formed a sequence (DTD–), and this sequence was presented repeatedly. A frequency difference was introduced between target and distractor tones. The distractor tones were also given time-varying ITDs to produce a percept of smooth auditory motion along the interaural axis. In half of the trials, the target tones were not given time-varying ITDs, and thus were presented diotically. The task of the listeners was to determine whether the repeated DTD– sequences had target tones without motion. The sensitivity d′ for the detection of diotic target tones was higher with larger frequency differences. On the other hand, the criterion c was lower with larger frequency differences. In another session, I confirmed that the proportion of "two streams" reports was positively correlated with d′ and negatively correlated with c. The results indicate that the localisation of a sound image could be influenced by sequential stream segregation in complex sound environments.
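The sensitivity d′ and criterion c reported here are standard signal-detection measures; for reference, they can be computed from hit and false-alarm rates as follows (a generic sketch using illustrative rates, not data from the study):

```python
from statistics import NormalDist

def z(p: float) -> float:
    """z-transform: inverse of the standard normal CDF."""
    return NormalDist().inv_cdf(p)

def dprime_and_criterion(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Sensitivity d' = z(H) - z(F); criterion c = -(z(H) + z(F)) / 2."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Illustrative rates only: 80% hits, 20% false alarms.
d, c = dprime_and_criterion(0.80, 0.20)
```

With these symmetric rates, d′ ≈ 1.68 and c = 0 (an unbiased observer); a more conservative observer would show a positive c.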
Affiliation(s)
- Takahiro Kawabe
- Kyushu University, 6-19-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan

42

Spierer L, Bellmann-Thiran A, Maeder P, Murray MM, Clarke S. Hemispheric competence for auditory spatial representation. Brain 2009; 132:1953-66. [PMID: 19477962] [DOI: 10.1093/brain/awp127]
Abstract
Sound localization relies on the analysis of interaural time and intensity differences, as well as attenuation patterns by the outer ear. We investigated the relative contributions of interaural time and intensity difference cues to sound localization by testing 60 healthy subjects: 25 with focal left and 25 with focal right hemispheric brain damage. Group and single-case behavioural analyses, as well as anatomo-clinical correlations, confirmed that deficits were more frequent and much more severe after right than left hemispheric lesions and for the processing of interaural time than intensity difference cues. For spatial processing based on interaural time difference cues, different error types were evident in the individual data. Deficits in discriminating between neighbouring positions occurred in both hemispaces after focal right hemispheric brain damage, but were restricted to the contralesional hemispace after focal left hemispheric brain damage. Alloacusis (perceptual shifts across the midline) occurred only after focal right hemispheric brain damage and was associated with minor or severe deficits in position discrimination. During spatial processing based on interaural intensity cues, deficits were less severe in the right hemispheric brain damage than left hemispheric brain damage group and no alloacusis occurred. These results, matched to anatomical data, suggest the existence of a binaural sound localization system predominantly based on interaural time difference cues and primarily supported by the right hemisphere. More generally, our data suggest that two distinct mechanisms contribute to: (i) the precise computation of spatial coordinates allowing spatial comparison within the contralateral hemispace for the left hemisphere and the whole space for the right hemisphere; and (ii) the building up of global auditory spatial representations in right temporo-parietal cortices.
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Vaudois University Hospital Center and University of Lausanne, Lausanne, Switzerland.

43

Getzmann S. Effect of auditory motion velocity on reaction time and cortical processes. Neuropsychologia 2009; 47:2625-33. [PMID: 19467249] [DOI: 10.1016/j.neuropsychologia.2009.05.012]
Abstract
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere contralateral to the direction of motion. The contralaterality of the motion onset response decreased when the velocity was reduced. Higher motion velocity was associated with larger and earlier cortical responses and with shorter reaction times to motion onset. The results indicate a close correspondence of brain activity and behavioral performance in auditory motion detection.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany.

44

Lewald J, Peters S, Corballis MC, Hausmann M. Perception of stationary and moving sound following unilateral cortectomy. Neuropsychologia 2009; 47:962-71. [DOI: 10.1016/j.neuropsychologia.2008.10.016]

45

Spierer L, Murray MM, Tardif E, Clarke S. The path to success in auditory spatial discrimination: electrical neuroimaging responses within the supratemporal plane predict performance outcome. Neuroimage 2008; 41:493-503. [PMID: 18420424] [DOI: 10.1016/j.neuroimage.2008.02.038]
Abstract
Auditory scene analysis requires the accurate encoding and comparison of the perceived spatial positions of sound sources. The electrophysiological correlates of auditory spatial discrimination and their relationship to performance accuracy were studied in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) that were recorded during the completion of a near-threshold S1-S2 paradigm within the right hemispace. Data were sorted as a function of performance accuracy, and AEP responses 75-117 ms after the presentation of the first sound differed topographically between trials leading to correct and incorrect spatial discrimination. Distributed source estimations revealed that this followed from significantly stronger activity within the left (i.e. contralateral) supratemporal plane (STP) and the left inferior parietal lobule prior to correct versus incorrect discrimination performance. Successful spatial discrimination thus depends on the activity of distinct configurations of active brain networks within the contralateral temporo-parietal cortex over a time period when the first sound position is being encoded. Furthermore, significant positive correlations were observed between performance accuracy and the intracranial activity estimated within the left STP. The efficacy of S1 processing within the STP is thus predictive of behavioral performance outcome during auditory spatial discrimination. Our data support a model wherein refinement of spatial representations occurs within the STP and that interactions with parietal structures allow for transformations into coordinate frames that are required for higher-order computations including absolute localization of sound sources.
Affiliation(s)
- Lucas Spierer
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland.

46

Abstract
Several studies have shown a right-hemispheric advantage for sound localisation. However, most of these studies used stationary sound stimuli, although in most everyday situations humans are in motion when localising sound, or face a moving sound source. To elucidate the question of a functional asymmetry in cortical processing of auditory motion information, we tested 23 neurologically healthy human participants. Virtual leftward or rightward motion (broadband noise) was presented with variable movement angles (MA) in the horizontal plane (via headphones) in either the participants' left or right hemispace. Participants had to indicate whether the sound moved left or rightward. The frequency of "right" judgements determined as a function of MA had a sigmoidal shape in both hemispaces, indicating significant overall discrimination of motion direction. However, the frequency of correct judgements revealed a significantly better performance for the left than for the right hemispace, suggesting a superiority of the right hemisphere. This finding is in agreement with recent neuroimaging results showing higher right-hemispheric activity during localisation of moving sounds. The results might also point to a supramodal right-hemisphere advantage in the attentional processing of motion perception.
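The sigmoidal shape of the "right"-judgement frequencies described here is typically modelled with a logistic psychometric function of movement angle; a minimal sketch (parameter names and values are illustrative, not fitted to the study's data):

```python
import math

def p_right(angle: float, pse: float = 0.0, slope: float = 0.5) -> float:
    """Logistic model of the probability of a "right" judgement.

    pse: point of subjective equality (angle where P = 0.5);
    slope: steepness of the sigmoid. Both values are illustrative.
    """
    return 1.0 / (1.0 + math.exp(-slope * (angle - pse)))

# P rises sigmoidally from ~0 (clearly leftward motion) through 0.5 at the PSE
# to ~1 (clearly rightward motion).
probs = [p_right(a) for a in (-20.0, 0.0, 20.0)]
```

Fitting such a curve separately per hemispace, as in the study's design, lets the PSE and slope quantify any left-right asymmetry in discrimination.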
Affiliation(s)
- Marco Hirnstein
- Institute for Cognitive Neuroscience, Ruhr-University Bochum, Germany.

47

Processing of location and pattern changes of natural sounds in the human auditory cortex. Neuroimage 2007; 35:1192-200. [DOI: 10.1016/j.neuroimage.2007.01.007]

48

Bidet-Caulet A, Bertrand O. Dynamics of a temporo-fronto-parietal network during sustained spatial or spectral auditory processing. J Cogn Neurosci 2006; 17:1691-703. [PMID: 16269106] [PMCID: PMC4764672] [DOI: 10.1162/089892905774589244]
Abstract
Animal and human studies have suggested that posterior temporal, parietal, and frontal regions are specifically involved in auditory spatial (location and motion) processing, forming a putative dorsal "where" pathway. We used scalp EEG and current density mapping to investigate the dynamics of this network in human subjects presented with a varying acoustic stream in a two-factor paradigm: spatial versus pitch variations, focused versus diverted attention. The main findings were: (i) a temporo-parieto-frontal network was activated during the whole duration of the stream in all conditions and modulated by attention; (ii) the left superior temporal cortex was the only region showing different activations for pitch and spatial variations. Therefore, parietal and frontal regions would be involved in task-related processes (attention and motor preparation), whereas the differential processing of acoustic spatial and object-related features seems to take place at the temporal level.
Affiliation(s)
- Aurélie Bidet-Caulet
- INSERM U280, Mental Processes and Brain Activation Laboratory, IFNL, UCBL1, Lyon, France.

49

Soeta Y, Nakagawa S, Tonoike M. Auditory evoked fields to variations of interaural time delay. Neurosci Lett 2005; 383:311-6. [PMID: 15955427] [DOI: 10.1016/j.neulet.2005.04.027]
Abstract
Auditory motion can be simulated by presenting binaural sounds with time-varying interaural time delays. Human cortical responses to the rate of auditory motion were studied by recording auditory evoked magnetic fields with a 122-channel whole-head magnetometer. Auditory motion from central to right and then to central was produced by varying interaural time differences between ears. The results showed that the N1m latencies and amplitudes were not affected by the fluctuation of interaural time delay; however, the peak amplitude of P2m significantly increased as a function of fluctuation of the interaural time delay.
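Binaural stimuli with time-varying interaural time delays, as described here, can be approximated digitally by delaying one channel by a slowly changing amount; a minimal NumPy sketch with assumed parameter values (not the authors' stimulus code):

```python
import numpy as np

def itd_motion_stimulus(fs: int = 44100, dur: float = 1.0,
                        freq: float = 500.0, max_itd: float = 500e-6) -> np.ndarray:
    """Stereo tone whose ITD ramps linearly from 0 to max_itd seconds.

    Delaying the left channel relative to the right moves the perceived
    image toward the right ear. All parameter values are illustrative.
    """
    t = np.arange(int(fs * dur)) / fs
    itd = max_itd * t / dur                       # linear ITD ramp, 0 .. max_itd s
    right = np.sin(2 * np.pi * freq * t)
    left = np.sin(2 * np.pi * freq * (t - itd))   # left lags -> rightward motion
    return np.column_stack([left, right])         # shape: (n_samples, 2)

stereo = itd_motion_stimulus()
```

Ramping the ITD back down again would return the image to the centre, mimicking the central-right-central trajectory used in the study.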
Affiliation(s)
- Yoshiharu Soeta
- Institute for Human Science and Biomedical Engineering, National Institute of Advanced Industrial Science and Technology (AIST), 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan.

50

Tian B, Rauschecker JP. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey. J Neurophysiol 2005; 92:2993-3013. [PMID: 15486426] [DOI: 10.1152/jn.00472.2003]
Abstract
Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
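Linear FM sweeps of the kind used here, characterized by a direction ("up" or "down") and a rate in Hz/ms, can be synthesized by integrating the instantaneous frequency to obtain phase; a minimal sketch with assumed parameter values:

```python
import numpy as np

def linear_fm_sweep(f_start: float, rate_hz_per_ms: float,
                    dur: float, fs: int = 44100) -> np.ndarray:
    """Linear FM sweep starting at f_start Hz, changing at rate_hz_per_ms.

    Positive rate = "up" sweep, negative = "down". Phase is the integral of
    instantaneous frequency: phi(t) = 2*pi*(f_start*t + 0.5*k*t**2), k in Hz/s.
    """
    k = rate_hz_per_ms * 1000.0                 # convert Hz/ms -> Hz/s
    t = np.arange(int(fs * dur)) / fs
    phase = 2.0 * np.pi * (f_start * t + 0.5 * k * t**2)
    return np.sin(phase)

# Example: an "up" sweep from 1 kHz at 100 Hz/ms lasting 50 ms (values illustrative)
sweep = linear_fm_sweep(1000.0, 100.0, 0.05)
```

Sweeping at several hundred Hz/ms, as preferred by the CL neurons described above, covers a broad frequency band in a few tens of milliseconds.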
Affiliation(s)
- Biao Tian
- Department of Physiology and Biophysics, Georgetown University School of Medicine, Washington, DC 20057, USA