1
İlhan B, Kurt S, Ungan P. Auditory cortical responses to abrupt lateralization shifts do not reflect the activity of hemifield-specific units involved in opponent coding of auditory space. Neuropsychologia 2023; 188:108629. [PMID: 37356539] [DOI: 10.1016/j.neuropsychologia.2023.108629]
Abstract
Recent studies show that the classical model based on axonal delay lines may not explain interaural time difference (ITD) based spatial coding in humans. Instead, a population-code model called the "opponent channels model" (OCM) has been suggested. This model comprises two competing channels, one for each auditory hemifield, each with a sigmoidal tuning curve. Some studies have used event-related potentials (ERPs) to ITD changes to test the predictions of this model, treating the sounds before and after the change as adaptor and probe stimuli, respectively. These studies assume that the former stimulus adapts the neurons selective to its side, and that the ERP N1-P2 response to the ITD change is the specific response of the neurons selective to the side of the probe sound. However, these ERP components are known to form a global, non-specific acoustic change complex of cortical origin evoked by any change in the auditory environment. They probably do not genuinely reflect the activity of stimulus-specific neuronal units that have escaped the refractory effect of the preceding adaptor, which violates the crucial assumption of an adaptor-probe paradigm. To assess this viewpoint, we conducted two experiments. In the first, we recorded ERPs to abrupt lateralization shifts of click trains with various pre- and post-shift ITDs within the physiological range of -600 μs to +600 μs. The magnitudes of the ERP components P1, N1, and P2 to these ITD shifts did not comply with the additive behavior of partial probe responses presumed in an adaptor-probe paradigm, casting doubt on the accuracy of testing sensory coding models with ERPs to abrupt lateralization changes. Findings of the second experiment, involving ERPs to conjoint outwards/transverse shift stimuli, also supported this conclusion.
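The opponent channels model referred to in this abstract can be illustrated with a short numerical sketch. The code below is purely schematic: the logistic tuning curves and their slope and midpoint values are assumptions for illustration, not parameters from the study. It shows the core property the model relies on, namely that lateral position is read out from the difference between two broadly tuned hemifield channels.

```python
import numpy as np

def sigmoid(x):
    """Logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def channel_responses(itd_us, slope=1.0 / 150.0, midpoint_us=100.0):
    """Activity of a left- and a right-hemifield channel for a given ITD (µs).

    Each channel has a sigmoidal tuning curve over ITD; the two curves
    mirror each other around the midline. Slope and midpoint are
    illustrative, not fitted, values.
    """
    right = sigmoid(slope * (itd_us - midpoint_us))   # prefers right-leading ITDs
    left = sigmoid(-slope * (itd_us + midpoint_us))   # prefers left-leading ITDs
    return left, right

def decoded_laterality(itd_us):
    """Opponent read-out: right-channel activity minus left-channel activity."""
    left, right = channel_responses(itd_us)
    return right - left

# Across the physiological ITD range (roughly -600 µs to +600 µs) the
# opponent read-out increases monotonically from left to right, even though
# neither channel alone is informative on its saturated flank.
itds = np.linspace(-600.0, 600.0, 13)
laterality = [decoded_laterality(t) for t in itds]
assert all(b > a for a, b in zip(laterality, laterality[1:]))
```

An adaptor-probe paradigm additionally assumes that adapting one channel attenuates its specific contribution to the next response; the point of the paper above is that the cortical acoustic change complex may not honor that assumption.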
Affiliation(s)
- Barkın İlhan
- Department of Biophysics, Necmettin Erbakan University Meram Medical Faculty, Konya, Türkiye
- Saliha Kurt
- Department of Audiometry, Selçuk University Vocational School of Health Services, Konya, Türkiye
2
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari
- Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani
- Centre for Mind/Brain Sciences, University of Trento, Trento, Italy
- Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
3
Fine I, Park WJ. Do you hear what I see? How do early blind individuals experience object motion? Philos Trans R Soc Lond B Biol Sci 2023; 378:20210460. [PMID: 36511418] [PMCID: PMC9745882] [DOI: 10.1098/rstb.2021.0460]
Abstract
One of the most important tasks for 3D vision is tracking the movement of objects in space. The ability of early blind individuals to understand motion in the environment from noisy and unreliable auditory information is an impressive example of cortical adaptation that is only just beginning to be understood. Here, we compare visual and auditory motion processing, and discuss the effect of early blindness on the perception of auditory motion. Blindness leads to cross-modal recruitment of the visual motion area hMT+ for auditory motion processing. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion. We discuss how this dramatic shift in the cortical basis of motion processing might influence the perceptual experience of motion in early blind individuals. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195-1525, USA
4
Battal C, Gurtubay-Antolin A, Rezk M, Mattioni S, Bertonati G, Occelli V, Bottini R, Targher S, Maffei C, Jovicich J, Collignon O. Structural and Functional Network-Level Reorganization in the Coding of Auditory Motion Directions and Sound Source Locations in the Absence of Vision. J Neurosci 2022; 42:4652-4668. [PMID: 35501150] [PMCID: PMC9186796] [DOI: 10.1523/jneurosci.1554-21.2022]
Abstract
hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion directions, and whether the functional enhancement observed in the blind is motion specific or also involves sound source location, remains unsolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, like the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of occipitotemporal networks supporting spatial hearing in the sighted. SIGNIFICANCE STATEMENT: Spatial hearing helps living organisms navigate their environment. This is certainly even more true for people born blind. How does blindness affect the brain network supporting auditory motion and sound source location? Our results show that the presence of motion direction and sound position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
Affiliation(s)
- Ceren Battal
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Ane Gurtubay-Antolin
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, 20009 Donostia-San Sebastián, Spain
- Mohamed Rezk
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefania Mattioni
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Giorgia Bertonati
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Valeria Occelli
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Department of Psychology, Edge Hill University, Ormskirk L39 4QP, United Kingdom
- Roberto Bottini
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Stefano Targher
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Chiara Maffei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts 01129
- Jorge Jovicich
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- Olivier Collignon
- Institute of Research in Psychology (IPSY) and Institute of NeuroScience (IoNS), Université Catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Center of Mind/Brain Sciences, University of Trento, 38123 Trento, Italy
- School of Health Sciences, HES-SO Valais-Wallis, 1950 Sion, Switzerland
- The Sense Innovation and Research Center, CH-1011 Lausanne, Switzerland
5
Wen H, Xu T, Wang X, Yu X, Bi Y. Brain intrinsic connection patterns underlying tool processing in human adults are present in neonates and not in macaques. Neuroimage 2022; 258:119339. [PMID: 35649467] [PMCID: PMC9520606] [DOI: 10.1016/j.neuroimage.2022.119339]
Abstract
Tool understanding and use are supported by a dedicated left-lateralized, intrinsically connected network in the human adult brain. To examine this network’s phylogenetic and ontogenetic origins, we compared resting-state functional connectivity (rsFC) among regions subserving tool processing in human adults to rsFC among homologous regions in human neonates and macaque monkeys (adolescent and mature). These homologous regions formed an intrinsic network in human neonates, but not in macaques. Network topological patterns were highly similar between human adults and neonates, and significantly less so between humans and macaques. The premotor-parietal rsFC made the most significant contribution to the formation of the neonatal tool network. These results suggest that an intrinsic brain network potentially supporting tool processing exists in the human brain prior to individual tool use experience, and that the premotor-parietal functional connection in particular offers a brain basis for complex tool behaviors specific to humans.
6
Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. [PMID: 34875029] [PMCID: PMC9376871] [DOI: 10.1093/cercor/bhab434]
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA
- Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
7
Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. [PMID: 33514674] [DOI: 10.1523/jneurosci.1552-20.2021]
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, while the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior frontal occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions. SIGNIFICANCE STATEMENT: Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. In order to create a unified representation of movement, the brain must integrate motion information from separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
8
Poirier C, Hamed SB, Garcia-Saldivar P, Kwok SC, Meguerditchian A, Merchant H, Rogers J, Wells S, Fox AS. Beyond MRI: on the scientific value of combining non-human primate neuroimaging with metadata. Neuroimage 2021; 228:117679. [PMID: 33359343] [PMCID: PMC7903159] [DOI: 10.1016/j.neuroimage.2020.117679]
Abstract
Sharing and pooling large amounts of non-human primate neuroimaging data offer exciting new opportunities to understand the primate brain. The potential of big data in non-human primate neuroimaging could, however, be tremendously enhanced by combining such neuroimaging data with other types of information. Here we describe metadata that the non-human primate neuroimaging community has identified as particularly valuable, including behavioural, genetic, physiological and phylogenetic data.
Affiliation(s)
- Colline Poirier
- Biosciences Institute & Centre for Behaviour and Evolution, Faculty of Medical Sciences, Newcastle 6, UK
- Suliann Ben Hamed
- Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, Université de Lyon - CNRS, France
- Pamela Garcia-Saldivar
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, México
- Sze Chai Kwok
- Shanghai Key Laboratory of Brain Functional Genomics, Key Laboratory of Brain Functional Genomics Ministry of Education, Shanghai Key Laboratory of Magnetic Resonance, Affiliated Mental Health Center (ECNU), Shanghai Changning Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Division of Natural and Applied Sciences, Duke Kunshan University, Duke Institute for Brain Sciences, Kunshan, Jiangsu, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Adrien Meguerditchian
- Laboratoire de Psychologie Cognitive, UMR7290, Université Aix-Marseille/CNRS, Institut Language, Communication and the Brain, 13331 Marseille, France
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, México
- Jeffrey Rogers
- Human Genome Sequencing Center and Dept. of Molecular and Human Genetics, Baylor College of Medicine, Houston, Texas 77030, USA
- Sara Wells
- Centre for Macaques, MRC Harwell Institute, Porton Down, Salisbury, United Kingdom
- Andrew S Fox
- California National Primate Research Center, Department of Psychology, University of California, Davis, Davis, CA 95616, USA
9
Xu T, Nenning KH, Schwartz E, Hong SJ, Vogelstein JT, Goulas A, Fair DA, Schroeder CE, Margulies DS, Smallwood J, Milham MP, Langs G. Cross-species functional alignment reveals evolutionary hierarchy within the connectome. Neuroimage 2020; 223:117346. [PMID: 32916286] [PMCID: PMC7871099] [DOI: 10.1016/j.neuroimage.2020.117346]
Abstract
Evolution provides an important window into how cortical organization shapes function and vice versa. The complex mosaic of changes in brain morphology and functional organization that have shaped the mammalian cortex during evolution complicates attempts to chart cortical differences across species. It limits our ability to fully appreciate how evolution has shaped our brain, especially in systems associated with unique human cognitive capabilities that lack anatomical homologues in other species. Here, we develop a function-based method for cross-species alignment that enables the quantification of homologous regions between humans and rhesus macaques, even when their location is decoupled from anatomical landmarks. Critically, we find that cross-species similarity in functional organization reflects a gradient of evolutionary change that decreases from unimodal systems and culminates with the most pronounced changes in posterior regions of the default mode network (angular gyrus, posterior cingulate and middle temporal cortices). Our findings suggest that the establishment of the default mode network, as the apex of a cognitive hierarchy, has changed in a complex manner during human evolution, even within subnetworks.
Affiliation(s)
- Ting Xu
- Center for the Developing Brain, Child Mind Institute, New York, NY, USA
- Karl-Heinz Nenning
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
- Ernst Schwartz
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria
- Seok-Jun Hong
- Center for the Developing Brain, Child Mind Institute, New York, NY, USA
- Joshua T Vogelstein
- Department of Biomedical Engineering, Institute for Computational Medicine, Kavli Neuroscience Discovery Institute, Johns Hopkins University, MD, USA
- Alexandros Goulas
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg, Germany
- Damien A Fair
- Advanced Imaging Research Center, Oregon Health & Science University, Portland, OR, USA
- Charles E Schroeder
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA; Departments of Neurosurgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY, USA
- Daniel S Margulies
- Centre National de la Recherche Scientifique (CNRS) UMR 7225, Frontlab, Institut du Cerveau et de la Moelle Epinière, Paris, France
- Jonny Smallwood
- Department of Psychology, Queen's University, Kingston, Ontario, Canada; Psychology Department, University of York, York, UK
- Michael P Milham
- Center for the Developing Brain, Child Mind Institute, New York, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Austria; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
10
Shared Representation of Visual and Auditory Motion Directions in the Human Middle-Temporal Cortex. Curr Biol 2020; 30:2289-2299.e8. [DOI: 10.1016/j.cub.2020.04.039]
11
van der Heijden K, Formisano E, Valente G, Zhan M, Kupers R, de Gelder B. Reorganization of Sound Location Processing in the Auditory Cortex of Blind Humans. Cereb Cortex 2020; 30:1103-1116. [PMID: 31504283] [DOI: 10.1093/cercor/bhz151]
Abstract
Auditory spatial tasks induce functional activation in the occipital (visual) cortex of early blind humans. Less is known about the effects of blindness on auditory spatial processing in the temporal (auditory) cortex. Here, we investigated spatial (azimuth) processing in congenitally and early blind humans with a phase-encoding functional magnetic resonance imaging (fMRI) paradigm. Our results show that functional activation in response to sounds in general (independent of sound location) was stronger in the occipital cortex but reduced in the medial temporal cortex of blind participants in comparison with sighted participants. Additionally, activation patterns for binaural spatial processing differed between sighted and blind participants in the planum temporale. Finally, fMRI responses in the auditory cortex of blind individuals carried less information on sound azimuth position than those in sighted individuals, as assessed with a two-channel opponent coding model of the cortical representation of sound azimuth. These results indicate that early visual deprivation results in reorganization of binaural spatial processing in the auditory cortex and that blind individuals may rely on alternative mechanisms for processing azimuth position.
Affiliation(s)
- Kiki van der Heijden
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Elia Formisano
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Maastricht Center for Systems Biology, Maastricht University, 6200 MD Maastricht, the Netherlands
- Giancarlo Valente
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Minye Zhan
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Ron Kupers
- BRAINlab and Neuropsychiatry Laboratory, Faculty of Health and Medical Sciences, Department of Neuroscience and Pharmacology, University of Copenhagen, 2200 Copenhagen, Denmark
- Department of Radiology and Biomedical Imaging, Yale University, 300 Cedar Street, New Haven, CT 06520, USA
- Beatrice de Gelder
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, 6200 MD Maastricht, the Netherlands
- Department of Computer Science, University College London, Gower Street, London, WC1E 6BT, UK
12
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
13
Shayman CS, Peterka RJ, Gallun FJ, Oh Y, Chang NYN, Hullar TE. Frequency-dependent integration of auditory and vestibular cues for self-motion perception. J Neurophysiol 2020; 123:936-944. [PMID: 31940239] [DOI: 10.1152/jn.00307.2019]
Abstract
Recent evidence has shown that auditory information may be used to improve postural stability, spatial orientation, navigation, and gait, suggesting an auditory component of self-motion perception. To determine how auditory and other sensory cues integrate for self-motion perception, we measured motion perception during yaw rotations of the body and of the auditory environment. Psychophysical thresholds in humans were measured over a range of frequencies (0.1-1.0 Hz) during self-rotation without spatial auditory stimuli, rotation of a sound source around a stationary listener, and self-rotation in the presence of an earth-fixed sound source. Unisensory perceptual thresholds and the combined multisensory thresholds were found to be frequency dependent. Auditory thresholds were better at lower frequencies, and vestibular thresholds were better at higher frequencies. Expressed in terms of peak angular velocity, multisensory vestibular and auditory thresholds ranged from 0.39°/s at 0.1 Hz to 0.95°/s at 1.0 Hz and were significantly better over low frequencies than either the auditory-only (0.54°/s to 2.42°/s at 0.1 and 1.0 Hz, respectively) or vestibular-only (2.00°/s to 0.75°/s at 0.1 and 1.0 Hz, respectively) unisensory conditions. Monaurally presented auditory cues were less effective than binaural cues in lowering multisensory thresholds. Frequency-independent thresholds were derived, assuming that vestibular thresholds depended on a weighted combination of velocity and acceleration cues, whereas auditory thresholds depended on displacement and velocity cues. These results elucidate fundamental mechanisms for the contribution of audition to balance and help explain previous findings indicating its significance in tasks requiring self-orientation.
NEW & NOTEWORTHY Auditory information can be integrated with visual, proprioceptive, and vestibular signals to improve balance, orientation, and gait, but this process is poorly understood. Here, we show that auditory cues significantly improve sensitivity to self-motion perception below 0.5 Hz, whereas vestibular cues contribute more at higher frequencies. Motion thresholds are determined by a weighted combination of displacement, velocity, and acceleration information. These findings may help understand and treat imbalance, particularly in people with sensory deficits.
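The weighted-combination account in this abstract is often formalized as maximum-likelihood cue integration, in which each cue is weighted by its inverse variance. As an illustration (not the authors' actual model), the optimal combined threshold predicted from the unisensory thresholds quoted above can be computed as:

```python
import math

def mle_combined_threshold(t_a: float, t_v: float) -> float:
    """Optimal (inverse-variance-weighted) combined threshold for two
    independent Gaussian cues with individual thresholds t_a and t_v."""
    return math.sqrt((t_a**2 * t_v**2) / (t_a**2 + t_v**2))

# Unisensory thresholds at 0.1 Hz quoted in the abstract (deg/s):
auditory, vestibular = 0.54, 2.00
pred = mle_combined_threshold(auditory, vestibular)
print(f"MLE prediction at 0.1 Hz: {pred:.2f} deg/s")  # ~0.52 deg/s
```

This simple benchmark predicts roughly 0.52°/s, whereas the measured multisensory threshold at 0.1 Hz was 0.39°/s, so it is only a starting point relative to the frequency-dependent weighting the authors actually derived.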
Affiliation(s)
- Corey S Shayman: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; School of Medicine, University of Utah, Salt Lake City, Utah
- Robert J Peterka: Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
- Frederick J Gallun: National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon; Oregon Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon
- Yonghee Oh: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida
- Nai-Yuan N Chang: Department of Preventive and Restorative Dental Sciences-Division of Bioengineering and Biomaterials, University of California, San Francisco, San Francisco, California
- Timothy E Hullar: Department of Otolaryngology-Head and Neck Surgery, Oregon Health and Science University, Portland, Oregon; Department of Neurology, Oregon Health and Science University, Portland, Oregon; National Center for Rehabilitative Auditory Research-VA Portland Health Care System, Portland, Oregon
|
14
|
Bednar A, Lalor EC. Where is the cocktail party? Decoding locations of attended and unattended moving sound sources using EEG. Neuroimage 2019; 205:116283. [PMID: 31629828 DOI: 10.1016/j.neuroimage.2019.116283] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Revised: 10/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Recently, we showed that in a simple acoustic scene with one sound source, auditory cortex tracks the time-varying location of a continuously moving sound. Specifically, we found that both the delta phase and alpha power of the electroencephalogram (EEG) can be used to reconstruct the sound source azimuth. However, in natural settings, we are often presented with a mixture of multiple competing sounds and must focus our attention on the relevant source in order to segregate it from the competing sources, i.e., the 'cocktail party effect'. While many studies have examined this phenomenon in the context of sound envelope tracking by the cortex, it is unclear how we process and utilize spatial information in complex acoustic scenes with multiple sound sources. To test this, we created an experiment in which subjects listened over headphones to two concurrent sound stimuli that were moving within the horizontal plane while we recorded their EEG. Participants were tasked with paying attention to one of the two presented stimuli. The data were analyzed by deriving linear mappings, temporal response functions (TRFs), between the EEG data and the attended as well as the unattended sound source trajectories. Next, we used these TRFs to reconstruct both trajectories from previously unseen EEG data. In a first experiment, we used noise stimuli and a task that involved spatially localizing embedded targets. Then, in a second experiment, we employed speech stimuli and a non-spatial speech comprehension task. Results showed that the trajectory of an attended sound source can be reliably reconstructed from both the delta phase and the alpha power of the EEG, even in the presence of distracting stimuli. Moreover, the reconstruction was robust to task and stimulus type. The cortical representation of the unattended source position was below detection level for the noise stimuli, but we observed weak tracking of the unattended source location by the delta phase of the EEG for the speech stimuli. In addition, we demonstrated that the trajectory reconstruction method can in principle be used to decode selective attention on a single-trial basis; however, its performance was inferior to envelope-based decoders. These results suggest a possible dissociation of the delta phase and alpha power of the EEG in the context of sound trajectory tracking. Moreover, the demonstrated ability to localize and determine the attended speaker in complex acoustic environments is particularly relevant for cognitively controlled hearing devices.
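The decoding approach described here — linear mappings between EEG and source trajectory — can be sketched as a backward temporal response function fitted by ridge regression. The toy below uses synthetic data; the channel count, lag range, and regularization value are placeholders, not the paper's settings:

```python
import numpy as np

def lag_matrix(eeg: np.ndarray, lags: range) -> np.ndarray:
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        shifted[:lag] = 0                      # discard wrapped-around samples
        X[:, j * ch:(j + 1) * ch] = shifted
    return X

def fit_decoder(eeg, azimuth, lags=range(0, 16), lam=1e2):
    """Ridge regression from lagged EEG to sound azimuth (a backward TRF)."""
    X = lag_matrix(eeg, lags)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ azimuth)
    return w, lags

# Synthetic demo: a 32-channel "EEG" that linearly mixes a hidden trajectory.
rng = np.random.default_rng(0)
azimuth = np.cumsum(rng.standard_normal(2000))     # random-walk trajectory
eeg = np.outer(azimuth, rng.standard_normal(32)) + rng.standard_normal((2000, 32))
w, lags = fit_decoder(eeg[:1500], azimuth[:1500])
recon = lag_matrix(eeg[1500:], lags) @ w           # reconstruct held-out segment
r = np.corrcoef(recon, azimuth[1500:])[0, 1]
print(f"held-out reconstruction r = {r:.2f}")
```

Reconstruction accuracy is then compared against a chance level obtained from mismatched trajectory-EEG pairings, which is how significance is typically assessed in this kind of analysis.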
Affiliation(s)
- Adam Bednar: School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland
- Edmund C Lalor: School of Engineering, Trinity College Dublin, Dublin, Ireland; Trinity Center for Bioengineering, Trinity College Dublin, Dublin, Ireland; Department of Biomedical Engineering, Department of Neuroscience, University of Rochester, Rochester, NY, USA
|
15
|
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
|
16
|
Xu T, Sturgeon D, Ramirez JSB, Froudist-Walsh S, Margulies DS, Schroeder CE, Fair DA, Milham MP. Interindividual Variability of Functional Connectivity in Awake and Anesthetized Rhesus Macaque Monkeys. BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING 2019; 4:543-553. [PMID: 31072758 PMCID: PMC7063583 DOI: 10.1016/j.bpsc.2019.02.005] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/21/2018] [Revised: 01/28/2019] [Accepted: 02/22/2019] [Indexed: 11/17/2022]
Abstract
BACKGROUND: Nonhuman primate (NHP) models are commonly used to advance our understanding of brain function and organization. However, to date, they have offered few insights into individual differences among NHPs. In large part, this is due to the logistical challenges of NHP research, which limit most studies to 5 subjects or fewer. METHODS: We leveraged the availability of a large-scale open NHP imaging resource to provide an initial examination of individual differences in the functional organization of the NHP brain. Specifically, we selected one awake functional magnetic resonance imaging dataset (Newcastle University: n = 10) and two anesthetized functional magnetic resonance imaging datasets (Oxford University: n = 19; University of California, Davis: n = 19) to examine individual differences in functional connectivity characteristics across the cortex as well as potential state dependencies. RESULTS: We noted significant individual variations of functional connectivity across the macaque cortex. Similar to the findings in humans, during the awake state, the primary sensory and motor cortices showed lower variability than the high-order association regions. This variability pattern was significantly correlated with T1-weighted and T2-weighted mapping and degree of long-distance connectivity, but not short-distance connectivity. The interindividual variability under anesthesia exhibited a very distinct pattern, with lower variability in medial frontal cortex, precuneus, and somatomotor regions and higher variability in the lateral ventral frontal and insular cortices. CONCLUSIONS: This work has implications for our understanding of the evolutionary origins of individual variation in the human brain and methodological implications that must be considered in any pursuit to study individual variation in NHP models.
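The interindividual-variability measure described above can be sketched as follows: a functional connectivity matrix per subject, then the across-subject variance of each region's connectivity profile. This is a simplification — the published pipelines additionally control for within-subject (session-to-session) variability — and all data below are random stand-ins:

```python
import numpy as np

def fc_matrix(ts: np.ndarray) -> np.ndarray:
    """Functional connectivity: correlations between regional time series
    (ts is time x regions)."""
    return np.corrcoef(ts, rowvar=False)

def interindividual_variability(fcs: np.ndarray) -> np.ndarray:
    """Per-region variability: variance across subjects of each region's
    connectivity profile, averaged over its connections (input: subjects x R x R)."""
    var_per_edge = fcs.var(axis=0)          # R x R variance across subjects
    np.fill_diagonal(var_per_edge, 0.0)     # ignore self-connections
    return var_per_edge.mean(axis=1)        # one value per region

# Toy data: 8 "subjects", 200 time points, 20 regions.
rng = np.random.default_rng(1)
fcs = np.stack([fc_matrix(rng.standard_normal((200, 20))) for _ in range(8)])
v = interindividual_variability(fcs)
print(v.shape)  # (20,)
```

On real data, the resulting regional map is what gets compared between awake and anesthetized states and correlated with markers such as long-distance connectivity.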
Affiliation(s)
- Ting Xu: Center for the Developing Brain, Child Mind Institute, New York, New York
- Darrick Sturgeon: Department of Behavior Neuroscience, Oregon Health and Science University, Portland, Oregon; Department of Psychiatry, Oregon Health and Science University, Portland, Oregon; Advanced Imaging Research Center, Oregon Health and Science University, Portland, Oregon
- Julian S B Ramirez: Department of Behavior Neuroscience, Oregon Health and Science University, Portland, Oregon; Department of Psychiatry, Oregon Health and Science University, Portland, Oregon; Advanced Imaging Research Center, Oregon Health and Science University, Portland, Oregon
- Daniel S Margulies: Centre National de la Recherche Scientifique, UMR 7225, Frontlab, Institut du Cerveau et de la Moelle Épinière, Paris, France
- Charles E Schroeder: Department of Neurological Surgery, Columbia University College of Physicians and Surgeons, New York, New York; Department of Psychiatry, Columbia University College of Physicians and Surgeons, New York, New York; Translational Neuroscience Division, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York
- Damien A Fair: Department of Behavior Neuroscience, Oregon Health and Science University, Portland, Oregon; Department of Psychiatry, Oregon Health and Science University, Portland, Oregon; Advanced Imaging Research Center, Oregon Health and Science University, Portland, Oregon
- Michael P Milham: Center for the Developing Brain, Child Mind Institute, New York, New York; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York
|
17
|
Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019; 39:2208-2220. [PMID: 30651333 DOI: 10.1523/jneurosci.2289-18.2018] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2018] [Revised: 12/20/2018] [Accepted: 12/21/2018] [Indexed: 11/21/2022] Open
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill for interacting efficiently with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, although motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.
SIGNIFICANCE STATEMENT Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
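Cross-condition decoding of the kind reported here — train a classifier on motion-direction patterns, test it on static-location patterns — can be illustrated with a nearest-centroid classifier on synthetic multi-voxel patterns that share a common left-right axis. Every name and number below is illustrative, not taken from the study:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """One centroid per class of multi-voxel patterns (trials x voxels)."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(2)
voxels = 50
shared = rng.standard_normal(voxels)        # a shared left/right pattern axis
def trials(sign, n=40, scale=1.0):
    return sign * scale * shared + rng.standard_normal((n, voxels))

# Train on moving-sound patterns (leftward vs rightward motion)...
X_train = np.vstack([trials(-1), trials(+1)])
y_train = np.array([0] * 40 + [1] * 40)
# ...test on static-sound patterns (left vs right location).
X_test = np.vstack([trials(-1, scale=0.6), trials(+1, scale=0.6)])
y_test = np.array([0] * 40 + [1] * 40)

classes, cent = nearest_centroid_fit(X_train, y_train)
acc = (nearest_centroid_predict(X_test, classes, cent) == y_test).mean()
print(f"cross-condition accuracy: {acc:.2f}")  # above chance if geometry is shared
```

Above-chance transfer between conditions is the signature of a partially shared pattern geometry; distinct geometries would leave cross-condition accuracy at chance even when within-condition decoding succeeds.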
|
18
|
Joris PX. Neural binaural sensitivity at high sound speeds: Single cell responses in cat midbrain to fast-changing interaural time differences of broadband sounds. J Acoust Soc Am 2019; 145:EL45. [PMID: 30710960 PMCID: PMC7112706 DOI: 10.1121/1.5087524] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Revised: 12/13/2018] [Accepted: 12/18/2018] [Indexed: 06/09/2023]
Abstract
Relative motion between the body and the outside world is a rich source of information. Neural selectivity to motion is well-established in several sensory systems, but is controversial in hearing. This study examines neural sensitivity to changes in the instantaneous interaural time difference of sounds at the two ears. Midbrain neurons track such changes up to extremely high speeds, show only a coarse dependence of firing rate on speed, and lack directional selectivity. These results argue against the presence of selectivity to auditory motion at the level of the midbrain, but reveal an acuity which enables coding of fast-fluctuating binaural cues in realistic sound environments.
Affiliation(s)
- Philip X Joris: Laboratory of Auditory Neurophysiology, KU Leuven, Herestraat 49, B-3000 Leuven, Belgium
|
19
|
Oligschläger S, Xu T, Baczkowski BM, Falkiewicz M, Falchier A, Linn G, Margulies DS. Gradients of connectivity distance in the cerebral cortex of the macaque monkey. Brain Struct Funct 2018; 224:925-935. [PMID: 30547311 PMCID: PMC6420469 DOI: 10.1007/s00429-018-1811-1] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2017] [Accepted: 12/03/2018] [Indexed: 11/27/2022]
Abstract
Cortical connectivity conforms to a series of organizing principles that are common across species. Spatial proximity, similar cortical type, and similar connectional profile all constitute factors for determining the connectivity between cortical regions. We previously demonstrated another principle of connectivity that is closely related to the spatial layout of the cerebral cortex. Using functional connectivity from resting-state fMRI in the human cortex, we found that the further a region is located from primary cortex, the more distant are its functional connections with the other areas of the cortex. However, it remains unknown whether this relationship between cortical layout and connectivity extends to other primate species. Here, we investigated this relationship using both resting-state functional connectivity and gold-standard tract-tracing connectivity in the macaque monkey cortex. For both measures of connectivity, we found a gradient of connectivity distance extending between primary and frontoparietal regions. As in the human cortex, the further a region is located from primary areas, the stronger its connections to distant portions of the cortex, with connectivity distance highest in frontal and parietal regions. The similarity between the human and macaque findings provides evidence for a phylogenetically conserved relationship between the spatial layout of cortical areas and connectivity.
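The connectivity-distance measure can be sketched as: for each region, the mean distance to its strongest connections. The sketch below uses Euclidean distance and random data as stand-ins for the surface-based geodesic distances and real connectivity matrices used in the study:

```python
import numpy as np

def connectivity_distance(fc: np.ndarray, coords: np.ndarray, top_frac=0.1):
    """For each region, the mean spatial distance to its strongest connections
    (the top `top_frac` of all other regions), given a connectivity matrix and
    region coordinates (R x 3). Euclidean distance stands in here for the
    geodesic distance used on real cortical surfaces."""
    R = fc.shape[0]
    dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
    k = max(1, int(top_frac * (R - 1)))
    out = np.empty(R)
    for i in range(R):
        strength = fc[i].copy()
        strength[i] = -np.inf                 # exclude self-connection
        top = np.argsort(strength)[-k:]       # indices of strongest connections
        out[i] = dist[i, top].mean()
    return out

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(50, 3))                     # region coordinates
fc = np.corrcoef(rng.standard_normal((200, 50)), rowvar=False) # toy FC matrix
cd = connectivity_distance(fc, coords)
print(cd.shape)  # (50,)
```

Plotting these per-region values against each region's distance from primary cortex is what reveals the gradient reported in the abstract; the same recipe applies whether `fc` holds functional correlations or tract-tracing weights.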
Affiliation(s)
- Sabine Oligschläger: Max Planck Research Group for Neuroanatomy and Connectivity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Life Sciences, University of Leipzig, Leipzig, Germany; International Max Planck Research School NeuroCom, Leipzig, Germany
- Ting Xu: Center for the Developing Brain, Child Mind Institute, New York, NY, 10022, USA; Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, 10962, USA
- Blazej M Baczkowski: Max Planck Research Group for Neuroanatomy and Connectivity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Life Sciences, University of Leipzig, Leipzig, Germany; International Max Planck Research School NeuroCom, Leipzig, Germany; Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Marcel Falkiewicz: Max Planck Research Group for Neuroanatomy and Connectivity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Arnaud Falchier: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, 10962, USA
- Gary Linn: Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, 10962, USA
- Daniel S Margulies: Max Planck Research Group for Neuroanatomy and Connectivity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; International Max Planck Research School NeuroCom, Leipzig, Germany; Centre National de la Recherche Scientifique (CNRS), UMR 7225, Frontlab, Institut du Cerveau et de la Moelle épinière, Hôpital Pitié-Salpêtrière, 47, boulevard de l'Hôpital, 75010, Paris, France
|
20
|
Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018; 12:93. [PMID: 30416431 PMCID: PMC6212655 DOI: 10.3389/fncir.2018.00093] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 10/08/2018] [Indexed: 11/13/2022] Open
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion elicited from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here we review the key cortical regions for motion processing, focussing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate audio and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Marcello G P Rosa: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Leo L Lui: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
|
21
|
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore: Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
|
22
|
Neural tracking of auditory motion is reflected by delta phase and alpha power of EEG. Neuroimage 2018; 181:683-691. [PMID: 30053517 DOI: 10.1016/j.neuroimage.2018.07.054] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Revised: 07/10/2018] [Accepted: 07/23/2018] [Indexed: 12/29/2022] Open
Abstract
It is of increasing practical interest to be able to decode the spatial characteristics of an auditory scene from electrophysiological signals. However, the cortical representation of auditory space is not well characterized, and it is unclear how cortical activity reflects the time-varying location of a moving sound. Recently, we demonstrated that cortical response measures to discrete noise bursts can be decoded to determine their origin in space. Here we build on these findings to investigate the cortical representation of a continuously moving auditory stimulus using scalp recorded electroencephalography (EEG). In a first experiment, subjects listened to pink noise over headphones which was spectro-temporally modified to be perceived as randomly moving on a semi-circular trajectory in the horizontal plane. While subjects listened to the stimuli, we recorded their EEG using a 128-channel acquisition system. The data were analysed by 1) building a linear regression model (decoder) mapping the relationship between the stimulus location and a training set of EEG data, and 2) using the decoder to reconstruct an estimate of the time-varying sound source azimuth from the EEG data. The results showed that we can decode the sound trajectory with a reconstruction accuracy significantly above chance level. Specifically, we found that the phase of delta (<2 Hz) and power of alpha (8-12 Hz) EEG track the dynamics of a moving auditory object. In a follow-up experiment, we replaced the noise with pulse train stimuli containing only interaural level and time differences (ILDs and ITDs, respectively). This allowed us to investigate whether our trajectory decoding is sensitive to both acoustic cues. We found that the sound trajectory can be decoded for both ILD and ITD stimuli. Moreover, their neural signatures were similar and even allowed successful cross-cue classification. This supports the notion of integrated processing of ILD and ITD at the cortical level. These results are particularly relevant for application in devices such as cognitively controlled hearing aids and for the evaluation of virtual acoustic environments.
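Pulse-train stimuli carrying an isolated ITD or ILD, as used in the follow-up experiment, can be constructed along these lines; the sample rate, pulse rate, and cue magnitudes below are illustrative, not the paper's values:

```python
import numpy as np

def pulse_train(fs=48000, dur=0.5, rate=100, itd_us=300.0, ild_db=3.0):
    """Binaural pulse train carrying an ITD (µs; positive delays the left ear,
    lateralizing the image to the right) and an ILD (dB; positive attenuates
    the left ear). Returns a (samples x 2) stereo array, left channel first."""
    n = int(fs * dur)
    mono = np.zeros(n)
    mono[::fs // rate] = 1.0                   # unit clicks at `rate` Hz
    shift = int(round(itd_us * 1e-6 * fs))     # ITD rounded to whole samples
    left = np.roll(mono, shift)                # delayed (lagging) ear
    left *= 10 ** (-ild_db / 20)               # ILD as an amplitude ratio
    return np.column_stack([left, mono])

stim = pulse_train()
print(stim.shape)  # (24000, 2)
```

Setting `ild_db=0` yields an ITD-only stimulus and `itd_us=0` an ILD-only one; note that at 48 kHz a 300 µs ITD is only about 14 samples, which is why binaural work often uses high sample rates or fractional-delay filtering for finer ITD resolution.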
|
23
|
Sound frequency affects the auditory motion-onset response in humans. Exp Brain Res 2018; 236:2713-2726. [PMID: 29998350 DOI: 10.1007/s00221-018-5329-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Accepted: 07/04/2018] [Indexed: 10/28/2022]
Abstract
The current study examines the modulation of the motion-onset response by the frequency range of the sound stimuli. Delayed motion-onset and stationary stimuli were presented in free field by sequentially activating loudspeakers on an azimuthal plane, preserving the natural percept of externalized sound presentation. The sounds were presented in low- or high-frequency ranges and had different motion directions within each hemifield. Difference waves were calculated by contrasting the moving and stationary sounds to isolate the motion-onset responses. Analyses of the peak amplitudes and latencies of the difference waves showed that the early part of the motion response (cN1) was modulated by the frequency range of the sounds, with stronger amplitudes elicited by stimuli in the high-frequency range. A subsequent post hoc analysis of the normalized amplitude of the motion response confirmed this finding by excluding the possibility that the frequency range had an overall effect on the waveform, showing instead that the effect was limited to the motion response. These results support the idea of a modular organization of the motion-onset response, with the processing of primary sound-motion characteristics reflected in the early part of the response. The article also highlights the importance of specificity in auditory stimulus design.
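The difference-wave analysis can be sketched as follows; the cN1 window boundaries and the toy waveforms below are illustrative placeholders, not those of the study:

```python
import numpy as np

def motion_onset_response(moving: np.ndarray, stationary: np.ndarray,
                          times: np.ndarray, win=(0.12, 0.20)):
    """Difference wave (moving minus stationary ERP) and its most negative
    peak in an early window — a stand-in for the cN1 measurement."""
    diff = moving - stationary
    mask = (times >= win[0]) & (times <= win[1])
    idx = np.flatnonzero(mask)[np.argmin(diff[mask])]
    return diff, diff[idx], times[idx]

# Toy ERPs sampled at 500 Hz over 0-0.5 s.
times = np.arange(0, 0.5, 0.002)
stationary = 0.5 * np.sin(2 * np.pi * 3 * times)
moving = stationary - 2.0 * np.exp(-((times - 0.16) / 0.02) ** 2)  # cN1-like dip
diff, amp, lat = motion_onset_response(moving, stationary, times)
print(f"cN1-like peak: {amp:.2f} µV at {lat * 1000:.0f} ms")
```

Because the stationary response is subtracted out, any common exogenous activity cancels and only the motion-specific component remains — the same logic that lets the study attribute the frequency effect to the motion response rather than to the waveform overall.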
|
24
|
Abstract
By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments.
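The abstract does not specify its mathematical model, so the sketch below is only a guessed illustration of a mapping with the reported 2:1 front-to-side sensitivity ratio; the cosine form is an assumption, not the paper's fitted function:

```python
import numpy as np

def perceptual_gain(theta):
    """Illustrative sensitivity to angular displacement: 1 at the front
    (theta = 0), falling to 0.5 at the side (±90°), matching the reported
    2:1 front-vs-side ratio. Theta is azimuth in radians."""
    return (3 + np.cos(2 * theta)) / 4

def perceived_azimuth(theta):
    """Perceived position as the integral of the gain from the front."""
    return (3 * theta + np.sin(2 * theta) / 2) / 4

front = perceptual_gain(0.0)          # sensitivity straight ahead
side = perceptual_gain(np.pi / 2)     # sensitivity at the side
print(front / side)  # 2.0 -> side sources must move twice as far to match
```

Inverting such a mapping is the idea behind the abstract's closing suggestion: pre-warping source trajectories with the inverse function (alongside individualized HRTFs and motion tracking) would counteract the perceptual expansion in front and compression at the side.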
Affiliation(s)
- W Owen Brimijoin: MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, UK
|