1
Aller M, Noppeney U. To integrate or not to integrate: Temporal dynamics of hierarchical Bayesian causal inference. PLoS Biol 2019; 17:e3000210. [PMID: 30939128 PMCID: PMC6461295 DOI: 10.1371/journal.pbio.3000210]
Abstract
To form a percept of the environment, the brain needs to solve the binding problem: inferring whether signals come from a common cause and should be integrated, or come from independent causes and should be segregated. Behaviourally, humans solve this problem near-optimally as predicted by Bayesian causal inference, but the neural mechanisms remain unclear. Combining Bayesian modelling, electroencephalography (EEG), and multivariate decoding in an audiovisual spatial localisation task, we show that the brain accomplishes Bayesian causal inference by dynamically encoding multiple spatial estimates. Initially, auditory and visual signal locations are estimated independently; next, an estimate is formed that combines information from vision and audition. Yet, it is only from 200 ms onwards that the brain integrates audiovisual signals, weighted by their bottom-up sensory reliabilities and top-down task relevance, into spatial priority maps that guide behavioural responses. As predicted by Bayesian causal inference, these spatial priority maps take into account the brain's uncertainty about the world's causal structure and flexibly arbitrate between sensory integration and segregation. The dynamic evolution of perceptual estimates thus reflects the hierarchical nature of Bayesian causal inference, a statistical computation that is crucial for effective interactions with the environment.
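The computation this abstract describes can be illustrated with a compact numerical sketch. The code below is a hypothetical Python implementation of the standard Gaussian generative model of Bayesian causal inference with model averaging (following Körding and colleagues' formulation), not the authors' analysis code; the prior width `sigma_p`, the common-cause prior `p_common`, and the function name are illustrative assumptions.

```python
import numpy as np

def bayesian_causal_inference(x_a, x_v, sigma_a, sigma_v,
                              sigma_p=10.0, p_common=0.5):
    """One audiovisual trial: posterior probability of a common cause
    and the model-averaged auditory location estimate.

    x_a, x_v: internal auditory/visual location samples (degrees);
    sigma_a, sigma_v: sensory noise; sigma_p: width of the zero-mean
    spatial prior; p_common: prior probability of a common cause.
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood under a common cause (C=1): both samples share one
    # source s ~ N(0, vp), which is integrated out analytically.
    d1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp
                             + x_a**2 * vv + x_v**2 * va) / d1) \
        / (2 * np.pi * np.sqrt(d1))
    # Likelihood under independent causes (C=2): separate sources.
    like_c2 = np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp)) \
        * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp))
    post_c1 = like_c1 * p_common / (like_c1 * p_common
                                    + like_c2 * (1 - p_common))
    # Reliability-weighted estimates under each causal structure.
    s_c1 = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # fused
    s_c2 = (x_a / va) / (1 / va + 1 / vp)                      # audition alone
    # Model averaging: weight the two estimates by the causal posterior.
    return post_c1, post_c1 * s_c1 + (1 - post_c1) * s_c2
```

Spatially coincident signals yield a high common-cause posterior and near-complete fusion, whereas widely discrepant signals drive the posterior towards zero and the estimate back to the unisensory value, the flexible arbitration between integration and segregation described in the abstract.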
Affiliation(s)
- Máté Aller
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
2
Schutte M, Ewert SD, Wiegrebe L. The percept of reverberation is not affected by visual room impression in virtual environments. J Acoust Soc Am 2019; 145:EL229. [PMID: 31067971 DOI: 10.1121/1.5093642]
Abstract
Humans possess mechanisms to suppress distracting early sound reflections, summarized as the precedence effect. Recent work shows that precedence is affected by visual stimulation. This paper investigates possible effects of visual stimulation on the perception of later reflections, i.e., reverberation. In a highly immersive audio-visual virtual reality environment, subjects were asked to quantify reverberation in conditions where simultaneously presented auditory and visual stimuli either match in room identity, sound source azimuth, and sound source distance, or diverge in one of these aspects. While subjects reliably judged reverberation across acoustic environments, the visual room impression did not affect reverberation estimates.
Affiliation(s)
- Michael Schutte
- Division of Neurobiology, Department Biology II and Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Germany
- Stephan D Ewert
- Medical Physics and Cluster of Excellence Hearing4all, University of Oldenburg, Germany
- Lutz Wiegrebe
- Division of Neurobiology, Department Biology II and Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Germany
3
Wang L, Wang W, Yan T, Song J, Yang W, Wang B, Go R, Huang Q, Wu J. Beta-Band Functional Connectivity Influences Audiovisual Integration in Older Age: An EEG Study. Front Aging Neurosci 2017; 9:239. [PMID: 28824411 PMCID: PMC5545595 DOI: 10.3389/fnagi.2017.00239]
Abstract
Audiovisual integration occurs frequently and has been shown to exhibit age-related differences in behavioral experiments and time-frequency analyses. In the present study, we examined whether functional connectivity influences audiovisual integration during normal aging. Visual, auditory, and audiovisual stimuli were randomly presented peripherally, and participants were asked to respond immediately to the target stimulus. Electroencephalography recordings captured visual, auditory, and audiovisual processing in 12 older (60-78 years) and 12 young (22-28 years) male adults. For non-target stimuli, we focused on the alpha (8-13 Hz), beta (13-30 Hz), and gamma (30-50 Hz) bands. We applied the Phase Lag Index to study the dynamics of functional connectivity, and then calculated the network topology parameters, including the clustering coefficient, path length, small-worldness, global efficiency, local efficiency, and degree, for each condition. For the target stimulus, a race model was used to analyze the response time, and a Pearson correlation was used to test the relationship between each network topology parameter and response time. The results showed that older adults activated stronger connections in the beta band during audiovisual processing, and the relationship between network topology parameters and audiovisual integration performance was detected only in older adults. We therefore conclude that older adults, who carry a higher processing load during audiovisual integration, require more cognitive resources. Furthermore, increased beta-band functional connectivity influences the performance of audiovisual integration during normal aging.
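The Phase Lag Index used above is simple enough to sketch in a few lines. What follows is a hypothetical minimal Python/SciPy implementation of the PLI as defined by Stam and colleagues (the absolute mean sign of the instantaneous phase difference between two channels), not this study's pipeline; in the study, the signals would first be band-pass filtered into the alpha, beta, or gamma range.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Phase Lag Index between two equal-length 1-D signals.

    PLI = |mean(sign(sin(phi_x - phi_y)))| over time: 0 means no
    consistent phase lead/lag between the channels, 1 means one
    channel consistently leads (or lags) the other.
    """
    phi_x = np.angle(hilbert(x))  # instantaneous phase via analytic signal
    phi_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phi_x - phi_y))))
```

Computed over all channel pairs, this yields the connectivity matrix from which the graph metrics listed above (clustering coefficient, path length, small-worldness, efficiency, degree) are derived.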
Affiliation(s)
- Luyao Wang
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Wenhui Wang
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Tianyi Yan
- School of Life Science, Beijing Institute of Technology, Beijing, China
- Jiayong Song
- The Affiliated High School of Peking University, Beijing, China
- Weiping Yang
- Department of Psychology, Hubei University, Wuhan, China
- Bin Wang
- College of Computer Science and Technology, Taiyuan University of Technology, Shanxi, China
- Ritsu Go
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- International Joint Research Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing, China
- Qiang Huang
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing, China
- Jinglong Wu
- Intelligent Robotics Institute, School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- International Joint Research Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing, China
4
Shrem T, Murray MM, Deouell LY. Auditory-visual integration modulates location-specific repetition suppression of auditory responses. Psychophysiology 2017; 54:1663-1675. [PMID: 28752567 DOI: 10.1111/psyp.12955]
Abstract
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information.
Affiliation(s)
- Talia Shrem
- Human Cognitive Neuroscience Lab, Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel
- Micah M Murray
- Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology, and Neuropsychology and Neurorehabilitation Service, University Hospital Center and University of Lausanne, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, Tennessee, USA
- Leon Y Deouell
- Human Cognitive Neuroscience Lab, Department of Psychology, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
5
Reichert MS, Symes LB, Höbel G. Lighting up sound preferences: cross-modal influences on the precedence effect in treefrogs. Anim Behav 2016. [DOI: 10.1016/j.anbehav.2016.07.003]
6
Tang X, Wu J, Shen Y. The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 2015; 61:208-24. [PMID: 26546734 DOI: 10.1016/j.neubiorev.2015.11.002]
Abstract
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.
Affiliation(s)
- Xiaoyu Tang
- College of Psychology, Liaoning Normal University, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, 116029, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Jinglong Wu
- Key Laboratory of Biomimetic Robots and System, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Yong Shen
- Neurodegenerative Disease Research Center, School of Life Sciences, University of Science and Technology of China, CAS Key Laboratory of Brain Functions and Disease, Hefei, China; Center for Advanced Therapeutic Strategies for Brain Disorders, Roskamp Institute, Sarasota, FL 34243, USA
7
Sound localization in a changing world. Curr Opin Neurobiol 2015; 35:35-43. [PMID: 26126152 DOI: 10.1016/j.conb.2015.06.005]
Abstract
In natural environments, neural systems must be continuously updated to reflect changes in sensory inputs and behavioral goals. Recent studies of sound localization have shown that adaptation and learning involve multiple mechanisms that operate at different timescales and stages of processing, with other sensory and motor-related inputs playing a key role. We are only just beginning to understand, however, how these processes interact with one another to produce adaptive changes at the level of neuronal populations and behavior. Because there is no explicit map of auditory space in the cortex, studies of sound localization may also provide much broader insight into the plasticity of complex neural representations that are not topographically organized.
8
Brown AD, Stecker GC, Tollin DJ. The precedence effect in sound localization. J Assoc Res Otolaryngol 2015; 16:1-28. [PMID: 25479823 PMCID: PMC4310855 DOI: 10.1007/s10162-014-0496-2]
Abstract
In ordinary listening environments, acoustic signals reaching the ears directly from real sound sources are followed after a few milliseconds by early reflections arriving from nearby surfaces. Early reflections are spectrotemporally similar to their source signals but commonly carry spatial acoustic cues unrelated to the source location. Humans and many other animals, including nonmammalian and even invertebrate animals, are nonetheless able to effectively localize sound sources in such environments, even in the absence of disambiguating visual cues. Robust source localization despite concurrent or nearly concurrent spurious spatial acoustic information is commonly attributed to an assortment of perceptual phenomena collectively termed "the precedence effect," characterizing the perceptual dominance of spatial information carried by the first-arriving signal. Here, we highlight recent progress and changes in the understanding of the precedence effect and related phenomena.
Affiliation(s)
- Andrew D. Brown
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA
- G. Christopher Stecker
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA
- Daniel J. Tollin
- Department of Physiology and Biophysics, University of Colorado School of Medicine, Aurora, CO 80045, USA