1
Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024; 59:1770-1788. [PMID: 38230578] [DOI: 10.1111/ejn.16251]
Abstract
Studies on multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, highlighting the need to extend established models of multisensory causal inference to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
2
Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024; 131:723-737. [PMID: 38416720] [DOI: 10.1152/jn.00451.2023]
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participants' dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.

NEW & NOTEWORTHY Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
3
Soballa P, Frings C, Schmalbrock P, Merz S. Multisensory integration reduces landmark distortions for tactile but not visual targets. J Neurophysiol 2023; 130:1403-1413. [PMID: 37910559] [DOI: 10.1152/jn.00282.2023]
Abstract
Target localization is influenced by the presence of additionally presented nontargets, termed landmarks. In both the visual and the tactile modality, such landmarks lead to systematic distortions of target localization, often resulting in a shift toward the landmark. This shift has been attributed to averaging of the spatial memory of both stimuli. Crucially, everyday experiences often rely on multiple modalities, and multisensory research suggests that inputs from different senses are optimally integrated, not averaged, for accurate perception, resulting in more reliable perception of cross-modal compared with uni-modal stimuli. As this could also lead to a reduced influence of the landmark, we tested whether landmark distortions would be reduced when the landmark was presented in a different modality, or whether they were unaffected by the modalities involved. In two experiments (each n = 30), tactile or visual targets were paired with tactile or visual landmarks. Experiment 1 showed that targets were shifted less toward landmarks from a different than from the same modality, an effect that was more pronounced for tactile than for visual targets. Experiment 2 aimed to replicate this pattern with increased visual uncertainty, to rule out that smaller localization shifts of visual targets due to low uncertainty had driven the results. Still, landmark modality influenced localization shifts for tactile but not visual targets. The data pattern for tactile targets is not in line with memory averaging but seems to reflect multisensory integration, whereas visual targets were less prone to landmark distortions and do not appear to benefit from multisensory integration.

NEW & NOTEWORTHY In the present study, we directly tested the predictions of two different accounts, spatial memory averaging and multisensory integration, concerning the degree of landmark distortions of targets across modalities. We showed that landmark distortions were reduced across modalities compared with distortions within modalities, in line with multisensory integration. Crucially, this pattern was more pronounced for tactile than for visual targets.
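The "optimal integration" this abstract appeals to is commonly formalized as inverse-variance (reliability) weighting of independent Gaussian cues. The following is a minimal sketch under that assumption; the function name and all numeric values are illustrative, not taken from the study:

```python
# Hedged sketch of reliability-weighted (inverse-variance) cue combination,
# the standard formalization of "optimal" multisensory integration.
# All numbers below are illustrative assumptions, not data from the paper.

def integrate(x_a, var_a, x_b, var_b):
    """Combine two independent Gaussian cue estimates.

    Weights are proportional to reliability (1/variance); the combined
    variance is lower than either unimodal variance, which is why a
    cross-modal estimate can be more reliable than a unimodal one.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    x_hat = w_a * x_a + (1 - w_a) * x_b
    var_hat = 1 / (1 / var_a + 1 / var_b)
    return x_hat, var_hat

# Example: a target at 0 cm combined with a less reliable landmark cue at 2 cm.
x_hat, var_hat = integrate(x_a=0.0, var_a=1.0, x_b=2.0, var_b=4.0)
# The estimate is pulled only partway toward the less reliable cue, and the
# combined variance falls below both unimodal variances.
```

Under this scheme, a low-reliability landmark in another modality exerts only a weak pull on the target estimate, which is one way to motivate the reduced cross-modal landmark distortions reported above.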
Affiliation(s)
- Paula Soballa
- Department of Psychology, University of Trier, Germany
- Simon Merz
- Department of Psychology, University of Trier, Germany
4
O'Donohue M, Lacherez P, Yamamoto N. Audiovisual spatial ventriloquism is reduced in musicians. Hear Res 2023; 440:108918. [PMID: 37992516] [DOI: 10.1016/j.heares.2023.108918]
Abstract
There is great scientific and public interest in claims that musical training improves general cognitive and perceptual abilities. While this is controversial, recent and rather convincing evidence suggests that musical training refines the temporal integration of auditory and visual stimuli at a general level. We investigated whether musical training also affects integration in the spatial domain, via an auditory localisation experiment that measured ventriloquism (where localisation is biased towards visual stimuli on audiovisual trials) and recalibration (a unimodal localisation aftereffect). While musicians (n = 22) and non-musicians (n = 22) did not have significantly different unimodal precision or accuracy, musicians were significantly less susceptible than non-musicians to ventriloquism, with large effect sizes. We replicated these results in another experiment with an independent sample of 24 musicians and 21 non-musicians. Across both experiments, spatial recalibration did not significantly differ between the groups even though musicians resisted ventriloquism. Our results suggest that the multisensory expertise afforded by musical training refines spatial integration, a process that underpins multisensory perception.
Affiliation(s)
- Matthew O'Donohue
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Philippe Lacherez
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Naohide Yamamoto
- Queensland University of Technology (QUT), School of Psychology and Counselling, Kelvin Grove, QLD 4059, Australia
- Queensland University of Technology (QUT), Centre for Vision and Eye Research, Kelvin Grove, QLD 4059, Australia
5
Debats NB, Heuer H, Kayser C. Different time scales of common-cause evidence shape multisensory integration, recalibration and motor adaptation. Eur J Neurosci 2023; 58:3253-3269. [PMID: 37461244] [DOI: 10.1111/ejn.16095]
Abstract
Perceptual coherence in the face of discrepant multisensory signals is achieved via the processes of multisensory integration, recalibration and sometimes motor adaptation. These supposedly operate on different time scales, with integration reducing immediate sensory discrepancies and recalibration and motor adaptation reflecting the cumulative influence of their recent history. Importantly, whether discrepant signals are bound during perception is guided by the brain's inference of whether they originate from a common cause. When combined, these two notions lead to the hypothesis that the time scales on which integration and recalibration (or motor adaptation) operate are associated with different time scales of evidence about a common cause underlying two signals. We tested this prediction in a well-established visuo-motor paradigm, in which human participants performed visually guided hand movements. The kinematic correlation between hand and cursor movements indicates their common origin, which allowed us to manipulate the common-cause evidence by titrating this correlation. Specifically, we dissociated hand and cursor signals during individual movements while preserving their correlation across the series of movement endpoints. Following our hypothesis, this manipulation reduced integration compared with a condition in which visual and proprioceptive signals were perfectly correlated. In contrast, recalibration and motor adaptation were not affected by this manipulation. This supports the notion that multisensory integration and recalibration deal with sensory discrepancies on different time scales guided by common-cause evidence: Integration is prompted by local common-cause evidence and reduces immediate discrepancies, whereas recalibration and motor adaptation are prompted by global common-cause evidence and reduce persistent discrepancies.
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
6
Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023; 18:e0290461. [PMID: 37607201] [PMCID: PMC10443876] [DOI: 10.1371/journal.pone.0290461]
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this: that both processes should also scale similarly with the history of multisensory discrepancies, i.e. the sequence of discrepancies in several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, it also scaled with the discrepancies presented in at least three preceding audio-visual trials. However, the ventriloquism bias did not depend on this three-trial history of multisensory discrepancies and also did not depend on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
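The dissociation reported here (the ventriloquism bias tracking only the current discrepancy, while the aftereffect accumulates over preceding trials) can be caricatured in a few lines. This is a hedged toy sketch, not the authors' analysis model; the gain and forgetting parameters are illustrative assumptions:

```python
# Toy model of the reported dissociation: the ventriloquism bias depends only
# on the current audio-visual discrepancy, while the aftereffect leakily
# accumulates the recent history of discrepancies.
# Gains (w_int, gain_re) and the forgetting rate are illustrative assumptions.

def simulate(discrepancies, w_int=0.5, gain_re=0.2, forget=0.5):
    """Return per-trial (ventriloquism_bias, aftereffect) pairs."""
    aftereffect = 0.0
    out = []
    for d in discrepancies:
        bias = w_int * d                                   # current trial only
        aftereffect = forget * aftereffect + gain_re * d   # leaky accumulation
        out.append((bias, aftereffect))
    return out

trials = simulate([10, 10, 10, -10])
# The bias flips sign immediately with the discrepancy, whereas the
# aftereffect still carries part of the preceding trials' history.
```

In such a scheme the bias is history-free by construction, while the aftereffect reflects a weighted sum over the last few trials, mirroring the roughly three-trial dependence described in the abstract.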
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
7
Musical training refines audiovisual integration but does not influence temporal recalibration. Sci Rep 2022; 12:15292. [PMID: 36097277] [PMCID: PMC9468170] [DOI: 10.1038/s41598-022-19665-9]
Abstract
When the brain is exposed to a temporal asynchrony between the senses, it will shift its perception of simultaneity towards the previously experienced asynchrony (temporal recalibration). It is unknown whether recalibration depends on how accurately an individual integrates multisensory cues or on experiences they have had over their lifespan. Hence, we assessed whether musical training modulated audiovisual temporal recalibration. Musicians (n = 20) and non-musicians (n = 18) made simultaneity judgements to flash-tone stimuli before and after adaptation to asynchronous (± 200 ms) flash-tone stimuli. We analysed these judgements via an observer model that described the left and right boundaries of the temporal integration window (decisional criteria) and the amount of sensory noise that affected these judgements. Musicians’ boundaries were narrower (closer to true simultaneity) than non-musicians’, indicating stricter criteria for temporal integration, and they also exhibited enhanced sensory precision. However, while both musicians and non-musicians experienced cumulative and rapid recalibration, these recalibration effects did not differ between the groups. Unexpectedly, cumulative recalibration was caused by auditory-leading but not visual-leading adaptation. Overall, these findings suggest that the precision with which observers perceptually integrate audiovisual temporal cues does not predict their susceptibility to recalibration.
8
Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. [PMID: 34780469] [PMCID: PMC8629398] [DOI: 10.1371/journal.pcbi.1008877]
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model is also able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
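The causal-inference account compared above can be made concrete. The following sketch follows the standard Koerding-et-al.-style audiovisual formulation rather than the authors' own code, and the spatial prior (centered at 0) together with all noise and prior-probability parameters are illustrative assumptions. It computes the posterior probability of a common cause and the resulting model-averaged auditory estimate:

```python
import math

# Hedged sketch of Bayesian causal inference for one audiovisual trial.
# Latent locations are drawn from a prior N(0, s_p**2); x_a and x_v are noisy
# auditory and visual measurements. All parameter values are illustrative.

def posterior_common(x_a, x_v, s_a, s_v, s_p, p_common):
    """P(C=1 | x_a, x_v): probability that both cues share one source."""
    va, vv, vp = s_a**2, s_v**2, s_p**2
    # Likelihood under a common cause (one latent location, integrated out)
    denom1 = va * vv + va * vp + vv * vp
    l1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                  / denom1) / (2 * math.pi * math.sqrt(denom1))
    # Likelihood under independent causes (two latent locations)
    l2 = math.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
        / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))
    return l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

def auditory_estimate(x_a, x_v, s_a, s_v, s_p, p_common):
    """Model-averaged auditory location estimate (prior mean = 0)."""
    va, vv, vp = s_a**2, s_v**2, s_p**2
    pc = posterior_common(x_a, x_v, s_a, s_v, s_p, p_common)
    s_c1 = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # fused
    s_c2 = (x_a / va) / (1 / va + 1 / vp)                      # segregated
    return pc * s_c1 + (1 - pc) * s_c2

# A nearby visual cue pulls the auditory estimate strongly; a distant one,
# judged unlikely to share a cause, barely moves it.
near = auditory_estimate(x_a=0.0, x_v=5.0, s_a=4.0, s_v=1.0, s_p=20.0, p_common=0.5)
far = auditory_estimate(x_a=0.0, x_v=40.0, s_a=4.0, s_v=1.0, s_p=20.0, p_common=0.5)
```

Because the perceptual estimate (not the raw measurement) drives recalibration in the causal-inference account, cue reliability influences recalibration only indirectly, through this inference, which is the mechanism the abstract argues for.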
Affiliation(s)
- Fangfang Hong
- Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy
- Department of Psychology, New York University, New York City, New York, United States of America
- Center for Neural Science, New York University, New York City, New York, United States of America
9
Hládek Ľ, Seitz AR, Kopčo N. Auditory-visual interactions in egocentric distance perception: Ventriloquism effect and aftereffect. J Acoust Soc Am 2021; 150:3593. [PMID: 34852598] [DOI: 10.1121/10.0007066]
Abstract
This study describes data on auditory-visual integration and visually-guided adaptation of auditory distance perception using the ventriloquism effect (VE) and ventriloquism aftereffect (VAE). In an experiment, participants judged egocentric distance of interleaved auditory or auditory-visual stimuli with the auditory component located from 0.7 to 2.04 m in front of listeners in a real reverberant environment. The visual component of auditory-visual stimuli was displaced 30% closer (V-closer), 30% farther (V-farther), or aligned (V-aligned) with respect to the auditory component. The VE and VAE were measured in auditory and auditory-visual trials, respectively. Both effects were approximately independent of target distance when expressed in logarithmic units. The VE strength, defined as the difference of V-misaligned and V-aligned response bias, was approximately 72% of the auditory-visual disparity regardless of the visual-displacement direction, while the VAE was stronger in the V-farther (44%) than the V-closer (31%) condition. The VAE persisted into post-adaptation auditory-only blocks of trials, although it was diminished. The rates of build-up/break-down of the VAE were asymmetrical, with slower adaptation in the V-closer condition. These results suggest that auditory-visual distance integration is independent of the direction of induced shift, while the recalibration is stronger and faster when evoked by more distant visual stimuli.
Affiliation(s)
- Ľuboš Hládek
- Institute of Computer Science, P. J. Šafárik University, Jesenná 5, Košice, 040 01, Slovakia
- Aaron R Seitz
- Department of Psychology, University of California Riverside, 900 University Avenue, Riverside, California 92521, USA
- Norbert Kopčo
- Institute of Computer Science, P. J. Šafárik University, Jesenná 5, Košice, 040 01, Slovakia
10
Abstract
Adaptive behavior in a complex, dynamic, and multisensory world poses some of the most fundamental computational challenges for the brain, notably inference, decision-making, learning, binding, and attention. We first discuss how the brain integrates sensory signals from the same source to support perceptual inference and decision-making by weighting them according to their momentary sensory uncertainties. We then show how observers solve the binding or causal inference problem-deciding whether signals come from common causes and should hence be integrated or else be treated independently. Next, we describe the multifarious interplay between multisensory processing and attention. We argue that attentional mechanisms are crucial to compute approximate solutions to the binding problem in naturalistic environments when complex time-varying signals arise from myriad causes. Finally, we review how the brain dynamically adapts multisensory processing to a changing world across multiple timescales.
Affiliation(s)
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
11
The Neurophysiological Basis of the Trial-Wise and Cumulative Ventriloquism Aftereffects. J Neurosci 2021; 41:1068-1079. [PMID: 33273069] [PMCID: PMC7880291] [DOI: 10.1523/jneurosci.2091-20.2020]
Abstract
Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes.

SIGNIFICANCE STATEMENT Our brain easily reconciles conflicting multisensory information, such as seeing an actress on screen while hearing her voice over headphones. These adaptive mechanisms exert a persistent influence on the perception of subsequent unisensory stimuli, known as the ventriloquism aftereffect. While this aftereffect emerges following trial-wise or cumulative exposure to multisensory discrepancies, it remained unclear whether both arise from a common neural substrate. We here test this hypothesis using human electroencephalography recordings. Our data suggest that parietal regions involved in multisensory and spatial memory mediate the aftereffect following both trial-wise and cumulative adaptation, but also show that additional and distinct processes are involved in consolidating and implementing the aftereffect following prolonged exposure.
12
Park H, Kayser C. Robust spatial ventriloquism effect and trial-by-trial aftereffect under memory interference. Sci Rep 2020; 10:20826. [PMID: 33257687] [PMCID: PMC7705722] [DOI: 10.1038/s41598-020-77730-7]
Abstract
Our brain adapts to discrepancies in the sensory inputs. One example is provided by the ventriloquism effect, experienced when the sight and sound of an object are displaced. Here the discrepant multisensory stimuli not only result in a biased localization of the sound, but also recalibrate the perception of subsequent unisensory acoustic information in the so-called ventriloquism aftereffect. This aftereffect has been linked to memory-related processes based on its parallels to general sequential effects in perceptual decision making experiments and insights obtained in neuroimaging studies. For example, we have recently implied memory-related medial parietal regions in the trial-by-trial ventriloquism aftereffect. Here, we tested the hypothesis that the trial-by-trial (or immediate) ventriloquism aftereffect is indeed susceptible to manipulations interfering with working memory. Across three experiments we systematically manipulated the temporal delays between stimuli and response for either the ventriloquism or the aftereffect trials, or added a sensory-motor masking trial in between. Our data reveal no significant impact of either of these manipulations on the aftereffect, suggesting that the recalibration reflected by the trial-by-trial ventriloquism aftereffect is surprisingly resilient to manipulations interfering with memory-related processes.
Affiliation(s)
- Hame Park
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1, 33615, Bielefeld, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615, Bielefeld, Germany
- Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Inspiration 1, 33615, Bielefeld, Germany
13
Effects of prism adaptation on auditory spatial attention in patients with left unilateral spatial neglect: a non-randomized pilot trial. Int J Rehabil Res 2020; 43:228-234. [PMID: 32776764] [DOI: 10.1097/mrr.0000000000000413]
Abstract
A short period of adaptation to a prismatic shift of the visual field to the right briefly but significantly improves left unilateral spatial neglect. Additionally, prism adaptation affects multiple modalities, including processes of vision, auditory spatial attention, and sound localization. This non-randomized, single-center, controlled trial aimed to examine the immediate effects of prism adaptation on the sound-localization abilities of patients with left unilateral spatial neglect using a simple source localization test. Subjects were divided by self-allocation into a prism-adaptation group (n = 11) and a control group (n = 12). At baseline, patients with left unilateral spatial neglect showed a rightward deviation tendency in the left space. This tendency to right-sided bias in the left space was attenuated after prism adaptation. However, no changes were observed in the right space of patients with left unilateral spatial neglect after prism adaptation, or in the control group. Our results suggest that prism adaptation improves not only vision and proprioception but also auditory attention in the left space of patients with left unilateral spatial neglect. Our findings demonstrate that a single session of prism adaptation can significantly improve sound localization in patients with left unilateral spatial neglect. However, in this study, it was not possible to accurately determine whether the mechanism was a chronic change in head orientation or a readjustment of the spatial representation of the brain; thus, further studies need to be considered.
|
14
|
Alpha Activity Reflects the Magnitude of an Individual Bias in Human Perception. J Neurosci 2020; 40:3443-3454. [PMID: 32179571 DOI: 10.1523/jneurosci.2359-19.2020] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 02/05/2020] [Accepted: 02/07/2020] [Indexed: 01/28/2023] Open
Abstract
Biases in sensory perception can arise from both experimental manipulations and personal trait-like features. These idiosyncratic biases and their neural underpinnings are often overlooked in studies on the physiology underlying perception. A candidate mechanism reflecting such idiosyncratic biases is spontaneous alpha band activity, a prominent brain rhythm known to influence perceptual reports in general. Using a temporal order judgment task, we here tested the hypothesis that alpha power reflects the overcoming of an idiosyncratic bias. Importantly, to understand the interplay between idiosyncratic biases and contextual (temporary) biases induced by experimental manipulations, we quantified this relation before and after temporal recalibration. Using EEG recordings in human participants (male and female), we find that prestimulus frontal alpha power correlates with the tendency to respond in line with one's own idiosyncratic bias, with stronger alpha leading to responses matching the bias. In contrast, alpha power does not predict response correctness. These results also hold after temporal recalibration and are specific to the alpha band, suggesting that alpha band activity reflects, directly or indirectly, processes that help to overcome an individual's momentary bias in perception. We propose that, combined with the established role of parietal alpha in the encoding of sensory information, frontal alpha reflects complementary mechanisms influencing perceptual decisions.
SIGNIFICANCE STATEMENT The brain is a biased organ, frequently generating systematically distorted percepts of the world, leading each of us to evolve in our own subjective reality. However, such biases are often overlooked or considered noise when studying the neural mechanisms underlying perception. We show that spontaneous alpha band activity predicts the degree of bias in human choices in a time perception task, suggesting that alpha activity indexes processes needed to overcome an individual's idiosyncratic bias. This result provides a window onto the neural underpinnings of subjective perception, and offers the possibility to quantify or manipulate such priors in future studies.
|
15
|
Badde S, Navarro KT, Landy MS. Modality-specific attention attenuates visual-tactile integration and recalibration effects by reducing prior expectations of a common source for vision and touch. Cognition 2020; 197:104170. [PMID: 32036027 DOI: 10.1016/j.cognition.2019.104170] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2019] [Revised: 12/19/2019] [Accepted: 12/20/2019] [Indexed: 10/25/2022]
Abstract
At any moment in time, streams of information reach the brain through the different senses. Given this wealth of noisy information, it is essential that we select information of relevance - a function fulfilled by attention - and infer its causal structure to eventually take advantage of redundancies across the senses. Yet, the role of selective attention during causal inference in cross-modal perception is unknown. We tested experimentally whether the distribution of attention across vision and touch enhances cross-modal spatial integration (visual-tactile ventriloquism effect, Expt. 1) and recalibration (visual-tactile ventriloquism aftereffect, Expt. 2) compared to modality-specific attention, and then used causal-inference modeling to isolate the mechanisms behind the attentional modulation. In both experiments, we found stronger effects of vision on touch under distributed than under modality-specific attention. Model comparison confirmed that participants used Bayes-optimal causal inference to localize visual and tactile stimuli presented as part of a visual-tactile stimulus pair, whereas simultaneously collected unity judgments - indicating whether the visual-tactile pair was perceived as spatially-aligned - relied on a sub-optimal heuristic. The best-fitting model revealed that attention modulated sensory and cognitive components of causal inference. First, distributed attention led to an increase of sensory noise compared to selective attention toward one modality. Second, attending to both modalities strengthened the stimulus-independent expectation that the two signals belong together, the prior probability of a common source for vision and touch. Yet, only the increase in the expectation of vision and touch sharing a common source was able to explain the observed enhancement of visual-tactile integration and recalibration effects with distributed attention. 
In contrast, the change in sensory noise explained only a fraction of the observed enhancements, as its consequences vary with the overall level of noise and stimulus congruency. Increased sensory noise leads to enhanced integration effects for visual-tactile pairs with a large spatial discrepancy, but reduced integration effects for stimuli with a small or no cross-modal discrepancy. In sum, our study indicates a weak a priori association between visual and tactile spatial signals that can be strengthened by distributing attention across both modalities.
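The Bayes-optimal causal inference described above can be made concrete with a minimal single-trial sketch in the standard Gaussian formulation (model averaging over the common-source and independent-source structures). All numeric choices are illustrative assumptions, not the parameters or the exact model variant fitted by Badde and colleagues.

```python
import math

def causal_inference_estimate(x_v, x_t, sigma_v, sigma_t, sigma_p, p_common):
    """Model-averaged Bayesian causal inference for one visual-tactile trial.

    x_v, x_t -- noisy visual and tactile position measurements
    sigma_v, sigma_t -- sensory noise (std. dev.) of each modality
    sigma_p -- std. dev. of a zero-mean spatial prior
    p_common -- prior probability that both signals share one source
    Returns the predicted tactile-location report.
    """
    var_v, var_t, var_p = sigma_v ** 2, sigma_t ** 2, sigma_p ** 2

    # Likelihood of the two measurements under a common source (C = 1).
    det_c1 = var_v * var_t + var_v * var_p + var_t * var_p
    like_c1 = (math.exp(-0.5 * ((x_v - x_t) ** 2 * var_p
                                + x_v ** 2 * var_t
                                + x_t ** 2 * var_v) / det_c1)
               / (2 * math.pi * math.sqrt(det_c1)))

    # Likelihood under two independent sources (C = 2).
    det_c2 = (var_v + var_p) * (var_t + var_p)
    like_c2 = (math.exp(-0.5 * (x_v ** 2 / (var_v + var_p)
                                + x_t ** 2 / (var_t + var_p)))
               / (2 * math.pi * math.sqrt(det_c2)))

    # Posterior probability of the common-source structure (Bayes' rule).
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Reliability-weighted location estimates under each causal structure.
    s_t_c1 = (x_v / var_v + x_t / var_t) / (1 / var_v + 1 / var_t + 1 / var_p)
    s_t_c2 = (x_t / var_t) / (1 / var_t + 1 / var_p)

    # Model averaging: weight each estimate by its structure's posterior.
    return post_c1 * s_t_c1 + (1 - post_c1) * s_t_c2
```

With p_common = 0 the model reduces to pure segregation, so vision cannot bias touch; raising p_common strengthens the visual pull on the tactile estimate, which is the kind of change the study attributes to distributed attention.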
Affiliation(s)
- Stephanie Badde
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
- Karen T Navarro
- Department of Psychology, University of Minnesota, 75 E River Rd., Minneapolis, MN, 55455, USA
- Michael S Landy
- Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY, 10003, USA
|
16
|
Bruns P. The Ventriloquist Illusion as a Tool to Study Multisensory Processing: An Update. Front Integr Neurosci 2019; 13:51. [PMID: 31572136 PMCID: PMC6751356 DOI: 10.3389/fnint.2019.00051] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Accepted: 08/22/2019] [Indexed: 12/02/2022] Open
Abstract
Ventriloquism, the illusion that a voice appears to come from the moving mouth of a puppet rather than from the actual speaker, is one of the classic examples of multisensory processing. In the laboratory, this illusion can be reliably induced by presenting simple meaningless audiovisual stimuli with a spatial discrepancy between the auditory and visual components. Typically, the perceived location of the sound source is biased toward the location of the visual stimulus (the ventriloquism effect). The strength of the visual bias reflects the relative reliability of the visual and auditory inputs as well as prior expectations that the two stimuli originated from the same source. In addition to the ventriloquist illusion, exposure to spatially discrepant audiovisual stimuli results in a subsequent recalibration of unisensory auditory localization (the ventriloquism aftereffect). In the past years, the ventriloquism effect and aftereffect have seen a resurgence as an experimental tool to elucidate basic mechanisms of multisensory integration and learning. For example, recent studies have: (a) revealed top-down influences from the reward and motor systems on cross-modal binding; (b) dissociated recalibration processes operating at different time scales; and (c) identified brain networks involved in the neuronal computations underlying multisensory integration and learning. This mini review article provides a brief overview of established experimental paradigms to measure the ventriloquism effect and aftereffect before summarizing these pathbreaking new advancements. Finally, it is pointed out how the ventriloquism effect and aftereffect could be utilized to address some of the current open questions in the field of multisensory research.
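The reliability-weighting described above corresponds, in the special case where a single common source is assumed, to maximum-likelihood cue fusion: each cue is weighted by its inverse variance. A minimal sketch (the stimulus positions and noise levels below are made up for illustration):

```python
def ventriloquism_bias(s_a, s_v, sigma_a, sigma_v):
    """Reliability-weighted audio-visual fusion (forced common source).

    Returns the predicted perceived sound location: a weighted average of
    the auditory (s_a) and visual (s_v) positions, with weights given by
    the inverse variance (reliability) of each modality.
    """
    w_v = (1 / sigma_v ** 2) / (1 / sigma_v ** 2 + 1 / sigma_a ** 2)
    return (1 - w_v) * s_a + w_v * s_v

# Vision is usually the more reliable spatial sense, so the percept is
# pulled most of the way toward the visual stimulus.
print(ventriloquism_bias(s_a=0.0, s_v=10.0, sigma_a=4.0, sigma_v=1.0))
```

Swapping the noise levels reverses the direction of dominance, which is the classic signature of reliability-weighted integration.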
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
|
17
|
Park H, Kayser C. Shared neural underpinnings of multisensory integration and trial-by-trial perceptual recalibration in humans. eLife 2019; 8:47001. [PMID: 31246172 PMCID: PMC6660215 DOI: 10.7554/elife.47001] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Accepted: 06/26/2019] [Indexed: 01/05/2023] Open
Abstract
Perception adapts to mismatching multisensory information, both when different cues appear simultaneously and when they appear sequentially. While both multisensory integration and adaptive trial-by-trial recalibration are central for behavior, it remains unknown whether they are mechanistically linked and arise from a common neural substrate. To relate the neural underpinnings of sensory integration and recalibration, we measured whole-brain magnetoencephalography while human participants performed an audio-visual ventriloquist task. Using single-trial multivariate analysis, we localized the perceptually-relevant encoding of multisensory information within and between trials. While we found neural signatures of multisensory integration within temporal and parietal regions, only medial superior parietal activity encoded past and current sensory information and mediated the perceptual recalibration within and between trials. These results highlight a common neural substrate of sensory integration and perceptual recalibration, and reveal a role of medial parietal regions in linking present and previous multisensory evidence to guide adaptive behavior.

A good ventriloquist will make their audience experience an illusion. The speech the spectators hear appears to come from the mouth of the puppet and not from the puppeteer. Moviegoers experience the same illusion: they perceive dialogue as coming from the mouths of the actors on screen, rather than from the loudspeakers mounted on the walls. Known as the ventriloquist effect, this ‘trick’ exists because the brain assumes that sights and sounds which occur at the same time have the same origin, and it therefore combines the two sets of sensory stimuli. A version of the ventriloquist effect can be induced in the laboratory. Participants hear a sound while watching a simple visual stimulus (for instance, a circle) appear on a screen. When asked to pinpoint the origin of the noise, volunteers choose a location shifted towards the circle, even if this was not where the sound came from. In addition, this error persists when the visual stimulus is no longer present: if a standard trial is followed by a trial that features a sound but no circle, participants perceive the sound in the second test as ‘drawn’ towards the direction of the former shift. This is known as the ventriloquist aftereffect. By scanning the brains of healthy volunteers performing this task, Park and Kayser show that a number of brain areas contribute to the ventriloquist effect. All of these regions help to combine what we see with what we hear, but only one maintains representations of the combined sensory inputs over time. Called the medial superior parietal cortex, this area is unique in contributing to both the ventriloquist effect and its aftereffect. We must constantly use past and current sensory information to adapt our behavior to the environment. The results by Park and Kayser shed light on the brain structures that underpin our capacity to combine information from several senses, as well as our ability to encode memories. Such knowledge should be useful to explore how we can make flexible decisions.
Affiliation(s)
- Hame Park
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
|
18
|
Distinct mechanisms govern recalibration to audio-visual discrepancies in remote and recent history. Sci Rep 2019; 9:8513. [PMID: 31186503 PMCID: PMC6559981 DOI: 10.1038/s41598-019-44984-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2018] [Accepted: 05/28/2019] [Indexed: 11/08/2022] Open
Abstract
To maintain perceptual coherence, the brain corrects for discrepancies between the senses. If, for example, lights are consistently offset from sounds, representations of auditory space are remapped to reduce this error (spatial recalibration). While recalibration effects have been observed following both brief and prolonged periods of adaptation, the relative contribution of discrepancies occurring over these timescales is unknown. Here we show that distinct multisensory recalibration mechanisms operate in remote and recent history. To characterise the dynamics of this spatial recalibration, we adapted human participants to audio-visual discrepancies for different durations, from 32 to 256 seconds, and measured the aftereffects on perceived auditory location. Recalibration effects saturated rapidly but decayed slowly, suggesting a combination of transient and sustained adaptation mechanisms. When long-term adaptation to an audio-visual discrepancy was immediately followed by a brief period of de-adaptation to an opposing discrepancy, recalibration was initially cancelled but subsequently reappeared with further testing. These dynamics were best fit by a multiple-exponential model that monitored audio-visual discrepancies over distinct timescales. Recent and remote recalibration mechanisms enable the brain to balance rapid adaptive changes to transient discrepancies that should be quickly forgotten against slower adaptive changes to persistent discrepancies likely to be more permanent.
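One way to make the "distinct timescales" idea concrete is a small bank of leaky integrators, each accumulating the current audio-visual discrepancy with its own gain and forgetting it with its own time constant. This is only a sketch of the model class the study fits (a multiple-exponential model); the gains and time constants below are illustrative assumptions, not the fitted values.

```python
import math

def recalibration_trace(disparities, dt=1.0,
                        taus=(4.0, 120.0), gains=(0.08, 0.01)):
    """Aftereffect predicted by a bank of leaky integrators.

    Each unit accumulates the momentary audio-visual disparity with its
    own gain and decays exponentially with its own time constant (in
    seconds). Returns the summed state after each sample.
    """
    states = [0.0] * len(taus)
    trace = []
    for d in disparities:
        for i, (tau, g) in enumerate(zip(taus, gains)):
            decay = math.exp(-dt / tau)            # exponential forgetting
            states[i] = states[i] * decay + g * d  # accumulate discrepancy
        trace.append(sum(states))
    return trace

# 128 s of a +8 deg disparity followed by 64 s without one: the fast unit
# saturates quickly and then decays quickly, while the slow unit builds up
# gradually and leaves a residual shift after exposure ends.
adapt = recalibration_trace([8.0] * 128 + [0.0] * 64)
```

The qualitative behavior matches the pattern reported here: rapid saturation during exposure, slow decay afterwards, and a residual shift carried by the slow component.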
|
19
|
Xu J, Bi T, Wu J, Meng F, Wang K, Hu J, Han X, Zhang J, Zhou X, Keniston L, Yu L. Spatial receptive field shift by preceding cross-modal stimulation in the cat superior colliculus. J Physiol 2018; 596:5033-5050. [PMID: 30144059 DOI: 10.1113/jp275427] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 08/21/2018] [Indexed: 12/11/2022] Open
Abstract
KEY POINTS
- It has been known for some time that sensory information of one type can bias the spatial perception of another modality; however, there has been a lack of evidence of this occurring in individual neurons.
- In the present study, we found that the spatial receptive field of superior colliculus multisensory neurons could be dynamically shifted by a preceding stimulus in a different modality.
- The extent to which the receptive field shifted depended on both the temporal and spatial gaps between the preceding and following stimuli, as well as on the salience of the preceding stimulus.
- This result provides a neural mechanism that could underlie cross-modal spatial calibration.

ABSTRACT
Psychophysical studies have shown that the different senses can be spatially entrained by each other. This can be observed in certain phenomena, such as ventriloquism, in which a visual stimulus can attract the perceived location of a spatially discordant sound. However, the neural mechanism underlying this cross-modal spatial recalibration has remained unclear, as has whether it takes place dynamically. We explored these issues in multisensory neurons of the cat superior colliculus (SC), a midbrain structure involved in both cross-modal and sensorimotor integration. Sequential cross-modal stimulation showed that a preceding stimulus can shift the receptive field (RF) of the lagging response. This cross-modal spatial calibration took place in both auditory and visual RFs, although auditory RFs shifted slightly more. By contrast, a preceding stimulus from the same modality failed to induce a similarly substantial RF shift. The extent of the RF shift depended on both the temporal and spatial gaps between the preceding and following stimuli, as well as on the salience of the preceding stimulus: a narrow time gap and high stimulus salience induced larger RF shifts. In addition, when visual and auditory stimuli were presented simultaneously, a substantial RF shift toward the location-fixed stimulus was also induced. Taken together, these results reveal an online cross-modal process and detail the organization of inter-sensory spatial calibration in the SC.
Affiliation(s)
- Jinghong Xu, Tingting Bi, Jing Wu, Fanzhu Meng, Kun Wang, Jiawei Hu, Xiao Han, Jiping Zhang, Xiaoming Zhou, Liping Yu
- Key Laboratory of Brain Functional Genomics (East China Normal University), Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics (East China Normal University), School of Life Science, East China Normal University, Shanghai, China
- Les Keniston
- Department of Physical Therapy, University of Maryland Eastern Shore, Princess Anne, MD, USA
|
20
|
Bosen AK, Fleming JT, Allen PD, O’Neill WE, Paige GD. Multiple time scales of the ventriloquism aftereffect. PLoS One 2018; 13:e0200930. [PMID: 30067790 PMCID: PMC6070234 DOI: 10.1371/journal.pone.0200930] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Accepted: 07/05/2018] [Indexed: 11/18/2022] Open
Abstract
The ventriloquism aftereffect (VAE) refers to a shift in auditory spatial perception following exposure to a spatial disparity between auditory and visual stimuli. The VAE has previously been measured on two distinct time scales: hundreds or thousands of exposures to an audio-visual spatial disparity produce an enduring VAE that persists after exposure ceases, whereas exposure to a single audio-visual spatial disparity produces an immediate VAE that decays over seconds. To determine whether these phenomena are two extremes of a continuum or represent distinct processes, we conducted an experiment with normal-hearing listeners that measured the VAE in response to a repeated, constant audio-visual disparity sequence, both immediately after exposure to each audio-visual disparity and after the end of the sequence. In each experimental session, subjects were exposed to sequences of auditory and visual targets that were constantly offset by +8° or −8° in azimuth from one another, and then localized auditory targets presented in isolation following each sequence. Eye position was controlled throughout the experiment to avoid the effects of gaze on auditory localization. In contrast to other studies that did not control eye position, we found both a large shift in auditory perception that decayed rapidly after each AV disparity exposure and a gradual shift in auditory perception that grew over time and persisted after exposure to the AV disparity ceased. We modeled the temporal and spatial properties of the measured auditory shifts using grey-box nonlinear system identification and found that two models could explain the data equally well. In the power model, the temporal decay of the ventriloquism aftereffect follows a power-law relationship, producing an initial rapid drop in auditory shift followed by a long tail that accumulates with repeated exposure to audio-visual disparity.
In the double exponential model, two separate processes were required to explain the data, one which accumulated and decayed exponentially and the other which slowly integrated over time. Both models fit the data best when the spatial spread of the ventriloquism aftereffect was limited to a window around the location of the audio-visual disparity. We directly compare the predictions made by each model, and suggest additional measurements that could help distinguish which model best describes the mechanisms underlying the VAE.
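The two competing decay shapes can be compared directly in a few lines. All parameter values below are made up for illustration; they are not the values estimated in the study.

```python
import math

# Two candidate shapes for how a single-exposure ventriloquism aftereffect
# decays with elapsed time t (arbitrary units).

def power_decay(t, a=1.0, b=0.7):
    """Power-law decay: a single process whose influence fades as t^-b,
    giving a rapid initial drop followed by a long, heavy tail."""
    return a * (1.0 + t) ** (-b)

def double_exp_decay(t, a_fast=0.8, tau_fast=2.0, a_slow=0.2, tau_slow=200.0):
    """Sum of a fast transient and a slow, near-persistent component."""
    return (a_fast * math.exp(-t / tau_fast)
            + a_slow * math.exp(-t / tau_slow))

# Both curves start near 1 and drop quickly, but they diverge at long
# delays, which is why the two models make different predictions about
# how repeated exposures accumulate over a session.
for t in (0, 1, 10, 100):
    print(t, round(power_decay(t), 3), round(double_exp_decay(t), 3))
```

With these illustrative parameters the double-exponential curve retains more shift at long delays than the power law; which form wins depends on the fitted parameters, matching the paper's point that further measurements are needed to distinguish the two.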
Affiliation(s)
- Adam K. Bosen
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States of America
- Justin T. Fleming
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- Paul D. Allen
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- William E. O’Neill
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- Gary D. Paige
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
|
21
|
Trial by trial dependencies in multisensory perception and their correlates in dynamic brain activity. Sci Rep 2018; 8:3742. [PMID: 29487374 PMCID: PMC5829215 DOI: 10.1038/s41598-018-22137-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Accepted: 02/16/2018] [Indexed: 11/21/2022] Open
Abstract
A well-known effect in multisensory perception is that congruent information received by different senses usually leads to faster and more accurate responses. Less well understood are trial-by-trial interactions, whereby the multisensory composition of stimuli experienced during previous trials shapes performance during a subsequent trial. We here exploit the analogy of multisensory paradigms with classical flanker tasks to investigate the neural correlates underlying trial-by-trial interactions of multisensory congruency. Studying an audio-visual motion task, we demonstrate that congruency benefits for accuracy and reaction times are reduced following an audio-visual incongruent compared to a congruent preceding trial. Using single trial analysis of motion-sensitive EEG components we then localize current-trial and serial interaction effects within distinct brain regions: while the multisensory congruency experienced during the current trial influences the encoding of task-relevant information in sensory-specific brain regions, the serial interaction arises from task-relevant processes within the inferior frontal lobe. These results highlight parallels between multisensory paradigms and classical flanker tasks and demonstrate a role of amodal association cortices in shaping perception based on the history of multisensory congruency.
|
22
|
Abstract
Prisms shifting the visual input sideways produce a mismatch between the visual and felt positions of one’s hand. Prism adaptation eliminates this mismatch, realigning hand proprioception with visual input. Whether this realignment concerns exclusively the visuo-(hand)motor system or generalizes to acoustic inputs is controversial. We here show that there is indeed a slight influence of visual adaptation on the perceived direction of acoustic sources. However, this shift in perceived auditory direction can be fully explained by a subconscious head rotation during prism exposure and by changes in arm proprioception. Hence, prism adaptation generalizes to auditory space perception only indirectly.
Affiliation(s)
- Klaudia Pochopien
- Department of Human-Neurobiology, University of Bremen, Bremen, Germany
- Manfred Fahle
- Department of Human-Neurobiology, University of Bremen, Bremen, Germany
|