1. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. PMID: 38729280. DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities. However, its developmental trajectory is still poorly understood. This systematic review addresses the topic by examining the literature on developmental changes, from infancy through adolescence, in the Temporal Binding Window (TBW) - the epoch of time within which sensory inputs are perceived as simultaneous and therefore integrated. Following comprehensive searches across the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0-17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards cross-modal asynchrony sensitivity and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
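As a toy illustration of the TBW construct reviewed above, the probability of reporting two cross-modal stimuli as simultaneous can be modelled as a Gaussian function of their stimulus onset asynchrony (SOA); the window is then the SOA range over which that probability exceeds a criterion. This is a generic sketch, not the review's methodology, and all parameter values are hypothetical:

```python
import numpy as np

# Toy TBW model (illustrative only): p("simultaneous") falls off as a
# Gaussian of SOA; the TBW is the SOA range where p exceeds a criterion.
def p_simultaneous(soa_ms, sigma_ms, center_ms=0.0):
    return np.exp(-((soa_ms - center_ms) ** 2) / (2 * sigma_ms**2))

def tbw_width(sigma_ms, criterion=0.5):
    # Solve exp(-d^2 / (2 sigma^2)) = criterion for the half-width d
    half = sigma_ms * np.sqrt(-2.0 * np.log(criterion))
    return 2.0 * half

# A wider Gaussian (e.g., an immature or atypical observer) yields a wider
# binding window, i.e., greater tolerance of asynchrony.
adult_tbw = tbw_width(sigma_ms=100.0)
child_tbw = tbw_width(sigma_ms=200.0)
```

The sketch makes the review's central quantity concrete: developmental narrowing of the TBW corresponds to a shrinking sigma, and atypical development to a persistently large one.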
Affiliation(s)
- Silvia Ampollini
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
- Martina Ardizzi
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
- Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
- Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
2. Rohe T. Complex multisensory causal inference in multi-signal scenarios (commentary on Kayser, Debats & Heuer, 2024). Eur J Neurosci 2024. PMID: 38706126. DOI: 10.1111/ejn.16388.
Affiliation(s)
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
3. Paraskevopoulos E, Anagnostopoulou A, Chalas N, Karagianni M, Bamidis P. Unravelling the multisensory learning advantage: Different patterns of within and across frequency-specific interactions drive uni- and multisensory neuroplasticity. Neuroimage 2024; 291:120582. PMID: 38521212. DOI: 10.1016/j.neuroimage.2024.120582. Open access.
Abstract
In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro-level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with behavioural and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
Affiliation(s)
- Alexandra Anagnostopoulou
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
4. Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. PMID: 37747452. DOI: 10.1111/bjop.12683.
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Because of this direct behavioural relevance, cross-modal integration retains a degree of contextual flexibility, extending even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model with a perceptual prior from empirical data to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration reported by Scheller and Sui (2022, JEP:HPP), using a lower number of events, thereby favouring the opposite illusion through enhanced influences of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (the one more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
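The "self as a prior" idea above can be sketched with a generic conjugate-Gaussian observer (not the authors' fitted model; all numbers are hypothetical): the precision-weighted posterior mean pulls the estimate from the sensory evidence toward the prior, and more strongly when the evidence is noisy.

```python
# Generic Gaussian-prior observer (illustrative; values hypothetical).
def posterior_estimate(obs, sigma_obs, prior_mean, sigma_prior):
    """Posterior-mean estimate for a Gaussian likelihood and Gaussian prior."""
    w = (1 / sigma_obs**2) / (1 / sigma_obs**2 + 1 / sigma_prior**2)
    return w * obs + (1 - w) * prior_mean

# With a prior centred on a salient (e.g., self-relevant) value, noisy
# evidence is biased further toward that value than reliable evidence.
noisy = posterior_estimate(obs=2.0, sigma_obs=2.0, prior_mean=3.0, sigma_prior=1.0)
reliable = posterior_estimate(obs=2.0, sigma_obs=0.5, prior_mean=3.0, sigma_prior=1.0)
```

In this toy reading, self-relevance shifts `prior_mean`, so the illusion most malleable to prior influences is the one most affected, mirroring the abstract's conclusion.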
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Department of Psychology, Durham University, Durham, UK
- Huilin Fang
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Jie Sui
- Department of Psychology, University of Aberdeen, Aberdeen, UK
5. O'Kane SH, Chancel M, Ehrsson HH. Hierarchical and dynamic relationships between body part ownership and full-body ownership. Cognition 2024; 246:105697. PMID: 38364444. DOI: 10.1016/j.cognition.2023.105697.
Abstract
What is the relationship between experiencing individual body parts and the whole body as one's own? We theorised that body part ownership is driven primarily by the perceptual binding of visual and somatosensory signals from specific body parts, whereas full-body ownership depends on a more global binding process based on multisensory information from several body segments. To examine this hypothesis, we used a bodily illusion and asked participants to rate illusory changes in ownership over five different parts of a mannequin's body and the mannequin as a whole, while we manipulated the synchrony or asynchrony of visual and tactile stimuli delivered to three different body parts. We found that body part ownership was driven primarily by local visuotactile synchrony and could be experienced relatively independently of full-body ownership. Full-body ownership depended on the number of synchronously stimulated parts in a nonlinear manner, with the strongest full-body ownership illusion occurring when all parts received synchronous stimulation. Additionally, full-body ownership influenced body part ownership for nonstimulated body parts, and skin conductance responses provided physiological evidence supporting an interaction between body part and full-body ownership. We conclude that body part and full-body ownership correspond to different processes and propose a hierarchical probabilistic model to explain the relationship between part and whole in the context of multisensory awareness of one's own body.
Affiliation(s)
- Sophie H O'Kane
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Marie Chancel
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- H Henrik Ehrsson
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
6. Kayser C, Debats N, Heuer H. Both stimulus-specific and configurational features of multiple visual stimuli shape the spatial ventriloquism effect. Eur J Neurosci 2024; 59:1770-1788. PMID: 38230578. DOI: 10.1111/ejn.16251.
Abstract
Studies of multisensory perception often focus on simplistic conditions in which a single stimulus is presented per modality. Yet, in everyday life, we usually encounter multiple signals per modality. To understand how multiple signals within and across the senses are combined, we extended the classical audio-visual spatial ventriloquism paradigm to combine two visual stimuli with one sound. The individual visual stimuli presented in the same trial differed in their relative timing and spatial offsets to the sound, allowing us to contrast their individual and combined influence on sound localization judgements. We find that the ventriloquism bias is not dominated by a single visual stimulus but rather is shaped by the collective multisensory evidence. In particular, the contribution of an individual visual stimulus to the ventriloquism bias depends not only on its own spatio-temporal alignment to the sound but also on the spatio-temporal alignment of the other visual stimulus. We propose that this pattern of multi-stimulus multisensory integration reflects the evolution of evidence for sensory causal relations during individual trials, calling for established models of multisensory causal inference to be extended to more naturalistic conditions. Our data also suggest that this pattern of multisensory interactions extends to the ventriloquism aftereffect, a bias in sound localization observed in unisensory judgements following a multisensory stimulus.
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Nienke Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
7. Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024; 131:723-737. PMID: 38416720. DOI: 10.1152/jn.00451.2023. Open access.
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participants' dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.
NEW & NOTEWORTHY: Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
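The reliability-weighting principle underlying this study can be sketched with the standard maximum-likelihood cue-combination rule (a textbook formulation, not the authors' analysis code; the numerical values are hypothetical): each cue is weighted by its inverse variance, so whichever modality is judged with less variability dominates the fused estimate.

```python
import numpy as np

# Standard reliability-weighted cue fusion (illustrative; values hypothetical).
def fuse(mu_v, sigma_v, mu_p, sigma_p):
    """Combine visual and proprioceptive estimates, weighting each cue
    by its reliability (inverse variance)."""
    r_v, r_p = 1 / sigma_v**2, 1 / sigma_p**2
    w_v = r_v / (r_v + r_p)              # weight of the visual cue
    mu = w_v * mu_v + (1 - w_v) * mu_p   # fused position estimate
    sigma = np.sqrt(1 / (r_v + r_p))     # fused uncertainty, lower than either cue
    return mu, sigma, w_v

# If the visual judgment is less variable, vision dominates the fused estimate.
mu, sigma, w_v = fuse(mu_v=10.0, sigma_v=1.0, mu_p=0.0, sigma_p=2.0)
```

On this view, changing the response method changes the measured sigmas, and hence the weights, which is exactly the manipulation the study exploits.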
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
8. Zhu H, Beierholm U, Shams L. The overlooked role of unisensory precision in multisensory research. Curr Biol 2024; 34:R229-R231. PMID: 38531310. DOI: 10.1016/j.cub.2024.01.057.
Abstract
Zhu et al. present an alternative explanation for the weaker multisensory illusions in football goalkeepers compared with outfielders and non-athletes, showing that better unisensory precision in goalkeepers can also account for this effect.
Affiliation(s)
- Haocheng Zhu
- Department of Psychology, Soochow University, Suzhou 215031, China
- Ulrik Beierholm
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Ladan Shams
- Department of Psychology, Bioengineering, and Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA 90095, USA
9. Dong C, Noppeney U, Wang S. Perceptual uncertainty explains activation differences between audiovisual congruent speech and McGurk stimuli. Hum Brain Mapp 2024; 45:e26653. PMID: 38488460. DOI: 10.1002/hbm.26653. Open access.
Abstract
Face-to-face communication relies on the integration of acoustic speech signals with the corresponding facial articulations. In the McGurk illusion, an auditory /ba/ phoneme presented simultaneously with the facial articulation of /ga/ (i.e., a viseme) is typically fused into an illusory 'da' percept. Despite its widespread use as an index of audiovisual speech integration, critics argue that it arises from perceptual processes that differ categorically from natural speech recognition. Conversely, Bayesian theoretical frameworks suggest that both the illusory McGurk and the veridical audiovisual congruent speech percepts result from probabilistic inference based on noisy sensory signals. According to these models, the inter-sensory conflict in McGurk stimuli may only increase observers' perceptual uncertainty. This functional magnetic resonance imaging (fMRI) study presented participants (20 male and 24 female) with audiovisual congruent, McGurk (i.e., auditory /ba/ + visual /ga/), and incongruent (i.e., auditory /ga/ + visual /ba/) stimuli along with their unisensory counterparts in a syllable categorization task. Behaviorally, observers' response entropy was greater for McGurk than for congruent audiovisual stimuli. At the neural level, McGurk stimuli increased activations in a widespread neural system, extending from the inferior frontal sulci (IFS) to the pre-supplementary motor area (pre-SMA) and insulae, regions typically involved in cognitive control processes. Crucially, in line with Bayesian theories, these activation increases were fully accounted for by observers' perceptual uncertainty as measured by their response entropy. Our findings suggest that McGurk and congruent speech processing rely on shared neural mechanisms, thereby supporting the McGurk illusion as a valid measure of natural audiovisual speech perception.
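Response entropy, the uncertainty measure used in this study, is ordinary Shannon entropy computed over an observer's response proportions. The sketch below is generic, and the response counts are hypothetical, not the study's data:

```python
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of a distribution of categorical responses."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) is treated as 0
    return float(-(p * np.log2(p)).sum())

# Hypothetical response counts for one observer: congruent trials yield
# near-unanimous reports; McGurk trials split across /ba/, /da/, /ga/.
congruent = response_entropy([48, 1, 1])   # low uncertainty
mcgurk = response_entropy([10, 30, 10])    # higher uncertainty
```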
Affiliation(s)
- Chenjie Dong
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Uta Noppeney
- Donders Institute for Brain, Cognition, and Behavior, Radboud University, Nijmegen, the Netherlands
- Suiping Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou, China
10. Tanaka T. Evaluating the Bayesian causal inference model of intentional binding through computational modeling. Sci Rep 2024; 14:2979. PMID: 38316822. PMCID: PMC10844324. DOI: 10.1038/s41598-024-53071-7. Open access.
Abstract
Intentional binding refers to the subjective compression of the time interval between an action and its consequence. While intentional binding has been widely used as a proxy for the sense of agency, its underlying mechanism has remained largely unclear. Bayesian causal inference (BCI) has gained attention as a potential explanation but currently lacks sufficient empirical support. Thus, this study implemented various computational models to describe the possible mechanisms of intentional binding, fitted them to individual observed data, and quantitatively evaluated their performance. The BCI models successfully isolated the parameters that potentially contributed to intentional binding (i.e., causal belief and temporal prediction) and generally explained an observer's time estimation better than traditional models such as maximum likelihood estimation. The estimated parameter values suggested that the time compression resulted from an expectation that actions would immediately cause sensory outcomes. Furthermore, I investigated the algorithm that realized this BCI and found probability-matching to be a plausible candidate; people might heuristically reconstruct event timing depending on causal uncertainty rather than optimally integrating causal and temporal posteriors. The evidence demonstrated the utility of computational modeling for investigating how humans infer the causal and temporal structures of events, and individual differences in that process.
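The BCI framework evaluated here follows the standard two-hypothesis formulation (a generic Körding-style model, not the author's fitted implementation; all parameter values are hypothetical): the observer weighs the likelihood that two signals share a common cause against the likelihood that they arose independently.

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def p_common(x1, x2, sigma1, sigma2, sigma_p, prior_common, mu_p=0.0):
    """Posterior probability that two noisy signals x1, x2 share a common
    cause, under the standard Bayesian causal inference model."""
    v1, v2, vp = sigma1**2, sigma2**2, sigma_p**2
    # Likelihood under a common cause (C = 1): one source drawn from the prior
    denom_c1 = v1 * v2 + v1 * vp + v2 * vp
    like_c1 = (np.exp(-0.5 * ((x1 - x2) ** 2 * vp
                              + (x1 - mu_p) ** 2 * v2
                              + (x2 - mu_p) ** 2 * v1) / denom_c1)
               / (2 * np.pi * np.sqrt(denom_c1)))
    # Likelihood under independent causes (C = 2): two sources from the prior
    like_c2 = gauss(x1, mu_p, v1 + vp) * gauss(x2, mu_p, v2 + vp)
    post = like_c1 * prior_common
    return post / (post + like_c2 * (1 - prior_common))

# Close-together signals support a common cause; discrepant ones do not.
agree = p_common(0.0, 0.2, 1.0, 1.0, 10.0, prior_common=0.5)
conflict = p_common(5.0, -5.0, 1.0, 1.0, 10.0, prior_common=0.5)
```

In the intentional-binding setting, a high common-cause posterior (strong causal belief) is what licenses pulling the perceived action and outcome times toward each other.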
Affiliation(s)
- Takumi Tanaka
- Graduate School of Humanities and Sociology and Faculty of Letters, The University of Tokyo, Tokyo, Japan
11. Yildirim I, Siegel MH, Soltani AA, Ray Chaudhuri S, Tenenbaum JB. Perception of 3D shape integrates intuitive physics and analysis-by-synthesis. Nat Hum Behav 2024; 8:320-335. PMID: 37996497. DOI: 10.1038/s41562-023-01759-7.
Abstract
Many surface cues support three-dimensional shape perception, but humans can sometimes still see shape when these features are missing, such as when an object is covered with a draped cloth. Here we propose a framework for three-dimensional shape perception that explains perception in both typical and atypical cases as analysis-by-synthesis, or inference in a generative model of image formation. The model integrates intuitive physics to explain how shape can be inferred from the deformations it causes to other objects, as in cloth draping. Behavioural and computational studies comparing this account with several alternatives show that it best matches human observers (total n = 174) in both accuracy and response times, and is the only model that correlates significantly with human performance on difficult discriminations. We suggest that bottom-up deep neural network models are not fully adequate accounts of human shape perception, and point to how machine vision systems might achieve more human-like robustness.
Affiliation(s)
- Ilker Yildirim
- Department of Psychology, Yale University, New Haven, CT, USA
- Department of Statistics & Data Science, Yale University, New Haven, CT, USA
- Wu-Tsai Institute, Yale University, New Haven, CT, USA
- Max H Siegel
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
- The Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Amir A Soltani
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
- The Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
- Joshua B Tenenbaum
- Department of Brain & Cognitive Sciences, MIT, Cambridge, MA, USA
- The Center for Brains, Minds, and Machines, MIT, Cambridge, MA, USA
12. Zhao Y, Lu E, Zeng Y. Brain-inspired bodily self-perception model for robot rubber hand illusion. Patterns (N Y) 2023; 4:100888. PMID: 38106608. PMCID: PMC10724368. DOI: 10.1016/j.patter.2023.100888.
Abstract
The core of bodily self-consciousness involves perceiving ownership of one's body. A central question is how body illusions like the rubber hand illusion (RHI) occur. Existing theoretical models still lack satisfying computational explanations from connectionist perspectives, especially regarding how the brain encodes body perception and generates illusions from neuronal interactions; moreover, such models also neglect the integration of disability experiments. Here, we integrate biological findings on bodily self-consciousness to propose a brain-inspired bodily self-perception model in which perceptions of the bodily self are autonomously constructed without any supervision signals. We successfully validated the model with six RHI experiments and a disability experiment on an iCub humanoid robot and in simulated environments. The results show that our model can not only closely replicate the behavioral and neural data of monkeys in biological experiments but also reasonably explain the causes and results of the RHI at the neuronal level, thus helping to reveal the mechanisms underlying the RHI.
Affiliation(s)
- Yuxuan Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Enmeng Lu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Long-term Artificial Intelligence, Beijing, China
13. Shivkumar S, DeAngelis GC, Haefner RM. Hierarchical motion perception as causal inference. bioRxiv 2023:2023.11.18.567582. Preprint. PMID: 38014023. PMCID: PMC10680834. DOI: 10.1101/2023.11.18.567582.
Abstract
Since motion can only be defined relative to a reference frame, which reference frame guides perception? A century of psychophysical studies has produced conflicting evidence: retinotopic, egocentric, world-centric, or even object-centric. We introduce a hierarchical Bayesian model mapping retinal velocities to perceived velocities. Our model mirrors the structure of the world, in which visual elements move within causally connected reference frames. Friction renders velocities in these reference frames mostly stationary, formalized by an additional delta component (at zero) in the prior. Inverting this model automatically segments visual inputs into groups, groups into supergroups, and so on, and "perceives" motion in the appropriate reference frame. Critical model predictions are supported by two new experiments, and fitting our model to the data allows us to infer the subjective set of reference frames used by individual observers. Our model provides a quantitative normative justification for key Gestalt principles, providing inspiration for building better models of visual processing in general.
Affiliation(s)
- Sabyasachi Shivkumar
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, NY 10027, USA
- Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
- Ralf M Haefner
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Center for Visual Science, University of Rochester, Rochester, NY 14627, USA
14. Monti M, Molholm S, Cuppini C. Atypical development of causal inference in autism inferred through a neurocomputational model. Front Comput Neurosci 2023; 17:1258590. PMID: 37927544. PMCID: PMC10620690. DOI: 10.3389/fncom.2023.1258590. Open access.
Abstract
In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events in order to interact with and correctly perceive the multisensory environment. This problem the brain must face, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during development, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autistic spectrum. However, no research has yet been conducted on the developmental trajectory of causal inference and its relationship with experience. Here, we used a neurocomputational model to simulate and investigate the development of causal inference in both typically developing children and children on the autistic spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and that a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals than in neurotypical individuals. Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
Collapse
Affiliation(s)
- Melissa Monti
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
| | - Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
| | - Cristiano Cuppini
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
| |
Collapse
|
15
|
Newell FN, McKenna E, Seveso MA, Devine I, Alahmad F, Hirst RJ, O'Dowd A. Multisensory perception constrains the formation of object categories: a review of evidence from sensory-driven and predictive processes on categorical decisions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220342. [PMID: 37545304 PMCID: PMC10404931 DOI: 10.1098/rstb.2022.0342] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Accepted: 06/29/2023] [Indexed: 08/08/2023] Open
Abstract
Although object categorization is a fundamental cognitive ability, it is also a complex process going beyond the perception and organization of sensory stimulation. Here we review existing evidence about how the human brain acquires and organizes multisensory inputs into object representations that may lead to conceptual knowledge in memory. We first focus on evidence for two processes in object perception, multisensory integration of redundant information (e.g. seeing and feeling a shape) and crossmodal, statistical learning of complementary information (e.g. the 'moo' sound of a cow and its visual shape). For both processes, the importance attributed to each sensory input in constructing a multisensory representation of an object depends on the working range of the specific sensory modality, the relative reliability or distinctiveness of the encoded information and top-down predictions. Moreover, apart from sensory-driven influences on perception, the acquisition of featural information across modalities can affect semantic memory and, in turn, influence category decisions. In sum, we argue that both multisensory processes independently constrain the formation of object categories across the lifespan, possibly through early and late integration mechanisms, respectively, to allow us to efficiently achieve the everyday, but remarkable, ability of recognizing objects. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- F. N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - E. McKenna
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - M. A. Seveso
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - I. Devine
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - F. Alahmad
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - R. J. Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| | - A. O'Dowd
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin D02 PN40, Ireland
| |
Collapse
|
16
|
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. [PMID: 37545312 PMCID: PMC10404924 DOI: 10.1098/rstb.2022.0347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/27/2023] [Indexed: 08/08/2023] Open
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate whether confidence judgements are similar across multisensory conditions in which the numbers of auditory and visual events are the same and conditions in which they differ. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, AL 36688, USA
| | - Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| |
Collapse
|
17
|
Aston S, Nardini M, Beierholm U. Different types of uncertainty in multisensory perceptual decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220349. [PMID: 37545308 PMCID: PMC10404920 DOI: 10.1098/rstb.2022.0349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Accepted: 06/18/2023] [Indexed: 08/08/2023] Open
Abstract
Efficient decision-making requires accounting for sources of uncertainty (noise, or variability). Many studies have shown how the nervous system is able to account for perceptual uncertainty that arises from limitations in its own ability to encode perceptual stimuli. However, many other sources of uncertainty exist, reflecting, for example, variability in the behaviour of other agents or physical processes. Here we review previous studies on decision making under uncertainty as a function of the different types of uncertainty that the nervous system encounters, showing that noise intrinsic to the perceptual system can often be accounted for near-optimally (i.e. not statistically different from optimally), whereas accounting for other types of uncertainty can be much more challenging. As an example, we present a study in which participants made decisions about multisensory stimuli with both intrinsic (perceptual) and extrinsic (environmental) uncertainty and show that the nervous system accounts for these differently when making decisions: it accounts for internal uncertainty but under-accounts for external uncertainty. Human perceptual systems may be well equipped to account for intrinsic (perceptual) uncertainty because, in principle, they have access to it. Accounting for external uncertainty is more challenging because this uncertainty must be learned. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
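The near-optimal benchmark mentioned here is usually formalized as reliability-weighted cue combination: each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone. A minimal sketch with illustrative values (not the study's stimuli):

```python
def optimal_combination(x1, x2, sigma1, sigma2):
    """Maximum-likelihood integration of two noisy cues.

    Weights are inverse variances (reliabilities); the combined standard
    deviation is always smaller than either single-cue sigma."""
    w1 = 1.0 / sigma1 ** 2
    w2 = 1.0 / sigma2 ** 2
    estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
    combined_sigma = (1.0 / (w1 + w2)) ** 0.5
    return estimate, combined_sigma

# The estimate is pulled toward the more reliable cue (x1, sigma1 = 1):
est, sd = optimal_combination(10.0, 12.0, sigma1=1.0, sigma2=2.0)
```

Deviations from this benchmark are one way such studies quantify how well observers account for a given type of uncertainty.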
Collapse
Affiliation(s)
- Stacey Aston
- Department of Psychology, Durham University, Durham, Durham DH1 3LE, UK
| | - Marko Nardini
- Department of Psychology, Durham University, Durham, Durham DH1 3LE, UK
| | - Ulrik Beierholm
- Department of Psychology, Durham University, Durham, Durham DH1 3LE, UK
| |
Collapse
|
18
|
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300 PMCID: PMC10404925 DOI: 10.1098/rstb.2022.0344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 06/20/2023] [Indexed: 08/08/2023] Open
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood in the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
| | - Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
| | - Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
| | - Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
| | - Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
| |
Collapse
|
19
|
Meijer D, Noppeney U. Metacognition in the audiovisual McGurk illusion: perceptual and causal confidence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220348. [PMID: 37545307 PMCID: PMC10404922 DOI: 10.1098/rstb.2022.0348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 07/02/2023] [Indexed: 08/08/2023] Open
Abstract
Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e.g. auditory B/P with visual G/K). Observers reported their perceived auditory syllable, the causal structure and confidence for each judgement. Observers were more accurate and confident on congruent than unisensory trials. Their perceptual and causal confidence were tightly related over trials as predicted by the interactive nature of perceptual and causal inference. Further, observers assigned comparable perceptual and causal confidence to veridical 'G/K' percepts on audiovisual congruent trials and their causal and perceptual metamers on McGurk trials (i.e. illusory 'G/K' percepts). Thus, observers metacognitively evaluate the integrated audiovisual percept with limited access to the conflicting unisensory stimulus components on McGurk trials. Collectively, our results suggest that observers form meaningful perceptual and causal confidence judgements about multisensory scenes that are qualitatively consistent with principles of Bayesian causal inference. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- David Meijer
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040, Wien, Austria
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
| |
Collapse
|
20
|
Schröger E, Roeber U, Coy N. Markov chains as a proxy for the predictive memory representations underlying mismatch negativity. Front Hum Neurosci 2023; 17:1249413. [PMID: 37771348 PMCID: PMC10525344 DOI: 10.3389/fnhum.2023.1249413] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2023] [Accepted: 08/22/2023] [Indexed: 09/30/2023] Open
Abstract
Events not conforming to a regularity inherent to a sequence of events elicit prediction error signals of the brain, such as the Mismatch Negativity (MMN), and impair behavioral task performance. Events conforming to a regularity lead to attenuation of brain activity, such as stimulus-specific adaptation (SSA), and to behavioral benefits. Such findings are usually explained by theories stating that the information processing system predicts the forthcoming event of the sequence via detected sequential regularities. Markov chains are a mathematical model widely used to describe, analyze and generate event sequences: they contain a set of possible events and a set of probabilities for transitions between these events (the transition matrix), which allow the next event to be predicted on the basis of the current event and the transition probabilities. The accuracy of such a prediction depends on the distribution of the transition probabilities. We argue that Markov chains also have useful applications when studying cognitive brain functions. The transition matrix can be regarded as a proxy for the generative memory representations that the brain uses to predict the next event. We assume that detected regularities in a sequence of events correspond to (a subset of) the entries in the transition matrix. We apply this idea to Mismatch Negativity (MMN) research and examine three types of MMN paradigms: classical oddball paradigms emphasizing sound probabilities, between-sound regularity paradigms manipulating transition probabilities between adjacent sounds, and action-sound coupling paradigms in which sounds are associated with actions and their intended effects. We show that the Markovian view on MMN yields theoretically relevant insights into the brain processes underlying MMN and stimulates experimental designs to study the brain's processing of event sequences.
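The transition-matrix idea is easy to make concrete. Below is a minimal sketch of a two-sound Markov chain resembling a classical oddball sequence (a frequent standard 'A' and a rare deviant 'B'); the probabilities are illustrative, not taken from the article:

```python
import random

# Transition matrix as a proxy for the predictive memory representation:
# given the current sound, each row gives the probabilities of the next one.
# Illustrative oddball-like values: deviant 'B' occurs on ~10% of transitions.
TRANSITIONS = {
    "A": {"A": 0.9, "B": 0.1},
    "B": {"A": 0.9, "B": 0.1},
}

def next_event(current, rng):
    """Predictively sample the next event from the current event's row."""
    probs = TRANSITIONS[current]
    return rng.choices(list(probs), weights=list(probs.values()))[0]

def generate_sequence(start, n, seed=0):
    """Generate an event sequence of length n from the Markov chain."""
    rng = random.Random(seed)
    seq = [start]
    for _ in range(n - 1):
        seq.append(next_event(seq[-1], rng))
    return seq

seq = generate_sequence("A", 1000)
deviant_rate = seq.count("B") / len(seq)
```

In this framing, an MMN-eliciting deviant is simply an event assigned low probability by the current row of the transition matrix.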
Collapse
Affiliation(s)
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
| | - Urte Roeber
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
| | - Nina Coy
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Max Planck School of Cognition, Leipzig, Germany
| |
Collapse
|
21
|
Pagnini F, Barbiani D, Cavalera C, Volpato E, Grosso F, Minazzi GA, Vailati Riboni F, Graziano F, Di Tella S, Manzoni GM, Silveri MC, Riva G, Phillips D. Placebo and Nocebo Effects as Bayesian-Brain Phenomena: The Overlooked Role of Likelihood and Attention. Perspect Psychol Sci 2023; 18:1217-1229. [PMID: 36656800 DOI: 10.1177/17456916221141383] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Abstract
The Bayesian-brain framework applied to placebo responses and other mind-body interactions suggests that the effects on the body result from the interaction between priors, such as expectations and learning, and likelihood, such as somatosensorial information. Significant research in this area focuses on the role of the priors, but the relevance of the likelihood has been surprisingly overlooked. One way of manipulating the relevance of the likelihood is by paying attention to sensorial information. We suggest that attention can influence both precision and position (i.e., the relative distance from the priors) of the likelihood by focusing on specific components of the somatosensorial information. Two forms of attention seem particularly relevant in this framework: mindful attention and selective attention. Attention has the potential to be considered a "major player" in placebo/nocebo research, together with expectations and learning. In terms of application, relying on attentional strategies as "amplifiers" or "silencers" of sensorial information could lead to an active involvement of individuals in shaping their care process and health. In this contribution, we discuss the theoretical implications of these intuitions with the aim of providing a comprehensive framework that includes the Bayesian brain, placebo/nocebo effects, and the role of attention in mind-body interactions.
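The prior-likelihood interaction described here can be sketched with Gaussian precision weighting, treating attention as a gain on the likelihood precision as the authors propose; the quantities and numbers below are purely illustrative:

```python
def posterior_percept(prior_mean, prior_precision, obs, obs_precision):
    """Gaussian prior-likelihood combination: the percept is a
    precision-weighted average of expectation (prior) and sensation
    (likelihood). Illustrative sketch, not a model from the article."""
    post_precision = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * obs) / post_precision

# Placebo/nocebo-like setup: expected intensity exceeds sensed intensity.
pain_expected, pain_felt = 7.0, 3.0
# Attention is modeled as raising the precision of the sensory likelihood:
inattentive = posterior_percept(pain_expected, 1.0, pain_felt, obs_precision=0.5)
attentive = posterior_percept(pain_expected, 1.0, pain_felt, obs_precision=2.0)
```

With higher likelihood precision the percept is pulled toward the sensory evidence, which is the sense in which attention can "amplify" or "silence" sensorial information relative to expectations.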
Collapse
Affiliation(s)
| | - Diletta Barbiani
- Department of Neurosciences, Biomedicine and Movement Sciences, University of Verona
| | - Cesare Cavalera
- Department of Psychology, Università Cattolica del Sacro Cuore
| | - Eleonora Volpato
- Department of Psychology, Università Cattolica del Sacro Cuore
- IRCCS Fondazione Don Carlo Gnocchi, Milan, Italy
| | | | | | | | - Francesca Graziano
- Bicocca Bioinformatics Biostatistics and Bioimaging B4 Center, University of Milano-Bicocca
- School of Medicine and Surgery, University of Milano
| | - Sonia Di Tella
- Department of Psychology, Università Cattolica del Sacro Cuore
| | | | | | - Giuseppe Riva
- Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano IRCCS
- Humane Technology Lab., Catholic University of Milan
| | | |
Collapse
|
22
|
Debats NB, Heuer H, Kayser C. Different time scales of common-cause evidence shape multisensory integration, recalibration and motor adaptation. Eur J Neurosci 2023; 58:3253-3269. [PMID: 37461244 DOI: 10.1111/ejn.16095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 07/03/2023] [Indexed: 09/05/2023]
Abstract
Perceptual coherence in the face of discrepant multisensory signals is achieved via the processes of multisensory integration, recalibration and sometimes motor adaptation. These supposedly operate on different time scales, with integration reducing immediate sensory discrepancies and recalibration and motor adaptation reflecting the cumulative influence of their recent history. Importantly, whether discrepant signals are bound during perception is guided by the brain's inference of whether they originate from a common cause. When combined, these two notions lead to the hypothesis that the time scales on which integration and recalibration (or motor adaptation) operate are associated with different time scales of evidence about a common cause underlying two signals. We tested this prediction in a well-established visuo-motor paradigm, in which human participants performed visually guided hand movements. The kinematic correlation between hand and cursor movements indicates their common origin, which allowed us to manipulate the common-cause evidence by titrating this correlation. Specifically, we dissociated hand and cursor signals during individual movements while preserving their correlation across the series of movement endpoints. Following our hypothesis, this manipulation reduced integration compared with a condition in which visual and proprioceptive signals were perfectly correlated. In contrast, recalibration and motor adaptation were not affected by this manipulation. This supports the notion that multisensory integration and recalibration deal with sensory discrepancies on different time scales guided by common-cause evidence: Integration is prompted by local common-cause evidence and reduces immediate discrepancies, whereas recalibration and motor adaptation are prompted by global common-cause evidence and reduce persistent discrepancies.
Collapse
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| | - Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| |
Collapse
|
23
|
Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023; 18:e0290461. [PMID: 37607201 PMCID: PMC10443876 DOI: 10.1371/journal.pone.0290461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 08/08/2023] [Indexed: 08/24/2023] Open
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this: that both processes should also scale similarly with the history of multisensory discrepancies, i.e. the sequence of discrepancies in several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, it also scaled with the discrepancies presented in at least the three preceding audio-visual trials. However, the ventriloquism bias did not depend on this three-trial history of multisensory discrepancies and also did not depend on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
Collapse
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| | - Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| |
Collapse
|
24
|
Erdeniz B, Tekgün E, Lenggenhager B, Lopez C. Visual perspective, distance, and felt presence of others in dreams. Conscious Cogn 2023; 113:103547. [PMID: 37390767 DOI: 10.1016/j.concog.2023.103547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2023] [Revised: 06/22/2023] [Accepted: 06/22/2023] [Indexed: 07/02/2023]
Abstract
The peripersonal space, that is, the limited space surrounding the body, involves multisensory coding and representation of the self in space. Previous studies have shown that peripersonal space representation and the visual perspective on the environment can be dramatically altered when neurotypical individuals self-identify with a distant avatar (i.e., in virtual reality) or during clinical conditions (i.e., out-of-body experience, heautoscopy, depersonalization). Despite its role in many cognitive/social functions, the perception of peripersonal space in dreams, and its relationship with the perception of other characters (interpersonal distance in dreams), remain largely uncharted. The present study aimed to explore the visuo-spatial properties of this space, which is likely to underlie self-location as well as self/other distinction in dreams. A total of 530 healthy volunteers answered a web-based questionnaire to measure their dominant visuo-spatial perspective in dreams, the frequency of recall for felt distances between their dream self and other dream characters, and the dreamers' viewing angle of other dream characters. Most participants reported dream experiences from a first-person perspective (1PP) (82%) rather than a third-person perspective (3PP) (18%). Independent of their dream perspective, participants reported that they generally perceived other dream characters in their close space, that is, at a distance of either 0-90 cm or 90-180 cm, rather than farther away (180-270 cm). Regardless of the perspective (1PP or 3PP), both groups also reported more frequently seeing other dream characters from eye level (0° angle of viewing) than from above (30° and 60°) or below eye level (-30° and -60°).
Moreover, the intensity of sensory experiences in dreams, as measured by the Bodily Self-Consciousness in Dreams Questionnaire, was higher in individuals who habitually see other dream characters closer to their dream self (i.e., within 0-90 cm or 90-180 cm). These preliminary findings offer a new, phenomenological account of space representation in dreams with regard to the felt presence of others. They might provide insights not only into how dreams are formed, but also into the type of neurocomputations involved in self/other distinction.
Collapse
Affiliation(s)
- Burak Erdeniz
- İzmir University of Economics, Department of Psychology, İzmir, Turkey
| | - Ege Tekgün
- İzmir University of Economics, Department of Psychology, İzmir, Turkey
| | | | | |
Collapse
|
25
|
Stanley BM, Chen YC, Maurer D, Lewis TL, Shore DI. Developmental changes in audiotactile event perception. J Exp Child Psychol 2023; 230:105629. [PMID: 36731280 DOI: 10.1016/j.jecp.2023.105629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Revised: 01/04/2023] [Accepted: 01/05/2023] [Indexed: 02/04/2023]
Abstract
The fission and fusion illusions provide measures of multisensory integration. The sound-induced tap fission illusion occurs when a tap is paired with two distractor sounds, resulting in the perception of two taps; the sound-induced tap fusion illusion occurs when two taps are paired with a single sound, resulting in the perception of a single tap. Using these illusions, we measured integration in three groups of children (9-, 11-, and 13-year-olds) and compared them with a group of adults. Based on accuracy, we derived a measure of magnitude of illusion and used a signal detection analysis to estimate perceptual discriminability and decisional criterion. All age groups showed a significant fission illusion, whereas only the three groups of children showed a significant fusion illusion. When compared with adults, the 9-year-olds showed larger fission and fusion illusions (i.e., reduced discriminability and greater bias), whereas the 11-year-olds were adult-like for fission but showed some differences for fusion: significantly worse discriminability and marginally greater magnitude and criterion. The 13-year-olds were adult-like on all measures. Based on the pattern of data, we speculate that the developmental trajectories for fission and fusion differ. We discuss these developmental results in the context of three non-mutually exclusive theoretical frameworks: sensory dominance, maximum likelihood estimation, and causal inference.
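The signal detection measures mentioned above (perceptual discriminability and decisional criterion) follow directly from hit and false-alarm rates under the standard equal-variance model; the rates used below are illustrative, not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection theory: discriminability d' and
    criterion c from hit and false-alarm proportions. Rates of exactly
    0 or 1 would need a correction (e.g. log-linear) before use."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A larger illusion corresponds to lower discriminability between
# one-tap and two-tap trials, e.g. (illustrative numbers):
d_adult, c_adult = sdt_measures(0.90, 0.10)
d_child, c_child = sdt_measures(0.75, 0.30)
```

Framing the illusions this way lets reduced accuracy be decomposed into a genuine perceptual effect (lower d') versus a shift in response bias (criterion), which is the comparison the study draws between age groups.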
Collapse
Affiliation(s)
- Brendan M Stanley
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City 252, Taiwan
- Daphne Maurer
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- Terri L Lewis
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada
- David I Shore
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario L8S 4K1, Canada; Multisensory Perception Laboratory, Division of Multisensory Mind Inc., Hamilton, Ontario L8S 4K1, Canada.
26
Roth MJ, Lindner A, Hesse K, Wildgruber D, Wong HY, Buehner MJ. Impaired perception of temporal contiguity between action and effect is associated with disorders of agency in schizophrenia. Proc Natl Acad Sci U S A 2023; 120:e2214327120. [PMID: 37186822] [PMCID: PMC10214164] [DOI: 10.1073/pnas.2214327120]
Abstract
Delusions of control in schizophrenia are characterized by the striking feeling that one's actions are controlled by external forces. We here tested qualitative predictions inspired by Bayesian causal inference models, which suggest that such misattributions of agency should lead to decreased intentional binding. Intentional binding refers to the phenomenon that subjects perceive a compression of time between their intentional actions and consequent sensory events. We demonstrate that patients with delusions of control perceived less self-agency in our intentional binding task. This effect was accompanied by significant reductions of intentional binding as compared to healthy controls and patients without delusions. Furthermore, the strength of delusions of control tightly correlated with decreases in intentional binding. Our study validated a critical prediction of Bayesian accounts of intentional binding, namely that a pathological reduction of the prior likelihood of a causal relation between one's actions and consequent sensory events-here captured by delusions of control-should lead to lesser intentional binding. Moreover, our study highlights the import of an intact perception of temporal contiguity between actions and their effects for the sense of agency.
Affiliation(s)
- Manuel J. Roth
- Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Otfried-Müller-Str. 27, 72076 Tübingen, Germany
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Dynamic Cognition Group, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 11, 72076 Tübingen, Germany
- Axel Lindner
- Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Klaus Hesse
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Dirk Wildgruber
- Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Hong Yu Wong
- Philosophy of Neuroscience, Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, 72076 Tübingen, Germany
- Department of Philosophy, University of Tübingen, Bursagasse 1, 72070 Tübingen, Germany
- Marc J. Buehner
- School of Psychology, Cardiff University, Park Place, Cardiff CF10 3AT, Wales, United Kingdom
27
Huo H, Liu X, Tang Z, Dong Y, Zhao D, Chen D, Tang M, Qiao X, Du X, Guo J, Wang J, Fan Y. Interhemispheric multisensory perception and Bayesian causal inference. iScience 2023; 26:106706. [PMID: 37250338] [PMCID: PMC10214730] [DOI: 10.1016/j.isci.2023.106706]
Abstract
In daily life, the brain needs to eliminate irrelevant signals and integrate relevant ones to support natural interaction with the surroundings. Previous studies focused on paradigms without effects of lateral dominance and found that human observers process multisensory signals in a manner consistent with Bayesian causal inference (BCI). However, many human activities involve bilateral interaction and therefore the processing of interhemispheric sensory signals. It remains unclear whether the BCI framework also fits such activities. Here, we used a bilateral hand-matching task to probe the causal structure of interhemispheric sensory signals. In this task, participants were asked to match ipsilateral visual or proprioceptive cues with the contralateral hand. Our results suggest that interhemispheric causal inference is best accounted for by the BCI framework, and that interhemispheric perceptual bias may alter the strategy used to estimate contralateral multisensory signals. These findings help to clarify how the brain processes the uncertainty of interhemispheric sensory signals.
Affiliation(s)
- Hongqiang Huo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaoyu Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
- Zhili Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Ying Dong
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Di Zhao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Duo Chen
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Min Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xiaofeng Qiao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Xin Du
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jieyi Guo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jinghui Wang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Yubo Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- School of Medical Science and Engineering Medicine, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
28
Noel JP, Angelaki DE. A theory of autism bringing across levels of description. Trends Cogn Sci 2023:S1364-6613(23)00100-6. [PMID: 37183143] [DOI: 10.1016/j.tics.2023.04.010]
Abstract
Autism impacts a wide range of behaviors and neural functions. As such, theories of autism spectrum disorder (ASD) are numerous and span different levels of description, from neurocognitive to molecular. We propose how existent behavioral, computational, algorithmic, and neural accounts of ASD may relate to one another. Specifically, we argue that ASD may be cast as a disorder of causal inference (computational level). This computation relies on marginalization, which is thought to be subserved by divisive normalization (algorithmic level). In turn, divisive normalization may be impaired by excitatory-to-inhibitory imbalances (neural implementation level). We also discuss ASD within similar frameworks, those of predictive coding and circular inference. Together, we hope to motivate work unifying the different accounts of ASD.
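The algorithmic level invoked above, divisive normalization, can be illustrated with a toy sketch (the function name and parameter values are ours, not the authors'): each unit's driven response is divided by a pool summed over all units, so a strong input suppresses its neighbours, and a weakened pool would relax that suppression.

```python
def divisive_normalization(drives, sigma=1.0, n=2.0):
    """Canonical divisive normalization: each unit's drive, raised to a
    power n, is divided by a suppressive pool over all units' drives
    plus a semi-saturation constant sigma (illustrative parameters)."""
    pool = sigma ** n + sum(d ** n for d in drives)
    return [d ** n / pool for d in drives]

# One strong and two weak inputs: the shared pool lets the strong unit
# suppress the weak ones relative to their isolated responses.
r = divisive_normalization([4.0, 1.0, 1.0])
```

In the excitatory-to-inhibitory imbalance account sketched in the abstract, the suppressive pool (the denominator) is what would be computed incorrectly.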
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY, USA.
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
29
Abstract
Body ownership is the multisensory perception of a body as one's own. Recently, the emergence of body ownership illusions like the visuotactile rubber hand illusion has been described by Bayesian causal inference models in which the observer computes the probability that visual and tactile signals come from a common source. Given the importance of proprioception for the perception of one's body, proprioceptive information and its relative reliability should impact this inferential process. We used a detection task based on the rubber hand illusion where participants had to report whether the rubber hand felt like their own or not. We manipulated the degree of asynchrony of visual and tactile stimuli delivered to the rubber hand and the real hand under two levels of proprioceptive noise using tendon vibration applied to the lower arm's antagonist extensor and flexor muscles. As hypothesized, the probability of the emergence of the rubber hand illusion increased with proprioceptive noise. Moreover, this result, well fitted by a Bayesian causal inference model, was best described by a change in the a priori probability of a common cause for vision and touch. These results offer new insights into how proprioceptive uncertainty shapes the multisensory perception of one's own body.
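The Bayesian causal inference model invoked here can be sketched in its standard Gaussian form (closed-form likelihoods follow the classic Körding-style formulation; all parameter values below are illustrative, not the authors' fits): noisier proprioception makes a visuo-proprioceptive discrepancy less diagnostic of separate causes, raising the posterior probability of a common cause.

```python
import math

def norm_pdf(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_common(x_vis, x_prop, sigma_vis, sigma_prop, prior_common=0.5, sigma_source=10.0):
    """Posterior probability that two noisy position estimates share one
    cause, assuming a zero-mean Gaussian prior over source location."""
    sv2, sp2, s02 = sigma_vis ** 2, sigma_prop ** 2, sigma_source ** 2
    # Likelihood of both signals under a single common cause (closed form).
    var1 = sv2 * sp2 + sv2 * s02 + sp2 * s02
    num1 = (x_vis - x_prop) ** 2 * s02 + x_vis ** 2 * sp2 + x_prop ** 2 * sv2
    like_common = math.exp(-0.5 * num1 / var1) / (2 * math.pi * math.sqrt(var1))
    # Likelihood under two independent causes.
    like_indep = (norm_pdf(x_vis, 0.0, math.sqrt(sv2 + s02))
                  * norm_pdf(x_prop, 0.0, math.sqrt(sp2 + s02)))
    post = like_common * prior_common
    return post / (post + like_indep * (1.0 - prior_common))

# Same visuo-proprioceptive conflict, low vs. high proprioceptive noise:
p_low = p_common(0.0, 5.0, sigma_vis=1.0, sigma_prop=1.0)
p_high = p_common(0.0, 5.0, sigma_vis=1.0, sigma_prop=4.0)
```

Here p_high exceeds p_low, qualitatively mirroring the reported increase in illusion probability under tendon vibration; note the abstract's best-fitting model attributed the effect to a change in the a priori probability of a common cause, i.e. the prior_common parameter above.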
Affiliation(s)
- Marie Chancel
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden; Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France.
- H Henrik Ehrsson
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden
30
Abstract
Despite its many twists and turns, the arc of cognitive science generally bends toward progress, thanks to its interdisciplinary nature. By glancing at the last few decades of experimental and computational advances, it can be argued that-far from failing to converge on a shared set of conceptual assumptions-the field is indeed making steady consensual progress toward what can broadly be referred to as interactive frameworks. This inclination is apparent in the subfields of psycholinguistics, visual perception, embodied cognition, extended cognition, neural networks, dynamical systems theory, and more. This pictorial essay briefly documents this steady progress both from a bird's eye view and from the trenches. The conclusion is one of optimism that cognitive science is getting there, albeit slowly and arduously, like any good science should.
Affiliation(s)
- Michael J Spivey
- Department of Cognitive and Information Sciences, University of California, Merced
31
Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023; 273:120093. [PMID: 37028733] [DOI: 10.1016/j.neuroimage.2023.120093]
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible. That is, the neurophysiological processes shaping these associations could commence in low-level sensory regions, or may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEPs) to directly probe this question, focusing on the associations between pitch and the visual features of size, hue, and chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Beyond this, our study provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli in the future.
32
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. bioRxiv 2023:2023.01.27.525974. [PMID: 36778376] [PMCID: PMC9915492] [DOI: 10.1101/2023.01.27.525974]
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of Bayesian Causal Inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief over (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modeling results, we show that humans report targets as stationary and steer toward their initial rather than final position more often when they are themselves moving, suggesting a misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results confirm both of these predictions. Lastly, analysis of eye movements shows that, while initial saccades toward targets are largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops and suggest a protracted temporal unfolding of the computations characterizing CI.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, United States
- Johannes Bill
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Department of Psychology, Harvard University, Cambridge, MA, United States
- Haoran Ding
- Center for Neural Science, New York University, New York City, NY, United States
- John Vastola
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY, United States
- Dora E. Angelaki
- Center for Neural Science, New York University, New York City, NY, United States
- Tandon School of Engineering, New York University, New York City, NY, United States
- Jan Drugowitsch
- Department of Neurobiology, Harvard Medical School, Boston, MA, United States
- Center for Brain Science, Harvard University, Boston, MA, United States
33
Quintero SI, Shams L, Kamal K. Changing the Tendency to Integrate the Senses. Brain Sci 2022; 12:brainsci12101384. [PMID: 36291318] [PMCID: PMC9599885] [DOI: 10.3390/brainsci12101384]
Abstract
Integration of sensory signals that emanate from the same source, such as the sight of lip articulations and the sound of the voice of a speaking individual, can improve perception of the source signal (e.g., speech). Because momentary sensory inputs are typically corrupted with internal and external noise, there is almost always a discrepancy between the inputs, facing the perceptual system with the problem of determining whether the two signals were caused by the same source or by different sources. Thus, whether or not multisensory stimuli are integrated, and the degree to which they are bound, is influenced by factors such as the prior expectation of a common source. We refer to this factor as the tendency to bind stimuli, or, for short, the binding tendency. In theory, the tendency to bind sensory stimuli can be learned through experience via the acquisition of the probabilities of co-occurrence of the stimuli. It can also be influenced by cognitive knowledge of the environment. The binding tendency varies across individuals and can also vary within an individual over time. Here, we review the studies that have investigated the plasticity of binding tendency. We discuss the protocols that have been reported to produce changes in binding tendency, the candidate learning mechanisms involved in this process, the possible neural correlates of binding tendency, and outstanding questions pertaining to binding tendency and its plasticity. We conclude by proposing directions for future research and argue that understanding the mechanisms and recipes for increasing binding tendency can have important clinical and translational applications for populations or individuals with a deficiency in multisensory integration.
Affiliation(s)
- Saul I Quintero
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
- Department of Bioengineering, University of California, Los Angeles, CA 90089, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, CA 90089, USA
- Kimia Kamal
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
34
Srivastava P, Fotiadis P, Parkes L, Bassett DS. The expanding horizons of network neuroscience: From description to prediction and control. Neuroimage 2022; 258:119250. [PMID: 35659996] [DOI: 10.1016/j.neuroimage.2022.119250]
Abstract
The field of network neuroscience has emerged as a natural framework for the study of the brain and has been increasingly applied across divergent problems in neuroscience. From a disciplinary perspective, network neuroscience originally emerged as a formal integration of graph theory (from mathematics) and neuroscience (from biology). This early integration afforded marked utility in describing the interconnected nature of neural units, both structurally and functionally, and underscored the relevance of that interconnection for cognition and behavior. But since its inception, the field has not remained static in its methodological composition. Instead, it has grown to use increasingly advanced graph-theoretic tools and to bring in several other disciplinary perspectives-including machine learning and systems engineering-that have proven complementary. In doing so, the problem space amenable to the discipline has expanded markedly. In this review, we discuss three distinct flavors of investigation in state-of-the-art network neuroscience: (i) descriptive network neuroscience, (ii) predictive network neuroscience, and (iii) a perturbative network neuroscience that draws on recent advances in network control theory. In considering each area, we provide a brief summary of the approaches, discuss the nature of the insights obtained, and highlight future directions.
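The descriptive flavour outlined above began with exactly such graph-theoretic summaries of structural and functional interconnection. A toy sketch over a binary adjacency matrix (real connectome analyses use dedicated toolboxes and weighted networks; the function name and example graph here are ours):

```python
def degree_and_clustering(adj):
    """Two descriptive graph measures on a binary undirected network:
    node degree, and the local clustering coefficient (the fraction of
    a node's neighbour pairs that are themselves connected)."""
    n = len(adj)
    degrees = [sum(row) for row in adj]
    clustering = []
    for i in range(n):
        neighbours = [j for j in range(n) if adj[i][j]]
        k = len(neighbours)
        if k < 2:
            clustering.append(0.0)  # undefined for fewer than 2 neighbours
            continue
        links = sum(adj[u][v] for u in neighbours for v in neighbours if u < v)
        clustering.append(2.0 * links / (k * (k - 1)))
    return degrees, clustering

# A triangle (nodes 0-1-2) with a pendant node 3 attached to node 2.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
deg, cc = degree_and_clustering(A)
```

Predictive and perturbative network neuroscience, as the review describes, then build on such measures with machine learning and network control theory rather than replacing them.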
Affiliation(s)
- Pragya Srivastava
- Department of Bioengineering, University of Pennsylvania, Philadelphia PA 19104, USA
- Panagiotis Fotiadis
- Department of Bioengineering, University of Pennsylvania, Philadelphia PA 19104, USA; Department of Neuroscience, University of Pennsylvania, Philadelphia PA 19104, USA
- Linden Parkes
- Department of Bioengineering, University of Pennsylvania, Philadelphia PA 19104, USA
- Dani S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia PA 19104, USA; Department of Physics & Astronomy, University of Pennsylvania, Philadelphia PA 19104, USA; Department of Electrical & Systems Engineering, University of Pennsylvania, Philadelphia PA 19104, USA; Department of Neurology, University of Pennsylvania, Philadelphia PA 19104, USA; Department of Psychiatry, University of Pennsylvania, Philadelphia PA 19104, USA; Santa Fe Institute, Santa Fe NM 87501, USA.