1
Lo Presti S, Bonavita M, Piga V, Lozito S, Doricchi F, Lasaponara S. "Don't stop believing" - Decoding belief dynamics in the brain: An ALE meta-analysis of neural correlates in belief formation and updating. Neurosci Biobehav Rev 2025; 173:106153. [PMID: 40228650] [DOI: 10.1016/j.neubiorev.2025.106153]
Abstract
Understanding how individuals form and update their beliefs is a fundamental question in cognitive and social psychology. Belief formation (BF) refers to the initial development of an individual's belief, while belief updating (BU) pertains to the revision of existing beliefs in response to contradictory evidence. Although these two processes are often interwoven, they might operate through different neural mechanisms. This meta-analysis aims to synthesize the existing functional magnetic resonance imaging (fMRI) literature on BF and BU, with a particular focus on how BF is investigated. Approaches based on Theory of Mind paradigms, such as False Belief tasks, are often contrasted with approaches that emphasize the role of individual or situational factors in belief formation. Notably, we propose that this differentiation might reflect the engagement of social and non-social dynamics within belief formation. Activation likelihood estimation (ALE) analysis revealed shared involvement of the Precuneus (PCu) in both BF and BU, while BF specifically engaged the Temporo-Parietal Junction (TPJ) bilaterally. Additionally, social and non-social BF exhibited distinct neural correlates: social BF was associated with activity in the right TPJ, whereas non-social BF relied on the left dorsolateral prefrontal cortex (DLPFC). These findings support the hypothesis that BF and BU operate via partially dissociable neural networks and highlight the role of the TPJ and PCu as essential hubs that build up neural templates and enable the shifts in viewpoint necessary to adapt beliefs.
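The core ALE computation can be sketched compactly: each reported focus is smoothed into a Gaussian "modeled activation" (MA) map, and studies are combined as a probabilistic union, ALE = 1 - Π(1 - MAᵢ). The Python sketch below is a minimal illustration under simplifying assumptions; the sample-size-dependent kernel widths and the permutation-based null distribution used for inference in the published method are omitted, and the grid, FWHM, and foci are hypothetical.

```python
import numpy as np

def ma_map(foci, grid, fwhm=10.0):
    """Modeled-activation map for one study: each focus becomes an
    isotropic 3-D Gaussian; overlapping foci combine by voxel-wise max."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ma = np.zeros(grid.shape[:-1])
    for focus in foci:
        d2 = np.sum((grid - focus) ** 2, axis=-1)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

def ale_map(studies, grid):
    """ALE score: probability that at least one study activates the voxel,
    i.e. ALE = 1 - prod_i(1 - MA_i)."""
    survivor = np.ones(grid.shape[:-1])
    for foci in studies:
        survivor *= 1.0 - ma_map(foci, grid)
    return 1.0 - survivor

# Toy grid of MNI-like coordinates (mm) and two hypothetical studies
ax = np.arange(-20.0, 22.0, 2.0)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)
studies = [np.array([[0.0, 0.0, 0.0], [6.0, -4.0, 2.0]]),
           np.array([[2.0, 2.0, 0.0]])]
print(ale_map(studies, grid).max())   # peak ALE score near the shared focus
```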
Affiliation(s)
- S Lo Presti
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy; Neuropsychology Department - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306, Rome 00100, Italy
- M Bonavita
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy
- V Piga
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy; Neuropsychology Department - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306, Rome 00100, Italy; PhD Program in behavioural neuroscience, "Sapienza" University of Rome, Rome, Italy
- S Lozito
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy; Neuropsychology Department - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306, Rome 00100, Italy; PhD Program in behavioural neuroscience, "Sapienza" University of Rome, Rome, Italy
- F Doricchi
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy; Neuropsychology Department - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306, Rome 00100, Italy
- S Lasaponara
- Psychology Department - "Sapienza" University of Rome, Via dei Marsi, 78, Rome 00185, Italy; Neuropsychology Department - IRCCS Fondazione Santa Lucia, Via Ardeatina, 306, Rome 00100, Italy.
2
Dong C, Wang Z, Li R, Noppeney U, Wang S. Variations in unisensory speech perception explain interindividual differences in McGurk illusion susceptibility. Psychon Bull Rev 2025. [PMID: 40274722] [DOI: 10.3758/s13423-025-02697-3]
Abstract
Face-to-face communication relies on integrating acoustic speech signals with corresponding facial articulations. Audiovisual integration abilities or deficits in typical and atypical populations are often assessed through their susceptibility to the McGurk illusion (i.e., their McGurk illusion rates). According to theories of normative Bayesian causal inference, observers integrate a visual /ga/ viseme and an auditory /ba/ phoneme, weighted by their relative phonemic reliabilities, into an illusory "da" percept. Consequently, McGurk illusion rates should be strongly influenced by observers' categorical perception of the corresponding facial articulatory movements and the acoustic signals. Across three experiments, we investigated the extent to which variability in the McGurk illusion rate across participants or stimuli (i.e., speakers) can be explained by the corresponding variations in the categorical perception of the unisensory auditory and visual components. Additionally, we investigated whether McGurk illusion susceptibility is a stable trait across different testing sessions (i.e., days) and tasks. Consistent with the principles of Bayesian causal inference, our results demonstrate that observers' tendency to (mis)perceive the auditory /ba/ and the visual /ga/ stimuli as "da" in unisensory contexts strongly predicts their McGurk illusion rates across both speakers and participants. Likewise, the stability of the McGurk illusion across sessions and tasks closely tracked the corresponding stability of unisensory auditory and visual categorical perception. Collectively, these findings highlight the importance of accounting for variations in unisensory performance and variability of materials (e.g., speakers) when using audiovisual illusions to assess audiovisual integration capability.
Affiliation(s)
- Chenjie Dong
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Guangzhou, 510631, China
- Zhengye Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Guangzhou, 510631, China
- Ruqin Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Guangzhou, 510631, China
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Suiping Wang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Guangzhou, 510631, China.
3
Scheller M, Proulx MJ, de Haan M, Dahlmann-Noor A, Petrini K. Visual experience affects neural correlates of audio-haptic integration: A case study of non-sighted individuals. Prog Brain Res 2025; 292:25-70. [PMID: 40409923] [DOI: 10.1016/bs.pbr.2025.04.002]
Abstract
The ability to reduce sensory uncertainty by integrating information across different senses develops late in humans and depends on cross-modal sensory experience during childhood and adolescence. While the dependence of audio-haptic integration on vision suggests cross-modal neural reorganization, evidence for such changes is lacking. Furthermore, little is known about the neural processes underlying audio-haptic integration even in sighted adults. Here, we examined electrophysiological correlates of audio-haptic integration in sighted adults (n = 29), non-sighted adults (n = 7), and sighted adolescents (n = 12) using a data-driven electrical neuroimaging approach. In sighted adults, optimal integration performance was predicted by topographical and super-additive strength modulations around 205-285 ms. Data from four individuals who went blind before the age of 8-9 years suggest that they achieved optimal integration via different, sub-additive mechanisms at earlier processing stages. Sighted adolescents showed no robust multisensory modulations. Late-blind adults, who did not show behavioral benefits of integration, demonstrated modulations at early latencies. Our findings suggest a critical period for the development of optimal audio-haptic integration, dependent on visual experience, around late childhood and early adolescence.
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom; Department of Psychology, Durham University, Durham, United Kingdom.
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom; The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, United Kingdom; Bath Institute for the Augmented Human (IAH), Bath, United Kingdom
- Michelle de Haan
- Developmental Neurosciences Programme, University College London, London, United Kingdom
- Annegret Dahlmann-Noor
- NIHR Moorfields Biomedical Research Centre, London, United Kingdom; Paediatric Service, Moorfields Eye Hospital, London, United Kingdom
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom; The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, United Kingdom; Bath Institute for the Augmented Human (IAH), Bath, United Kingdom
4
Gao Y, Xue K, Odegaard B, Rahnev D. Automatic multisensory integration follows subjective confidence rather than objective performance. Commun Psychol 2025; 3:38. [PMID: 40069314] [PMCID: PMC11896883] [DOI: 10.1038/s44271-025-00221-w]
Abstract
It is well known that sensory information from one modality can automatically affect judgments from a different sensory modality. However, it remains unclear what determines the strength of the influence of an irrelevant sensory cue from one modality on a perceptual judgment for a different modality. Here we test whether the strength of the multisensory impact of an irrelevant sensory cue depends on participants' objective accuracy or subjective confidence for that cue. We created visual motion stimuli with low vs. high overall motion energy, where high-energy stimuli yielded higher confidence but lower accuracy in a visual-only task. We then tested the impact of the low- and high-energy visual stimuli on auditory motion perception in 99 participants. We found that the high-energy visual stimuli influenced the auditory motion judgments more strongly than the low-energy visual stimuli, consistent with their higher confidence but contrary to their lower accuracy. A computational model assuming common principles underlying confidence reports and multisensory integration captured these effects. Our findings show that automatic multisensory integration follows subjective confidence rather than objective performance and suggest the existence of common computations across vastly different stages of perceptual decision making.
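The central contrast here (weighting by confidence vs. weighting by accuracy) can be illustrated with a toy linear cue-combination sketch. This is not the authors' model; the weights and stimulus values below are hypothetical, chosen only to show how a higher-confidence (but less accurate) cue would pull the auditory judgment more strongly.

```python
import numpy as np

rng = np.random.default_rng(0)

def integrate(aud, vis, w_vis):
    """Linear cue combination: the irrelevant visual cue biases the
    auditory estimate in proportion to its weight."""
    return (1.0 - w_vis) * aud + w_vis * vis

# Hypothetical weights: if integration tracks subjective confidence, the
# high-energy stimulus (higher confidence, lower accuracy) gets the larger
# weight; weighting by objective accuracy would reverse the ordering.
weights = {"low-energy visual": 0.25, "high-energy visual": 0.45}

aud = rng.normal(0.0, 1.0, 10_000)   # noisy auditory motion evidence
vis = 1.0                            # visual cue signalling rightward motion
for stim, w in weights.items():
    shift = integrate(aud, vis, w).mean() - aud.mean()
    print(f"{stim}: mean shift of auditory judgment = {shift:+.2f}")
```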
Affiliation(s)
- Yi Gao
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA.
- Kai Xue
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Brian Odegaard
- Department of Psychology, University of Florida, Gainesville, FL, 32611, USA
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, 30332, USA
5
Gijbels L, Lee AKC, Lalonde K. Integration of audiovisual speech perception: From infancy to older adults. J Acoust Soc Am 2025; 157:1981-2000. [PMID: 40126041] [DOI: 10.1121/10.0036137]
Abstract
Engaging in face-to-face conversation, one of the most prevalent and relevant social experiences for humans, is inherently multimodal. In the context of audiovisual (AV) speech perception, the visual cues from the speaker's face play a crucial role in language acquisition and in enhancing our comprehension of incoming auditory speech signals. Nonetheless, AV integration shows substantial individual differences, which cannot be entirely accounted for by the information conveyed through the speech signal or the perceptual abilities of the individual. These differences reflect changes in response to experience with auditory and visual sensory processing across the lifespan, and even within a single phase of life. To improve our understanding of AV speech integration, the current work offers a perspective for understanding AV speech processing in relation to AV perception in general, from both a prelinguistic and a linguistic viewpoint, and by looking at AV perception through the lens of humans as Bayesian observers implementing a causal inference model. This provides a cohesive approach for examining the differences and similarities of AV integration from infancy to older adulthood. Behavioral and neurophysiological evidence suggests that both prelinguistic and linguistic mechanisms exhibit distinct, yet mutually influential, effects across the lifespan within and between individuals.
Affiliation(s)
- Liesbeth Gijbels
- University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
- University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Adrian K C Lee
- University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
- University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Kaylah Lalonde
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska 68131, USA
6
Brožová N, Vollmer L, Kampa B, Kayser C, Fels J. Cross-modal congruency modulates evidence accumulation, not decision thresholds. Front Neurosci 2025; 19:1513083. [PMID: 40052091] [PMCID: PMC11882578] [DOI: 10.3389/fnins.2025.1513083]
Abstract
Audiovisual cross-modal correspondences (CMCs) refer to the brain's inherent ability to subconsciously connect auditory and visual information. These correspondences reveal essential aspects of multisensory perception and influence behavioral performance, enhancing reaction times and accuracy. However, the impact of different types of CMCs (arising from statistical co-occurrences or shaped by semantic associations) on information processing and decision-making remains underexplored. This study utilizes the Implicit Association Test, where unisensory stimuli are sequentially presented and linked via CMCs within an experimental block by the specific response instructions (either congruent or incongruent). Behavioral data are integrated with EEG measurements through neurally informed drift-diffusion modeling to examine how neural activity across both auditory and visual trials is modulated by CMCs. Our findings reveal distinct neural components that differentiate between congruent and incongruent stimuli regardless of modality, offering new insights into the role of congruency in shaping multisensory perceptual decision-making. Two key neural stages were identified: an Early component enhancing sensory encoding in congruent trials and a Late component affecting evidence accumulation, particularly in incongruent trials. These results suggest that cross-modal congruency primarily influences the processing and accumulation of sensory information rather than altering decision thresholds.
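The distinction the modeling draws (drift rate vs. decision threshold) can be made concrete with a bare-bones drift-diffusion simulation. The Python sketch below is not the neurally informed model used in the study; the drift values, threshold, and trial counts are hypothetical, and it simply shows how congruency-dependent drift with a fixed threshold yields faster, more accurate congruent decisions.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, n_trials=2000):
    """Euler simulation of a drift-diffusion process: evidence accumulates
    at `drift` until it crosses +/- `threshold`; returns mean RT and accuracy."""
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        hits.append(x > 0)   # upper boundary counts as the correct response
    return np.mean(rts), np.mean(hits)

# Congruency modulates the drift rate; the threshold stays fixed.
for label, v in [("congruent", 1.4), ("incongruent", 0.8)]:
    rt, acc = simulate_ddm(drift=v)
    print(f"{label}: mean RT = {rt:.3f} s, accuracy = {acc:.2f}")
```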
Affiliation(s)
- Natálie Brožová
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Lukas Vollmer
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Björn Kampa
- Systems Neurophysiology Department, Institute of Zoology, RWTH Aachen University, Aachen, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Janina Fels
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
7
Allar IB, Hua A, Rowland BA, Maier JX. Gustatory cortex neurons perform reliability-dependent integration of multisensory flavor inputs. Curr Biol 2025; 35:600-611.e3. [PMID: 39798562] [PMCID: PMC11794012] [DOI: 10.1016/j.cub.2024.12.015]
Abstract
Flavor is the quintessential multisensory experience, combining gustatory, retronasal olfactory, and texture qualities to inform food perception and consumption behavior. However, the computations that govern multisensory integration of flavor components and their underlying neural mechanisms remain elusive. Here, we use rats as a model system to test the hypothesis that taste and smell components of flavor are integrated in a reliability-dependent manner to inform hedonic judgments and that this computation is performed by neurons in the primary taste cortex. Using a series of two-bottle preference tests, we demonstrate that hedonic judgments of taste + smell mixtures are a weighted average of the component judgments, and that the weight of the components depends on their relative reliability. Using extracellular recordings of single-neuron spiking and local field potential activity in combination with decoding analysis, we reveal a correlate of this computation in gustatory cortex (GC). GC neurons weigh bimodal taste and smell inputs based on their reliability, with more reliable inputs contributing more strongly to taste + smell mixture responses. Input reliability was associated with less variable responses and stronger network-level synchronization in the gamma band. Together, our findings establish a quantitative framework for understanding hedonic multisensory flavor judgments and identify the neural computations that underlie them.
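The weighting rule described here, a weighted average whose weights follow relative reliability, is the standard maximum-likelihood cue-combination formula, w_i = (1/σ_i²) / Σ_j (1/σ_j²). A minimal Python sketch with hypothetical hedonic values and noise levels (not the paper's data):

```python
import numpy as np

def reliability_weighted_average(estimates, sigmas):
    """Maximum-likelihood cue combination: each component is weighted by
    its reliability r = 1/sigma^2, so less variable inputs count more."""
    r = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    w = r / r.sum()
    return float(np.dot(w, estimates)), w

# Hypothetical component judgments (taste, smell) on an arbitrary hedonic
# scale, with the smell component twice as variable as the taste component.
mixture, w = reliability_weighted_average(estimates=[0.8, 0.2],
                                          sigmas=[0.5, 1.0])
print(f"predicted mixture judgment = {mixture:.2f}, weights = {w.round(2)}")
```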
Affiliation(s)
- Isabella B Allar
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA
- Alex Hua
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA
- Benjamin A Rowland
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA
- Joost X Maier
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, NC 27157, USA.
8
Eckert AL, Fuehrer E, Schmitter C, Straube B, Fiehler K, Endres D. Modelling sensory attenuation as Bayesian causal inference across two datasets. PLoS One 2025; 20:e0317924. [PMID: 39854573] [PMCID: PMC11761661] [DOI: 10.1371/journal.pone.0317924]
Abstract
INTRODUCTION To interact with the environment, it is crucial to distinguish between sensory information that is externally generated and inputs that are self-generated. The sensory consequences of one's own movements tend to induce attenuated behavioral and neural responses compared to externally generated inputs. We propose a computational model of sensory attenuation (SA) based on Bayesian Causal Inference, where SA occurs when an internal cause for sensory information is inferred. METHODS Experiment 1 investigated sensory attenuation during a stroking movement. Tactile stimuli on the stroking finger were suppressed, especially when they were predictable. Experiment 2 showed impaired delay detection between an arm movement and a video of the movement when participants were moving vs. when their arm was moved passively. We reconsider these results from the perspective of Bayesian Causal Inference (BCI). Using a hierarchical Markov Model (HMM) and variational message passing, we first qualitatively capture patterns of task behavior and sensory attenuation in simulations. Next, we identify participant-specific model parameters for both experiments using optimization. RESULTS A sequential BCI model is well equipped to capture empirical patterns of SA across both datasets. Using participant-specific optimized model parameters, we find a good agreement between data and model predictions, with the model capturing both tactile detections in Experiment 1 and delay detections in Experiment 2. DISCUSSION BCI is an appropriate framework to model sensory attenuation in humans. Computational models of sensory attenuation may help to bridge the gap across different sensory modalities and experimental paradigms and may contribute towards an improved description and understanding of deficits in specific patient groups (e.g., schizophrenia).
Affiliation(s)
- Anna-Lena Eckert
- Department of Psychology, Theoretical Cognitive Science Group, Philipps-Universität Marburg, Marburg, Germany
- Elena Fuehrer
- Department of Psychology and Sport Science, Experimental Psychology Group, Justus-Liebig-Universität Gießen, Gießen, Germany
- Christina Schmitter
- Department of Psychiatry and Psychotherapy, Translational Neuroimaging Group, Philipps-Universität Marburg, Marburg, Germany
- Benjamin Straube
- Department of Psychiatry and Psychotherapy, Translational Neuroimaging Group, Philipps-Universität Marburg, Marburg, Germany
- Katja Fiehler
- Department of Psychology and Sport Science, Experimental Psychology Group, Justus-Liebig-Universität Gießen, Gießen, Germany
- Dominik Endres
- Department of Psychology, Theoretical Cognitive Science Group, Philipps-Universität Marburg, Marburg, Germany
9
Hu Y, Mohsenzadeh Y. Neural processing of naturalistic audiovisual events in space and time. Commun Biol 2025; 8:110. [PMID: 39843939] [PMCID: PMC11754444] [DOI: 10.1038/s42003-024-07434-5]
Abstract
Our brain seamlessly integrates distinct sensory information to form a coherent percept. However, when real-world audiovisual events are perceived, the specific brain regions and timings for processing different levels of information remain less investigated. To address this, we curated naturalistic videos and recorded functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) data while participants viewed videos with accompanying sounds. Our findings reveal early asymmetrical cross-modal interaction, with acoustic information represented in both early visual and auditory regions, while visual information was identified only in visual cortices. The visual and auditory features were processed with similar onset but different temporal dynamics. High-level categorical and semantic information emerged in multisensory association areas later in time, indicating late cross-modal integration and its distinct role in converging conceptual information. Comparing neural representations to a two-branch deep neural network model highlighted the necessity of early cross-modal connections to build a biologically plausible model of audiovisual perception. With EEG-fMRI fusion, we provide a spatiotemporally resolved account of neural activity during the processing of naturalistic audiovisual stimuli.
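EEG-fMRI fusion of this kind is commonly implemented with representational similarity analysis: a representational dissimilarity matrix (RDM) is computed at each EEG time point and correlated with each brain region's fMRI RDM, yielding a time course of correspondence per region. A minimal Python sketch, assuming the RDMs are already computed (the random matrices below are stand-ins, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

def fusion_timecourse(eeg_rdms, fmri_rdm):
    """Correlate the lower triangle of each time-resolved EEG RDM
    (time x cond x cond) with a region's fMRI RDM (cond x cond)."""
    tri = np.tril_indices(fmri_rdm.shape[0], k=-1)
    return np.array([spearmanr(rdm[tri], fmri_rdm[tri])[0]
                     for rdm in eeg_rdms])

rng = np.random.default_rng(2)
n_cond, n_time = 12, 50
eeg_rdms = rng.random((n_time, n_cond, n_cond))     # stand-in EEG RDMs
fmri_rdm = rng.random((n_cond, n_cond))             # stand-in ROI RDM
print(fusion_timecourse(eeg_rdms, fmri_rdm).shape)  # -> (50,)
```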
Affiliation(s)
- Yu Hu
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Yalda Mohsenzadeh
- Western Institute for Neuroscience, Western University, London, ON, Canada.
- Vector Institute for Artificial Intelligence, Toronto, ON, Canada.
- Department of Computer Science, Western University, London, ON, Canada.
10
An W, Zhang N, Li S, Yu Y, Wu J, Yang J. The Impact of Selective Spatial Attention on Auditory-Tactile Integration: An Event-Related Potential Study. Brain Sci 2024; 14:1258. [PMID: 39766457] [PMCID: PMC11674746] [DOI: 10.3390/brainsci14121258]
Abstract
BACKGROUND Auditory-tactile integration is an important research area in multisensory integration. Especially in special environments (e.g., traffic noise and complex work environments), auditory-tactile integration is crucial for human response and decision making. We investigated the influence of attention on the temporal course and spatial distribution of auditory-tactile integration. METHODS Participants received auditory stimuli alone, tactile stimuli alone, and simultaneous auditory and tactile stimuli, which were randomly presented on the left or right side. For each block, participants attended to all stimuli on the designated side and detected uncommon target stimuli while ignoring all stimuli on the other side. Event-related potentials (ERPs) were recorded via 64 scalp electrodes. Integration was quantified by comparing the response to the combined stimulus to the sum of the responses to the auditory and tactile stimuli presented separately. RESULTS The results demonstrated that compared to the unattended condition, integration occurred earlier and involved more brain regions in the attended condition when the stimulus was presented in the left hemispace. The unattended condition involved a more extensive range of brain regions and occurred earlier than the attended condition when the stimulus was presented in the right hemispace. CONCLUSIONS Attention can modulate auditory-tactile integration and show systematic differences between the left and right hemispaces. These findings contribute to the understanding of the mechanisms of auditory-tactile information processing in the human brain.
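The quantification described in the METHODS, comparing the bimodal response against the sum of the unisensory responses, is the classic additive criterion AT - (A + T). A minimal Python sketch with simulated grand-average ERPs (array sizes and the built-in effect are hypothetical):

```python
import numpy as np

def additive_criterion(erp_at, erp_a, erp_t):
    """Additive model of multisensory integration: deviations of the
    bimodal ERP from the sum of the unisensory ERPs indicate integration."""
    return erp_at - (erp_a + erp_t)

rng = np.random.default_rng(3)
n_channels, n_times = 64, 600                    # 64 electrodes, sample grid
erp_a = rng.normal(size=(n_channels, n_times))   # stand-in grand averages
erp_t = rng.normal(size=(n_channels, n_times))
erp_at = erp_a + erp_t + 0.3                     # built-in super-additivity
diff = additive_criterion(erp_at, erp_a, erp_t)
print(f"mean AT - (A + T) difference = {diff.mean():.2f}")   # ~0.30
```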
Affiliation(s)
- Jiajia Yang
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, 3-1-1 Tsushima-Naka, Okayama 700-8530, Japan
11
Ganea N, Addyman C, Yang J, Bremner A. Effects of multisensory stimulation on infants' learning of object pattern and trajectory. Child Dev 2024; 95:2133-2149. [PMID: 39105480] [DOI: 10.1111/cdev.14147]
Abstract
This study investigated whether infants better encode the features of a briefly occluded object if its movements are specified simultaneously by vision and audition than if they are not (data collected: 2017-2019). Experiment 1 showed that 10-month-old infants (N = 39, 22 females, White-English) notice changes in the visual pattern on the object irrespective of the stimulation received (spatiotemporally congruent audio-visual stimulation, incongruent stimulation, or visual-only; ηp² = .53). Experiment 2 (N = 72, 36 females) found similar results in 6-month-olds (Test Block 1, ηp² = .13), but not 4-month-olds. Experiment 3 replicated this finding with another group of 6-month-olds (N = 42, 21 females) and showed that congruent stimulation enables infants to detect changes in object trajectory (d = 0.56) in addition to object pattern (d = 1.15), whereas incongruent stimulation hinders performance.
Affiliation(s)
- Nataşa Ganea
- Department of Psychology, Goldsmiths, University of London, London, UK
- Caspar Addyman
- Department of Psychology, Goldsmiths, University of London, London, UK
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan
- Andrew Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK
12
Loosen AM, Kato A, Gu X. Revisiting the role of computational neuroimaging in the era of integrative neuroscience. Neuropsychopharmacology 2024; 50:103-113. [PMID: 39242921] [PMCID: PMC11525590] [DOI: 10.1038/s41386-024-01946-8]
Abstract
Computational models have become integral to human neuroimaging research, providing both mechanistic insights and predictive tools for human cognition and behavior. However, concerns persist regarding the ecological validity of lab-based neuroimaging studies and whether their spatiotemporal resolution is sufficient for capturing neural dynamics. This review aims to re-examine the utility of computational neuroimaging, particularly in light of the growing prominence of alternative neuroscientific methods and the growing emphasis on more naturalistic behaviors and paradigms. Specifically, we explore how computational modeling can enhance the analysis of high-dimensional imaging datasets and, conversely, how neuroimaging, in conjunction with other data modalities, can inform computational models through the lens of neurobiological plausibility. Collectively, this evidence suggests that neuroimaging remains critical for human neuroscience research, and when enhanced by computational models, imaging can play an important role in bridging levels of analysis and understanding. We conclude by proposing key directions for future research, emphasizing the development of standardized paradigms and the integrative use of computational modeling across neuroimaging techniques.
Affiliation(s)
- Alisa M Loosen
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Ayaka Kato
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Xiaosi Gu
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
13
Nardini M, Scheller M, Ramsay M, Kristiansen O, Allen C. Towards Human Sensory Augmentation: A Cognitive Neuroscience Framework for Evaluating Integration of New Signals within Perception, Brain Representations, and Subjective Experience. Augment Hum Res 2024; 10:1. [PMID: 39497728] [PMCID: PMC11533871] [DOI: 10.1007/s41133-024-00075-7]
Abstract
New wearable devices and technologies provide unprecedented scope to augment or substitute human perceptual abilities. However, the flexibility to reorganize brain processing to use novel sensory signals during early sensitive periods in infancy is much less evident at later ages, making integration of new signals into adults' perception a significant challenge. We believe that an approach informed by cognitive neuroscience is crucial for maximizing the true potential of new sensory technologies. Here, we present a framework for measuring and evaluating the extent to which new signals are integrated within existing structures of perception and experience. As our testbed, we use laboratory tasks in which healthy volunteers learn new, augmented perceptual-motor skills. We describe a suite of measures of (i) perceptual function (psychophysics), (ii) neural representations (fMRI/decoding), and (iii) subjective experience (qualitative interview/micro-phenomenology) targeted at testing hypotheses about how newly learned signals become integrated within perception and experience. As proof of concept, we provide example data showing how this approach allows us to measure changes in perception, neural processing, and subjective experience. We argue that this framework, in concert with targeted approaches to optimizing training and learning, provides the tools needed to develop and optimize new approaches to human sensory augmentation and substitution.
Affiliation(s)
- Marko Nardini
- Department of Psychology, Durham University, Durham, UK
- Chris Allen
- Department of Psychology, Durham University, Durham, UK
14
He J, Kurita K, Yoshida T, Matsumoto K, Shimizu E, Hirano Y. Comparisons of the amplitude of low-frequency fluctuation and functional connectivity in major depressive disorder and social anxiety disorder: A resting-state fMRI study. J Affect Disord 2024; 362:425-436. [PMID: 39004312] [DOI: 10.1016/j.jad.2024.07.020]
Abstract
BACKGROUND Studies comparing the brain functions of major depressive disorder (MDD) and social anxiety disorder (SAD) at the regional and network levels remain scarce. This study aimed to elucidate their pathogenesis using neuroimaging techniques and explore biomarkers that can differentiate these disorders. METHODS Resting-state fMRI data were collected from 48 patients with MDD, 41 patients with SAD, and 82 healthy controls (HCs). Differences in the amplitude of low-frequency fluctuations (ALFF) among the three groups were examined to identify regions showing abnormal regional spontaneous activity. A seed-based functional connectivity (FC) analysis was conducted using the ALFF results as seeds, and different connections were identified between regions showing abnormal local spontaneous activity and other regions. The correlation between abnormal brain function and clinical symptoms was analyzed. RESULTS Patients with MDD and SAD exhibited similar abnormal ALFF and FC in several brain regions; notably, FC between the right superior frontal gyrus (SFG) and the right posterior supramarginal gyrus (pSMG) in patients with SAD was negatively correlated with depressive symptoms. Furthermore, patients with MDD showed higher ALFF in the right SFG than HCs and those with SAD. LIMITATIONS Potential effects of medications, comorbidities, and data type could not be excluded. CONCLUSION MDD and SAD showed common and distinct aberrant brain function patterns at the regional and network levels. At the regional level, we found that ALFF in the right SFG differed between patients with MDD and those with SAD. At the network level, we did not find any differences between these disorders.
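ALFF has a compact operational definition: the mean amplitude of the BOLD spectrum within the low-frequency band (conventionally 0.01-0.08 Hz). A simplified Python sketch for a single voxel's time series; preprocessing, normalization (e.g., mALFF), and the study's specific parameters are omitted, and the signal below is synthetic.

```python
import numpy as np

def alff(ts, tr, band=(0.01, 0.08)):
    """Amplitude of low-frequency fluctuation: mean spectral amplitude of
    the demeaned BOLD time series within the low-frequency band."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    amp = np.abs(np.fft.rfft(ts)) / ts.size
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return amp[sel].mean()

rng = np.random.default_rng(4)
bold = rng.normal(size=240)            # 240 volumes at TR = 2 s (8 min rest)
print(f"ALFF = {alff(bold, tr=2.0):.4f}")
```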
Affiliation(s)
- Junbing He
- Research Center for Child Mental Development, Chiba University, Chiba, Japan; Department of Cognitive Behavioral Physiology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Kohei Kurita
- Research Center for Child Mental Development, Chiba University, Chiba, Japan; United Graduate School of Child Development, Osaka University, Suita, Japan
- Tokiko Yoshida
- Research Center for Child Mental Development, Chiba University, Chiba, Japan; United Graduate School of Child Development, Osaka University, Suita, Japan
- Koji Matsumoto
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Eiji Shimizu
- Research Center for Child Mental Development, Chiba University, Chiba, Japan; Department of Cognitive Behavioral Physiology, Graduate School of Medicine, Chiba University, Chiba, Japan; United Graduate School of Child Development, Osaka University, Suita, Japan
- Yoshiyuki Hirano
- Research Center for Child Mental Development, Chiba University, Chiba, Japan; United Graduate School of Child Development, Osaka University, Suita, Japan.
15
Isabella SL, D'Alonzo M, Mioli A, Arcara G, Pellegrino G, Di Pino G. Artificial embodiment displaces cortical neuromagnetic somatosensory responses. Sci Rep 2024; 14:22279. [PMID: 39333283] [PMCID: PMC11437133] [DOI: 10.1038/s41598-024-72460-6]
Abstract
Integrating artificial limbs as part of one's body involves complex neuroplastic changes resulting from various sensory inputs. While somatosensory feedback is crucial, the plastic processes that enable embodiment remain unknown. We investigated this using somatosensory evoked fields (SEFs) in the primary somatosensory cortex (S1) following the Rubber Hand Illusion (RHI), known to quickly induce artificial limb embodiment. During electrical stimulation of the little finger and thumb, 19 adults underwent neuromagnetic recordings before and after the RHI. We found early SEF displacement, including an illusion-brain correlation between the extent of embodiment and specific changes to the first cortical response at 20 ms in Area 3b, within S1. Furthermore, we observed a posteriorly directed displacement at 35 ms towards Area 1, known to be important for visual integration during touch perception. That this second displacement was unrelated to the extent of embodiment implies a functional distinction between the neuroplastic changes of these components and areas. The earlier shift in Area 3b may shape the extent of limb ownership, while the subsequent displacement into Area 1 may relate to early visual-tactile integration that initiates embodiment. Here we provide evidence for multiple neuroplastic processes in S1, lasting beyond the illusion itself, that support the integration of artificial limbs such as prostheses within the body representation.
Affiliation(s)
- Silvia L Isabella
- NeXT: Neurophysiology and Neuro-Engineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy.
- San Camillo IRCCS Research Hospital, Venice, Italy.
- Marco D'Alonzo
- NeXT: Neurophysiology and Neuro-Engineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Alessandro Mioli
- NeXT: Neurophysiology and Neuro-Engineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Giovanni Pellegrino
- Epilepsy program, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Giovanni Di Pino
- NeXT: Neurophysiology and Neuro-Engineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico di Roma, Rome, Italy
16
Yonemura Y, Katori Y. Dynamical predictive coding with reservoir computing performs noise-robust multi-sensory speech recognition. Front Comput Neurosci 2024; 18:1464603. [PMID: 39376576] [PMCID: PMC11456454] [DOI: 10.3389/fncom.2024.1464603]
Abstract
Multi-sensory integration is a perceptual process through which the brain synthesizes a unified perception by integrating inputs from multiple sensory modalities. A key issue is understanding how the brain performs multi-sensory integration using a common neural basis in the cortex. A cortical model based on reservoir computing has been proposed to elucidate the role of recurrent connectivity among cortical neurons in this process. Reservoir computing is well-suited for time series processing, such as speech recognition. Here, we extend a reservoir computing-based cortical model to encompass multi-sensory integration within the cortex, introducing a dynamical model of multi-sensory speech recognition that combines predictive coding with reservoir computing. Predictive coding offers a framework for the hierarchical structure of the cortex. The model integrates reliability weighting, derived from the computational theory of multi-sensory integration, to adapt to multi-sensory time series processing, and addresses a multi-sensory speech recognition task that necessitates the management of complex time series. We observed that the reservoir effectively recognizes speech by extracting time-contextual information and weighting sensory inputs according to sensory noise. These findings indicate that the dynamic properties of recurrent networks are applicable to multi-sensory time series processing, positioning reservoir computing as a suitable model for multi-sensory integration.
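A minimal echo state network conveys the reservoir idea: a fixed random recurrent pool expands the input history, and only a linear readout is trained. The Python sketch below is a generic reservoir, not the authors' predictive-coding model; the sizes, spectral radius, and two-channel noisy-input task are illustrative assumptions (the two channels loosely stand in for two sensory streams with different noise levels).

```python
import numpy as np

rng = np.random.default_rng(5)

class Reservoir:
    """Echo state network core: fixed random input and recurrent weights,
    leaky-integrator tanh units."""
    def __init__(self, n_in, n_res=300, spectral_radius=0.9, leak=0.3):
        self.w_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        w = rng.normal(size=(n_res, n_res))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))
        self.w, self.leak = w, leak

    def run(self, inputs):
        x = np.zeros(self.w.shape[0])
        states = []
        for u in inputs:
            x = (1 - self.leak) * x + self.leak * np.tanh(
                self.w @ x + self.w_in @ u)
            states.append(x.copy())
        return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout: solve (X'X + rI) w = X'y."""
    reg = ridge * np.eye(states.shape[1])
    return np.linalg.solve(states.T @ states + reg, states.T @ targets)

# Two noisy 'sensory' channels carrying the same underlying signal; the
# readout learns to recover the clean signal from the reservoir state.
t = np.linspace(0.0, 20.0, 2000)
sig = np.sin(t)
inputs = np.stack([sig + 0.3 * rng.normal(size=t.size),
                   sig + 0.6 * rng.normal(size=t.size)], axis=1)
states = Reservoir(n_in=2).run(inputs)
w_out = train_readout(states, sig)
print(f"readout MSE = {np.mean((states @ w_out - sig) ** 2):.4f}")
```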
Affiliation(s)
- Yoshihiro Yonemura
- Graduate School of Systems Information Science, Future University Hakodate, Hakodate, Hokkaido, Japan
- Yuichi Katori
- Graduate School of Systems Information Science, Future University Hakodate, Hakodate, Hokkaido, Japan
- International Research Center for Neurointelligence (IRCN), The University of Tokyo, Tokyo, Japan
17
Jiao L, Wang Y, Liu X, Li L, Liu F, Ma W, Guo Y, Chen P, Yang S, Hou B. Causal Inference Meets Deep Learning: A Comprehensive Survey. Research (Wash DC) 2024; 7:0467. [PMID: 39257419] [PMCID: PMC11384545] [DOI: 10.34133/research.0467]
Abstract
Deep learning relies on learning from extensive data to generate prediction results. This approach may inadvertently capture spurious correlations within the data, leading to models that lack interpretability and robustness. Researchers have developed more profound and stable causal inference methods based on cognitive neuroscience. By replacing the correlation model with a stable and interpretable causal model, it is possible to mitigate the misleading nature of spurious correlations and overcome the limitations of model calculations. In this survey, we provide a comprehensive and structured review of causal inference methods in deep learning. Brain-like inference ideas are discussed from a brain-inspired perspective, and the basic concepts of causal learning are introduced. The article describes the integration of causal inference with traditional deep learning algorithms and illustrates its application to large model tasks as well as specific modalities in deep learning. The current limitations of causal inference and future research directions are discussed. Moreover, the commonly used benchmark datasets and the corresponding download links are summarized.
Affiliation(s)
- Licheng Jiao
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Yuhan Wang
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Xu Liu
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Lingling Li
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Fang Liu
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Wenping Ma
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Yuwei Guo
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Puhua Chen
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Shuyuan Yang
- The School of Artificial Intelligence, Xidian University, Xi'an, China
- Biao Hou
- The School of Artificial Intelligence, Xidian University, Xi'an, China
18
Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024; 25:625-642. [PMID: 39090214] [DOI: 10.1038/s41583-024-00845-7]
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
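Of the oscillatory mechanisms listed, phase-amplitude coupling has a particularly compact estimator: the Canolty-style modulation index, the magnitude of the mean composite vector A_high(t)·e^(i·φ_low(t)). The Python sketch below applies it to a synthetic theta-modulated gamma signal; the bands, sampling rate, and signal are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def modulation_index(sig, fs, phase_band=(4, 8), amp_band=(30, 80)):
    """Phase-amplitude coupling: |mean(A_high(t) * exp(1j * phi_low(t)))|."""
    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)
    phase = np.angle(hilbert(bandpass(sig, *phase_band)))
    amp = np.abs(hilbert(bandpass(sig, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)                 # slow (theta) rhythm
gamma = (1 + theta) * np.sin(2 * np.pi * 50 * t)  # gamma amplitude locked to theta
print(f"modulation index = {modulation_index(theta + 0.5 * gamma, fs):.3f}")
```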
Affiliation(s)
- Daniel Senkowski
- Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
19
Rohe T, Hesse K, Ehlis AC, Noppeney U. Multisensory perceptual and causal inference is largely preserved in medicated post-acute individuals with schizophrenia. PLoS Biol 2024; 22:e3002790. [PMID: 39255328] [PMCID: PMC11466413] [DOI: 10.1371/journal.pbio.3002790]
Abstract
Hallucinations and perceptual abnormalities in psychosis are thought to arise from imbalanced integration of prior information and sensory inputs. We combined psychophysics, Bayesian modeling, and electroencephalography (EEG) to investigate potential changes in perceptual and causal inference in response to audiovisual flash-beep sequences in medicated individuals with schizophrenia who exhibited limited psychotic symptoms. Seventeen participants with schizophrenia and 23 healthy controls reported either the number of flashes or the number of beeps of audiovisual sequences that varied in their audiovisual numeric disparity across trials. Both groups balanced sensory integration and segregation in line with Bayesian causal inference rather than resorting to simpler heuristics. Both also showed comparable weighting of prior information regarding the signals' causal structure, although the schizophrenia group slightly overweighted prior information about the number of flashes or beeps. At the neural level, both groups computed Bayesian causal inference through dynamic encoding of independent estimates of the flash and beep counts, followed by estimates that flexibly combine audiovisual inputs. Our results demonstrate that the core neurocomputational mechanisms for audiovisual perceptual and causal inference in number estimation tasks are largely preserved in our limited sample of medicated post-acute individuals with schizophrenia. Future research should explore whether these findings generalize to unmedicated patients with acute psychotic symptoms.
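Bayesian causal inference with model averaging has a closed-form core for Gaussian signals (Körding et al., 2007). The Python sketch below implements the continuous-variable version with a zero-mean Gaussian spatial prior; the study itself modeled discrete flash/beep counts, and the noise and prior parameters here are hypothetical.

```python
import numpy as np

def bci_auditory_estimate(x_a, x_v, sa, sv, sp, p_common=0.5):
    """Model averaging: mix the fused (common-cause) and segregated
    (independent-cause) estimates by the posterior probability of a
    common cause, given noisy measurements x_a, x_v."""
    va, vv, vp = sa**2, sv**2, sp**2
    denom = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom) / (2 * np.pi * np.sqrt(denom))
    like_c2 = (np.exp(-0.5 * x_a**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
               * np.exp(-0.5 * x_v**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)
    s_segregated = (x_a / va) / (1 / va + 1 / vp)
    return post_c1 * s_fused + (1 - post_c1) * s_segregated

# Small audiovisual conflict -> common cause inferred -> strong fusion;
# large conflict -> separate causes inferred -> little visual influence.
print(bci_auditory_estimate(x_a=1.0, x_v=1.5, sa=1.0, sv=0.5, sp=10.0))
print(bci_auditory_estimate(x_a=1.0, x_v=8.0, sa=1.0, sv=0.5, sp=10.0))
```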
Affiliation(s)
- Tim Rohe
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Klaus Hesse
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Ann-Christine Ehlis
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Tübingen Center for Mental Health (TüCMH), Tübingen, Germany
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
20
Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024; 163:105768. [PMID: 38908730] [DOI: 10.1016/j.neubiorev.2024.105768]
Abstract
Bayesian inference has recently gained momentum in explaining music perception and aging. A fundamental mechanism underlying Bayesian inference is the notion of prediction. This framework could explain how predictions pertaining to musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, expanding related concepts of music research, such as musical expectancies, groove, pleasure, and tension. Moreover, a Bayesian perspective of music perception may shed new insights on the beneficial effects of music in aging. Aging could be framed as an optimization process of Bayesian inference. As predictive inferences are refined over time, reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may affect the ability of older adults to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inferences in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults through the lens of Bayesian inference.
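The review's core claim, increasing reliance on consolidated priors with age, maps directly onto precision weighting in a conjugate Gaussian update: the posterior mean is a precision-weighted average of the prior and the evidence. A minimal Python sketch with illustrative (not empirical) precision values:

```python
def gaussian_update(mu_prior, tau_prior, x, tau_sensory):
    """Conjugate Gaussian update: the posterior mean is the precision-
    weighted average of the prior mean and the sensory observation."""
    tau_post = tau_prior + tau_sensory
    mu_post = (tau_prior * mu_prior + tau_sensory * x) / tau_post
    return mu_post, tau_post

# The same surprising note (x = 2.0 against an expected 0.0): with a
# stronger prior and noisier sensory evidence, the belief barely moves.
print(gaussian_update(0.0, tau_prior=1.0, x=2.0, tau_sensory=4.0))  # -> (1.6, 5.0)
print(gaussian_update(0.0, tau_prior=4.0, x=2.0, tau_sensory=1.0))  # -> (0.4, 5.0)
```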
Affiliation(s)
- Jiamin Gladys Heng
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
- Jiayi Zhang
- Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark
- Kat Agres
- Centre for Music and Health, National University of Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
- School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; National Institute of Education, Nanyang Technological University, Singapore.
21
Uemura M, Katagiri Y, Imai E, Kawahara Y, Otani Y, Ichinose T, Kondo K, Kowa H. Dorsal Anterior Cingulate Cortex Coordinates Contextual Mental Imagery for Single-Beat Manipulation during Rhythmic Sensorimotor Synchronization. Brain Sci 2024; 14:757. [PMID: 39199452] [PMCID: PMC11352649] [DOI: 10.3390/brainsci14080757]
Abstract
Flexible pulse-by-pulse regulation of sensorimotor synchronization is crucial for voluntarily showing rhythmic behaviors synchronously with external cueing; however, the underpinning neurophysiological mechanisms remain unclear. We hypothesized that the dorsal anterior cingulate cortex (dACC) plays a key role by coordinating both proactive and reactive motor outcomes based on contextual mental imagery. To test our hypothesis, a missing-oddball task in finger-tapping paradigms was conducted in 33 healthy young volunteers. The dynamic properties of the dACC were evaluated by event-related deep-brain activity (ER-DBA), supported by event-related potential (ERP) analysis and behavioral evaluation based on signal detection theory. We found that ER-DBA activation/deactivation reflected a strategic choice of motor control modality in accordance with mental imagery. Reverse ERP traces, as omission responses, confirmed that the imagery was contextual. We found that mental imagery was updated only by environmental changes via perceptual evidence and response-based abductive reasoning. Moreover, stable on-pulse tapping was achievable by maintaining proactive control while creating an imagery of syncopated rhythms from simple beat trains, whereas accuracy was degraded with frequent erroneous tapping for missing pulses. We conclude that the dACC voluntarily regulates rhythmic sensorimotor synchronization by utilizing contextual mental imagery based on experience and by creating novel rhythms.
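The behavioral evaluation based on signal detection theory reduces to two statistics: sensitivity d' = z(hit rate) - z(false-alarm rate) and criterion c = -0.5 * [z(hit rate) + z(false-alarm rate)]. A Python sketch with hypothetical detection counts for the missing-oddball task (the numbers are illustrative, not the study's data):

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory: sensitivity d' and response criterion c."""
    z_hit = norm.ppf(hits / (hits + misses))
    z_fa = norm.ppf(false_alarms / (false_alarms + correct_rejections))
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical counts for detecting missing pulses in one tapping block
d_prime, criterion = sdt_measures(hits=42, misses=8,
                                  false_alarms=5, correct_rejections=45)
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```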
Affiliation(s)
- Maho Uemura
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Yoshitada Katagiri
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo 113-8655, Japan
- Emiko Imai
- Department of Biophysics, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Yasuhiro Kawahara
- Department of Human Life and Health Sciences, Division of Arts and Sciences, The Open University of Japan, Chiba 261-8586, Japan
- Yoshitaka Otani
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Faculty of Rehabilitation, Kobe International University, Kobe 658-0032, Japan
- Tomoko Ichinose
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Hisatomo Kowa
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
| |
Collapse
|
22
|
Smyre SA, Bean NL, Stein BE, Rowland BA. The brain can develop conflicting multisensory principles to guide behavior. Cereb Cortex 2024; 34:bhae247. [PMID: 38879756 PMCID: PMC11179994 DOI: 10.1093/cercor/bhae247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2024] [Revised: 05/23/2024] [Accepted: 05/30/2024] [Indexed: 06/19/2024] Open
Abstract
Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound, thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space, and their behavior was assessed using a detection/localization task. Animals showed enhanced performance for stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience had been provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience had been provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space, even within the same animal.
Collapse
Affiliation(s)
- Scott A Smyre
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
| | - Naomi L Bean
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
| | - Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
| | - Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, Medical Center Blvd., Winston Salem, NC 27157, United States
| |
Collapse
|
23
|
Yasoda-Mohan A, Chen F, Ó Sé C, Allard R, Ost J, Vanneste S. Phantom perception as a Bayesian inference problem: a pilot study. J Neurophysiol 2024; 131:1311-1327. [PMID: 38718414 DOI: 10.1152/jn.00349.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Revised: 04/18/2024] [Accepted: 05/01/2024] [Indexed: 06/19/2024] Open
Abstract
Tinnitus is the perception of a continuous sound in the absence of an external source. Although the role of the auditory system is well investigated, there is a gap in understanding how multisensory signals are integrated to produce a single percept in tinnitus. Here, we trained participants to learn a new sensory environment by associating a cue with a target signal that varied in perceptual threshold. In the test phase, we presented only the cue to see whether the person perceived an illusion of the target signal. We performed two separate experiments to observe the behavioral and electrophysiological responses to the learning and test phases in 1) healthy young adults and 2) people with continuous subjective tinnitus and matched control subjects. We observed that in both parts of the study the percentage of false alarms was negatively correlated with the 75% detection threshold. Additionally, the perception of an illusion was accompanied by an increased evoked response potential in frontal regions of the brain. Furthermore, in patients with tinnitus, we observed no significant difference in behavioral or evoked responses in the auditory paradigm, whereas patients with tinnitus were more likely to report false alarms, along with increased evoked activity during the learning and test phases, in the visual paradigm. This emphasizes the importance of the integrity of sensory pathways in multisensory integration and how this process may be disrupted in people with tinnitus. Furthermore, the present study presents preliminary evidence that tinnitus patients may be building stronger perceptual models, a possibility that future studies with larger populations will need to test.
NEW & NOTEWORTHY Tinnitus is the continuous phantom perception of a ringing in the ears. Recently, it has been suggested that tinnitus may be a maladaptive inference of the brain about auditory anomalies, whether or not they are detected by an audiogram. The present study presents empirical evidence for this hypothesis by inducing an illusion in a sensory domain that is damaged (auditory) and one that is intact (visual). It also presents novel information about how people with tinnitus process multisensory stimuli in the audio-visual domain.
Collapse
Affiliation(s)
- Anusha Yasoda-Mohan
- Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
| | - Feifan Chen
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
| | - Colum Ó Sé
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
| | - Remy Allard
- School of Optometry, University of Montreal, Montreal, Quebec, Canada
| | - Jan Ost
- Brain Research Center for Advanced, International, Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
| | - Sven Vanneste
- Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Dublin, Ireland
- Brain Research Center for Advanced, International, Innovative and Interdisciplinary Neuromodulation, Ghent, Belgium
| |
Collapse
|
24
|
Rohe T. Complex multisensory causal inference in multi-signal scenarios (commentary on Kayser, Debats & Heuer, 2024). Eur J Neurosci 2024; 59:2890-2893. [PMID: 38706126 DOI: 10.1111/ejn.16388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2024] [Revised: 04/24/2024] [Accepted: 04/25/2024] [Indexed: 05/07/2024]
Affiliation(s)
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| |
Collapse
|
25
|
Bliek A, Andreas D, Beckerle P, Rohe T. Measuring, modeling and fostering embodiment of robotic prosthesis. FRONTIERS IN NEUROERGONOMICS 2024; 5:1400868. [PMID: 38835490 PMCID: PMC11148325 DOI: 10.3389/fnrgo.2024.1400868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2024] [Accepted: 04/29/2024] [Indexed: 06/06/2024]
Affiliation(s)
- Adna Bliek
- Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Faculty of Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Daniel Andreas
- Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Faculty of Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Philipp Beckerle
- Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Faculty of Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| |
Collapse
|
26
|
Wegner-Clemens K, Malcolm GL, Shomstein S. Predicting attentional allocation in real-world environments: The need to investigate crossmodal semantic guidance. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2024; 15:e1675. [PMID: 38243393 DOI: 10.1002/wcs.1675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 12/01/2023] [Accepted: 12/07/2023] [Indexed: 01/21/2024]
Abstract
Real-world environments are multisensory, meaningful, and highly complex. To parse these environments in a highly efficient manner, a subset of this information must be selected, both within and across modalities. However, the bulk of attention research has been conducted within single sensory modalities, with a particular focus on vision. Visual attention research has made great strides, with over a century of research methodically identifying the underlying mechanisms that allow us to select critical visual information. Spatial attention, attention to features, and object-based attention have all been studied extensively. More recently, research has established semantics (meaning) as a key component in allocating attention in real-world scenes, with the meaning of an item or environment affecting visual attentional selection. However, a full understanding of how semantic information modulates real-world attention requires studying more than vision in isolation. The world provides semantic information across all senses, but with this extra information comes greater complexity. Here, we summarize research on visual attention (including semantic-based visual attention) and crossmodal attention, and argue for the importance of studying crossmodal semantic guidance of attention. This article is categorized under: Psychology > Attention; Psychology > Perception and Psychophysics.
Collapse
Affiliation(s)
- Kira Wegner-Clemens
- Psychological and Brain Sciences, George Washington University, Washington, DC, USA
| | | | - Sarah Shomstein
- Psychological and Brain Sciences, George Washington University, Washington, DC, USA
| |
Collapse
|
27
|
Scheller M, Fang H, Sui J. Self as a prior: The malleability of Bayesian multisensory integration to social salience. Br J Psychol 2024; 115:185-205. [PMID: 37747452 DOI: 10.1111/bjop.12683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 08/26/2023] [Accepted: 09/11/2023] [Indexed: 09/26/2023]
Abstract
Our everyday perceptual experiences are grounded in the integration of information within and across our senses. Due to this direct behavioural relevance, cross-modal integration retains a certain degree of contextual flexibility, even to social relevance. However, how social relevance modulates cross-modal integration remains unclear. To investigate possible mechanisms, Experiment 1 tested the principles of audio-visual integration for numerosity estimation by deriving a Bayesian optimal observer model with perceptual prior from empirical data to explain perceptual biases. Such perceptual priors may shift towards locations of high salience in the stimulus space. Our results showed that the tendency to over- or underestimate numerosity, expressed in the frequency and strength of fission and fusion illusions, depended on the actual event numerosity. Experiment 2 replicated the effects of social relevance on multisensory integration from Scheller & Sui, 2022 JEP:HPP, using a lower number of events, thereby favouring the opposite illusion through enhanced influences of the prior. In line with the idea that the self acts like a prior, the more frequently observed illusion (more malleable to prior influences) was modulated by self-relevance. Our findings suggest that the self can influence perception by acting like a prior in cue integration, biasing perceptual estimates towards areas of high self-relevance.
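The "self as prior" idea rests on standard Bayesian cue combination, in which a prior pulls the percept toward its mean in proportion to its precision. A minimal sketch follows, assuming conjugate Gaussian forms; all parameter values are illustrative, not the model fitted in the study.

```python
import numpy as np

# Posterior of a Gaussian likelihood combined with a Gaussian prior:
# the posterior mean is a precision-weighted average of the two means.
def posterior(lik_mean, lik_sd, prior_mean, prior_sd):
    w_lik, w_pri = 1 / lik_sd**2, 1 / prior_sd**2
    mean = (w_lik * lik_mean + w_pri * prior_mean) / (w_lik + w_pri)
    sd = np.sqrt(1 / (w_lik + w_pri))
    return mean, sd

# Two flashes seen, but a prior centred on one event (e.g., one beep):
# the estimate is pulled below 2, a fusion-like underestimation bias.
print(posterior(lik_mean=2.0, lik_sd=0.5, prior_mean=1.0, prior_sd=1.0))
```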
Collapse
Affiliation(s)
- Meike Scheller
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- Department of Psychology, Durham University, Durham, UK
| | - Huilin Fang
- Department of Psychology, University of Aberdeen, Aberdeen, UK
| | - Jie Sui
- Department of Psychology, University of Aberdeen, Aberdeen, UK
| |
Collapse
|
28
|
Kayser C, Heuer H. Multisensory perception depends on the reliability of the type of judgment. J Neurophysiol 2024; 131:723-737. [PMID: 38416720 DOI: 10.1152/jn.00451.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2023] [Revised: 02/05/2024] [Accepted: 02/24/2024] [Indexed: 03/01/2024] Open
Abstract
The brain engages the processes of multisensory integration and recalibration to deal with discrepant multisensory signals. These processes consider the reliability of each sensory input, with the more reliable modality receiving the stronger weight. Sensory reliability is typically assessed via the variability of participants' judgments, yet these can be shaped by factors both external and internal to the nervous system. For example, motor noise and participants' dexterity with the specific response method contribute to judgment variability, and different response methods applied to the same stimuli can result in different estimates of sensory reliabilities. Here we ask how such variations in reliability induced by variations in the response method affect multisensory integration and sensory recalibration, as well as motor adaptation, in a visuomotor paradigm. Participants performed center-out hand movements and were asked to judge the position of the hand or rotated visual feedback at the movement end points. We manipulated the variability, and thus the reliability, of repeated judgments by asking participants to respond using either a visual or a proprioceptive matching procedure. We find that the relative weights of visual and proprioceptive signals, and thus the asymmetry of multisensory integration and recalibration, depend on the reliability modulated by the judgment method. Motor adaptation, in contrast, was insensitive to this manipulation. Hence, the outcome of multisensory binding is shaped by the noise introduced by sensorimotor processing, in line with perception and action being intertwined.
NEW & NOTEWORTHY Our brain tends to combine multisensory signals based on their respective reliability. This reliability depends on sensory noise in the environment, noise in the nervous system, and, as we show here, variability induced by the specific judgment procedure.
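The reliability-weighting logic at issue here can be made concrete with a short sketch: inverse-variance weights computed from judgment variability shift when response noise inflates one judgment more than the other. All numbers below are assumptions for illustration, not the study's estimates.

```python
import numpy as np

# Inverse-variance (reliability) weights for two cues.
def weights(sd_vis, sd_prop):
    r_v, r_p = 1 / sd_vis**2, 1 / sd_prop**2
    return r_v / (r_v + r_p), r_p / (r_v + r_p)

print(weights(1.0, 2.0))  # purely sensory noise: vision dominates (0.8, 0.2)

# If the response method adds its own noise to the *measured* variability,
# the weights estimated from judgments shift even though the senses are unchanged.
sd_resp = 2.0
print(weights(np.hypot(1.0, sd_resp), np.hypot(2.0, sd_resp)))  # ~(0.62, 0.38)
```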
Collapse
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| |
Collapse
|
29
|
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. [PMID: 38394708 PMCID: PMC10899073 DOI: 10.1016/j.dcn.2024.101360] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2023] [Revised: 01/25/2024] [Accepted: 02/19/2024] [Indexed: 02/25/2024] Open
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints can develop into different functional outcomes.
Collapse
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
| | - Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
| | - Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
| |
Collapse
|
30
|
Schoffelen JM, Pesci UG, Noppeney U. Alpha Oscillations and Temporal Binding Windows in Perception-A Critical Review and Best Practice Guidelines. J Cogn Neurosci 2024; 36:655-690. [PMID: 38330177 DOI: 10.1162/jocn_a_02118] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/10/2024]
Abstract
An intriguing question in cognitive neuroscience is whether alpha oscillations shape how the brain transforms the continuous sensory inputs into distinct percepts. According to the alpha temporal resolution hypothesis, sensory signals arriving within a single alpha cycle are integrated, whereas those in separate cycles are segregated. Consequently, shorter alpha cycles should be associated with smaller temporal binding windows and higher temporal resolution. However, the evidence supporting this hypothesis is contentious, and the neural mechanisms remain unclear. In this review, we first elucidate the alpha temporal resolution hypothesis and the neural circuitries that generate alpha oscillations. We then critically evaluate study designs, experimental paradigms, psychophysics, and neurophysiological analyses that have been employed to investigate the role of alpha frequency in temporal binding. Through the lens of this methodological framework, we then review evidence from between-subject, within-subject, and causal perturbation studies. Our review highlights the inherent interpretational ambiguities posed by previous study designs and experimental paradigms and the extensive variability in analysis choices across studies. We also suggest best practice recommendations that may help to guide future research. To establish a mechanistic role of alpha frequency in temporal parsing, future research is needed that demonstrates its causal effects on the temporal binding window with consistent, experimenter-independent methods.
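The hypothesis's core quantitative prediction reduces to the reciprocal relation between alpha frequency and cycle duration, as the short sketch below illustrates.

```python
# Alpha temporal resolution hypothesis, reduced to arithmetic: if signals
# within one alpha cycle are integrated, the predicted temporal binding
# window is roughly the cycle duration 1/f.
for f_hz in (8, 10, 12):  # canonical alpha band
    print(f"{f_hz} Hz alpha -> ~{1000 / f_hz:.0f} ms binding window")
# 8 Hz -> ~125 ms; 12 Hz -> ~83 ms: faster alpha implies finer temporal resolution.
```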
Collapse
Affiliation(s)
| | | | - Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University
| |
Collapse
|
31
|
Jones SA, Noppeney U. Older adults preserve audiovisual integration through enhanced cortical activations, not by recruiting new regions. PLoS Biol 2024; 22:e3002494. [PMID: 38319934 PMCID: PMC10871488 DOI: 10.1371/journal.pbio.3002494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Revised: 02/16/2024] [Accepted: 01/09/2024] [Indexed: 02/08/2024] Open
Abstract
Effective interactions with the environment rely on the integration of multisensory signals: Our brains must efficiently combine signals that share a common source, and segregate those that do not. Healthy ageing can change or impair this process. This functional magnetic resonance imaging study assessed the neural mechanisms underlying age differences in the integration of auditory and visual spatial cues. Participants were presented with synchronous audiovisual signals at various degrees of spatial disparity and indicated their perceived sound location. Behaviourally, older adults were able to maintain localisation accuracy. At the neural level, they integrated auditory and visual cues into spatial representations along dorsal auditory and visual processing pathways similarly to their younger counterparts but showed greater activations in a widespread system of frontal, temporal, and parietal areas. According to multivariate Bayesian decoding, these areas encoded critical stimulus information beyond that which was encoded in the brain areas commonly activated by both groups. Surprisingly, however, the boost in information provided by these areas with age-related activation increases was comparable across the 2 age groups. This dissociation-between comparable information encoded in brain activation patterns across the 2 age groups, but age-related increases in regional blood-oxygen-level-dependent responses-contradicts the widespread notion that older adults recruit new regions as a compensatory mechanism to encode task-relevant information. Instead, our findings suggest that activation increases in older adults reflect nonspecific or modulatory mechanisms related to less efficient or slower processing, or greater demands on attentional resources.
Collapse
Affiliation(s)
- Samuel A. Jones
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Department of Psychology, Nottingham Trent University, Nottingham, United Kingdom
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands
| |
Collapse
|
32
|
Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024; 36:340-361. [PMID: 38010320 DOI: 10.1162/jocn_a_02092] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from reflected interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
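For readers unfamiliar with multivoxel pattern classification, the toy sketch below shows the basic logic on synthetic "voxel" data; it is a stand-in for the authors' pipeline, and the 0.3 mean shift is an assumed effect size.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Can a linear classifier separate "small" vs "large" rooms from voxel
# patterns? Data are synthetic; only the logic mirrors the analysis above.
rng = np.random.default_rng(3)
n_trials, n_voxels = 80, 50
small = rng.normal(0.0, 1.0, (n_trials, n_voxels))
large = rng.normal(0.3, 1.0, (n_trials, n_voxels))
X, y = np.vstack([small, large]), np.repeat([0, 1], n_trials)

# Cross-validated accuracy above 0.5 indicates decodable size information.
print(cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean())
```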
Collapse
|
33
|
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2024; 1437:59-76. [PMID: 38270853 DOI: 10.1007/978-981-99-7611-9_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/26/2024]
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
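The causal inference computation discussed in this chapter is classically formalized as Bayesian model averaging over a common-cause and a separate-causes hypothesis (Körding et al., 2007). Below is a minimal sketch of the posterior probability of a common cause; all parameter values are assumptions chosen for illustration.

```python
import numpy as np
from scipy.stats import norm

# Posterior probability that auditory and visual spatial cues x_a, x_v
# share a single cause, under Gaussian noise and a Gaussian spatial prior.
def p_common(x_a, x_v, sd_a=2.0, sd_v=1.0, sd_p=10.0, prior_c=0.5):
    va, vv, vp = sd_a**2, sd_v**2, sd_p**2
    # Likelihood under one cause (source location integrated out analytically)
    denom = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                     / denom) / (2 * np.pi * np.sqrt(denom))
    # Likelihood under two independent causes
    like_c2 = norm.pdf(x_a, 0, np.sqrt(va + vp)) * norm.pdf(x_v, 0, np.sqrt(vv + vp))
    return prior_c * like_c1 / (prior_c * like_c1 + (1 - prior_c) * like_c2)

print(p_common(1.0, 2.0))   # small discrepancy -> integration favoured (~0.8)
print(p_common(-8.0, 8.0))  # large discrepancy -> segregation favoured (~0)
```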
Collapse
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK.
| | - Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
| |
Collapse
|
34
|
Grundei M, Schmidt TT, Blankenburg F. A multimodal cortical network of sensory expectation violation revealed by fMRI. Hum Brain Mapp 2023; 44:5871-5891. [PMID: 37721377 PMCID: PMC10619418 DOI: 10.1002/hbm.26482] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 07/04/2023] [Accepted: 08/29/2023] [Indexed: 09/19/2023] Open
Abstract
The brain is subjected to multi-modal sensory information in an environment governed by statistical dependencies. Mismatch responses (MMRs), classically recorded with EEG, have provided valuable insights into the brain's processing of regularities and the generation of corresponding sensory predictions. Only a few studies allow for comparisons of MMRs across multiple modalities in a simultaneous sensory stream, and their corresponding cross-modal context sensitivity remains unknown. Here, we used a tri-modal version of the roving stimulus paradigm in fMRI to elicit MMRs in the auditory, somatosensory and visual modality. Participants (N = 29) were simultaneously presented with sequences of low and high intensity stimuli in each of the three senses while actively observing the tri-modal input stream and occasionally reporting the intensity of the previous stimulus in a prompted modality. The sequences were based on a probabilistic model, defining transition probabilities such that, for each modality, stimuli were more likely to repeat (p = .825) than change (p = .175) and stimulus intensities were equiprobable (p = .5). Moreover, each transition was conditional on the configuration of the other two modalities, comprising global (cross-modal) predictive properties of the sequences. We identified a shared mismatch network of modality-general inferior frontal and temporo-parietal areas as well as sensory areas, where the connectivity (psychophysiological interaction) between these regions was modulated during mismatch processing. Further, we found deviant responses within the network to be modulated by local stimulus repetition, which suggests highly comparable processing of expectation violation across modalities. Moreover, hierarchically higher regions of the mismatch network in the temporo-parietal area around the intraparietal sulcus were found to signal cross-modal expectation violation. With the consistency of MMRs across audition, somatosensation and vision, our study provides insights into a shared cortical network of uni- and multi-modal expectation violation in response to sequence regularities.
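The transition structure reported above is concrete enough to sketch: for a single modality, ignoring the cross-modal conditioning of transitions, a roving sequence can be generated as follows (illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)

# One modality of the roving paradigm: stimuli repeat with p = .825 and
# switch intensity with p = .175; the cross-modal conditioning described
# in the abstract is omitted here for brevity.
def roving_sequence(n_trials, p_repeat=0.825, levels=("low", "high")):
    seq = [levels[rng.integers(2)]]          # intensities equiprobable
    for _ in range(n_trials - 1):
        if rng.random() < p_repeat:
            seq.append(seq[-1])              # repetition -> standard
        else:
            seq.append(levels[1 - levels.index(seq[-1])])  # change -> deviant
    return seq

print(roving_sequence(20))
```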
Collapse
Affiliation(s)
- Miro Grundei
- Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| | | | - Felix Blankenburg
- Neurocomputation and Neuroimaging Unit, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
35
|
Monti M, Molholm S, Cuppini C. Atypical development of causal inference in autism inferred through a neurocomputational model. Front Comput Neurosci 2023; 17:1258590. [PMID: 37927544 PMCID: PMC10620690 DOI: 10.3389/fncom.2023.1258590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Accepted: 10/05/2023] [Indexed: 11/07/2023] Open
Abstract
In everyday life, the brain processes a multitude of stimuli from the surrounding environment, requiring the integration of information from different sensory modalities to form a coherent perception. This process, known as multisensory integration, enhances the brain's response to redundant congruent sensory cues. However, it is equally important for the brain to segregate sensory inputs from distinct events, to interact with and correctly perceive the multisensory environment. This problem, known as the causal inference problem, is closely related to multisensory integration. It is widely recognized that the ability to integrate information from different senses emerges during the developmental period, as a function of our experience with multisensory stimuli. Consequently, multisensory integrative abilities are altered in individuals who have atypical experiences with cross-modal cues, such as those on the autistic spectrum. However, no research has so far been conducted on the developmental trajectories of causal inference and its relationship with experience. Here, we used a neuro-computational model to simulate and investigate the development of causal inference in both typically developing children and those on the autistic spectrum. Our results indicate that higher exposure to cross-modal cues accelerates the acquisition of causal inference abilities, and a minimum level of experience with multisensory stimuli is required to develop fully mature behavior. We then simulated the altered developmental trajectory of causal inference in individuals with autism by assuming reduced multisensory experience during training. The results suggest that causal inference reaches complete maturity much later in these individuals compared to neurotypical individuals. Furthermore, we discuss the underlying neural mechanisms and network architecture involved in these processes, highlighting that the development of causal inference follows the evolution of the mechanisms subserving multisensory integration. Overall, this study provides a computational framework, unifying causal inference and multisensory integration, which allows us to suggest neural mechanisms and provide testable predictions about the development of such abilities in typically developing and autistic children.
Collapse
Affiliation(s)
- Melissa Monti
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
| | - Sophie Molholm
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
| | - Cristiano Cuppini
- Department of Electrical, Electronic, and Information Engineering Guglielmo Marconi, University of Bologna, Bologna, Italy
| |
Collapse
|
36
|
Quinn M, Hirst RJ, McGovern DP. Distinct profiles of multisensory processing between professional goalkeepers and outfield football players. Curr Biol 2023; 33:R994-R995. [PMID: 37816326 DOI: 10.1016/j.cub.2023.08.050] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 08/14/2023] [Accepted: 08/16/2023] [Indexed: 10/12/2023]
Abstract
In association football (soccer), the position of goalkeeper is the most specialised position in the sport and has the primary objective of stopping the opposing team from scoring. While previous studies have highlighted differences in physiological and match performance profiles between goalkeepers and outfield players1, surprisingly little research has focused on whether goalkeepers differ in terms of their perceptual-cognitive abilities. Given that goalkeepers use multiple sensory cues and are often required to make rapid decisions based on incomplete multisensory information to fulfil their role2, we hypothesised that professional goalkeepers would display enhanced multisensory temporal processing relative to their outfield counterparts. To test this hypothesis, we measured the temporal binding windows - the time window within which signals from the different senses are integrated into a single percept - of professional goalkeepers, professional outfield players, and a control group with no professional football experience using the sound-induced flash illusion3. Our results indicated a marked difference in multisensory processing between the three groups. Specifically, we found that the goalkeepers displayed a narrower temporal binding window relative to both outfielders and control participants, indicating more precise audiovisual timing estimation. However, this enhanced multisensory temporal processing was accompanied by a general reduction in crossmodal interactions relative to the other two groups that could be attributed to an a priori tendency to segregate sensory signals. We propose that these differences stem from the idiosyncratic nature of the goalkeeping position that puts a premium on the ability of goalkeepers to make quick decisions, often based on partial or incomplete sensory information.
Collapse
Affiliation(s)
- Michael Quinn
- School of Psychology, Glasnevin Campus, Dublin City University, Dublin 9, Ireland
| | - Rebecca J Hirst
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland; Open Science Tools (PsychoPy) Lab, School of Psychology, University of Nottingham, University Park Campus, Nottingham NG7 2RD, UK
| | - David P McGovern
- School of Psychology, Glasnevin Campus, Dublin City University, Dublin 9, Ireland.
| |
Collapse
|
37
|
Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286 DOI: 10.1016/j.tics.2023.04.012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 04/24/2023] [Accepted: 04/25/2023] [Indexed: 05/21/2023]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Collapse
Affiliation(s)
- Patrick Bruns
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany.
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
Collapse
|
38
|
Cervantes Constantino F, Sánchez-Costa T, Cipriani GA, Carboni A. Visuospatial attention revamps cortical processing of sound amid audiovisual uncertainty. Psychophysiology 2023; 60:e14329. [PMID: 37166096 DOI: 10.1111/psyp.14329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 04/13/2023] [Accepted: 04/25/2023] [Indexed: 05/12/2023]
Abstract
Selective attentional biases arising from one sensory modality manifest in others. The effects of visuospatial attention, important in visual object perception, are unclear in the auditory domain during audiovisual (AV) scene processing. We investigate temporal and spatial factors that underlie such transfer neurally. Auditory encoding of random tone pips in AV scenes was addressed via a temporal response function model (TRF) of participants' electroencephalogram (N = 30). The spatially uninformative pips were associated with spatially distributed visual contrast reversals ("flips"), through asynchronous probabilistic AV temporal onset distributions. Participants deployed visuospatial selection on these AV stimuli to perform a task. A late (~300 ms) cross-modal influence over the neural representation of pips was found in the original and a replication study (N = 21). Transfer depended on selected visual input being (i) presented during or shortly after a related sound, in relatively limited temporal distributions (<165 ms); (ii) positioned across limited (1:4) visual foreground to background ratios. Neural encoding of auditory input, as a function of visual input, was largest at visual foreground quadrant sectors and lowest at locations opposite to the target. The results indicate that ongoing neural representations of sounds incorporate visuospatial attributes for auditory stream segregation, as cross-modal transfer conveys information that specifies the identity of multisensory signals. A potential mechanism is by enhancing or recalibrating the tuning properties of the auditory populations that represent them as objects. The results account for the dynamic evolution under visual attention of multisensory integration, specifying critical latencies at which relevant cortical networks operate.
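As background for the TRF approach used above, a toy version on simulated data is sketched below: ridge regression from time-lagged stimulus values to a single "EEG" channel. The sampling rate, kernel shape and ridge penalty are assumptions; this is not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a channel as stimulus convolved with a 300-ms kernel (at an
# assumed 100 Hz sampling rate) plus noise, then recover the kernel.
stim = rng.random(1000)
true_trf = np.hanning(30)
eeg = np.convolve(stim, true_trf)[:1000] + rng.normal(0, 0.5, 1000)

lags = 30
X = np.column_stack([np.roll(stim, k) for k in range(lags)])  # X[t, k] = stim[t - k]
X[:lags] = 0                                  # drop wrap-around samples
lam = 10.0                                    # assumed ridge penalty
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)
print(np.corrcoef(trf, true_trf)[0, 1])       # close to 1: kernel recovered
```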
Collapse
Affiliation(s)
- Francisco Cervantes Constantino
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Investigaciones Biológicas "Clemente Estable", Montevideo, Uruguay
| | - Thaiz Sánchez-Costa
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
| | - Germán A Cipriani
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
| | - Alejandra Carboni
- Centro de Investigación Básica en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
- Instituto de Fundamentos y Métodos en Psicología, Facultad de Psicología, Universidad de la República, Montevideo, Uruguay
| |
Collapse
|
39
|
Bertoni T, Mastria G, Akulenko N, Perrin H, Zbinden B, Bassolino M, Serino A. The self and the Bayesian brain: Testing probabilistic models of body ownership through a self-localization task. Cortex 2023; 167:247-272. [PMID: 37586137 DOI: 10.1016/j.cortex.2023.06.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 03/29/2023] [Accepted: 06/19/2023] [Indexed: 08/18/2023]
Abstract
Simple multisensory manipulations can induce the illusory misattribution of external objects to one's own body, allowing body ownership to be investigated experimentally. In this context, body ownership has been conceptualized as the result of the online Bayesian optimal estimation of the probability that an object belongs to the body from the congruence of multisensory inputs. This idea has been highly influential, as it provided a quantitative basis for bottom-up accounts of self-consciousness. However, empirical evidence fully supporting this view is scarce, as the optimality of the putative inference process has not been assessed rigorously. This pre-registered study aimed to fill this gap by testing a Bayesian model of hand ownership based on spatial and temporal visuo-proprioceptive congruences. Model predictions were compared to data from a virtual-reality reaching task, whereby reaching errors induced by a spatio-temporally mismatching virtual hand were used as an implicit proxy of hand ownership. To rigorously test optimality, we compared the Bayesian model with alternative non-Bayesian models of multisensory integration, and independently assessed unisensory components and compared them to model estimates. We found that individually measured values of proprioceptive precision correlated with those fitted from our reaching task, providing compelling evidence that the underlying visuo-proprioceptive integration process approximates Bayesian optimality. Furthermore, reaching errors correlated with explicit ownership ratings at the level of single individuals and single trials. Taken together, these results provide novel evidence that body ownership, a key component of self-consciousness, can be truly described as the bottom-up, behaviourally optimal processing of multisensory inputs.
Collapse
Affiliation(s)
- Tommaso Bertoni
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland
| | - Giulio Mastria
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland
| | - Nikita Akulenko
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland
| | - Henri Perrin
- School of Medicine, Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
| | - Boris Zbinden
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland
| | | | - Andrea Serino
- MySpace Lab, Department of Clinical Neurosciences, University Hospital of Lausanne, University of Lausanne, Lausanne, Switzerland.
| |
Collapse
|
40
|
Fetsch CR, Noppeney U. How the brain controls decision making in a multisensory world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220332. [PMID: 37545306 PMCID: PMC10404917 DOI: 10.1098/rstb.2022.0332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 07/11/2023] [Indexed: 08/08/2023] Open
Abstract
Sensory systems evolved to provide the organism with information about the environment to guide adaptive behaviour. Neuroscientists and psychologists have traditionally considered each sense independently, a legacy of Aristotle and a natural consequence of their distinct physical and anatomical bases. However, from the point of view of the organism, perception and sensorimotor behaviour are fundamentally multi-modal; after all, each modality provides complementary information about the same world. Classic studies revealed much about where and how sensory signals are combined to improve performance, but these tended to treat multisensory integration as a static, passive, bottom-up process. It has become increasingly clear how this approach falls short, ignoring the interplay between perception and action, the temporal dynamics of the decision process and the many ways by which the brain can exert top-down control of integration. The goal of this issue is to highlight recent advances on these higher order aspects of multisensory processing, which together constitute a mainstay of our understanding of complex, natural behaviour and its neural basis. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, Netherlands
| |
Collapse
|
41
|
Marly A, Yazdjian A, Soto-Faraco S. The role of conflict processing in multisensory perception: behavioural and electroencephalography evidence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220346. [PMID: 37545310 PMCID: PMC10404919 DOI: 10.1098/rstb.2022.0346] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Accepted: 07/04/2023] [Indexed: 08/08/2023] Open
Abstract
To form coherent multisensory perceptual representations, the brain must solve a causal inference problem: to decide if two sensory cues originated from the same event and should be combined, or if they came from different events and should be processed independently. According to current models of multisensory integration, during this process, the integrated (common cause) and segregated (different causes) internal perceptual models are entertained. In the present study, we propose that the causal inference process involves competition between these alternative perceptual models that engages the brain mechanisms of conflict processing. To test this hypothesis, we conducted two experiments, measuring reaction times (RTs) and electroencephalography, using an audiovisual ventriloquist illusion paradigm with varying degrees of intersensory disparities. Consistent with our hypotheses, incongruent trials led to slower RTs and higher fronto-medial theta power, both indicative of conflict. We also predicted that intermediate disparities would yield slower RTs and higher theta power when compared to congruent stimuli and to large disparities, owing to the steeper competition between causal models. Although this prediction was only validated in the RT study, both experiments displayed the anticipated trend. In conclusion, our findings suggest a potential involvement of the conflict mechanisms in multisensory integration of spatial information. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Adrià Marly
- Center for Brain and Cognition, Universitat Pompeu Fabra, 08005 Barcelona, Spain
| | - Arek Yazdjian
- Center for Brain and Cognition, Universitat Pompeu Fabra, 08005 Barcelona, Spain
| | - Salvador Soto-Faraco
- Center for Brain and Cognition, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, 08010 Barcelona, Spain
| |
Collapse
|
42
|
Noel JP, Bill J, Ding H, Vastola J, DeAngelis GC, Angelaki DE, Drugowitsch J. Causal inference during closed-loop navigation: parsing of self- and object-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220344. [PMID: 37545300 PMCID: PMC10404925 DOI: 10.1098/rstb.2022.0344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 06/20/2023] [Indexed: 08/08/2023] Open
Abstract
A key computation in building adaptive internal models of the external world is to ascribe sensory signals to their likely cause(s), a process of causal inference (CI). CI is well studied within the framework of two-alternative forced-choice tasks, but less well understood within the context of naturalistic action-perception loops. Here, we examine the process of disambiguating retinal motion caused by self- and/or object-motion during closed-loop navigation. First, we derive a normative account specifying how observers ought to intercept hidden and moving targets given their belief about (i) whether retinal motion was caused by the target moving, and (ii) if so, with what velocity. Next, in line with the modelling results, we show that humans report targets as stationary and steer towards their initial rather than final position more often when they are themselves moving, suggesting a putative misattribution of object-motion to the self. Further, we predict that observers should misattribute retinal motion more often: (i) during passive rather than active self-motion (given the lack of an efference copy informing self-motion estimates in the former), and (ii) when targets are presented eccentrically rather than centrally (given that lateral self-motion flow vectors are larger at eccentric locations during forward self-motion). Results support both of these predictions. Lastly, analyses of eye movements show that, while initial saccades toward targets were largely accurate regardless of the self-motion condition, subsequent gaze pursuit was modulated by target velocity during object-only motion, but not during concurrent object- and self-motion. These results demonstrate CI within action-perception loops, and suggest a protracted temporal unfolding of the computations characterizing CI. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - Johannes Bill
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Department of Psychology, Harvard University, Boston, MA 02115, USA
| | - Haoran Ding
- Center for Neural Science, New York University, New York, NY 10003, USA
| | - John Vastola
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
| | - Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, Center for Visual Science, University of Rochester, Rochester, NY 14611, USA
| | - Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA
- Tandon School of Engineering, New York University, New York, NY 10003, USA
| | - Jan Drugowitsch
- Department of Neurobiology, Harvard University, Boston, MA 02115, USA
- Center for Brain Science, Harvard University, Boston, MA 02115, USA
| |
Collapse
|
43
|
Maynes R, Faulkner R, Callahan G, Mims CE, Ranjan S, Stalzer J, Odegaard B. Metacognitive awareness in the sound-induced flash illusion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220347. [PMID: 37545312 PMCID: PMC10404924 DOI: 10.1098/rstb.2022.0347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 06/27/2023] [Indexed: 08/08/2023] Open
Abstract
Hundreds (if not thousands) of multisensory studies provide evidence that the human brain can integrate temporally and spatially discrepant stimuli from distinct modalities into a singular event. This process of multisensory integration is usually portrayed in the scientific literature as contributing to our integrated, coherent perceptual reality. However, missing from this account is an answer to a simple question: how do confidence judgements compare between multisensory information that is integrated across multiple sources, and multisensory information that comes from a single, congruent source in the environment? In this paper, we use the sound-induced flash illusion to investigate if confidence judgements are similar across multisensory conditions when the numbers of auditory and visual events are the same, and the numbers of auditory and visual events are different. Results showed that congruent audiovisual stimuli produced higher confidence than incongruent audiovisual stimuli, even when the perceptual report was matched across the two conditions. Integrating these behavioural findings with recent neuroimaging and theoretical work, we discuss the role that prefrontal cortex may play in metacognition, multisensory causal inference and sensory source monitoring in general. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- Randolph Maynes
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Ryan Faulkner
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Grace Callahan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Callie E. Mims
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
- Psychology Department, University of South Alabama, Mobile, 36688, AL, USA
| | - Saurabh Ranjan
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Justine Stalzer
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| | - Brian Odegaard
- University of Florida, 945 Center Drive, Gainesville, FL 32603, USA
| |
Collapse
|
44
|
Meijer D, Noppeney U. Metacognition in the audiovisual McGurk illusion: perceptual and causal confidence. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220348. [PMID: 37545307 PMCID: PMC10404922 DOI: 10.1098/rstb.2022.0348] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 07/02/2023] [Indexed: 08/08/2023] Open
Abstract
Almost all decisions in everyday life rely on multiple sensory inputs that can come from common or independent causes. These situations invoke perceptual uncertainty about environmental properties and the signals' causal structure. Using the audiovisual McGurk illusion, this study investigated how observers formed perceptual and causal confidence judgements in information integration tasks under causal uncertainty. Observers were presented with spoken syllables, their corresponding articulatory lip movements or their congruent and McGurk combinations (e.g. auditory B/P with visual G/K). Observers reported their perceived auditory syllable, the causal structure and confidence for each judgement. Observers were more accurate and confident on congruent than unisensory trials. Their perceptual and causal confidence were tightly related over trials as predicted by the interactive nature of perceptual and causal inference. Further, observers assigned comparable perceptual and causal confidence to veridical 'G/K' percepts on audiovisual congruent trials and their causal and perceptual metamers on McGurk trials (i.e. illusory 'G/K' percepts). Thus, observers metacognitively evaluate the integrated audiovisual percept with limited access to the conflicting unisensory stimulus components on McGurk trials. Collectively, our results suggest that observers form meaningful perceptual and causal confidence judgements about multisensory scenes that are qualitatively consistent with principles of Bayesian causal inference. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Collapse
Affiliation(s)
- David Meijer
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, 1040, Wien, Austria
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Kapittelweg 29, 6525 EN, Nijmegen, The Netherlands
| |
Collapse
|
45
|
Debats NB, Heuer H, Kayser C. Different time scales of common-cause evidence shape multisensory integration, recalibration and motor adaptation. Eur J Neurosci 2023; 58:3253-3269. [PMID: 37461244 DOI: 10.1111/ejn.16095] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 07/03/2023] [Indexed: 09/05/2023]
Abstract
Perceptual coherence in the face of discrepant multisensory signals is achieved via the processes of multisensory integration, recalibration and sometimes motor adaptation. These supposedly operate on different time scales, with integration reducing immediate sensory discrepancies and recalibration and motor adaptation reflecting the cumulative influence of their recent history. Importantly, whether discrepant signals are bound during perception is guided by the brain's inference of whether they originate from a common cause. When combined, these two notions lead to the hypothesis that the time scales on which integration and recalibration (or motor adaptation) operate are associated with different time scales of evidence about a common cause underlying two signals. We tested this prediction in a well-established visuo-motor paradigm, in which human participants performed visually guided hand movements. The kinematic correlation between hand and cursor movements indicates their common origin, which allowed us to manipulate the common-cause evidence by titrating this correlation. Specifically, we dissociated hand and cursor signals during individual movements while preserving their correlation across the series of movement endpoints. In line with our hypothesis, this manipulation reduced integration compared with a condition in which visual and proprioceptive signals were perfectly correlated. In contrast, recalibration and motor adaptation were not affected by this manipulation. This supports the notion that multisensory integration and recalibration deal with sensory discrepancies on different time scales guided by common-cause evidence: Integration is prompted by local common-cause evidence and reduces immediate discrepancies, whereas recalibration and motor adaptation are prompted by global common-cause evidence and reduce persistent discrepancies.
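The key manipulation, preserving the endpoint (global) correlation between hand and cursor while removing their within-movement (local) correlation, can be illustrated schematically. The simulation below is a minimal sketch under assumed trajectory statistics, not the authors' stimulus code; it confirms that the decoupled condition keeps endpoint correlation high while within-movement velocity correlation collapses.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_steps = 300, 60
    t = np.linspace(0.0, 1.0, n_steps)

    def bridge():
        """Smooth zero-endpoint noise (Brownian bridge) added to a straight reach."""
        w = np.cumsum(rng.normal(0.0, 0.15, n_steps))
        return w - t * w[-1]

    local_r = {"coupled": [], "decoupled": []}
    endpoints = {"coupled": [], "decoupled": []}
    for _ in range(n_trials):
        goal = rng.normal(0.0, 5.0)                           # endpoint for this trial
        hand = goal * t + bridge()                            # hand trajectory
        cursor_coupled = hand + rng.normal(0.0, 0.05, n_steps)  # moves with the hand
        cursor_decoupled = goal * t + bridge()                # same endpoint, independent path
        for name, cursor in [("coupled", cursor_coupled), ("decoupled", cursor_decoupled)]:
            local_r[name].append(np.corrcoef(np.diff(hand), np.diff(cursor))[0, 1])
            endpoints[name].append((hand[-1], cursor[-1]))

    for name in ("coupled", "decoupled"):
        h_end, c_end = np.array(endpoints[name]).T
        print(f"{name:10s} local (within-movement) r = {np.mean(local_r[name]):.2f}, "
              f"global (endpoint) r = {np.corrcoef(h_end, c_end)[0, 1]:.2f}")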
Collapse
Affiliation(s)
- Nienke B Debats
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| | - Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| |
Collapse
|
46
|
Kayser C, Park H, Heuer H. Cumulative multisensory discrepancies shape the ventriloquism aftereffect but not the ventriloquism bias. PLoS One 2023; 18:e0290461. [PMID: 37607201 PMCID: PMC10443876 DOI: 10.1371/journal.pone.0290461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 08/08/2023] [Indexed: 08/24/2023] Open
Abstract
Multisensory integration and recalibration are two processes by which perception deals with discrepant signals. Both are often studied in the spatial ventriloquism paradigm. There, integration is probed by the presentation of discrepant audio-visual stimuli, while recalibration manifests as an aftereffect in subsequent judgements of unisensory sounds. Both biases are typically quantified against the degree of audio-visual discrepancy, reflecting the possibility that both may arise from common underlying multisensory principles. We tested a specific prediction of this: that both processes should also scale similarly with the history of multisensory discrepancies, i.e. the sequence of discrepancies in several preceding audio-visual trials. Analyzing data from ten experiments with randomly varying spatial discrepancies, we confirmed the expected dependency of each bias on the immediately presented discrepancy. In line with the aftereffect being a cumulative process, the aftereffect also scaled with the discrepancies presented in at least the three preceding audio-visual trials. However, the ventriloquism bias did not depend on this three-trial history of multisensory discrepancies and also did not depend on the aftereffect biases in previous trials, making these two multisensory processes experimentally dissociable. These findings support the notion that the ventriloquism bias and the aftereffect reflect distinct functions, with integration maintaining a stable percept by reducing immediate sensory discrepancies and recalibration maintaining an accurate percept by accounting for consistent discrepancies.
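The dissociation reported here, a bias driven only by the current discrepancy versus an aftereffect that accumulates over roughly three trials, can be illustrated with a lagged regression on simulated data. The generative weights and noise levels below are assumptions for illustration, not the paper's estimates; the regression recovers decaying lag weights for the aftereffect and a lag-0-only weight for the bias.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5000
    d = rng.uniform(-20.0, 20.0, n)   # audio-visual spatial discrepancy on each trial

    # toy generative assumptions: the ventriloquism bias tracks only the current
    # discrepancy, while the aftereffect leakily accumulates recent discrepancies
    bias = 0.4 * d + rng.normal(0.0, 1.0, n)
    after = np.zeros(n)
    for i in range(1, n):
        after[i] = 0.6 * after[i - 1] + 0.1 * d[i]
    after = after + rng.normal(0.0, 1.0, n)

    # regress each measure on the current and three preceding discrepancies
    idx = np.arange(3, n)
    X = np.column_stack([d[idx - k] for k in range(4)])   # lags 0, 1, 2, 3
    for name, y in [("bias       ", bias[idx]), ("aftereffect", after[idx])]:
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(name, "lag weights:", np.round(coef, 3))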
Collapse
Affiliation(s)
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
| | - Hame Park
- Department of Neurophysiology & Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
| | - Herbert Heuer
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
| |
Collapse
|
47
|
Otsuka T, Yotsumoto Y. Near-optimal integration of the magnitude information of time and numerosity. R Soc Open Sci 2023; 10:230153. [PMID: 37564065 PMCID: PMC10410204 DOI: 10.1098/rsos.230153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 07/20/2023] [Indexed: 08/12/2023]
Abstract
Magnitude information is often correlated in the external world, providing complementary information about the environment. As if to reflect this relationship, the perceptions of different magnitudes (e.g. time and numerosity) are known to influence one another. Recent studies suggest that such magnitude interaction is similar to cue integration, such as multisensory integration. Here, we tested whether human observers could integrate the magnitudes of two quantities with distinct physical units (i.e. time and numerosity) as abstract magnitude information. The participants compared the magnitudes of two visual stimuli based on time, numerosity, or both. Consistent with the predictions of the maximum-likelihood estimation model, the participants integrated time and numerosity in a near-optimal manner; the weight of each dimension was proportional to its relative reliability, and the integrated estimate was more reliable than either the time or numerosity estimate. Furthermore, the integration approached the statistical optimum as the temporal gap between acquiring the two pieces of information became smaller. These results suggest that magnitude interaction arises through a computational mechanism similar to cue integration. They are also consistent with the idea that different magnitudes are processed by a generalized magnitude system.
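The maximum-likelihood prediction tested here is that each cue is weighted by its inverse variance and that the combined estimate is more reliable than either cue alone. Below is a minimal numerical check of that prediction, with assumed single-cue noise levels standing in for the empirically measured ones.

    import numpy as np

    rng = np.random.default_rng(3)
    sigma_t, sigma_n = 2.0, 3.0     # assumed single-cue noise for time and numerosity
    n_trials, s_true = 100000, 10.0

    t_hat = rng.normal(s_true, sigma_t, n_trials)   # time-based estimates
    n_hat = rng.normal(s_true, sigma_n, n_trials)   # numerosity-based estimates

    # reliability (inverse-variance) weighting, as in the MLE model
    w_t = sigma_n**2 / (sigma_t**2 + sigma_n**2)
    combined = w_t * t_hat + (1.0 - w_t) * n_hat

    pred_sigma = np.sqrt(sigma_t**2 * sigma_n**2 / (sigma_t**2 + sigma_n**2))
    print(f"weight on time    : {w_t:.3f}")
    print(f"combined SD (sim) : {combined.std():.3f}")
    print(f"combined SD (MLE) : {pred_sigma:.3f}  vs single cues {sigma_t}, {sigma_n}")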
Collapse
Affiliation(s)
- Taku Otsuka
- Department of Life Sciences, University of Tokyo, Tokyo, Japan
| | - Yuko Yotsumoto
- Department of Life Sciences, University of Tokyo, Tokyo, Japan
| |
Collapse
|
48
|
Chancel M, Ehrsson HH. Proprioceptive uncertainty promotes the rubber hand illusion. Cortex 2023; 165:70-85. [PMID: 37269634 PMCID: PMC10284257 DOI: 10.1016/j.cortex.2023.04.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 03/15/2023] [Accepted: 04/17/2023] [Indexed: 06/05/2023]
Abstract
Body ownership is the multisensory perception of a body as one's own. Recently, the emergence of body ownership illusions like the visuotactile rubber hand illusion has been described by Bayesian causal inference models in which the observer computes the probability that visual and tactile signals come from a common source. Given the importance of proprioception for the perception of one's body, proprioceptive information and its relative reliability should impact this inferential process. We used a detection task based on the rubber hand illusion where participants had to report whether the rubber hand felt like their own or not. We manipulated the degree of asynchrony of visual and tactile stimuli delivered to the rubber hand and the real hand under two levels of proprioceptive noise using tendon vibration applied to the lower arm's antagonist extensor and flexor muscles. As hypothesized, the probability of the emergence of the rubber hand illusion increased with proprioceptive noise. Moreover, this result, well fitted by a Bayesian causal inference model, was best described by a change in the a priori probability of a common cause for vision and touch. These results offer new insights into how proprioceptive uncertainty shapes the multisensory perception of one's own body.
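A simplified synchrony-judgement version of this causal inference model shows the reported effect qualitatively. The parameter values below are illustrative assumptions, not the authors' fits; in their fitting, the noise effect was best explained by an increase in the prior probability of a common cause, which is mirrored here by raising p_common alongside the likelihood width.

    import numpy as np

    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    def p_ownership(asynchrony_ms, sigma, p_common):
        """Posterior that vision and touch share a common cause, read out as the
        probability of reporting the rubber hand as one's own."""
        like_c1 = gauss(asynchrony_ms, 0.0, sigma)  # synchrony expected under a common cause
        like_c2 = 1.0 / 1000.0                      # flat likelihood over a +/-500 ms window
        return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)

    for label, sigma, p_c in [("low proprioceptive noise ", 120.0, 0.5),
                              ("high proprioceptive noise", 180.0, 0.7)]:
        curve = [p_ownership(a, sigma, p_c) for a in (0.0, 150.0, 300.0)]
        print(label, ["%.2f" % p for p in curve])

Running this prints a uniformly higher ownership curve under the high-noise settings, matching the finding that proprioceptive uncertainty promotes the illusion.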
Collapse
Affiliation(s)
- Marie Chancel
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden; Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France.
| | - H Henrik Ehrsson
- Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden
| |
Collapse
|
49
|
Noel JP, Angelaki DE. A theory of autism bridging across levels of description. Trends Cogn Sci 2023; 27:631-641. [PMID: 37183143 PMCID: PMC10330321 DOI: 10.1016/j.tics.2023.04.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 04/18/2023] [Accepted: 04/19/2023] [Indexed: 05/16/2023]
Abstract
Autism impacts a wide range of behaviors and neural functions. As such, theories of autism spectrum disorder (ASD) are numerous and span different levels of description, from neurocognitive to molecular. We propose how existent behavioral, computational, algorithmic, and neural accounts of ASD may relate to one another. Specifically, we argue that ASD may be cast as a disorder of causal inference (computational level). This computation relies on marginalization, which is thought to be subserved by divisive normalization (algorithmic level). In turn, divisive normalization may be impaired by excitatory-to-inhibitory imbalances (neural implementation level). We also discuss ASD within similar frameworks, those of predictive coding and circular inference. Together, we hope to motivate work unifying the different accounts of ASD.
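The algorithmic level invoked here, divisive normalization, has a compact canonical form: r_i = d_i^n / (sigma^n + sum_j w_j d_j^n). The sketch below is a generic textbook implementation, not code from the paper; the weakened normalization pool is a crude stand-in for the proposed excitatory-to-inhibitory imbalance.

    import numpy as np

    def divisive_normalization(drive, sigma=1.0, n=2.0, pool_weights=None):
        """Each unit's response is divided by the summed activity of a
        normalization pool: r_i = drive_i^n / (sigma^n + sum_j w_j * drive_j^n)."""
        drive = np.asarray(drive, dtype=float) ** n
        w = np.ones_like(drive) if pool_weights is None else np.asarray(pool_weights)
        return drive / (sigma ** n + w @ drive)

    stim = [4.0, 1.0, 0.5]
    print("intact normalization:", np.round(divisive_normalization(stim), 3))
    # weaker divisive (inhibitory) pool: responses are less suppressed by context
    print("weakened pool       :",
          np.round(divisive_normalization(stim, pool_weights=[0.2, 0.2, 0.2]), 3))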
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY, USA.
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
| |
Collapse
|
50
|
Huo H, Liu X, Tang Z, Dong Y, Zhao D, Chen D, Tang M, Qiao X, Du X, Guo J, Wang J, Fan Y. Interhemispheric multisensory perception and Bayesian causal inference. iScience 2023; 26:106706. [PMID: 37250338 PMCID: PMC10214730 DOI: 10.1016/j.isci.2023.106706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 02/07/2023] [Accepted: 04/17/2023] [Indexed: 05/31/2023] Open
Abstract
In daily life, the brain needs to eliminate irrelevant signals and integrate relevant ones to support natural interactions with the surroundings. Previous studies focused on paradigms free of lateral-dominance effects and found that human observers process multisensory signals in a manner consistent with Bayesian causal inference (BCI). However, many human activities involve bilateral interaction and therefore the processing of interhemispheric sensory signals. It remains unclear whether the BCI framework also fits such activities. Here, we presented a bilateral hand-matching task to probe the causal structure of interhemispheric sensory signals. In this task, participants were asked to match ipsilateral visual or proprioceptive cues with the contralateral hand. Our results suggest that interhemispheric causal inference is best captured by the BCI framework. Interhemispheric perceptual biases may shift the strategy models observers use to estimate contralateral multisensory signals. These findings help clarify how the brain handles the uncertainty of interhemispheric sensory signals.
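The "strategy models" mentioned here plausibly correspond to the standard decision rules for reading out a BCI posterior. Below is a minimal sketch of the three rules usually compared in this literature (model averaging, model selection, probability matching); the posterior and estimate values are hypothetical.

    import numpy as np

    def bci_readout(p_common, s_fused, s_segregated, rng=None):
        """Standard decision rules for turning a BCI posterior into one estimate."""
        if rng is None:
            rng = np.random.default_rng(4)
        averaging = p_common * s_fused + (1 - p_common) * s_segregated   # model averaging
        selection = s_fused if p_common > 0.5 else s_segregated          # model selection
        matching = s_fused if rng.random() < p_common else s_segregated  # probability matching
        return averaging, selection, matching

    avg, sel, match = bci_readout(p_common=0.7, s_fused=2.0, s_segregated=5.0)
    print(f"model averaging     : {avg:.2f}")
    print(f"model selection     : {sel:.2f}")
    print(f"probability matching: {match:.2f}")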
Collapse
Affiliation(s)
- Hongqiang Huo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Xiaoyu Liu
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
| | - Zhili Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Ying Dong
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Di Zhao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Duo Chen
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Min Tang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Xiaofeng Qiao
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Xin Du
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Jieyi Guo
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Jinghui Wang
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
| | - Yubo Fan
- Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- School of Medical Science and Engineering Medicine, Beihang University, Beijing 100083, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100083, China
| |
Collapse
|