1.
Artigas C, Morales-Torres R, Rojas-Thomas F, Villena-González M, Rubio I, Ramírez-Benavides D, Bekinschtein T, Campos-Arteaga G, Rodríguez E. When alertness fades: Drowsiness-induced visual dominance and oscillatory recalibration in audiovisual integration. Int J Psychophysiol 2025; 212:112562. [PMID: 40187499 DOI: 10.1016/j.ijpsycho.2025.112562]
Abstract
Multisensory integration allows the brain to align inputs from different sensory modalities, enhancing perception and behavior. However, the transition into drowsiness, a state marked by decreased attentional control and altered cortical dynamics, offers a unique opportunity to examine adaptations in these multisensory processes. In this study, we investigated how drowsiness influences reaction times (RTs) and neural oscillations during audiovisual multisensory integration. Participants performed a task in which auditory and visual stimuli were presented either in a coordinated manner or with temporal misalignment (visual-first or auditory-first uncoordinated conditions). Behavioral results showed that drowsiness slowed RTs overall but revealed a clear sensory dominance effect: visual-first uncoordination facilitated RTs compared to auditory-first uncoordination, reflecting vision's dominant role in recalibrating sensory conflicts. In contrast, RTs in coordinated conditions remained stable across alert and drowsy states, suggesting that multisensory redundancy compensates for reduced cortical integration during drowsiness. At the neural level, distinct patterns of oscillatory activity emerged: alpha oscillations supported attentional realignment and temporal alignment in visual-first conditions, while gamma oscillations were recruited during auditory-first uncoordination, reflecting heightened sensory-specific processing demands. These effects were state-dependent, becoming more pronounced during drowsiness. Our findings demonstrate that drowsiness fundamentally reshapes multisensory integration by amplifying sensory dominance mechanisms, particularly that of vision. Compensatory neural mechanisms involving alpha and gamma oscillations maintain perceptual coherence under conditions of reduced cortical interaction. These results provide critical insights into how the brain adapts to sensory conflicts during states of diminished awareness, with broader implications for performance and decision-making in real-world drowsy states.
Affiliation(s)
- Claudio Artigas
- Departamento de Ciencias Biológicas, Universidad Autónoma de Chile, Santiago, RM, Chile
- Felipe Rojas-Thomas
- Center for Social and Cognitive Neuroscience, School of Psychology, Universidad Adolfo Ibáñez, Santiago, Chile
- Iván Rubio
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
- Tristán Bekinschtein
- Consciousness and Cognition Laboratory, Department of Psychology, University of Cambridge, Cambridge, UK
- Eugenio Rodríguez
- Psychology Department, Pontificia Universidad Católica de Chile, Santiago, RM, Chile
2.
Yakouma MA, Anson E, Crane BT. Effect of inverted visual acceleration profile on vestibular heading perception. PLoS One 2025; 20:e0323348. [PMID: 40435269 PMCID: PMC12118926 DOI: 10.1371/journal.pone.0323348]
Abstract
Visual motion is ambiguous in that it can represent either object motion or self-motion. Visual-vestibular integration is most advantageous during self-motion. The current experiment tests the hypothesis that the visual motion needs to have a motion profile consistent with the inertial motion. To test this, we examined the effect on heading perception when the visual stimulus was consistent with the inertial motion compared to an inverted visual stimulus, which was thus inconsistent with inertial motion. Twenty healthy human subjects (mean age 20 ± 3 years, 13 female) experienced 2 s of translation, which they reported as left or right. A synchronized 2 s visual heading was offset by 0°, ±45°, ±60°, or ±75°. In randomly interleaved trials, the visual motion was either consistent with the inertial motion or inverted: it started at the peak velocity, decreased to zero mid-stimulus, and then accelerated back to the peak velocity at the end. When the velocity profile of the visual stimulus matched the velocity profile of inertial motion, the inertial stimulus was biased 10.0 ± 1.8° (mean ± SE) with a 45° visual offset, 8.9 ± 1.7° with a 60° offset, and 9.3 ± 2.5° with a 75° offset. When the visual stimulus was inverted, and thus inconsistent with the inertial motion, the respective biases were 6.5 ± 1.5°, 5.6 ± 1.7°, and 5.9 ± 2.0°. The biases with the inverted stimulus were significantly smaller (p < 0.0001), demonstrating that the visual motion profile, rather than simple trajectory endpoints, is considered in multisensory integration.
Affiliation(s)
- Miguel A. Yakouma
- Department of Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America
- Eric Anson
- Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America
- Department of Neuroscience, University of Rochester, Rochester, New York, United States of America
- Benjamin T. Crane
- Department of Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Department of Otolaryngology, University of Rochester, Rochester, New York, United States of America
- Department of Neuroscience, University of Rochester, Rochester, New York, United States of America
3.
Lonergan B, Seemungal BM, Ciocca M, Tai YF. The Effects of Deep Brain Stimulation on Balance in Parkinson's Disease as Measured Using Posturography-A Narrative Review. Brain Sci 2025; 15:535. [PMID: 40426705 PMCID: PMC12109885 DOI: 10.3390/brainsci15050535]
Abstract
BACKGROUND: Postural imbalance with falls affects 80% of patients with Parkinson's disease (PD) at 10 years. Standard PD therapies (e.g., levodopa and/or deep brain stimulation, DBS) are poor at improving postural imbalance. Additionally, the mechanistic complexity of interpreting postural control is a major barrier to improving our understanding of treatment effects. In this paper, we review the effects of DBS on balance as measured using posturography. We also critically appraise the quantitative measures and analyses used in these studies.
METHODS: A literature search was performed independently by two researchers using the PubMed database. Thirty-eight studies are included in this review, with DBS at the subthalamic nucleus (STN; n = 25), globus pallidus internus (GPi; n = 6), ventral intermediate nucleus (VIM)/thalamus (n = 2), and pedunculopontine nucleus (PPN; n = 5).
RESULTS: STN- and GPi-DBS reduce static sway in PD and mitigate the increased sway from levodopa. STN-DBS impairs automatic responses to perturbations, whilst GPi-DBS has a more neutral effect. STN-DBS may promote protective strategies following external perturbations but does not improve adaptation. The evidence regarding the effects on gait initiation is less clear. Insufficient evidence exists to draw conclusions regarding VIM- and PPN-DBS.
CONCLUSIONS: STN- and GPi-DBS have differing effects on posturography, which suggests site-specific and possibly non-dopaminergic mechanisms. Posturography tests should be utilised to answer specific questions regarding the mechanisms of and effects on postural control following DBS. We recommend standardising posturography measures and test conditions by expert consensus, and greater long-term data collection utilising ongoing DBS registries.
Affiliation(s)
- Bradley Lonergan
- Department of Brain Sciences, Imperial College London, London W6 8RF, UK
- Barry M. Seemungal
- Department of Brain Sciences, Imperial College London, London W6 8RF, UK
- Department of Neurology, Charing Cross Hospital, Imperial College Healthcare Trust (ICHT), London W2 1NY, UK
- Matteo Ciocca
- Department of Brain Sciences, Imperial College London, London W6 8RF, UK
- Department of Neurology, Charing Cross Hospital, Imperial College Healthcare Trust (ICHT), London W2 1NY, UK
- Yen F. Tai
- Department of Brain Sciences, Imperial College London, London W6 8RF, UK
- Department of Neurology, Charing Cross Hospital, Imperial College Healthcare Trust (ICHT), London W2 1NY, UK
4.
Hsiao A, Block HJ. The role of explicit knowledge in compensating for a visuo-proprioceptive cue conflict. Exp Brain Res 2024; 242:2249-2261. [PMID: 39042277 PMCID: PMC11512547 DOI: 10.1007/s00221-024-06898-5]
Abstract
It is unclear how explicit knowledge of an externally imposed mismatch between visual and proprioceptive cues of hand position affects perceptual recalibration. The Bayesian causal inference framework might suggest such knowledge should abolish the visual and proprioceptive recalibration that occurs when individuals perceive these cues as coming from the same source (their hand), while the visuomotor adaptation literature suggests explicit knowledge of a cue conflict does not eliminate implicit compensatory processes. Here we compared visual and proprioceptive recalibration in three groups with varying levels of knowledge about the visuo-proprioceptive cue conflict. All participants estimated the position of visual, proprioceptive, or combined targets related to their left index fingertip, with a 70 mm visuo-proprioceptive offset gradually imposed. Groups 1, 2, and 3 received no information, medium information, and high information, respectively, about the offset. Information was manipulated using instructional and visual cues. All groups performed the task similarly at baseline in terms of variance, weighting, and integration. Results suggest the three groups recalibrated vision and proprioception differently, but there was no difference in variance or weighting. Participants who received only instructional cues about the mismatch (Group 2) did not recalibrate less, on average, than participants provided no information about the mismatch (Group 1). However, participants provided instructional cues and extra visual cues of their hands during the perturbation (Group 3) demonstrated significantly less recalibration than other groups. These findings are consistent with the idea that instructional cues alone are insufficient to override participants' intrinsic belief in common cause and reduce recalibration.
Affiliation(s)
- Anna Hsiao
- Department of Kinesiology, School of Public Health, Indiana University Bloomington, 1025 E. 7th St., PH 112, Bloomington, IN, 47405, USA
- Hannah J Block
- Department of Kinesiology, School of Public Health, Indiana University Bloomington, 1025 E. 7th St., PH 112, Bloomington, IN, 47405, USA
5.
Li Z, Li Z, Tang W, Yao J, Dou Z, Gong J, Li Y, Zhang B, Dong Y, Xia J, Sun L, Jiang P, Cao X, Yang R, Miao X, Yang R. Crossmodal sensory neurons based on high-performance flexible memristors for human-machine in-sensor computing system. Nat Commun 2024; 15:7275. [PMID: 39179548 PMCID: PMC11344147 DOI: 10.1038/s41467-024-51609-x]
Abstract
Constructing crossmodal in-sensor processing systems based on high-performance flexible devices is of great significance for the development of wearable human-machine interfaces. A bio-inspired crossmodal in-sensor computing system can perform real-time, energy-efficient processing of multimodal signals, alleviating data conversion and transmission between different modules in conventional chips. Here, we report a bio-inspired crossmodal spiking sensory neuron (CSSN) based on a flexible VO2 memristor, and demonstrate a crossmodal in-sensor encoding and computing system for wearable human-machine interfaces. We demonstrate excellent performance in the VO2 memristor, including endurance (>10^12), uniformity (0.72% cycle-to-cycle and 3.73% device-to-device variation), speed (<30 ns), and flexibility (bendable to a curvature radius of 1 mm). A flexible hardware processing system is implemented based on the CSSN, which can directly perceive and encode pressure and temperature bimodal information into spikes, enabling real-time haptic feedback for human-machine interaction. We successfully construct a crossmodal in-sensor spiking reservoir computing system via the CSSNs, which achieves dynamic object identification with a high accuracy of 98.1% and real-time signal feedback. This work provides a feasible approach for constructing flexible bio-inspired crossmodal in-sensor computing systems for wearable human-machine interfaces.
Affiliation(s)
- Zhiyuan Li
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Hubei Yangtze Memory Laboratories, Wuhan, China
- Zhongshao Li
- State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China
- Wei Tang
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Jiaping Yao
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Zhipeng Dou
- State Key Laboratory of Catalysis, CAS Center for Excellence in Nanoscience, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, China
- Junjie Gong
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Yongfei Li
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Beining Zhang
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Yunxiao Dong
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Jian Xia
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Lin Sun
- State Key Laboratory of Catalysis, CAS Center for Excellence in Nanoscience, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, China
- Peng Jiang
- State Key Laboratory of Catalysis, CAS Center for Excellence in Nanoscience, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian, China
- Xun Cao
- State Key Laboratory of High Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, China
- Rui Yang
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Hubei Yangtze Memory Laboratories, Wuhan, China
- Xiangshui Miao
- School of Integrated Circuits, Huazhong University of Science and Technology, Wuhan, China
- Hubei Yangtze Memory Laboratories, Wuhan, China
- Ronggui Yang
- State Key Laboratory of Coal Combustion, School of Energy and Power Engineering, Huazhong University of Science and Technology, Wuhan, China
6.
Madhav MS, Jayakumar RP, Li BY, Lashkari SG, Wright K, Savelli F, Knierim JJ, Cowan NJ. Control and recalibration of path integration in place cells using optic flow. Nat Neurosci 2024; 27:1599-1608. [PMID: 38937582 PMCID: PMC11563580 DOI: 10.1038/s41593-024-01681-9]
Abstract
Hippocampal place cells are influenced by both self-motion (idiothetic) signals and external sensory landmarks as an animal navigates its environment. To continuously update a position signal on an internal 'cognitive map', the hippocampal system integrates self-motion signals over time, a process that relies on a finely calibrated path integration gain that relates movement in physical space to movement on the cognitive map. It is unclear whether idiothetic cues alone, such as optic flow, exert sufficient influence on the cognitive map to enable recalibration of path integration, or if polarizing position information provided by landmarks is essential for this recalibration. Here, we demonstrate both recalibration of path integration gain and systematic control of place fields by pure optic flow information in freely moving rats. These findings demonstrate that the brain continuously rebalances the influence of conflicting idiothetic cues to fine-tune the neural dynamics of path integration, and that this recalibration process does not require a top-down, unambiguous position signal from landmarks.
Affiliation(s)
- Manu S Madhav
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Ravikrishnan P Jayakumar
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
- Brian Y Li
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Shahin G Lashkari
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
- Kelly Wright
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Francesco Savelli
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Department of Neuroscience, Developmental and Regenerative Biology, The University of Texas at San Antonio, San Antonio, TX, USA
- James J Knierim
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Noah J Cowan
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
7.
Ma S, Zhou Y, Wan T, Ren Q, Yan J, Fan L, Yuan H, Chan M, Chai Y. Bioinspired In-Sensor Multimodal Fusion for Enhanced Spatial and Spatiotemporal Association. Nano Lett 2024; 24:7091-7099. [PMID: 38804877 DOI: 10.1021/acs.nanolett.4c01727]
Abstract
Multimodal perception can capture more precise and comprehensive information than unimodal approaches. However, current sensory systems typically merge multimodal signals at computing terminals after parallel processing and transmission, which risks losing spatial association information and requires time stamps to maintain temporal coherence for time-series data. Here we demonstrate bioinspired in-sensor multimodal fusion, which effectively enhances comprehensive perception and reduces data transfer between sensory terminals and computation units. By adopting floating-gate phototransistors with reconfigurable photoresponse plasticity, we realize agile spatial and spatiotemporal fusion under nonvolatile and volatile photoresponse modes. To realize an optimal spatial estimation, we integrate spatial information from visual-tactile signals. For dynamic events, we capture and fuse spatiotemporal information from visual-audio signals in real time, realizing a dance-music synchronization recognition task without a time-stamping process. This in-sensor multimodal fusion approach provides the potential to simplify the multimodal integration system, extending the in-sensor computing paradigm.
Affiliation(s)
- Sijie Ma
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Yue Zhou
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Tianqing Wan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Qinqi Ren
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Jianmin Yan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Lingwei Fan
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Huanmei Yuan
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, People's Republic of China
- Mansun Chan
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong 999077, People's Republic of China
- Yang Chai
- Department of Applied Physics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
- Joint Research Centre of Microelectronics, The Hong Kong Polytechnic University, Kowloon, Hong Kong 999077, People's Republic of China
8.
Jordan J, Sacramento J, Wybo WAM, Petrovici MA, Senn W. Conductance-based dendrites perform Bayes-optimal cue integration. PLoS Comput Biol 2024; 20:e1012047. [PMID: 38865345 PMCID: PMC11168673 DOI: 10.1371/journal.pcbi.1012047]
Abstract
A fundamental function of cortical circuits is the integration of information from different sources to form a reliable basis for behavior. While animals behave as if they optimally integrate information according to Bayesian probability theory, the implementation of the required computations in the biological substrate remains unclear. We propose a novel, Bayesian view on the dynamics of conductance-based neurons and synapses which suggests that they are naturally equipped to optimally perform information integration. In our approach apical dendrites represent prior expectations over somatic potentials, while basal dendrites represent likelihoods of somatic potentials. These are parametrized by local quantities, the effective reversal potentials and membrane conductances. We formally demonstrate that under these assumptions the somatic compartment naturally computes the corresponding posterior. We derive a gradient-based plasticity rule, allowing neurons to learn desired target distributions and weight synaptic inputs by their relative reliabilities. Our theory explains various experimental findings on the system and single-cell level related to multi-sensory integration, which we illustrate with simulations. Furthermore, we make experimentally testable predictions on Bayesian dendritic integration and synaptic plasticity.
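For independent Gaussian cues, the Bayes-optimal integration that the abstract describes (weighting synaptic inputs by their relative reliabilities) reduces to the standard inverse-variance-weighted average. A minimal sketch of that normative computation follows; the numbers are illustrative, not taken from the paper, and this is the textbook Gaussian model rather than the authors' conductance-based implementation.

```python
# Precision-weighted (Bayes-optimal) fusion of two independent Gaussian cues.
# Each cue's reliability is its inverse variance; the posterior is a
# reliability-weighted average with reduced variance.

def fuse_gaussian_cues(mu1, var1, mu2, var2):
    """Return the posterior mean and variance for two Gaussian cues."""
    w1 = 1.0 / var1                      # reliability of cue 1
    w2 = 1.0 / var2                      # reliability of cue 2
    post_var = 1.0 / (w1 + w2)           # posterior variance shrinks
    post_mu = post_var * (w1 * mu1 + w2 * mu2)
    return post_mu, post_var

# The more reliable cue (smaller variance) dominates the fused estimate:
mu, var = fuse_gaussian_cues(mu1=0.0, var1=1.0, mu2=10.0, var2=4.0)
# posterior mean = 0.8 * (1.0*0 + 0.25*10) = 2.0; posterior variance = 0.8
```

Note that the posterior variance (0.8) is smaller than either cue's variance alone, which is the signature benefit of optimal integration.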
Affiliation(s)
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Electrical Engineering, Yale University, New Haven, Connecticut, United States of America
- João Sacramento
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroinformatics, UZH / ETH Zurich, Zurich, Switzerland
- Willem A. M. Wybo
- Department of Physiology, University of Bern, Bern, Switzerland
- Institute of Neuroscience and Medicine, Forschungszentrum Jülich, Jülich, Germany
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
9.
Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. PLoS One 2024; 19:e0297792. [PMID: 38722936 PMCID: PMC11081312 DOI: 10.1371/journal.pone.0297792]
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A, B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
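The identity at the heart of the abstract: if A and B are conditionally independent given C, then P(C | A, B) is proportional to P(A | C) P(B | C) P(C), so the full three-way dependency never has to be measured. A small sketch of the repeated-diagnostic-testing example follows; the prevalence and test characteristics are made-up numbers chosen purely for illustration.

```python
# Combine two conditionally independent pieces of evidence via Bayes' rule:
#   P(C | A, B) ∝ P(A | C) * P(B | C) * P(C)

def posterior_given_two_cues(p_c, p_a_given_c, p_b_given_c):
    """p_c: prior over states; p_*_given_c: likelihood of each cue per state."""
    joint = [pa * pb * pc for pa, pb, pc in zip(p_a_given_c, p_b_given_c, p_c)]
    z = sum(joint)                       # normalizing constant
    return [j / z for j in joint]

# States: (disease, healthy). Two positive tests, each with 90% sensitivity
# and a 10% false-positive rate; assumed disease prevalence 1%.
post = posterior_given_two_cues(
    p_c=[0.01, 0.99],
    p_a_given_c=[0.9, 0.1],              # P(test A positive | state)
    p_b_given_c=[0.9, 0.1],              # P(test B positive | state)
)
# joint = [0.9*0.9*0.01, 0.1*0.1*0.99] = [0.0081, 0.0099]
# P(disease | both tests positive) = 0.0081 / 0.0180 = 0.45
```

A single positive test under these numbers would give only about an 8% posterior probability of disease, so the conditional-independence recipe shows concretely how a second independent test sharpens the inference.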
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R. Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
10.
Smithson CH, Duncan EJ, Sait SM, Bretman A. Sensory perception of rivals has trait-dependent effects on plasticity in Drosophila melanogaster. Behav Ecol 2024; 35:arae031. [PMID: 38680228 PMCID: PMC11053361 DOI: 10.1093/beheco/arae031]
Abstract
The social environment has myriad effects on individuals, altering reproduction, immune function, cognition, and aging. Phenotypic plasticity enables animals to respond to heterogeneous environments such as the social environment but requires that they assess those environments accurately. It has been suggested that combinations of sensory cues allow animals to respond rapidly and accurately to changeable environments, but it is unclear whether the same sensory inputs are required in all traits that respond to a particular environmental cue. Drosophila melanogaster males, in the presence of rival males, exhibit a consistent behavioral response by extending mating duration. However, exposure to a rival also results in a reduction in their lifespan, a phenomenon interpreted as a trade-off associated with sperm competition strategies. D. melanogaster perceive their rivals by using multiple sensory cues; interfering with at least two olfactory, auditory, or tactile cues eliminates the extension of mating duration. Here, we assessed whether these same cues were implicated in the lifespan reduction. Removal of combinations of auditory and olfactory cues removed the extended mating duration response to a rival, as previously found. However, we found that these manipulations did not alter the reduction in lifespan of males exposed to rivals or induce any changes in activity patterns, grooming, or male-male aggression. Therefore, our analysis suggests that lifespan reduction is not a cost associated with the behavioral responses to sperm competition. Moreover, this highlights the trait-specific nature of the mechanisms underlying plasticity in response to the same environmental conditions.
Affiliation(s)
- Claire H Smithson
- School of Biology, Faculty of Biological Sciences, University of Leeds, Clarendon Road, Leeds, West Yorkshire, LS2 9JT, United Kingdom
- Elizabeth J Duncan
- School of Biology, Faculty of Biological Sciences, University of Leeds, Clarendon Road, Leeds, West Yorkshire, LS2 9JT, United Kingdom
- Steven M Sait
- School of Biology, Faculty of Biological Sciences, University of Leeds, Clarendon Road, Leeds, West Yorkshire, LS2 9JT, United Kingdom
| | - Amanda Bretman
- School of Biology, Faculty of Biological Sciences, University of Leeds, Clarendon Road, Leeds, West Yorkshire, LS2 9JT, United Kingdom
| |
Collapse
|
11
|
Schnepel P, Paricio-Montesinos R, Ezquerra-Romano I, Haggard P, Poulet JFA. Cortical cellular encoding of thermotactile integration. Curr Biol 2024; 34:1718-1730.e3. [PMID: 38582078 DOI: 10.1016/j.cub.2024.03.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/02/2023] [Revised: 12/24/2023] [Accepted: 03/13/2024] [Indexed: 04/08/2024]
Abstract
Recent evidence suggests that primary sensory cortical regions play a role in the integration of information from multiple sensory modalities. How primary cortical neurons integrate different sources of sensory information is unclear, partly because non-primary sensory input to a cortical sensory region is often weak or modulatory. To address this question, we take advantage of the robust representation of thermal (cooling) and tactile stimuli in mouse forelimb primary somatosensory cortex (fS1). Using a thermotactile detection task, we show that perception of threshold-level cool or tactile stimuli is enhanced when the two are presented simultaneously rather than alone. To investigate the cortical cellular correlates of thermotactile integration, we performed in vivo extracellular recordings from fS1 in awake resting and anesthetized mice during unimodal and bimodal stimulation of the forepaw. Unimodal stimulation evoked thermal- or tactile-specific excitatory and inhibitory responses of fS1 neurons. The most prominent features of combined thermotactile stimulation are the recruitment of unimodally silent fS1 neurons, non-linear integration, and response dynamics that favor longer response durations with additional spikes. Together, we identify quantitative and qualitative changes in cortical encoding that may underlie the improvement in perception of thermotactile surfaces during haptic exploration.
Affiliation(s)
- Philipp Schnepel
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Ricardo Paricio-Montesinos
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Ivan Ezquerra-Romano
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany; Institute of Cognitive Neuroscience, University College London (UCL), London WC1N 3AZ, UK
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London (UCL), London WC1N 3AZ, UK
- James F A Poulet
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Berlin-Buch, Robert-Rössle-Strasse 10, 13125 Berlin, Germany; Neuroscience Research Center, Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany

12
Forti B. The hidden structure of consciousness. Front Psychol 2024; 15:1344033. [PMID: 38650907 PMCID: PMC11033517 DOI: 10.3389/fpsyg.2024.1344033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/24/2023] [Accepted: 03/26/2024] [Indexed: 04/25/2024]
Abstract
According to Loorits, if we want consciousness to be explained in terms of natural sciences, we should be able to analyze its seemingly non-structural aspects, like qualia, in structural terms. However, the studies conducted over the last three decades do not seem to be able to bridge the explanatory gap between physical phenomena and phenomenal experience. One possible way to bridge the explanatory gap is to seek the structure of consciousness within consciousness itself, through a phenomenal analysis of the qualitative aspects of experience. First, this analysis leads us to identify the explanandum concerning the simplest forms of experience not in qualia but in the unitary set of qualities found in early vision. Second, it leads us to hypothesize that consciousness is also made up of non-apparent parts, and that there exists a hidden structure of consciousness. This structure, corresponding to a simple early visual experience, is constituted by a Hierarchy of Spatial Belongings nested within each other. Each individual Spatial Belonging is formed by a primary content and a primary space. The primary content can be traced in the perceptibility of the contents we can distinguish in the phenomenal field. The primary space is responsible for the perceptibility of the content and is not perceptible in itself. However, the phenomenon I refer to as subtraction of visibility allows us to characterize it as phenomenally negative. The hierarchical relationships between Spatial Belongings can ensure the qualitative nature of components of perceptual organization, such as object, background, and detail. The hidden structure of consciousness presents aspects that are decidedly counterintuitive compared to our idea of phenomenal experience. However, on the one hand, the Hierarchy of Spatial Belongings can explain the qualities of early vision and their appearance as a unitary whole, while on the other hand, it might be more easily explicable in terms of brain organization. 
In other words, the hidden structure of consciousness can be considered a bridge structure which, placing itself at an intermediate level between experience and physical properties, can contribute to bridging the explanatory gap.
Affiliation(s)
- Bruno Forti
- Department of Mental Health, Azienda ULSS 1 Dolomiti, Belluno, Italy

13
Oude Lohuis MN, Marchesi P, Olcese U, Pennartz CMA. Triple dissociation of visual, auditory and motor processing in mouse primary visual cortex. Nat Neurosci 2024; 27:758-771. [PMID: 38307971 DOI: 10.1038/s41593-023-01564-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Received: 07/15/2022] [Accepted: 12/19/2023] [Indexed: 02/04/2024]
Abstract
Primary sensory cortices respond to crossmodal stimuli-for example, auditory responses are found in primary visual cortex (V1). However, it remains unclear whether these responses reflect sensory inputs or behavioral modulation through sound-evoked body movement. We address this controversy by showing that sound-evoked activity in V1 of awake mice can be dissociated into auditory and behavioral components with distinct spatiotemporal profiles. The auditory component began at approximately 27 ms, was found in superficial and deep layers and originated from auditory cortex. Sound-evoked orofacial movements correlated with V1 neural activity starting at approximately 80-100 ms and explained auditory frequency tuning. Visual, auditory and motor activity were expressed by different laminar profiles and largely segregated subsets of neuronal populations. During simultaneous audiovisual stimulation, visual representations remained dissociable from auditory-related and motor-related activity. This three-fold dissociability of auditory, motor and visual processing is central to understanding how distinct inputs to visual cortex interact to support vision.
Affiliation(s)
- Matthijs N Oude Lohuis
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Pietro Marchesi
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Umberto Olcese
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Cyriel M A Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, Faculty of Science, University of Amsterdam, Amsterdam, Netherlands
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands

14
Nikbakht N. More Than the Sum of Its Parts: Visual-Tactile Integration in the Behaving Rat. Adv Exp Med Biol 2024; 1437:37-58. [PMID: 38270852 DOI: 10.1007/978-981-99-7611-9_3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/26/2024]
Abstract
We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter we will examine the case of a visual-tactile orientation categorization experiment in rats. We will explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black and white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings made from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a cortical neural circuit involved in modality-invariant processing of objects.
Affiliation(s)
- Nader Nikbakht
- Massachusetts Institute of Technology, Cambridge, MA, USA

15
Jones SA, Noppeney U. Multisensory Integration and Causal Inference in Typical and Atypical Populations. Adv Exp Med Biol 2024; 1437:59-76. [PMID: 38270853 DOI: 10.1007/978-981-99-7611-9_4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/26/2024]
Abstract
Multisensory perception is critical for effective interaction with the environment, but human responses to multisensory stimuli vary across the lifespan and appear changed in some atypical populations. In this review chapter, we consider multisensory integration within a normative Bayesian framework. We begin by outlining the complex computational challenges of multisensory causal inference and reliability-weighted cue integration, and discuss whether healthy young adults behave in accordance with normative Bayesian models. We then compare their behaviour with various other human populations (children, older adults, and those with neurological or neuropsychiatric disorders). In particular, we consider whether the differences seen in these groups are due only to changes in their computational parameters (such as sensory noise or perceptual priors), or whether the fundamental computational principles (such as reliability weighting) underlying multisensory perception may also be altered. We conclude by arguing that future research should aim explicitly to differentiate between these possibilities.
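The causal-inference computation this chapter reviews can be sketched numerically. The sketch below is not from the chapter itself; it is a minimal illustration of the standard model, with arbitrary assumed noise parameters: an observer receives one auditory and one visual location estimate and computes the posterior probability that they arose from a common cause by marginalizing over candidate source locations on a grid.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density N(x; mu, sigma^2)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_common_cause(x_a, x_v, sigma_a=1.0, sigma_v=1.0,
                           sigma_p=10.0, p_common=0.5):
    """P(common cause | x_a, x_v) under a standard causal-inference model,
    computed by numerical marginalization over source locations."""
    grid = [-40.0 + 0.1 * i for i in range(801)]  # candidate source positions
    ds = 0.1
    # C = 1: a single shared source s generated both cues.
    like_c1 = sum(gauss(x_a, s, sigma_a) * gauss(x_v, s, sigma_v) * gauss(s, 0.0, sigma_p)
                  for s in grid) * ds
    # C = 2: two independent sources generated the cues.
    like_a = sum(gauss(x_a, s, sigma_a) * gauss(s, 0.0, sigma_p) for s in grid) * ds
    like_v = sum(gauss(x_v, s, sigma_v) * gauss(s, 0.0, sigma_p) for s in grid) * ds
    num = like_c1 * p_common
    return num / (num + like_a * like_v * (1.0 - p_common))
```

With these assumed parameters, nearby cues (e.g. 0.0 and 0.5) yield a high common-cause posterior, while widely discrepant cues (e.g. 0.0 and 8.0) push the posterior toward independent causes; this discrepancy-dependence is the signature behavior the normative framework predicts.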
Affiliation(s)
- Samuel A Jones
- Department of Psychology, Nottingham Trent University, Nottingham, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands

16
Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. [PMID: 38270851 DOI: 10.1007/978-981-99-7611-9_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/26/2024]
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for organisms is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This is a relatively new concept, in that previous research has largely been conducted in parallel disciplines: much effort has gone either into sensory integration across modalities, using activity summed over a duration of time, or into decision-making with only one sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged that examine how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes in mammals. In this review, we summarize and comment on these studies, which combine the two long-standing parallel fields of multisensory integration and decision-making. We show how the new findings provide insight into the neural mechanisms mediating multisensory information processing in a more complete way.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China

17
Zhao Y, Lu E, Zeng Y. Brain-inspired bodily self-perception model for robot rubber hand illusion. Patterns (N Y) 2023; 4:100888. [PMID: 38106608 PMCID: PMC10724368 DOI: 10.1016/j.patter.2023.100888] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/04/2023] [Revised: 08/21/2023] [Accepted: 11/07/2023] [Indexed: 12/19/2023]
Abstract
The core of bodily self-consciousness involves perceiving ownership of one's body. A central question is how body illusions like the rubber hand illusion (RHI) occur. Existing theoretical models still lack satisfying computational explanations from connectionist perspectives, especially for how the brain encodes body perception and generates illusions from neuronal interactions. Moreover, the integration of disability experiments is also neglected. Here, we integrate biological findings of bodily self-consciousness to propose a brain-inspired bodily self-perception model by which perceptions of bodily self are autonomously constructed without any supervision signals. We successfully validated the model with six RHI experiments and a disability experiment on an iCub humanoid robot and simulated environments. The results show that our model can not only well-replicate the behavioral and neural data of monkeys in biological experiments but also reasonably explain the causes and results of RHI at the neuronal level, thus contributing to the revelation of mechanisms underlying RHI.
Affiliation(s)
- Yuxuan Zhao
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Enmeng Lu
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zeng
- Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Center for Long-term Artificial Intelligence, Beijing, China

18
Suzuki M, Pennartz CMA, Aru J. How deep is the brain? The shallow brain hypothesis. Nat Rev Neurosci 2023; 24:778-791. [PMID: 37891398 DOI: 10.1038/s41583-023-00756-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Accepted: 09/25/2023] [Indexed: 10/29/2023]
Abstract
Deep learning and predictive coding architectures commonly assume that inference in neural networks is hierarchical. However, largely neglected in deep learning and predictive coding architectures is the neurobiological evidence that all hierarchical cortical areas, higher or lower, project to and receive signals directly from subcortical areas. Given these neuroanatomical facts, today's dominance of cortico-centric, hierarchical architectures in deep learning and predictive coding networks is highly questionable; such architectures are likely to be missing essential computational principles the brain uses. In this Perspective, we present the shallow brain hypothesis: hierarchical cortical processing is integrated with a massively parallel process to which subcortical areas substantially contribute. This shallow architecture exploits the computational capacity of cortical microcircuits and thalamo-cortical loops that are not included in typical hierarchical deep learning and predictive coding networks. We argue that the shallow brain architecture provides several critical benefits over deep hierarchical structures and a more complete depiction of how mammalian brains achieve fast and flexible computational capabilities.
Affiliation(s)
- Mototaka Suzuki
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Cyriel M A Pennartz
- Department of Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Jaan Aru
- Institute of Computer Science, University of Tartu, Tartu, Estonia

19
Lange RD, Shivkumar S, Chattoraj A, Haefner RM. Bayesian encoding and decoding as distinct perspectives on neural coding. Nat Neurosci 2023; 26:2063-2072. [PMID: 37996525 PMCID: PMC11003438 DOI: 10.1038/s41593-023-01458-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/17/2021] [Accepted: 09/08/2023] [Indexed: 11/25/2023]
Abstract
The Bayesian brain hypothesis is one of the most influential ideas in neuroscience. However, unstated differences in how Bayesian ideas are operationalized make it difficult to draw general conclusions about how Bayesian computations map onto neural circuits. Here, we identify one such unstated difference: some theories ask how neural circuits could recover information about the world from sensory neural activity (Bayesian decoding), whereas others ask how neural circuits could implement inference in an internal model (Bayesian encoding). These two approaches require profoundly different assumptions and lead to different interpretations of empirical data. We contrast them in terms of motivations, empirical support and relationship to neural data. We also use a simple model to argue that encoding and decoding models are complementary rather than competing. Appreciating the distinction between Bayesian encoding and Bayesian decoding will help to organize future work and enable stronger empirical tests about the nature of inference in the brain.
Affiliation(s)
- Richard D Lange
- Department of Neurobiology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Computer Science, Rochester Institute of Technology, Rochester, NY, USA
- Sabyasachi Shivkumar
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Ankani Chattoraj
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Ralf M Haefner
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA

20
Brannick S, Vibell JF. Motion aftereffects in vision, audition, and touch, and their crossmodal interactions. Neuropsychologia 2023; 190:108696. [PMID: 37793544 DOI: 10.1016/j.neuropsychologia.2023.108696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/28/2023] [Revised: 09/26/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023]
21
Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. bioRxiv 2023:2023.05.03.539321. [PMID: 37646001 PMCID: PMC10461915 DOI: 10.1101/2023.05.03.539321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 09/01/2023]
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C | A, B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
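The identity at the heart of this abstract is easy to make concrete. The sketch below is not the paper's code; it is a minimal Bayes-rule illustration with hypothetical numbers: when A and B are conditionally independent given C (and given not-C), the joint likelihood factorizes, so P(C | A, B) follows from the per-cue likelihoods alone.

```python
def p_c_given_ab(p_c, p_a_c, p_a_nc, p_b_c, p_b_nc):
    """P(C | A, B) under conditional independence of A and B given C
    (and given not-C): the three-way dependency between A, B, and C
    never needs to be measured."""
    num = p_a_c * p_b_c * p_c
    den = num + p_a_nc * p_b_nc * (1.0 - p_c)
    return num / den

# Hypothetical repeated diagnostic test: 1% prevalence, 90% sensitivity,
# 5% false-positive rate per test, two positive results.
p_two_positives = p_c_given_ab(0.01, 0.9, 0.05, 0.9, 0.05)
# One positive test (make B uninformative: P(B|C) = P(B|not-C) = 1).
p_one_positive = p_c_given_ab(0.01, 0.9, 0.05, 1.0, 1.0)
```

With these assumed numbers a second positive test raises the posterior far above the single-test value, mirroring the abstract's repeated-diagnostic-testing example.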
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America

22
Newman PM, Qi Y, Mou W, McNamara TP. Statistically Optimal Cue Integration During Human Spatial Navigation. Psychon Bull Rev 2023; 30:1621-1642. [PMID: 37038031 DOI: 10.3758/s13423-023-02254-w] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Accepted: 02/08/2023] [Indexed: 04/12/2023]
Abstract
In 2007, Cheng and colleagues published their influential review wherein they analyzed the literature on spatial cue interaction during navigation through a Bayesian lens, and concluded that models of optimal cue integration often applied in psychophysical studies could explain cue interaction during navigation. Since then, numerous empirical investigations have been conducted to assess the degree to which human navigators are optimal when integrating multiple spatial cues during a variety of navigation-related tasks. In the current review, we discuss the literature on human cue integration during navigation that has been published since Cheng et al.'s original review. Evidence from most studies demonstrates optimal navigation behavior when humans are presented with multiple spatial cues. However, applications of optimal cue integration models vary in their underlying assumptions (e.g., uninformative priors and decision rules). Furthermore, cue integration behavior depends in part on the nature of the cues being integrated and the navigational task (e.g., homing versus non-home goal localization). We discuss the implications of these models and suggest directions for future research.
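The "statistically optimal" benchmark these navigation studies test against is the standard inverse-variance-weighted cue combination rule. The sketch below is a generic illustration of that rule, not code from the review, and the homing numbers are hypothetical assumptions: each cue's estimate is weighted by its reliability (inverse variance), and the fused estimate is less variable than either cue alone.

```python
def integrate_cues(estimates, sigmas):
    """Reliability-weighted (statistically optimal) cue integration:
    weight each cue by its inverse variance, then renormalize.
    Returns the fused estimate and its standard deviation."""
    reliabilities = [1.0 / s ** 2 for s in sigmas]
    total = sum(reliabilities)
    fused = sum(r * e for r, e in zip(reliabilities, estimates)) / total
    fused_sigma = (1.0 / total) ** 0.5  # variance of fused estimate is 1/total
    return fused, fused_sigma

# Hypothetical homing example: a visual landmark cue says the start point is
# 10.0 m away (sigma 1.0 m); path integration says 14.0 m (sigma 2.0 m).
est, sig = integrate_cues([10.0, 14.0], [1.0, 2.0])
```

The fused estimate sits closer to the more reliable visual cue, and its standard deviation falls below that of the best single cue; deviations from exactly this weighting are what the reviewed experiments use to diagnose sub-optimal integration.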
Affiliation(s)
- Phillip M Newman
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA
- Yafei Qi
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Weimin Mou
- Department of Psychology, P-217 Biological Sciences Building, University of Alberta, Edmonton, Alberta, T6G 2R3, Canada
- Timothy P McNamara
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Avenue South, Nashville, TN, 37240, USA

23
Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. [PMID: 37545309 PMCID: PMC10404930 DOI: 10.1098/rstb.2022.0338] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/03/2023] [Accepted: 04/30/2023] [Indexed: 08/08/2023]
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies on sensory processing, the sensory cortices have been considered for processing sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where the multiple modality inputs converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of the multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea

24
Zeng Z, Zhang C, Gu Y. Visuo-vestibular heading perception: a model system to study multi-sensory decision making. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220334. [PMID: 37545303 PMCID: PMC10404926 DOI: 10.1098/rstb.2022.0334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/19/2022] [Accepted: 05/15/2023] [Indexed: 08/08/2023]
Abstract
Integrating noisy signals across time as well as across sensory modalities, a process termed multi-sensory decision making (MSDM), is an essential strategy for making more accurate and sensitive decisions in complex environments. Although this field is just emerging, recent work from different perspectives, including computational theory, psychophysical behaviour and neurophysiology, has begun to shed new light on MSDM. In the current review, we focus on MSDM using visuo-vestibular heading as a model system. Combining well-controlled behavioural paradigms on virtual-reality systems, single-unit recordings, causal manipulations and computational theory based on spiking activity, recent progress reveals that vestibular signals contain complex temporal dynamics in many brain regions, including unisensory, multi-sensory and sensory-motor association areas. This challenges the brain to integrate cues across time and across sensory modalities, such as optic flow, which mainly carries a motion velocity signal. In addition, new evidence from higher-level decision-related areas, mostly in the posterior and frontal/prefrontal regions, helps revise the conventional view of how signals from different sensory modalities are processed, converged and accumulated moment by moment through neural circuits to form a unified, optimal perceptual decision. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Zhao Zeng
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Ce Zhang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China
- Yong Gu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, People's Republic of China
- University of Chinese Academy of Sciences, 100049 Beijing, People's Republic of China

25
Pennartz CMA, Oude Lohuis MN, Olcese U. How 'visual' is the visual cortex? The interactions between the visual cortex and other sensory, motivational and motor systems as enabling factors for visual perception. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220336. [PMID: 37545313 PMCID: PMC10404929 DOI: 10.1098/rstb.2022.0336] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 01/31/2023] [Accepted: 06/13/2023] [Indexed: 08/08/2023]
Abstract
The definition of the visual cortex is primarily based on evidence that lesions of this area impair visual perception. However, this does not exclude the possibility that the visual cortex processes more than information of retinal origin alone, or that other brain structures contribute to vision. Indeed, research over the past decades has shown that non-visual information, such as neural activity related to reward expectation and value, locomotion, working memory and other sensory modalities, can modulate primary visual cortical responses to retinal inputs. Nevertheless, the function of this non-visual information is poorly understood. Here we review recent evidence, coming primarily from studies in rodents, that non-visual and motor effects in the visual cortex play a role in visual processing itself, for instance by disentangling direct auditory effects on the visual cortex from the effects of sound-evoked orofacial movements. These findings are placed in a broader framework casting vision in terms of predictive processing under the control of frontal, reward- and motor-related systems. In contrast to the prevalent notion that vision is constructed exclusively by the visual cortical system, we propose that visual percepts are generated by a larger network - the extended visual system - spanning other sensory cortices, supramodal areas and frontal systems. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Cyriel M. A. Pennartz: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands
- Matthijs N. Oude Lohuis: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Champalimaud Research, Champalimaud Foundation, 1400-038 Lisbon, Portugal
- Umberto Olcese: Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, Science Park 904, 1098XH Amsterdam, The Netherlands
26
Fetsch CR, Noppeney U. How the brain controls decision making in a multisensory world. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220332. [PMID: 37545306] [PMCID: PMC10404917] [DOI: 10.1098/rstb.2022.0332]
Abstract
Sensory systems evolved to provide the organism with information about the environment to guide adaptive behaviour. Neuroscientists and psychologists have traditionally considered each sense independently, a legacy of Aristotle and a natural consequence of their distinct physical and anatomical bases. However, from the point of view of the organism, perception and sensorimotor behaviour are fundamentally multi-modal; after all, each modality provides complementary information about the same world. Classic studies revealed much about where and how sensory signals are combined to improve performance, but these tended to treat multisensory integration as a static, passive, bottom-up process. It has become increasingly clear how this approach falls short, ignoring the interplay between perception and action, the temporal dynamics of the decision process and the many ways by which the brain can exert top-down control of integration. The goal of this issue is to highlight recent advances on these higher order aspects of multisensory processing, which together constitute a mainstay of our understanding of complex, natural behaviour and its neural basis. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Christopher R. Fetsch: Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Uta Noppeney: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN Nijmegen, Netherlands
27
Badde S, Landy MS, Adams WJ. Multisensory causal inference is feature-specific, not object-based. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220345. [PMID: 37545302] [PMCID: PMC10404918] [DOI: 10.1098/rstb.2022.0345]
Abstract
Multisensory integration depends on causal inference about the sensory signals. We tested whether implicit causal-inference judgements pertain to entire objects or focus on task-relevant object features. Participants in our study judged virtual visual, haptic and visual-haptic surfaces with respect to two features-slant and roughness-against an internal standard in a two-alternative forced-choice task. Modelling of participants' responses revealed that the degree to which their perceptual judgements were based on integrated visual-haptic information varied unsystematically across features. For example, a perceived mismatch between visual and haptic roughness would not deter the observer from integrating visual and haptic slant. These results indicate that participants based their perceptual judgements on a feature-specific selection of information, suggesting that multisensory causal inference proceeds not at the object level but at the level of single object features. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Stephanie Badde: Department of Psychology, Tufts University, 490 Boston Avenue, Medford, MA 02155, USA
- Michael S. Landy: Department of Psychology and Center of Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA
- Wendy J. Adams: Department of Psychology, University of Southampton, 44 Highfield Campus, Southampton SO17 1BJ, UK
|
28
|
Chancel M, Ehrsson HH. Proprioceptive uncertainty promotes the rubber hand illusion. Cortex 2023; 165:70-85. [PMID: 37269634] [PMCID: PMC10284257] [DOI: 10.1016/j.cortex.2023.04.005]
Abstract
Body ownership is the multisensory perception of a body as one's own. Recently, the emergence of body ownership illusions like the visuotactile rubber hand illusion has been described by Bayesian causal inference models in which the observer computes the probability that visual and tactile signals come from a common source. Given the importance of proprioception for the perception of one's body, proprioceptive information and its relative reliability should impact this inferential process. We used a detection task based on the rubber hand illusion where participants had to report whether the rubber hand felt like their own or not. We manipulated the degree of asynchrony of visual and tactile stimuli delivered to the rubber hand and the real hand under two levels of proprioceptive noise using tendon vibration applied to the lower arm's antagonist extensor and flexor muscles. As hypothesized, the probability of the emergence of the rubber hand illusion increased with proprioceptive noise. Moreover, this result, well fitted by a Bayesian causal inference model, was best described by a change in the a priori probability of a common cause for vision and touch. These results offer new insights into how proprioceptive uncertainty shapes the multisensory perception of one's own body.
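The Bayesian causal inference logic in this abstract can be made concrete with the standard two-hypothesis model (one common cause vs. two independent causes) over noisy visual and proprioceptive measurements. The sketch below is not the authors' fitted model; parameter values (noise levels, prior width, prior probability of a common cause) are arbitrary illustrations:

```python
import math

def gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def p_common(x_vis, x_prop, sd_vis, sd_prop, sd_prior, p_c=0.5, mu_prior=0.0):
    """Posterior probability that visual and proprioceptive measurements
    arose from a single common cause (two-hypothesis causal inference)."""
    v_v, v_p, v_0 = sd_vis ** 2, sd_prop ** 2, sd_prior ** 2
    # C = 1: both measurements come from one hidden source s ~ N(mu_prior, v_0);
    # integrating s out gives this closed-form joint likelihood.
    denom = v_v * v_p + v_v * v_0 + v_p * v_0
    num = ((x_vis - x_prop) ** 2 * v_0
           + (x_vis - mu_prior) ** 2 * v_p
           + (x_prop - mu_prior) ** 2 * v_v)
    like_c1 = math.exp(-0.5 * num / denom) / (2 * math.pi * math.sqrt(denom))
    # C = 2: two independent sources, each ~ N(mu_prior, v_0).
    like_c2 = gauss(x_vis, mu_prior, v_v + v_0) * gauss(x_prop, mu_prior, v_p + v_0)
    return p_c * like_c1 / (p_c * like_c1 + (1 - p_c) * like_c2)

# The paper's qualitative effect: with more proprioceptive noise, the same
# visuo-proprioceptive conflict is more plausibly attributed to one source,
# so the probability of a common cause (the illusion) rises.
low_noise = p_common(1.5, -1.5, sd_vis=1.0, sd_prop=0.5, sd_prior=10.0)
high_noise = p_common(1.5, -1.5, sd_vis=1.0, sd_prop=3.0, sd_prior=10.0)
```

Under these assumed parameters the common-cause posterior is larger in the high proprioceptive noise condition, mirroring the reported increase in illusion probability under tendon vibration.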
Affiliation(s)
- Marie Chancel: Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden; Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- H Henrik Ehrsson: Department of Neuroscience, Brain, Body and Self Laboratory, Karolinska Institutet, Sweden
|
29
|
Mertens PEC, Marchesi P, Ruikes TR, Oude Lohuis M, Krijger Q, Pennartz CMA, Lansink CS. Coherent mapping of position and head direction across auditory and visual cortex. Cereb Cortex 2023; 33:7369-7385. [PMID: 36967108] [PMCID: PMC10267650] [DOI: 10.1093/cercor/bhad045]
Abstract
Neurons in primary visual cortex (V1) may not only signal current visual input but also relevant contextual information such as reward expectancy and the subject's spatial position. Such contextual representations need not be restricted to V1 but could participate in a coherent mapping throughout sensory cortices. Here, we show that spiking activity coherently represents a location-specific mapping across auditory cortex (AC) and lateral, secondary visual cortex (V2L) of freely moving rats engaged in a sensory detection task on a figure-8 maze. Single-unit activity of both areas showed extensive similarities in terms of spatial distribution, reliability, and position coding. Importantly, reconstructions of subject position based on spiking activity displayed decoding errors that were correlated between areas. Additionally, we found that head direction, but not locomotor speed or head angular velocity, was an important determinant of activity in AC and V2L. By contrast, variables related to the sensory task cues or to trial correctness and reward were not markedly encoded in AC and V2L. We conclude that sensory cortices participate in coherent, multimodal representations of the subject's sensory-specific location. These may provide a common reference frame for distributed cortical sensory and motor processes and may support crossmodal predictive processing.
Affiliation(s)
- Paul E C Mertens: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
- Pietro Marchesi: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
- Thijs R Ruikes: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
- Matthijs Oude Lohuis: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands; Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Quincy Krijger: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
- Cyriel M A Pennartz: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
- Carien S Lansink: Center for Neuroscience, Faculty of Science, Swammerdam Institute for Life Sciences, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Science Park 904, Amsterdam 1098 XH, The Netherlands
|
30
|
Yan M, Zhang WH, Wang H, Wong KYM. Bimodular continuous attractor neural networks with static and moving stimuli. Phys Rev E 2023; 107:064302. [PMID: 37464697] [DOI: 10.1103/physreve.107.064302]
Abstract
We investigated the dynamical behaviors of bimodular continuous attractor neural networks, each processing a modality of sensory input and interacting with each other. We found that when bumps coexist in both modules, the position of each bump is shifted towards the other input when the intermodular couplings are excitatory and is shifted away when inhibitory. When one intermodular coupling is excitatory while another is moderately inhibitory, temporally modulated population spikes can be generated. On further increase of the inhibitory coupling, momentary spikes will emerge. In the regime of bump coexistence, bump heights are primarily strengthened by excitatory intermodular couplings, but there is a lesser weakening effect due to a bump being displaced from the direct input. When bimodular networks serve as decoders of multisensory integration, we extend the Bayesian framework to show that excitatory and inhibitory couplings encode attractive and repulsive priors, respectively. At low disparity, the bump positions decode the posterior means in the Bayesian framework, whereas at high disparity, multiple steady states exist. In the regime of multiple steady states, the less stable state can be accessed if the input causing the more stable state arrives after a sufficiently long delay. When one input is moving, the bump in the corresponding module is pinned when the moving stimulus is weak, unpinned at intermediate stimulus strength, and tracks the input at strong stimulus strength, and the stimulus strengths for these transitions increase with the velocity of the moving stimulus. These results are important to understanding multisensory integration of static and dynamic stimuli.
Affiliation(s)
- Min Yan: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
- Wen-Hao Zhang: Lyda Hill Department of Bioinformatics, UT Southwestern Medical Center, Dallas, Texas 75390, USA; O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, Texas 75390, USA
- He Wang: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China; Hong Kong University of Science and Technology, Shenzhen Research Institute, Shenzhen 518057, China
- K Y Michael Wong: Department of Physics, Hong Kong University of Science and Technology, Hong Kong SAR, People's Republic of China
|
31
|
Markkula G, Lin YS, Srinivasan AR, Billington J, Leonetti M, Kalantari AH, Yang Y, Lee YM, Madigan R, Merat N. Explaining human interactions on the road by large-scale integration of computational psychological theory. PNAS Nexus 2023; 2:pgad163. [PMID: 37346270] [PMCID: PMC10281388] [DOI: 10.1093/pnasnexus/pgad163]
Abstract
When humans share space in road traffic, as drivers or as vulnerable road users, they draw on their full range of communicative and interactive capabilities. Much remains unknown about these behaviors, but they need to be captured in models if automated vehicles are to coexist successfully with human road users. Empirical studies of human road user behavior implicate a large number of underlying cognitive mechanisms, which taken together are well beyond the scope of existing computational models. Here, we note that for all of these putative mechanisms, computational theories exist in different subdisciplines of psychology, for more constrained tasks. We demonstrate how these separate theories can be generalized from abstract laboratory paradigms and integrated into a computational framework for modeling human road user interaction, combining Bayesian perception, a theory of mind regarding others' intentions, behavioral game theory, long-term valuation of action alternatives, and evidence accumulation decision-making. We show that a model with these assumptions-but not simpler versions of the same model-can account for a number of previously unexplained phenomena in naturalistic driver-pedestrian road-crossing interactions, and successfully predicts interaction outcomes in an unseen data set. Our modeling results contribute to demonstrating the real-world value of the theories from which we draw, and address calls in psychology for cumulative theory-building, presenting human road use as a suitable setting for work of this nature. Our findings also underscore the formidable complexity of human interaction in road traffic, with strong implications for the requirements to set on development and testing of vehicle automation.
Affiliation(s)
- Gustav Markkula: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK; School of Psychology, University of Leeds, LS2 9JT Leeds, UK
- Yi-Shin Lin: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK
- Jac Billington: School of Psychology, University of Leeds, LS2 9JT Leeds, UK
- Matteo Leonetti: Department of Informatics, King's College London, WC2B 4BG London, UK
- Yue Yang: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK
- Yee Mun Lee: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK
- Ruth Madigan: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK
- Natasha Merat: Institute for Transport Studies, University of Leeds, LS2 9JT Leeds, UK
|
32
|
Sharif M, Saman Y, Burling R, Rea O, Patel R, Barrett DJK, Rea P, Kheradmand A, Arshad Q. Altered visual conscious awareness in patients with vestibular dysfunctions: a cross-sectional observation study. J Neurol Sci 2023; 448:120617. [PMID: 36989587] [PMCID: PMC10112837] [DOI: 10.1016/j.jns.2023.120617]
Abstract
BACKGROUND Patients with vestibular dysfunctions often experience visual-induced symptoms. Here we asked whether such visual dependence is related to alterations in visual conscious awareness in these patients. METHODS To measure visual conscious awareness, we used motion-induced blindness (MIB), in which perceptual awareness of a visual stimulus alternates despite its unchanged physical characteristics: a salient visual target spontaneously disappears from, and subsequently reappears in, visual perception when presented against a moving visual background. The number of perceptual switches during the MIB stimulus was measured for 120 s in 15 healthy controls, 15 patients with vestibular migraine (VM), 15 patients with benign paroxysmal positional vertigo (BPPV) and 15 with migraine without vestibular symptoms. RESULTS Patients with vestibular dysfunctions (i.e., both VM and BPPV) exhibited increased perceptual fluctuations during MIB compared with healthy controls and migraine patients without vertigo. In VM patients, those with more severe symptoms exhibited higher fluctuations of visual awareness (a positive correlation), whereas in BPPV patients, those with more severe symptoms had lower fluctuations of visual awareness (a negative correlation). IMPLICATIONS Taken together, these findings show that fluctuations of visual awareness are linked to the severity of visual-induced symptoms in patients with vestibular dysfunctions, and that distinct pathophysiological mechanisms may mediate visual vertigo in peripheral versus central vestibular dysfunctions.
Affiliation(s)
- Mishaal Sharif: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Yougan Saman: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK; Balance Clinic, E.N.T Department, Leicester Royal Infirmary, Leicester LE1 5WW, UK
- Rose Burling: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Oliver Rea: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Rakesh Patel: Faculty of Health and Life Sciences, De Montfort University, The Gateway, Leicester LE1 9BH, UK
- Douglas J K Barrett: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK
- Peter Rea: Balance Clinic, E.N.T Department, Leicester Royal Infirmary, Leicester LE1 5WW, UK
- Amir Kheradmand: Department of Neurology, The Johns Hopkins University, Baltimore, MD, USA; Department of Neuroscience, The Johns Hopkins University, Baltimore, MD, USA
- Qadeer Arshad: inAmind Laboratory, Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester LE1 7RH, UK; Neuro-Otology Unit, Division of Brain Sciences, Charing Cross Hospital Campus, Imperial College London, Fulham Palace Road, London W6 8RF, UK
|
33
|
Rios D, Katzman N, Burdick KJ, Gellert M, Klein J, Bitan Y, Schlesinger JJ. Multisensory alarm to benefit alarm identification and decrease workload: a feasibility study. J Clin Monit Comput 2023. [PMID: 37133627] [PMCID: PMC10154742] [DOI: 10.1007/s10877-023-01014-4]
Abstract
The poor design of conventional auditory medical alarms has contributed to alarm desensitization and, eventually, alarm fatigue in medical personnel. This study tested a novel multisensory alarm system that aims to help medical personnel interpret and respond to alarm annunciation during periods of high cognitive load, such as those found within intensive care units. We tested a multisensory alarm that combined auditory and vibrotactile cues to convey alarm type, alarm priority and patient identity. Testing was done in three phases: Control (conventional auditory alarms), Half (limited multisensory alarm) and Full (complete multisensory alarm). Participants (N = 19, undergraduates) identified alarm type, priority and patient identity (patient 1 or 2) using conventional and multisensory alarms while simultaneously completing a cognitively demanding task. Performance was based on reaction time (RT) and identification accuracy of alarm type and priority. Participants also reported their perceived workload. RT was significantly faster in the Control phase (p < 0.05). Participant performance in identifying alarm type, priority and patient did not differ significantly between the three phases (p = 0.87, 0.37 and 0.14, respectively). The Half multisensory phase produced the lowest mental demand, temporal demand and overall perceived workload scores. These data suggest that a multisensory alarm conveying alarm and patient information may decrease perceived workload without significantly changing alarm identification performance. Additionally, a ceiling effect may exist for multisensory stimuli, with only part of an alarm benefitting from multisensory integration.
Affiliation(s)
- Derek Rios: Department of Neuroscience, Vanderbilt University, Nashville, TN, 37235, USA
- Nuphar Katzman: Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be'er Sheva, Israel
- May Gellert: Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Be'er Sheva, Israel
- Jessica Klein: Vanderbilt University School of Medicine, 1161 21st Ave South, Nashville, TN, 37232, USA
- Yuval Bitan: Department of Health Policy and Management, Ben-Gurion University of the Negev, Be'er Sheva, Israel
- Joseph J Schlesinger: Division of Critical Care Medicine, Vanderbilt University Medical Center, Nashville, TN, 37209, USA
|
34
|
Okray Z, Jacob PF, Stern C, Desmond K, Otto N, Talbot CB, Vargas-Gutierrez P, Waddell S. Multisensory learning binds neurons into a cross-modal memory engram. Nature 2023; 617:777-784. [PMID: 37100911] [PMCID: PMC10208976] [DOI: 10.1038/s41586-023-06013-8]
Abstract
Associating multiple sensory cues with objects and experience is a fundamental brain process that improves object recognition and memory performance. However, neural mechanisms that bind sensory features during learning and augment memory expression are unknown. Here we demonstrate multisensory appetitive and aversive memory in Drosophila. Combining colours and odours improved memory performance, even when each sensory modality was tested alone. Temporal control of neuronal function revealed visually selective mushroom body Kenyon cells (KCs) to be required for enhancement of both visual and olfactory memory after multisensory training. Voltage imaging in head-fixed flies showed that multisensory learning binds activity between streams of modality-specific KCs so that unimodal sensory input generates a multimodal neuronal response. Binding occurs between regions of the olfactory and visual KC axons, which receive valence-relevant dopaminergic reinforcement, and is propagated downstream. Dopamine locally releases GABAergic inhibition to permit specific microcircuits within KC-spanning serotonergic neurons to function as an excitatory bridge between the previously 'modality-selective' KC streams. Cross-modal binding thereby expands the KCs representing the memory engram for each modality into those representing the other. This broadening of the engram improves memory performance after multisensory learning and permits a single sensory feature to retrieve the memory of the multimodal experience.
Affiliation(s)
- Zeynep Okray: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
- Pedro F Jacob: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
- Ciara Stern: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
- Kieran Desmond: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
- Nils Otto: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK; Institute of Anatomy and Molecular Neurobiology, Westfälische Wilhelms-Universität Münster, Münster, Germany
- Clifford B Talbot: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
- Scott Waddell: Centre for Neural Circuits & Behaviour, University of Oxford, Oxford, UK
|
35
|
Verbe A, Martinez D, Viollet S. Sensory fusion in the hoverfly righting reflex. Sci Rep 2023; 13:6138. [PMID: 37061548] [PMCID: PMC10105705] [DOI: 10.1038/s41598-023-33302-z]
Abstract
We study how falling hoverflies use sensory cues to trigger appropriate roll righting behavior. Before being released in a free fall, flies were placed upside-down with their legs contacting the substrate. The prior leg proprioceptive information about their initial orientation sufficed for the flies to right themselves properly. However, flies also use visual and antennal cues to recover faster and disambiguate sensory conflicts. Surprisingly, in one of the experimental conditions tested, hoverflies flew upside-down while still actively flapping their wings. In all the other conditions, flies were able to right themselves using two roll dynamics: fast (~50 ms) and slow (~110 ms), in the presence of consistent and conflicting cues, respectively. These findings suggest that a nonlinear sensory integration of the three types of sensory cues occurred. A ring attractor model was developed and discussed to account for this cue integration process.
Affiliation(s)
- Anna Verbe: Aix-Marseille Université, CNRS, ISM, 13009, Marseille, France; PNI, Princeton University, Washington Road, Princeton, NJ, 08540, USA
- Dominique Martinez: Aix-Marseille Université, CNRS, ISM, 13009, Marseille, France; Université de Lorraine, CNRS, LORIA, 54000, Nancy, France
|
36
|
Factors influencing clinical outcome in vestibular neuritis - A focussed review and reanalysis of prospective data. J Neurol Sci 2023; 446:120579. [PMID: 36807973] [DOI: 10.1016/j.jns.2023.120579]
Abstract
Following vestibular neuritis (VN), long term prognosis is not dependent on the magnitude of the residual peripheral function as measured with either caloric or the video head-impulse test. Rather, recovery is determined by a combination of visuo-vestibular (visual dependence), psychological (anxiety) and vestibular perceptual factors. Our recent research in healthy individuals has also revealed a strong association between the degree of lateralisation of vestibulo-cortical processing and gating of vestibular signals, anxiety and visual dependence. In the context of several functional brain changes occurring in the interaction between visual, vestibular and emotional cortices, which underpin the aforementioned psycho-physiological features in patients with VN, we re-examined our previously published findings focusing on additional factors impacting long term clinical outcome and function. These included: (i) the role of concomitant neuro-otological dysfunction (i.e. migraine and benign paroxysmal positional vertigo (BPPV)) and (ii) the degree to which brain lateralisation of vestibulo-cortical processing influences gating of vestibular function in the acute stage. We found that migraine and BPPV interfere with symptomatic recovery following VN. That is, dizziness handicap at short-term recovery stage was significantly predicted by migraine (r = 0.523, n = 28, p = .002), BPPV (r = 0.658, n = 31, p < .001) and acute visual dependency (r = 0.504, n = 28, p = .003). Moreover, dizziness handicap in the long-term recovery stage continued to be predicted by migraine (r = 0.640, n = 22, p = .001), BPPV (r = 0.626, n = 24, p = .001) and acute visual dependency (r = 0.667, n = 22, p < .001). Furthermore, surrogate measures of vestibulo-cortical lateralisation were predictive of the amount of cortical suppression exerted over vestibular thresholds. 
That is, in right-sided VN patients, we observed a positive correlation between visual dependence and acute ipsilesional oculomotor thresholds (R2 0.497; p < .001), but not contralateral thresholds (R2 0.017; p > .05). In left-sided VN patients, we observed a negative correlation between visual dependence and ipsilesional oculomotor thresholds (R2 0.459; p < .001), but not for contralateral thresholds (R2 0.013; p > .05). To summarise, our findings illustrate that in VN, neuro-otological co-morbidities retard recovery, and that measures of the peripheral vestibular system are an aggregate of residual function and cortically mediated gating of vestibular input.
37
Rineau AL, Bringoux L, Sarrazin JC, Berberian B. Being active over one's own motion: Considering predictive mechanisms in self-motion perception. Neurosci Biobehav Rev 2023; 146:105051. [PMID: 36669748 DOI: 10.1016/j.neubiorev.2023.105051] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 01/16/2023] [Accepted: 01/16/2023] [Indexed: 01/19/2023]
Abstract
Self-motion perception is a key element guiding pilots' behavior. Its importance is mostly revealed when impaired, leading in most cases to spatial disorientation, which is still today a major factor in accident occurrence. Self-motion perception is known to be mainly based on visuo-vestibular integration and can be modulated by the physical properties of the environment with which humans interact. For instance, several studies have shown that the respective weights of visual and vestibular information depend on their reliability. More recently, it has been suggested that the internal state of an operator can also modulate multisensory integration. Interestingly, systems automation can interfere with this internal state through the loss of the intentional nature of movements (i.e., loss of agency) and the modulation of associated predictive mechanisms. In this context, one of the new challenges is to better understand the relationship between automation and self-motion perception. The present review explains how linking the concepts of agency and self-motion is a first approach to addressing this issue.
Affiliation(s)
- Anne-Laure Rineau
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France.
- Bruno Berberian
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France.
38
Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417 PMCID: PMC9745880 DOI: 10.1098/rstb.2021.0450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 08/30/2022] [Indexed: 12/15/2022] Open
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
39
Idris A, Christensen BA, Walker EM, Maier JX. Multisensory integration of orally-sourced gustatory and olfactory inputs to the posterior piriform cortex in awake rats. J Physiol 2023; 601:151-169. [PMID: 36385245 PMCID: PMC9869978 DOI: 10.1113/jp283873] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 11/09/2022] [Indexed: 11/18/2022] Open
Abstract
Flavour refers to the sensory experience of food, which is a combination of sensory inputs sourced from multiple modalities during consumption, including taste and odour. Previous work has demonstrated that orally-sourced taste and odour cues interact to determine perceptual judgements of flavour stimuli, although the underlying cellular- and circuit-level neural mechanisms remain unknown. We recently identified a region of the piriform olfactory cortex in rats that responds to both taste and odour stimuli. Here, we investigated how converging taste and odour inputs to this area interact to affect single neuron responsiveness and ensemble coding of flavour identity. To accomplish this, we recorded spiking activity from ensembles of single neurons in the posterior piriform cortex (pPC) in awake, tasting rats while delivering taste solutions, odour solutions and taste + odour mixtures directly into the oral cavity. Our results show that taste and odour inputs evoke highly selective, temporally-overlapping responses in multisensory pPC neurons. Comparing responses to mixtures and their unisensory components revealed that taste and odour inputs interact in a non-linear manner to produce unique response patterns. Taste input enhances trial-by-trial decoding of odour identity from small ensembles of simultaneously recorded neurons. Together, these results demonstrate that taste and odour inputs to pPC interact in complex, non-linear ways to form amodal flavour representations that enhance identity coding. KEY POINTS: Experience of food involves taste and smell, although how information from these different senses is combined by the brain to create our sense of flavour remains unknown. We recorded from small groups of neurons in the olfactory cortex of awake rats while they consumed taste solutions, odour solutions and taste + odour mixtures. Taste and smell solutions evoke highly selective responses.
When presented in a mixture, taste and smell inputs interacted to alter responses, resulting in activation of unique sets of neurons that could not be predicted by the component responses. Synergistic interactions increase discriminability of odour representations. The olfactory cortex uses taste and smell to create new information representing multisensory flavour identity.
Affiliation(s)
- Ammar Idris
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Brooke A. Christensen
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Ellen M. Walker
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Joost X. Maier
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
40
Rineau AL, Berberian B, Sarrazin JC, Bringoux L. Active self-motion control and the role of agency under ambiguity. Front Psychol 2023; 14:1148793. [PMID: 37151332 PMCID: PMC10158821 DOI: 10.3389/fpsyg.2023.1148793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 03/31/2023] [Indexed: 05/09/2023] Open
Abstract
Purpose Self-motion perception is a key factor in daily behaviours such as driving a car or piloting an aircraft. It is mainly based on visuo-vestibular integration, whose weighting mechanisms are modulated by the reliability properties of sensory inputs. Recently, it has been shown that the internal state of the operator can also modulate multisensory integration and may sharpen the representation of relevant inputs. In line with the concept of agency, it thus appears relevant to evaluate the impact of being in control of our own action on self-motion perception. Methodology Here, we tested two conditions of motion control (active/manual trigger versus passive/observer condition), asking participants to discriminate between two consecutive longitudinal movements by identifying the larger displacement (displacement of higher intensity). We also tested motion discrimination under two levels of ambiguity by applying acceleration ratios that differed from our two "standard" displacements (i.e., 3 s; 0.012 m.s-2 and 0.030 m.s-2). Results We found an effect of control condition, but not of the level of ambiguity, on the way participants perceived the standard displacement, i.e., perceptual bias (Point of Subjective Equality; PSE). Also, we found a significant interaction effect between the active condition and the level of ambiguity on the ability to discriminate between displacements, i.e., sensitivity (Just Noticeable Difference; JND). Originality Being in control of our own motion through a manual intentional trigger of self-displacement maintains overall motion sensitivity when ambiguity increases.
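The PSE and JND reported in this abstract are standard summary statistics of a psychometric fit. As a minimal illustrative sketch (not the authors' analysis code; the cumulative-Gaussian model and all parameter values here are assumptions), the two quantities can be read off a fitted curve like this:

```python
import math

def psychometric(x, pse, sigma):
    """Cumulative-Gaussian psychometric function: probability of judging
    a displacement x as larger than the standard displacement."""
    return 0.5 * (1.0 + math.erf((x - pse) / (sigma * math.sqrt(2.0))))

def jnd(sigma):
    """JND taken as half the 25%-75% spread of the fitted curve."""
    return 0.6745 * sigma  # 0.6745 = z-score of the 75th percentile

# Hypothetical fitted parameters for one observer (accelerations in m.s-2)
pse, sigma = 0.021, 0.008
print(psychometric(pse, pse, sigma))  # 0.5 by definition: the PSE is the 50% point
```

A flatter curve (larger sigma) yields a larger JND, i.e. poorer discrimination, which is how a sensitivity change under ambiguity would show up in such a fit.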
Affiliation(s)
- Anne-Laure Rineau
- ONERA, Information Processing and Systems Department (DTIS), Salon-de-Provence, France
- *Correspondence: Anne-Laure Rineau,
- Bruno Berberian
- ONERA, Information Processing and Systems Department (DTIS), Salon-de-Provence, France
41
De Corte BJ, Akdoğan B, Balsam PD. Temporal scaling and computing time in neural circuits: Should we stop watching the clock and look for its gears? Front Behav Neurosci 2022; 16:1022713. [PMID: 36570701 PMCID: PMC9773401 DOI: 10.3389/fnbeh.2022.1022713] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Accepted: 10/31/2022] [Indexed: 12/13/2022] Open
Abstract
Timing underlies a variety of functions, from walking to perceiving causality. Neural timing models typically fall into one of two categories-"ramping" and "population-clock" theories. According to ramping models, individual neurons track time by gradually increasing or decreasing their activity as an event approaches. To time different intervals, ramping neurons adjust their slopes, ramping steeply for short intervals and vice versa. In contrast, according to "population-clock" models, multiple neurons track time as a group, and each neuron can fire nonlinearly. As each neuron changes its rate at each point in time, a distinct pattern of activity emerges across the population. To time different intervals, the brain learns the population patterns that coincide with key events. Both model categories have empirical support. However, they often differ in plausibility when applied to certain behavioral effects. Specifically, behavioral data indicate that the timing system has a rich computational capacity, allowing observers to spontaneously compute novel intervals from previously learned ones. In population-clock theories, population patterns map to time arbitrarily, making it difficult to explain how different patterns can be computationally combined. Ramping models are viewed as more plausible, assuming upstream circuits can set the slope of ramping neurons according to a given computation. Critically, recent studies suggest that neurons with nonlinear firing profiles often scale to time different intervals-compressing for shorter intervals and stretching for longer ones. This "temporal scaling" effect has led to a hybrid-theory where, like a population-clock model, population patterns encode time, yet like a ramping neuron adjusting its slope, the speed of each neuron's firing adapts to different intervals. 
Here, we argue that these "relative" population-clock models are as computationally plausible as ramping theories, viewing population-speed and ramp-slope adjustments as equivalent. Therefore, we view identifying these "speed-control" circuits as a key direction for evaluating how the timing system performs computations. Furthermore, temporal scaling highlights that a key distinction between different neural models is whether they propose an absolute or relative time-representation. However, we note that several behavioral studies suggest the brain processes both scales, cautioning against a dichotomy.
Affiliation(s)
- Benjamin J. De Corte
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Başak Akdoğan
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Peter D. Balsam
- Department of Psychology, Columbia University, New York, NY, United States
- Division of Developmental Neuroscience, New York State Psychiatric Institute, New York, NY, United States
- Department of Neuroscience and Behavior, Barnard College, New York, NY, United States
42
Hsiao A, Lee-Miller T, Block HJ. Conscious awareness of a visuo-proprioceptive mismatch: Effect on cross-sensory recalibration. Front Neurosci 2022; 16:958513. [PMID: 36117619 PMCID: PMC9470947 DOI: 10.3389/fnins.2022.958513] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/12/2022] [Indexed: 11/29/2022] Open
Abstract
The brain estimates hand position using vision and position sense (proprioception). The relationship between visual and proprioceptive estimates is somewhat flexible: visual information about the index finger can be spatially displaced from proprioceptive information, resulting in cross-sensory recalibration of the visual and proprioceptive unimodal position estimates. According to the causal inference framework, recalibration occurs when the unimodal estimates are attributed to a common cause and integrated. If separate causes are perceived, then recalibration should be reduced. Here we assessed visuo-proprioceptive recalibration in response to a gradual visuo-proprioceptive mismatch at the left index fingertip. Experiment 1 asked how frequently a 70 mm mismatch is consciously perceived compared to when no mismatch is present, and whether awareness is linked to reduced visuo-proprioceptive recalibration, consistent with causal inference predictions. However, conscious offset awareness occurred rarely. Experiment 2 tested a larger displacement, 140 mm, and asked participants about their perception more frequently, including at 70 mm. Experiment 3 confirmed that participants were unbiased at estimating distances in the 2D virtual reality display. Results suggest that conscious awareness of the mismatch was indeed linked to reduced cross-sensory recalibration as predicted by the causal inference framework, but this was clear only at higher mismatch magnitudes (70–140 mm). At smaller offsets (up to 70 mm), conscious perception of an offset may not override unconscious belief in a common cause, perhaps because the perceived offset magnitude is in range of participants’ natural sensory biases. These findings highlight the interaction of conscious awareness with multisensory processes in hand perception.
43
Jiang Y, Chen Y, Chi X. A theoretical model and empirical analysis of university library readers' spatial cognition. LIBRARY HI TECH 2022. [DOI: 10.1108/lht-05-2022-0242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Purpose The practice of renovation and construction of university libraries is flourishing, but how to attract readers to use the library is an issue that urgently needs to be explored. Spatial cognition is a subjective judgment of a person's tendency to take action in the future and implies behavioral intention. Based on the sensory–image–cognition relationship, a theoretical model of university library readers' spatial cognition is constructed, and the influencing factors and mechanisms of spatial cognition are explored based on empirical data to provide theoretical references for spatial practices in university libraries. Design/methodology/approach A visual and art-based mental map approach is introduced based on a questionnaire survey. The questionnaire is mainly used for the specific evaluation of spatial use and the breakdown of the detailed elements, while the mental map method is mainly used for the evaluation of readers' spatial cognition. Relevant empirical data are collected from the library of the Zhejiang University of Technology. Findings The results indicate that readers' spatial sensory experience and mental imagery have positive effects on readers' behavior via the mediator of spatial cognition; that readers' spatial sensory experience and mental imagery have a positive effect on readers' spatial cognition; and that spatial cognition has a significant effect on readers' behavior. Originality/value The main contribution of this study is to construct a theoretical model of readers' spatial cognition and to explore the factors that have an impact on spatial cognition and the influence of cognition on behavior. This provides a more rational and in-depth thinking paradigm for the study of university library space and provides theoretical references for library practice.
44
Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. [PMID: 35821337 PMCID: PMC9849545 DOI: 10.1007/s12264-022-00916-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/29/2022] [Indexed: 01/22/2023] Open
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with a high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of new data, conventional thoughts about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
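The "statistically Bayesian-optimal" integration mentioned above has a compact closed form for two Gaussian cues: each cue is weighted by its reliability (inverse variance), and the fused estimate has lower variance than either cue alone. A minimal sketch with made-up heading values (illustrative only, not data from the studies reviewed):

```python
def fuse(mu_vis, var_vis, mu_vest, var_vest):
    """Maximum-likelihood fusion of two Gaussian cues: reliability-weighted
    mean, with a combined variance below either unimodal variance."""
    r_vis, r_vest = 1.0 / var_vis, 1.0 / var_vest   # reliabilities
    w_vis = r_vis / (r_vis + r_vest)                # visual weight
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (r_vis + r_vest)
    return mu, var

# Vision reports 10 deg (variance 2), vestibular reports 4 deg (variance 4):
mu, var = fuse(10.0, 2.0, 4.0, 4.0)
print(mu, var)  # estimate is pulled toward the more reliable (visual) cue
```

This weighting is what predicts, for example, increased reliance on vestibular input when optic flow is degraded.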
45
Pennartz CMA. What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness. Behav Brain Res 2022; 432:113969. [PMID: 35718232 DOI: 10.1016/j.bbr.2022.113969] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 05/18/2022] [Accepted: 05/20/2022] [Indexed: 11/02/2022]
Abstract
This review provides an update on Neurorepresentationalism, a theoretical framework that defines conscious experience as multimodal, situational survey and explains its neural basis from brain systems constructing best-guess representations of sensations originating in our environment and body [1]. It posits that conscious experience is characterized by five essential hallmarks: (i) multimodal richness, (ii) situatedness and immersion, (iii) unity and integration, (iv) dynamics and stability, and (v) intentionality. Consciousness is furthermore proposed to have a biological function, framed by the contrast between reflexes and habits (not requiring consciousness) versus goal-directed, planned behavior (requiring multimodal, situational survey). Conscious experience is therefore understood as a sensorily rich, spatially encompassing representation of body and environment, while we nevertheless have the impression of experiencing external reality directly. Contributions to understanding neural mechanisms underlying consciousness are derived from models for predictive processing, which are trained in an unsupervised manner, do not necessarily require overt action, and have been extended to deep neural networks. Even with predictive processing in place, however, the question remains why this type of neural network activity would give rise to phenomenal experience. Here, I propose to tackle the Hard Problem with the concept of multi-level representations which emergently give rise to multimodal, spatially wide superinferences corresponding to phenomenal experiences. Finally, Neurorepresentationalism is compared to other neural theories of consciousness, and its implications for defining indicators of consciousness in animals, artificial intelligence devices and immobile or unresponsive patients with disorders of consciousness are discussed.
Affiliation(s)
- Cyriel M A Pennartz
- Swammerdam Institute for Life Sciences, Center for Neuroscience, Faculty of Science, University of Amsterdam, the Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, the Netherlands.
46
Sun X, Chen PH, Rau PLP. Do Congruent Auditory Stimuli Facilitate Visual Search in Dynamic Environments? An Experimental Study Based on Multisensory Interaction. Multisens Res 2022; 35:1-15. [PMID: 35523736 DOI: 10.1163/22134808-bja10075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 04/02/2022] [Indexed: 11/19/2022]
Abstract
The purpose of this study was to investigate the cue congruency effect of auditory stimuli during visual search in dynamic environments. Twenty-eight participants were recruited to conduct a visual search experiment. The experiment applied auditory stimuli to understand whether they could facilitate visual search in different types of background. Additionally, target location and target orientation were manipulated to clarify their influences on visual search. Target location was related to horizontal visual search and target orientation was associated with visual search for an inverted target. The results regarding dynamic backgrounds reported that target-congruent auditory stimuli could speed up the visual search time. In addition, the cue congruency effect of auditory stimuli was critical for the center of the visual display but declined for the edge, indicating the inhibition of horizontal visual search behavior. Moreover, few improvements accompanying auditory stimuli were provided for the visual detection of non-inverted and inverted targets. The findings of this study suggested developing multisensory interaction with head-mounted displays, such as augmented reality glasses, in real life.
Affiliation(s)
- Xiaofang Sun
- Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
- Pin-Hsuan Chen
- Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
- Pei-Luen Patrick Rau
- Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
47
French F. Expanding Aesthetics. Front Vet Sci 2022; 9:855087. [PMID: 35601399 PMCID: PMC9114928 DOI: 10.3389/fvets.2022.855087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 03/23/2022] [Indexed: 11/13/2022] Open
Abstract
This paper seeks to expand traditional aesthetic dimensions of design beyond the limits of human capability in order to encompass other species' sensory modalities. To accomplish this, the idea of inclusivity is extended beyond human cultural and personal identities and needs, to embrace multi-species experiences of places, events and interactions in the world. This involves drawing together academic perspectives from ecology, neuroscience, anthropology, philosophy and interaction design, as well as exploring artistic perspectives and demonstrating how these different frames of reference can inspire and complement each other. This begins with a rationale for the existence of non-human aesthetics, followed by an overview of existing research into non-human aesthetic dimensions. Novel aesthetic categories are proposed and the challenge of how to include non-human aesthetic sensibility in design is discussed.
Affiliation(s)
- Fiona French
- School of Computing and Digital Media, London Metropolitan University, London, United Kingdom
48
Chung W, Barnett-Cowan M. Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisens Res 2022; 35:1-23. [PMID: 35477696 DOI: 10.1163/22134808-bja10074] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Accepted: 03/17/2022] [Indexed: 02/21/2024]
Abstract
Integration of incoming sensory signals from multiple modalities is central in the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between the vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform accompanied by visual feedback using a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction by which the movement of the visual scene updated relative to the observer's physical rotation. There were no differences in perceived timing of the rotation without vision, with congruent visual feedback and when the speed of the updating of the visual motion was slower. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate the potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research regarding cybersickness and the sensory conflict theory, where a mismatch between the visual and vestibular signals may lead to a greater likelihood for the occurrence of sickness symptoms.
Affiliation(s)
- William Chung
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
49
Direct eye gaze enhances the ventriloquism effect. Atten Percept Psychophys 2022; 84:2293-2302. [PMID: 35359228 PMCID: PMC9481494 DOI: 10.3758/s13414-022-02468-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/23/2022] [Indexed: 11/08/2022]
Abstract
The “ventriloquism effect” describes an illusory phenomenon where the perceived location of an auditory stimulus is pulled toward the location of a visual stimulus. Ventriloquists use this phenomenon to create an illusion where an inanimate puppet is perceived to speak. Ventriloquists use the expression and suppression of their own and the puppet’s mouth movements as well as the direction of their respective eye gaze to maximize the illusion. While the puppet’s often exaggerated mouth movements have been demonstrated to enhance the ventriloquism effect, the contribution of direct eye gaze remains unknown. In Experiment 1, participants viewed an image of a person’s face while hearing a temporally synchronous recording of a voice originating from different locations on the azimuthal plane. The eyes of the facial stimuli were either looking directly at participants or were closed. Participants were more likely to misperceive the location of a range of voice locations as coming from a central position when the eye gaze of the facial stimuli was directed toward them. Thus, direct gaze enhances the ventriloquist effect by attracting participants’ perception of the voice locations toward the location of the face. In an exploratory analysis, we furthermore found no evidence for an other-race effect between White and Asian listeners. In Experiment 2, we replicated the effect of direct eye gaze on the ventriloquism effect, also showing that faces per se attract perceived sound locations compared with audio-only sound localization. Showing a modulation of the ventriloquism effect by socially-salient eye gaze information thus adds to previous findings reporting top-down influences on this effect.
50
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated-a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
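The causal-inference step described in this abstract (integrate cues when they are attributed to one cause, segregate them otherwise) is commonly formalized with Gaussian likelihoods, as in the standard causal-inference model of Körding et al. (2007). A minimal sketch of the common-cause posterior; the variances, prior probability, and example values below are hypothetical:

```python
import math

def gauss(x, mu, var):
    """Gaussian probability density."""
    return math.exp(-(x - mu) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def p_common(x_vis, x_vest, var_vis, var_vest, var_prior, p_prior=0.5):
    """Posterior probability that visual and vestibular measurements share a
    single cause, under a Gaussian model with a zero-mean prior over sources."""
    # C = 1: both measurements arise from one source s ~ N(0, var_prior)
    var_sum = var_vis * var_vest + var_vis * var_prior + var_vest * var_prior
    like_c1 = math.exp(-((x_vis - x_vest) ** 2 * var_prior
                         + x_vis ** 2 * var_vest
                         + x_vest ** 2 * var_vis) / (2.0 * var_sum)) \
              / (2.0 * math.pi * math.sqrt(var_sum))
    # C = 2: two independent sources, each drawn from the prior
    like_c2 = gauss(x_vis, 0.0, var_vis + var_prior) * gauss(x_vest, 0.0, var_vest + var_prior)
    return like_c1 * p_prior / (like_c1 * p_prior + like_c2 * (1.0 - p_prior))

# Agreeing cues favor a common cause; widely discrepant cues favor segregation
print(p_common(0.0, 0.0, 1.0, 1.0, 10.0))   # well above 0.5
print(p_common(5.0, -5.0, 1.0, 1.0, 10.0))  # near zero
```

Weighting the integrated and segregated estimates by this posterior yields the graded integrate-versus-segregate behavior the review describes.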
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY 10003, USA;
- Tandon School of Engineering, New York University, New York, NY 11201, USA