1. Wu HP, Nakul E, Betka S, Lance F, Herbelin B, Blanke O. Out-of-body illusion induced by visual-vestibular stimulation. iScience 2024; 27:108547. PMID: 38161418; PMCID: PMC10755362; DOI: 10.1016/j.isci.2023.108547.
Abstract
Out-of-body experiences (OBEs) are characterized by the subjective feeling of being located outside one's physical body and perceiving one's own body from an elevated perspective looking downwards. OBEs have been correlated with abnormal integration of bodily signals, including visual and vestibular information. In two studies, we used mixed reality combined with a motion platform to manipulate visual and vestibular integration in healthy participants. Behavioral data and questionnaires show that congruent visual-vestibular stimulation in a self-centered reference frame induced an OBE-like illusion characterized by elevated self-location and feelings of disembodiment and lightness. The OBE-like illusion was also modulated by individuals' visual field dependency assessed by the Rod and Frame Test. These results show that the manipulation of visual-vestibular stimulation in the present study induces various aspects of OBEs and further link OBE to congruency mechanisms between visual and vestibular gravitational and self-motion cues.
Affiliation(s)
- Hsin-Ping Wu: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Estelle Nakul: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Sophie Betka: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Florian Lance: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Bruno Herbelin: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Neuro-X Institute & Brain Mind Institute, Faculty of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland; Department of Clinical Neurosciences, University Hospital Geneva, Geneva, Switzerland
2. Keshavarzi S, Velez-Fort M, Margrie TW. Cortical Integration of Vestibular and Visual Cues for Navigation, Visual Processing, and Perception. Annu Rev Neurosci 2023; 46:301-320. PMID: 37428601; DOI: 10.1146/annurev-neuro-120722-100503.
Abstract
Despite increasing evidence of its involvement in several key functions of the cerebral cortex, the vestibular sense rarely enters our consciousness. Indeed, the extent to which these internal signals are incorporated within cortical sensory representation and how they might be relied upon for sensory-driven decision-making, during, for example, spatial navigation, is yet to be understood. Recent novel experimental approaches in rodents have probed both the physiological and behavioral significance of vestibular signals and indicate that their widespread integration with vision improves both the cortical representation and perceptual accuracy of self-motion and orientation. Here, we summarize these recent findings with a focus on cortical circuits involved in visual perception and spatial navigation and highlight the major remaining knowledge gaps. We suggest that vestibulo-visual integration reflects a process of constant updating regarding the status of self-motion, and access to such information by the cortex is used for sensory perception and predictions that may be implemented for rapid, navigation-related decision-making.
Affiliation(s)
- Sepiedeh Keshavarzi: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Mateo Velez-Fort: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
- Troy W Margrie: The Sainsbury Wellcome Centre for Neural Circuits and Behavior, University College London, London, United Kingdom
3. Gao W, Lin Y, Shen J, Han J, Song X, Lu Y, Zhan H, Li Q, Ge H, Lin Z, Shi W, Drugowitsch J, Tang H, Chen X. Diverse effects of gaze direction on heading perception in humans. Cereb Cortex 2023. PMID: 36734278; DOI: 10.1093/cercor/bhac541.
Abstract
Gaze change can misalign spatial reference frames encoding visual and vestibular signals in cortex, which may affect the heading discrimination. Here, by systematically manipulating the eye-in-head and head-on-body positions to change the gaze direction of subjects, the performance of heading discrimination was tested with visual, vestibular, and combined stimuli in a reaction-time task in which the reaction time is under the control of subjects. We found the gaze change induced substantial biases in perceived heading, increased the threshold of discrimination and reaction time of subjects in all stimulus conditions. For the visual stimulus, the gaze effects were induced by changing the eye-in-world position, and the perceived heading was biased in the opposite direction of gaze. In contrast, the vestibular gaze effects were induced by changing the eye-in-head position, and the perceived heading was biased in the same direction of gaze. Although the bias was reduced when the visual and vestibular stimuli were combined, integration of the 2 signals substantially deviated from predictions of an extended diffusion model that accumulates evidence optimally over time and across sensory modalities. These findings reveal diverse gaze effects on the heading discrimination and emphasize that the transformation of spatial reference frames may underlie the effects.
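The "extended diffusion model" mentioned in this abstract accumulates momentary evidence over time and across sensory modalities until a decision bound is reached. The sketch below is a generic toy version of that idea; the drift, bound, and noise values are invented for illustration and are not the authors' fitted parameters.

```python
import random

def combined_heading_decision(drift_ves, drift_vis, bound=10.0, noise=1.0,
                              dt=0.005, seed=0, max_steps=100000):
    """Toy evidence-accumulation (diffusion) model of a heading decision.

    Momentary evidence from the vestibular and visual streams is summed
    into a single decision variable that drifts toward a bound; the first
    bound crossing gives the choice (+1 rightward, -1 leftward) and the
    decision time. All parameter values are illustrative.
    """
    rng = random.Random(seed)
    dv = 0.0
    drift = drift_ves + drift_vis  # optimal combination sums the drifts
    for step in range(1, max_steps + 1):
        dv += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        if abs(dv) >= bound:
            return (1 if dv > 0 else -1), step * dt
    return 0, max_steps * dt  # no decision within the time limit
```

With these toy settings, summing the two drifts reaches the bound sooner than either modality's drift alone, mirroring the faster, more sensitive decisions expected from optimal integration, which is the benchmark the study reports human behavior deviating from.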
Affiliation(s)
- Wei Gao: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Yipeng Lin: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Jiangrong Shen: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Jianing Han: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaoxiao Song: Department of Liberal Arts, School of Art Administration and Education, China Academy of Art, 218 Nanshan Road, Shangcheng District, Hangzhou 310002, China
- Yukun Lu: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Huijia Zhan: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Qianbing Li: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Haoting Ge: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
- Zheng Lin: Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, 88 Jiefang Road, Shangcheng District, Hangzhou 310009, China
- Wenlei Shi: Center for the Study of the History of Chinese Language and Center for the Study of Language and Cognition, Zhejiang University, 866 Yuhangtang Road, Xihu District, Hangzhou 310058, China
- Jan Drugowitsch: Department of Neurobiology, Harvard Medical School, Longwood Avenue 220, Boston, MA 02116, United States
- Huajin Tang: College of Computer Science and Technology, Zhejiang University, 38 Zheda Road, Xihu District, Hangzhou 310027, China
- Xiaodong Chen: Department of Neurology and Psychiatry of the Second Affiliated Hospital, College of Biomedical Engineering and Instrument Science, Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, 268 Kaixuan Road, Jianggan District, Hangzhou 310029, China
4. Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. PMID: 36511417; PMCID: PMC9745880; DOI: 10.1098/rstb.2021.0450.
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal: School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem: Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
5. Aston S, Pattie C, Graham R, Slater H, Beierholm U, Nardini M. Newly learned shape-color associations show signatures of reliability-weighted averaging without forced fusion or a memory color effect. J Vis 2022; 22:8. PMID: 36580296; PMCID: PMC9804025; DOI: 10.1167/jov.22.13.8.
Abstract
Reliability-weighted averaging of multiple perceptual estimates (or cues) can improve precision. Research suggests that newly learned statistical associations can be rapidly integrated in this way for efficient decision-making. Yet, it remains unclear if the integration of newly learned statistics into decision-making can directly influence perception, rather than taking place only at the decision stage. In two experiments, we implicitly taught observers novel associations between shape and color. Observers made color matches by adjusting the color of an oval to match a simultaneously presented reference. As the color of the oval changed across trials, so did its shape according to a novel mapping of axis ratio to color. Observers showed signatures of reliability-weighted averaging-a precision improvement in both experiments and reweighting of the newly learned shape cue with changes in uncertainty in Experiment 2. To ask whether this was accompanied by perceptual effects, Experiment 1 tested for forced fusion by measuring color discrimination thresholds with and without incongruent novel cues. Experiment 2 tested for a memory color effect, observers adjusting the color of ovals with different axis ratios until they appeared gray. There was no evidence for forced fusion and the opposite of a memory color effect. Overall, our results suggest that the ability to quickly learn novel cues and integrate them with familiar cues is not immediately (within the short duration of our experiments and in the domain of color and shape) accompanied by common perceptual effects.
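The reliability-weighted averaging this abstract refers to is the standard inverse-variance combination rule: each cue is weighted by its precision, and the fused estimate is more precise than either cue alone. A small sketch with hypothetical cue values (not data from the study):

```python
import math

def combine_cues(estimates, sigmas):
    """Reliability-weighted (inverse-variance) average of cue estimates.

    Each cue i gets weight w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2);
    the fused estimate has standard deviation sqrt(1 / sum_j (1/sigma_j^2)),
    which is never larger than the best single cue's.
    """
    precisions = [1.0 / s ** 2 for s in sigmas]
    total = sum(precisions)
    weights = [p / total for p in precisions]
    fused = sum(w * x for w, x in zip(weights, estimates))
    fused_sigma = math.sqrt(1.0 / total)
    return fused, fused_sigma

# Hypothetical numbers: a familiar color cue (sigma = 2.0) and a newly
# learned shape cue (sigma = 4.0) estimating the same quantity.
fused, fused_sigma = combine_cues([10.0, 14.0], [2.0, 4.0])
```

The precision improvement reported in both experiments is exactly this `fused_sigma` falling below the more reliable cue's sigma, and the reweighting in Experiment 2 corresponds to the weights shifting as the sigmas change.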
Affiliation(s)
- Stacey Aston: Department of Psychology, Durham University, Durham, UK
- Cat Pattie: Biosciences Institute, Newcastle University, Newcastle, UK
- Rachael Graham: Department of Psychology, Durham University, Durham, UK
- Heather Slater: Department of Psychology, Durham University, Durham, UK
- Marko Nardini: Department of Psychology, Durham University, Durham, UK
6. Chen Z, Yu R, Yu X, Li E, Wang C, Liu Y, Guo T, Chen H. Bioinspired Artificial Motion Sensory System for Rotation Recognition and Rapid Self-Protection. ACS Nano 2022; 16:19155-19164. PMID: 36269153; DOI: 10.1021/acsnano.2c08328.
Abstract
As one of the most common synergies between the exteroceptors and proprioceptors, the synergy between visual and vestibule enables the human brain to judge the state of human motion, which is essential for motion recognition and human self-protection. Hence, in this work, an artificial motion sensory system (AMSS) based on artificial vestibule and visual is developed, which consists of a tribo-nanogenerator (TENG) as a vestibule that can sense rotation and synaptic transistor array as retina. The principle of temporal congruency has been successfully realized by multisensory input. In addition, pattern recognition results show that the accuracy of multisensory integration is more than 15% higher than that of single sensory. Moreover, due to the rotation recognition and visual recognition functions of AMSS, we realized multimodal information recognition including angles and numbers in the spiking correlated neural network (SCNN), and the accuracy rate reached 89.82%. Besides, the rapid self-protection of a human was successfully realized by AMSS in the case of simulated amusement rides, and the reaction time of multiple motion sensory integration is only one-third of that of a single vestibule. The development of AMSS based on the synergy of simulated vision and vestibule will show great potential in neural robot, artificial limbs, and soft electronics.
Affiliation(s)
- Zhenjia Chen: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Rengjian Yu: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Xipeng Yu: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Enlong Li: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Congyong Wang: Joint School of National University of Singapore and Tianjin University, International Campus of Tianjin University, Binhai New City, Fuzhou 350207, China; Department of Chemistry, National University of Singapore, 3 Science Drive 3, Singapore 117543, Singapore
- Yaqian Liu: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China
- Tailiang Guo: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China; Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou 350100, China
- Huipeng Chen: Institute of Optoelectronic Display, National & Local United Engineering Lab of Flat Panel Display Technology, Fuzhou University, Fuzhou 350002, China; Fujian Science & Technology Innovation Laboratory for Optoelectronic Information of China, Fuzhou 350100, China
7. Preuss Mattsson N, Coppi S, Chancel M, Ehrsson HH. Combination of visuo-tactile and visuo-vestibular correlations in illusory body ownership and self-motion sensations. PLoS One 2022; 17:e0277080. PMID: 36378668; PMCID: PMC9665377; DOI: 10.1371/journal.pone.0277080.
Abstract
Previous studies have shown that illusory ownership over a mannequin's body can be induced through synchronous visuo-tactile stimulation as well as through synchronous visuo-vestibular stimulation. The current study aimed to elucidate how three-way combinations of correlated visual, tactile and vestibular signals contribute to the senses of body ownership and self-motion. Visuo-tactile temporal congruence was manipulated by touching the mannequin's body and the participant's unseen real body on the trunk with a small object either synchronously or asynchronously. Visuo-vestibular temporal congruence was manipulated by synchronous or asynchronous presentation of a visual motion cue (the background rotating around the mannequin in one direction) and galvanic stimulation of the vestibular nerve generating a rotation sensation (in the same direction). The illusory experiences were quantified using a questionnaire; threat-evoked skin-conductance responses (SCRs) provided complementary indirect physiological evidence for the illusion. Ratings on the illusion questionnaire statement showed significant main effects of synchronous visuo-vestibular and synchronous visuo-tactile stimulations, suggesting that both of these pairs of bimodal correlations contribute to the ownership illusion. Interestingly, visuo-tactile synchrony dominated because synchronous visuo-tactile stimulation combined with asynchronous visuo-vestibular stimulation elicited a body ownership illusion of similar strength as when both bimodal combinations were synchronous. Moreover, both visuo-tactile and visuo-vestibular synchrony were associated with enhanced self-motion perception; self-motion sensations were even triggered when visuo-tactile synchrony was combined with visuo-vestibular asynchrony, suggesting that ownership enhanced the relevance of visual information as a self-motion cue. 
Finally, the SCR results suggest that synchronous stimulation of either modality pair led to a stronger illusion compared to the asynchronous conditions. Collectively, the results suggest that visuo-tactile temporal correlations have a stronger influence on body ownership than visuo-vestibular correlations and that ownership boosts self-motion perception. We present a Bayesian causal inference model that can explain how visuo-vestibular and visuo-tactile information are combined in multisensory own-body perception.
Affiliation(s)
- Sara Coppi: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Marie Chancel: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; University Grenoble Alpes, CNRS, LPNC, Grenoble, France
- H. Henrik Ehrsson: Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
8. Chung W, Barnett-Cowan M. Influence of Sensory Conflict on Perceived Timing of Passive Rotation in Virtual Reality. Multisens Res 2022; 35:1-23. PMID: 35477696; DOI: 10.1163/22134808-bja10074.
Abstract
Integration of incoming sensory signals from multiple modalities is central in the determination of self-motion perception. With the emergence of consumer virtual reality (VR), it is becoming increasingly common to experience a mismatch in sensory feedback regarding motion when using immersive displays. In this study, we explored whether introducing various discrepancies between the vestibular and visual motion would influence the perceived timing of self-motion. Participants performed a series of temporal-order judgements between an auditory tone and a passive whole-body rotation on a motion platform accompanied by visual feedback using a virtual environment generated through a head-mounted display. Sensory conflict was induced by altering the speed and direction by which the movement of the visual scene updated relative to the observer's physical rotation. There were no differences in perceived timing of the rotation without vision, with congruent visual feedback and when the speed of the updating of the visual motion was slower. However, the perceived timing was significantly further from zero when the direction of the visual motion was incongruent with the rotation. These findings demonstrate the potential interaction between visual and vestibular signals in the temporal perception of self-motion. Additionally, we recorded cybersickness ratings and found that sickness severity was significantly greater when visual motion was present and incongruent with the physical motion. This supports previous research regarding cybersickness and the sensory conflict theory, where a mismatch between the visual and vestibular signals may lead to a greater likelihood for the occurrence of sickness symptoms.
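Temporal-order judgements like those in this study are conventionally summarized by a psychometric function whose 50% point is the point of subjective simultaneity (PSS). The sketch below uses a cumulative Gaussian and a crude linear-interpolation estimate of the PSS (a stand-in for the maximum-likelihood fits normally used); all numbers are illustrative, not the study's data.

```python
import math

def cumulative_gaussian(soa, pss, jnd):
    """P(report 'rotation first') at a stimulus-onset asynchrony (SOA, ms).

    pss is the 50% point (point of subjective simultaneity); jnd sets
    the slope of the psychometric function.
    """
    return 0.5 * (1.0 + math.erf((soa - pss) / (jnd * math.sqrt(2.0))))

def estimate_pss(soas, p_rotation_first):
    """Estimate the PSS by linearly interpolating the 50% crossing of the
    observed response proportions (soas must be in ascending order)."""
    points = list(zip(soas, p_rotation_first))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= 0.5 <= p1:
            return s0 + (0.5 - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("0.5 is not bracketed by the data")
```

A shift of the estimated PSS away from zero under incongruent visual motion is the kind of effect the study reports for perceived timing.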
Affiliation(s)
- William Chung: Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
9. Hong F, Badde S, Landy MS. Causal inference regulates audiovisual spatial recalibration via its influence on audiovisual perception. PLoS Comput Biol 2021; 17:e1008877. PMID: 34780469; PMCID: PMC8629398; DOI: 10.1371/journal.pcbi.1008877.
Abstract
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
Author summary: Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to which extent both modalities should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this aim, we conducted a classical audiovisual recalibration experiment in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants' unimodal localization responses before and after the audiovisual recalibration. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study; this model is also able to replicate contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a sensory measurement and the perceptual estimate for the same sensory modality. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.
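The causal-inference model class this abstract invokes computes the posterior probability that two cues share a common source from Gaussian likelihoods. The sketch below implements the standard closed forms for that posterior (textbook Gaussian causal inference with a zero-centered spatial prior); the parameter values are illustrative and it is not the authors' fitted model.

```python
import math

def posterior_common(x_a, x_v, sig_a, sig_v, sig_p, p_common=0.5):
    """P(common source | auditory measurement x_a, visual measurement x_v).

    Standard Gaussian causal-inference posterior: sig_a and sig_v are the
    sensory noise SDs, sig_p the SD of a spatial prior centered on 0, and
    p_common the prior probability of a common source.
    """
    va, vv, vp = sig_a ** 2, sig_v ** 2, sig_p ** 2
    # Likelihood of the two measurements if they share one source (C = 1)
    denom1 = va * vv + va * vp + vv * vp
    l1 = math.exp(-0.5 * ((x_a - x_v) ** 2 * vp
                          + x_a ** 2 * vv + x_v ** 2 * va) / denom1) \
        / (2.0 * math.pi * math.sqrt(denom1))
    # Likelihood if the measurements come from independent sources (C = 2)
    l2 = math.exp(-0.5 * (x_a ** 2 / (va + vp) + x_v ** 2 / (vv + vp))) \
        / (2.0 * math.pi * math.sqrt((va + vp) * (vv + vp)))
    return l1 * p_common / (l1 * p_common + l2 * (1.0 - p_common))
```

Nearby measurements yield a high common-source posterior and strong recalibration; widely discrepant ones push the posterior toward zero, which is how the model produces the non-monotonic effects of cue reliability described above.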
Affiliation(s)
- Fangfang Hong: Department of Psychology, New York University, New York City, New York, United States of America
- Stephanie Badde: Department of Psychology, Tufts University, Medford, Massachusetts, United States of America
- Michael S. Landy: Department of Psychology, New York University, New York City, New York, United States of America; Center for Neural Science, New York University, New York City, New York, United States of America
10. Keshavarzi S, Bracey EF, Faville RA, Campagner D, Tyson AL, Lenzi SC, Branco T, Margrie TW. Multisensory coding of angular head velocity in the retrosplenial cortex. Neuron 2021; 110:532-543.e9. PMID: 34788632; PMCID: PMC8823706; DOI: 10.1016/j.neuron.2021.10.031.
Abstract
To successfully navigate the environment, animals depend on their ability to continuously track their heading direction and speed. Neurons that encode angular head velocity (AHV) are fundamental to this process, yet the contribution of various motion signals to AHV coding in the cortex remains elusive. By performing chronic single-unit recordings in the retrosplenial cortex (RSP) of the mouse and tracking the activity of individual AHV cells between freely moving and head-restrained conditions, we find that vestibular inputs dominate AHV signaling. Moreover, the addition of visual inputs onto these neurons increases the gain and signal-to-noise ratio of their tuning during active exploration. Psychophysical experiments and neural decoding further reveal that vestibular-visual integration increases the perceptual accuracy of angular self-motion and the fidelity of its representation by RSP ensembles. We conclude that while cortical AHV coding requires vestibular input, where possible, it also uses vision to optimize heading estimation during navigation.
Highlights:
- Angular head velocity (AHV) coding is widespread in the retrosplenial cortex (RSP)
- AHV cells maintain their tuning during passive motion and require vestibular input
- The perception of angular self-motion is improved when visual cues are present
- AHV coding is similarly improved when both vestibular and visual stimuli are used
Affiliation(s)
- Sepiedeh Keshavarzi: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Edward F Bracey: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Richard A Faville: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Dario Campagner: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom; Gatsby Computational Neuroscience Unit, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Adam L Tyson: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Stephen C Lenzi: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Tiago Branco: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
- Troy W Margrie: Sainsbury Wellcome Centre for Neural Circuits and Behaviour, University College London (UCL), 25 Howland Street, London W1T 4JG, United Kingdom
11
|
Noel JP, Angelaki DE. Cognitive, Systems, and Computational Neurosciences of the Self in Motion. Annu Rev Psychol 2021; 73:103-129. [PMID: 34546803] [DOI: 10.1146/annurev-psych-021021-103038]
Abstract
Navigating by path integration requires continuously estimating one's self-motion. This estimate may be derived from visual velocity and/or vestibular acceleration signals. Importantly, these senses in isolation are ill-equipped to provide accurate estimates, and thus visuo-vestibular integration is an imperative. After a summary of the visual and vestibular pathways involved, the crux of this review focuses on the human and theoretical approaches that have outlined a normative account of cue combination in behavior and neurons, as well as on the systems neuroscience efforts that are searching for its neural implementation. We then highlight a contemporary frontier in our state of knowledge: understanding how velocity cues with time-varying reliabilities are integrated into an evolving position estimate over prolonged time periods. Further, we discuss how the brain builds internal models inferring when cues ought to be integrated versus segregated - a process of causal inference. Lastly, we suggest that the study of spatial navigation has not yet addressed its initial condition: self-location.
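The review's core premise, that integrating a noisy velocity signal into a position or heading estimate accumulates error over time, can be demonstrated with a small simulation. This is an illustrative Python sketch under assumed parameter values, not code from the reviewed work; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def integrate_heading(true_omega, sigma, dt, duration, n_trials):
    """Path-integrate a noisy angular-velocity cue into heading estimates.

    Each trial draws independent Gaussian noise around the true angular
    velocity; integration (cumulative sum) turns that noise into a random
    walk, so heading error across trials grows with elapsed time.
    """
    n_steps = int(duration / dt)
    noisy_omega = true_omega + sigma * rng.standard_normal((n_trials, n_steps))
    return np.cumsum(noisy_omega, axis=1) * dt  # heading (deg) per trial, per step

headings = integrate_heading(true_omega=30.0, sigma=10.0, dt=0.01,
                             duration=10.0, n_trials=500)
early_sd = headings[:, 99].std()   # spread across trials after 1 s
late_sd = headings[:, -1].std()    # spread across trials after 10 s
print(early_sd < late_sd)  # True: error accumulates without corrective cues
```

The growing spread across trials is exactly why time-varying cue reliabilities, and external corrective cues such as landmarks, matter for prolonged path integration.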
Affiliation(s)
- Jean-Paul Noel: Center for Neural Science, New York University, New York, NY 10003, USA
- Dora E Angelaki: Center for Neural Science, New York University, New York, NY 10003, USA; Tandon School of Engineering, New York University, New York, NY 11201, USA
12
Tekgün E, Erdeniz B. Influence of vestibular signals on bodily self-consciousness: Different sensory weighting strategies based on visual dependency. Conscious Cogn 2021; 91:103108. [PMID: 33770704] [DOI: 10.1016/j.concog.2021.103108]
Abstract
Previous studies have shown that the vestibular system is crucial for multisensory integration; however, its contribution to bodily self-consciousness, and more specifically to full-body illusions, is not well understood. Thus, the current study examined the role of visuo-vestibular conflict in a full-body illusion (FBI) experiment conducted in a supine body position. In a mixed-design experiment, 56 participants underwent a full-body illusion protocol. During the experiment, half of the participants received synchronous visuo-tactile stimulation and the other half received asynchronous visuo-tactile stimulation, while their physical body was lying in a supine position but the virtual body was standing. Additionally, the contribution of individual sensory weighting strategies was investigated via the Rod and Frame task (RFT), which was applied both before (pre-FBI standing and pre-FBI supine) and after (post-FBI supine) the full-body illusion protocol. Subjective reports of the participants confirmed previous findings, showing a significant increase in ownership over the virtual body during synchronous visuo-tactile stimulation. Additionally, categorizing participants by their visual dependency (via the RFT) showed that participants who rely more on visual information (visual field dependents) perceived the full-body illusion more strongly than non-visual field dependents during the synchronous visuo-tactile stimulation condition. Further analysis provided not only a quantitative demonstration of the full-body illusion but also revealed changes in perceived self-orientation based on field dependency. Altogether, the findings of the current study further our understanding of the vestibular system and bring new insight into individual sensory weighting strategies during a full-body illusion.
Affiliation(s)
- Ege Tekgün: İzmir University of Economics, Department of Psychology, İzmir, Turkey
- Burak Erdeniz: İzmir University of Economics, Department of Psychology, İzmir, Turkey
13
Barra J, Giroux M, Metral M, Cian C, Luyat M, Kavounoudias A, Guerraz M. Functional properties of extended body representations in the context of kinesthesia. Neurophysiol Clin 2020; 50:455-465. [PMID: 33176990] [DOI: 10.1016/j.neucli.2020.10.011]
Abstract
A person's internal representation of his/her body is not fixed. It can be substantially modified by neurological injuries and can also be extended (in healthy participants) to incorporate objects that have a corporeal appearance (such as fake body segments, e.g. a rubber hand), virtual whole bodies (e.g. avatars), and even objects that do not have a corporeal appearance (e.g. tools). Here, we report data from patients and healthy participants that emphasize the flexible nature of body representation and question the extent to which incorporated objects have the same functional properties as biological body parts. Our data shed new light by highlighting the involvement of visual motion information from incorporated objects (rubber hands, full body avatars and hand-held tools) in the perception of one's own movement (kinesthesia). On the basis of these findings, we argue that incorporated objects can be treated as body parts, especially when kinesthesia is involved.
Affiliation(s)
- Julien Barra: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Marion Giroux: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
- Morgane Metral: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, LIP/PC2S, Grenoble, France
- Corinne Cian: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France; Institut de Recherche Biomédicale des Armées, Brétigny sur Orge, France
- Marion Luyat: Univ. Lille, ULR 4072 - PSITEC - Psychologie : Interactions, Temps, Emotions, Cognition, F-59000 Lille, France
- Anne Kavounoudias: Aix-Marseille University, CNRS, LNSC UMR 7260, F-13331 Marseille, France
- Michel Guerraz: Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, LPNC, 38000 Grenoble, France
14
Gallagher M, Choi R, Ferrè ER. Multisensory Interactions in Virtual Reality: Optic Flow Reduces Vestibular Sensitivity, but Only for Congruent Planes of Motion. Multisens Res 2020; 33:625-644. [PMID: 31972542] [DOI: 10.1163/22134808-20201487]
Abstract
During exposure to Virtual Reality (VR) a sensory conflict may be present, whereby the visual system signals that the user is moving in a certain direction with a certain acceleration, while the vestibular system signals that the user is stationary. In order to reduce this conflict, the brain may down-weight vestibular signals, which may in turn affect vestibular contributions to self-motion perception. Here we investigated whether vestibular perceptual sensitivity is affected by VR exposure. Participants' ability to detect artificial vestibular inputs was measured during optic flow or random motion stimuli on a VR head-mounted display. Sensitivity to vestibular signals was significantly reduced when optic flow stimuli were presented, but importantly this was only the case when both visual and vestibular cues conveyed information on the same plane of self-motion. Our results suggest that the brain dynamically adjusts the weight given to incoming sensory cues for self-motion in VR; however, this is dependent on the congruency of visual and vestibular cues.
Affiliation(s)
- Reno Choi: Royal Holloway, University of London, Egham, UK
15
Caron-Guyon J, Corbo J, Zennou-Azogui Y, Xerri C, Kavounoudias A, Catz N. Neuronal Encoding of Multisensory Motion Features in the Rat Associative Parietal Cortex. Cereb Cortex 2020; 30:5372-5386. [PMID: 32494803] [DOI: 10.1093/cercor/bhaa118]
Abstract
Motion perception is facilitated by the interplay of various sensory channels. In rodents, the cortical areas involved in multisensory motion coding remain to be identified. Using voltage-sensitive-dye imaging, we revealed a visuo-tactile convergent region that anatomically corresponds to the associative parietal cortex (APC). Single-unit responses to moving visual gratings or whisker deflections revealed a specific coding of motion characteristics strikingly found in both sensory modalities. The heteromodality of this region was further supported by a large proportion of bimodal neurons and by a classification procedure revealing that APC carries information about motion features, sensory origin and multisensory direction-congruency. Altogether, the results point to a central role of APC in multisensory integration for motion perception.
Affiliation(s)
- Julien Corbo: Aix Marseille Université, CNRS, LNSC UMR 7260, Marseille 13331, France; Center for Molecular and Behavioral Neuroscience, Rutgers University-Newark, NJ 07102, USA
- Christian Xerri: Aix Marseille Université, CNRS, LNSC UMR 7260, Marseille 13331, France
- Anne Kavounoudias: Aix Marseille Université, CNRS, LNSC UMR 7260, Marseille 13331, France
- Nicolas Catz: Aix Marseille Université, CNRS, LNSC UMR 7260, Marseille 13331, France
16
Yeon J, Rahnev D. The suboptimality of perceptual decision making with multiple alternatives. Nat Commun 2020; 11:3857. [PMID: 32737317] [PMCID: PMC7395091] [DOI: 10.1038/s41467-020-17661-z]
Abstract
It is becoming widely appreciated that human perceptual decision making is suboptimal, but the nature and origins of this suboptimality remain poorly understood. Most past research has employed tasks with two stimulus categories, but such designs cannot fully capture the limitations inherent in naturalistic perceptual decisions, where choices are rarely between only two alternatives. We conduct four experiments with tasks involving multiple alternatives and use computational modeling to determine the decision-level representation on which the perceptual decisions are based. The results from all four experiments point to the existence of robust suboptimality such that most of the information in the sensory representation is lost during the transformation to a decision-level representation. These results reveal severe limits in the quality of decision-level representations for multiple alternatives and have strong implications about perceptual decision making in naturalistic settings.
Affiliation(s)
- Jiwon Yeon: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Dobromir Rahnev: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
17
Thür C, Roel Lesur M, Bockisch CJ, Lopez C, Lenggenhager B. The Tilted Self: Visuo-Graviceptive Mismatch in the Full-Body Illusion. Front Neurol 2019; 10:436. [PMID: 31133959] [PMCID: PMC6517513] [DOI: 10.3389/fneur.2019.00436]
Abstract
The bodily self is a fundamental part of human self-consciousness and relies on online multimodal information and prior beliefs about one's own body. While the contribution of the vestibular system in this process remains under-investigated, it has been theorized to be important. The present experiment investigates the influence of conflicting gravity-related visual and bodily information on the sense of a body and, vice versa, the influence of altered embodiment on verticality and own-body orientation perception. In a full-body illusion setup, participants saw in a head-mounted display a projection of their own body 2 m in front of them, on which they saw a tactile stimulation on their back displayed either synchronously or asynchronously. By tilting the seen body to one side, an additional visuo-graviceptive conflict about the body orientation was created. Self-identification with the seen body was measured explicitly with a questionnaire and implicitly with skin temperature. As measures of orientation with respect to gravity, we assessed subjective haptic vertical and the haptic body orientation. Finally, we measured the individual visual field dependence using the rod-and-frame test. The results show a decrease in self-identification during the additional visuo-graviceptive conflict, but no modulation of perceived verticality or subjective body orientation. Furthermore, explorative analyses suggest a stimulation-dependent modulation of the perceived body orientation in individuals with a strong visual field dependence only. The results suggest a mutual interaction of graviceptive and other sensory signals and the individual's weighting style in defining our sense of a bodily self.
Affiliation(s)
- Carla Thür: Department of Psychology, University of Zurich, Zurich, Switzerland
- Marte Roel Lesur: Department of Psychology, University of Zurich, Zurich, Switzerland
- Christopher J. Bockisch: Department of Neurology, University Hospital Zurich, Zurich, Switzerland; Department of Otorhinolaryngology, University Hospital Zurich, Zurich, Switzerland; Department of Ophthalmology, University Hospital Zurich, Zurich, Switzerland
18
Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex 2019; 119:74-88. [PMID: 31082680] [PMCID: PMC6864592] [DOI: 10.1016/j.cortex.2019.03.026]
Abstract
Multisensory perception is regarded as one of the most prominent examples where human behaviour conforms to the computational principles of maximum likelihood estimation (MLE). In particular, observers are thought to integrate auditory and visual spatial cues weighted in proportion to their relative sensory reliabilities into the most reliable and unbiased percept consistent with MLE. Yet, evidence to date has been inconsistent. The current pre-registered, large-scale (N = 36) replication study investigated the extent to which human behaviour for audiovisual localization is in line with maximum likelihood estimation. The acquired psychophysics data show that while observers were able to reduce their multisensory variance relative to the unisensory variances in accordance with MLE, they weighted the visual signals significantly more strongly than predicted by MLE. Simulations show that this dissociation can be explained by a greater sensitivity of standard estimation procedures to detect deviations from MLE predictions for sensory weights than for audiovisual variances. Our results therefore suggest that observers did not integrate audiovisual spatial signals weighted exactly in proportion to their relative reliabilities for localization. These small deviations from the predictions of maximum likelihood estimation may be explained by observers' uncertainty about the world's causal structure as accounted for by Bayesian causal inference.
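The two MLE predictions being tested, inverse-variance cue weights and a fused variance smaller than either unisensory variance, reduce to two lines of algebra. The following is an illustrative Python sketch of those standard predictions (our own naming, not the study's analysis code):

```python
def mle_prediction(var_visual, var_auditory):
    """Maximum-likelihood-estimation predictions for two-cue integration.

    Returns the predicted visual weight and the variance of the fused
    estimate. Each cue is weighted by its reliability (inverse variance),
    and the fused variance is below both unisensory variances.
    """
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_auditory)
    var_fused = (var_visual * var_auditory) / (var_visual + var_auditory)
    return w_visual, var_fused

# A visual cue three times more reliable than the auditory cue
# receives three times the weight.
w, v = mle_prediction(var_visual=1.0, var_auditory=3.0)
print(w, v)  # 0.75 0.75
```

The study's finding can be restated in these terms: observers matched the `var_fused` prediction but reported locations consistent with a visual weight larger than `w_visual`.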
Affiliation(s)
- David Meijer: Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Sebastijan Veselič: Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Carmelo Calafiore: Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney: Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
19
Stengård E, van den Berg R. Imperfect Bayesian inference in visual perception. PLoS Comput Biol 2019; 15:e1006465. [PMID: 30998675] [PMCID: PMC6472731] [DOI: 10.1371/journal.pcbi.1006465]
Abstract
Optimal Bayesian models have been highly successful in describing human performance on perceptual decision-making tasks, such as cue combination and visual search. However, recent studies have argued that these models are often overly flexible and therefore lack explanatory power. Moreover, there are indications that neural computation is inherently imprecise, which makes it implausible that humans would perform optimally on any non-trivial task. Here, we reconsider human performance on a visual-search task by using an approach that constrains model flexibility and tests for computational imperfections. Subjects performed a target detection task in which targets and distractors were tilted ellipses with orientations drawn from Gaussian distributions with different means. We varied the amount of overlap between these distributions to create multiple levels of external uncertainty. We also varied the level of sensory noise, by testing subjects under both short and unlimited display times. On average, empirical performance, measured as d', fell 18.1% short of optimal performance. We found no evidence that the magnitude of this suboptimality was affected by the level of internal or external uncertainty. The data were well accounted for by a Bayesian model with imperfections in its computations. This "imperfect Bayesian" model convincingly outperformed the "flawless Bayesian" model as well as all ten heuristic models that we tested. These results suggest that perception is founded on Bayesian principles, but with suboptimalities in the implementation of these principles. The view of perception as imperfect Bayesian inference can provide a middle ground between traditional Bayesian and anti-Bayesian views.
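Performance here is quantified as d', the separation between signal and noise response distributions in z-units. A minimal sketch of that standard signal-detection computation (the rates and the "optimal d' of 2" below are hypothetical illustration values, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# An observer with 84% hits and 16% false alarms has d' of about 2.
observed = d_prime(0.84, 0.16)
# An 18.1% shortfall from an optimal d' of 2.0 would leave about 1.64.
shortfall = 2.0 * (1 - 0.181)
print(round(observed, 2), round(shortfall, 2))  # 1.99 1.64
```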
Affiliation(s)
- Elina Stengård: Department of Psychology, University of Uppsala, Uppsala, Sweden
20
Britton Z, Arshad Q. Vestibular and Multi-Sensory Influences Upon Self-Motion Perception and the Consequences for Human Behavior. Front Neurol 2019; 10:63. [PMID: 30899238] [PMCID: PMC6416181] [DOI: 10.3389/fneur.2019.00063]
Abstract
In this manuscript, we comprehensively review both the human and animal literature regarding vestibular and multi-sensory contributions to self-motion perception. This covers the anatomical basis and how and where the signals are processed at all levels from the peripheral vestibular system to the brainstem and cerebellum and finally to the cortex. Further, we consider how and where these vestibular signals are integrated with other sensory cues to facilitate self-motion perception. We conclude by demonstrating the wide-ranging influences of the vestibular system and self-motion perception upon behavior, namely eye movement, postural control, and spatial awareness as well as new discoveries that such perception can impact upon numerical cognition, human affect, and bodily self-consciousness.
Affiliation(s)
- Zelie Britton: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
- Qadeer Arshad: Department of Neuro-Otology, Charing Cross Hospital, Imperial College London, London, United Kingdom
21
Kaliuzhna M, Serino A, Berger S, Blanke O. Differential effects of vestibular processing on orienting exogenous and endogenous covert visual attention. Exp Brain Res 2018; 237:401-410. [PMID: 30421244] [DOI: 10.1007/s00221-018-5403-3]
Abstract
Recent research highlights the overwhelming role of vestibular information for higher order cognition. Central to body perception, vestibular cues provide information about self-location in space, self-motion versus object motion, and modulate the perception of space. Surprisingly, however, little research has dealt with how vestibular information combines with other senses to orient one's attention in space. Here we used passive whole body rotations as exogenous (Experiment 1) or endogenous (Experiment 2) attentional cues and studied their effects on orienting visual attention in a classical Posner paradigm. We show that, when employed as an exogenous stimulus, rotation impacts attention orienting only immediately after vestibular stimulation onset. However, when acting as an endogenous stimulus, vestibular stimulation provides a robust benefit to target detection throughout the rotation profile. Our data also demonstrate that vestibular stimulation boosts attentional processing more generally, independent of rotation direction, associated with a general improvement in performance. These data provide evidence for distinct effects of vestibular processing on endogenous and exogenous attention as well as alertness that differ with respect to the temporal dynamics of the motion profile. These data reveal that attentional spatial processing and spatial body perception as manipulated through vestibular stimulation share important brain mechanisms.
Affiliation(s)
- Mariia Kaliuzhna: Center for Neuroprosthetics, Brain Mind Institute, Faculty of Life Sciences, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Andrea Serino: Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Psychology, University of Bologna, Bologna, Italy
- Steve Berger: Center for Neuroprosthetics, Brain Mind Institute, Faculty of Life Sciences, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Olaf Blanke: Center for Neuroprosthetics, Brain Mind Institute, Faculty of Life Sciences, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Science, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Neurology, University Hospital, Geneva, Switzerland
22
Hand movement illusions show changes in sensory reliance and preservation of multisensory integration with age for kinaesthesia. Neuropsychologia 2018; 119:45-58. [DOI: 10.1016/j.neuropsychologia.2018.07.027]
23
Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. [PMID: 30052625] [PMCID: PMC6063401] [DOI: 10.1371/journal.pcbi.1006110]
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
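The Bayesian causal-inference computation at issue, weighing a common-cause against an independent-causes explanation of a visuo-vestibular cue pair, can be sketched by numerical integration over the latent heading. This is an illustrative Python sketch of the standard causal-inference model with assumed parameter values, not the authors' model code:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def p_common(x_vis, x_vest, sigma_vis, sigma_vest, sigma_prior, prior_common):
    """Posterior probability that visual and vestibular heading measurements
    share one cause, computed on a grid over the latent heading."""
    s = np.linspace(-90.0, 90.0, 4001)           # candidate headings (deg)
    ds = s[1] - s[0]
    prior_s = normal_pdf(s, 0.0, sigma_prior)    # prior over headings
    lik_vis = normal_pdf(x_vis, s, sigma_vis)
    lik_vest = normal_pdf(x_vest, s, sigma_vest)
    # C = 1: both measurements generated by the same heading s.
    like_c1 = np.sum(lik_vis * lik_vest * prior_s) * ds
    # C = 2: each measurement generated by its own independently drawn heading.
    like_c2 = (np.sum(lik_vis * prior_s) * ds) * (np.sum(lik_vest * prior_s) * ds)
    return prior_common * like_c1 / (
        prior_common * like_c1 + (1.0 - prior_common) * like_c2)

# Small cue disparity favors a common cause; large disparity favors separate ones.
near = p_common(2.0, -2.0, 5.0, 5.0, 20.0, 0.5)
far = p_common(20.0, -20.0, 5.0, 5.0, 20.0, 0.5)
print(near > 0.5 > far)  # True
```

Forced fusion corresponds to skipping this computation and always integrating, which is why small-disparity trials alone cannot distinguish the two strategies.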
Affiliation(s)
- Luigi Acerbi: Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma: Center for Neural Science, New York University, New York, NY, United States of America; Department of Psychology, New York University, New York, NY, United States of America
24
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922] [DOI: 10.1111/nyas.13867]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still very much a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, leveraging the general understanding of multisensory processes, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel: Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke: Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland; Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino: MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
25
Pfeiffer C, Noel J, Serino A, Blanke O. Vestibular modulation of peripersonal space boundaries. Eur J Neurosci 2018; 47:800-811. [DOI: 10.1111/ejn.13872]
Affiliation(s)
- Christian Pfeiffer: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech H4, Chemin des Mines 9, CH-1202 Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Autonomous Systems Laboratory, Institute of Robotics and Intelligent Systems, Eidgenössische Technische Hochschule Zürich (ETHZ), Zürich, Switzerland
- Jean-Paul Noel: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech H4, Chemin des Mines 9, CH-1202 Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Andrea Serino: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech H4, Chemin des Mines 9, CH-1202 Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; MySpace Lab, Department of Clinical Neuroscience, Lausanne University and University Hospital (CHUV), Lausanne, Switzerland
- Olaf Blanke: Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech H4, Chemin des Mines 9, CH-1202 Geneva, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), Geneva, Switzerland; Department of Neurology, University Hospital Geneva, Geneva, Switzerland
26
Kaliuzhna M, Gale S, Prsa M, Maire R, Blanke O. Optimal visuo-vestibular integration for self-motion perception in patients with unilateral vestibular loss. Neuropsychologia 2018; 111:112-116. [DOI: 10.1016/j.neuropsychologia.2018.01.033]
|
27
|
Rahnev D, Denison RN. Suboptimality in perceptual decision making. Behav Brain Sci 2018; 41:e223.
Abstract
Human perceptual decisions are often described as optimal. Critics of this view have argued that claims of optimality are overly flexible and lack explanatory power. Meanwhile, advocates for optimality have countered that such criticisms single out a few selected papers. To elucidate the issue of optimality in perceptual decision making, we review the extensive literature on suboptimal performance in perceptual tasks. We discuss eight different classes of suboptimal perceptual decisions, including improper placement, maintenance, and adjustment of perceptual criteria; inadequate tradeoff between speed and accuracy; inappropriate confidence ratings; misweightings in cue combination; and findings related to various perceptual illusions and biases. In addition, we discuss conceptual shortcomings of a focus on optimality, such as definitional difficulties and the limited value of optimality claims in and of themselves. We therefore advocate that the field drop its emphasis on whether observed behavior is optimal and instead concentrate on building and testing detailed observer models that explain behavior across a wide range of tasks. To facilitate this transition, we compile the proposed hypotheses regarding the origins of suboptimal perceptual decisions reviewed here. We argue that verifying, rejecting, and expanding these explanations for suboptimal behavior - rather than assessing optimality per se - should be among the major goals of the science of perceptual decision making.
Affiliation(s)
- Dobromir Rahnev: School of Psychology, Georgia Institute of Technology, Atlanta, GA 30332
- Rachel N Denison: Department of Psychology and Center for Neural Science, New York University, New York, NY 10003
|
28
|
Uesaki M, Takemura H, Ashida H. Computational neuroanatomy of human stratum proprium of interparietal sulcus. Brain Struct Funct 2018; 223:489-507. [PMID: 28871500] [PMCID: PMC5772143] [DOI: 10.1007/s00429-017-1492-1]
Abstract
Recent advances in diffusion-weighted MRI (dMRI) and tractography have enabled identification of major long-range white matter tracts in the human brain. Yet, our understanding of shorter tracts, such as those within the parietal lobe, remains limited. Over a century ago, a tract connecting the superior and inferior parts of the parietal cortex was identified in a post-mortem study: stratum proprium of interparietal sulcus (SIPS; Sachs, Das Hemisphärenmark des menschlichen Grosshirns. Verlag von Georg Thieme, Leipzig, 1892). The tract has since been replicated in another fibre dissection study (Vergani et al., Cortex 56:145-156, 2014); however, it has not been fully investigated in the living human brain and its precise anatomical properties are yet to be described. We used dMRI and tractography to identify and characterise SIPS in vivo, and explored its spatial proximity to the cortical areas associated with optic-flow processing using fMRI. SIPS was identified bilaterally in all subjects, and its anatomical position and trajectory are consistent with previous post-mortem studies. Subsequent evaluation of the tractography results using the linear fascicle evaluation and virtual lesion analysis yielded strong statistical evidence for SIPS. We also found that the SIPS endpoints are adjacent to the optic-flow selective areas. In sum, we show that SIPS is a short-range tract connecting the superior and inferior parts of the parietal cortex, wrapping around the intraparietal sulcus, and that it may be a crucial anatomical substrate of optic-flow processing. In vivo identification and characterisation of SIPS will facilitate further research on SIPS in relation to cortical functions, their development, and diseases that affect them.
Affiliation(s)
- Maiko Uesaki: Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan; Japan Society for the Promotion of Science, Tokyo, Japan; Open Innovation and Collaboration Research Organization, Ritsumeikan University, Osaka, Japan
- Hiromasa Takemura: Japan Society for the Promotion of Science, Tokyo, Japan; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology, and Osaka University, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan
- Hiroshi Ashida: Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto, Japan
|
29
|
Gallagher M, Ferrè ER. Cybersickness: a Multisensory Integration Perspective. Multisens Res 2018; 31:645-674. [PMID: 31264611] [DOI: 10.1163/22134808-20181293]
Abstract
In the past decade, there has been a rapid advance in Virtual Reality (VR) technology. Key to the user's VR experience are multimodal interactions involving all senses. The human brain must integrate real-time vision, hearing, vestibular and proprioceptive inputs to produce the compelling and captivating feeling of immersion in a VR environment. A serious problem with VR is that users may develop symptoms similar to motion sickness, a malady called cybersickness. At present the underlying cause of cybersickness is not yet fully understood. Cybersickness may be due to a discrepancy between the sensory signals which provide information about the body's orientation and motion: in many VR applications, optic flow elicits an illusory sensation of motion which tells users that they are moving in a certain direction with certain acceleration. However, since users are not actually moving, their proprioceptive and vestibular organs provide no cues of self-motion. These conflicting signals may lead to sensory discrepancies and eventually cybersickness. Here we review the current literature to develop a conceptual scheme for understanding the neural mechanisms of cybersickness. We discuss an approach to cybersickness based on sensory cue integration, focusing on the dynamic re-weighting of visual and vestibular signals for self-motion.
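A toy sketch of the dynamic re-weighting idea discussed in this abstract: inflate the visual cue's variance in proportion to recent visuo-vestibular conflict, then renormalize the reliability weights. The linear inflation rule and all constants are illustrative assumptions, not a model from the paper:

```python
def reweight(sigma_vis, sigma_vest, conflict, k=1.0):
    """Re-weight visual vs. vestibular cues given recent cue conflict.

    The visual standard deviation is inflated by (1 + k * conflict), a
    hypothetical linear rule; weights are inverse variances, renormalized
    to sum to 1.
    """
    sigma_vis_eff = sigma_vis * (1.0 + k * conflict)
    w_vis = 1.0 / sigma_vis_eff ** 2
    w_vest = 1.0 / sigma_vest ** 2
    total = w_vis + w_vest
    return w_vis / total, w_vest / total

# With no conflict, the sharper visual cue dominates; with large conflict,
# the weighting flips toward the vestibular cue.
w0 = reweight(1.0, 2.0, conflict=0.0)
w1 = reweight(1.0, 2.0, conflict=3.0)
```

On this account, cybersickness would arise during the transient period in which the weights have not yet adjusted to the conflicting optic flow.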
Affiliation(s)
- Maria Gallagher: Department of Psychology, Royal Holloway University of London, Egham, UK
|
30
|
Garzorz IT, MacNeilage PR. Visual-Vestibular Conflict Detection Depends on Fixation. Curr Biol 2017; 27:2856-2861.e4. [DOI: 10.1016/j.cub.2017.08.011]
|
31
|
Churan J, Paul J, Klingenhoefer S, Bremmer F. Integration of visual and tactile information in reproduction of traveled distance. J Neurophysiol 2017; 118:1650-1663. [PMID: 28659463] [DOI: 10.1152/jn.00342.2017]
Abstract
In the natural world, self-motion always stimulates several different sensory modalities. Here we investigated the interplay between a visual optic flow stimulus simulating self-motion and a tactile stimulus (air flow resulting from self-motion) while human observers were engaged in a distance reproduction task. We found that adding congruent tactile information (i.e., speed of the air flow and speed of visual motion are directly proportional) to the visual information significantly improves the precision of the actively reproduced distances. This improvement, however, was smaller than predicted for an optimal integration of visual and tactile information. In contrast, incongruent tactile information (i.e., speed of the air flow and speed of visual motion are inversely proportional) did not improve subjects' precision, indicating that incongruent tactile information and visual information were not integrated. One possible interpretation of the results is a link to properties of neurons in the ventral intraparietal area that have been shown to have spatially and action-congruent receptive fields for visual and tactile stimuli.
NEW & NOTEWORTHY: This study shows that tactile and visual information can be integrated to improve the estimates of the parameters of self-motion. This, however, happens only if the two sources of information are congruent, as they are in a natural environment. In contrast, an incongruent tactile stimulus is still used as a source of information about self-motion but it is not integrated with visual information.
Affiliation(s)
- Jan Churan: Department of Neurophysics, Marburg University, Marburg, Germany
- Johannes Paul: Department of Neurophysics, Marburg University, Marburg, Germany
- Steffen Klingenhoefer: Department of Neurophysics, Marburg University, Marburg, Germany; Center for Molecular and Behavioral Neuroscience, Rutgers University, Newark, New Jersey
- Frank Bremmer: Department of Neurophysics, Marburg University, Marburg, Germany
|
32
|
Nesti A, de Winkel K, Bülthoff HH. Accumulation of Inertial Sensory Information in the Perception of Whole Body Yaw Rotation. PLoS One 2017; 12:e0170497. [PMID: 28125681] [PMCID: PMC5268484] [DOI: 10.1371/journal.pone.0170497]
Abstract
While moving through the environment, our central nervous system accumulates sensory information over time to provide an estimate of our self-motion, allowing for completing crucial tasks such as maintaining balance. However, little is known on how the duration of the motion stimuli influences our performances in a self-motion discrimination task. Here we study the human ability to discriminate intensities of sinusoidal (0.5 Hz) self-rotations around the vertical axis (yaw) for four different stimulus durations (1, 2, 3 and 5 s) in darkness. In a typical trial, participants experienced two consecutive rotations of equal duration and different peak amplitude, and reported the one perceived as stronger. For each stimulus duration, we determined the smallest detectable change in stimulus intensity (differential threshold) for a reference velocity of 15 deg/s. Results indicate that differential thresholds decrease with stimulus duration and asymptotically converge to a constant, positive value. This suggests that the central nervous system accumulates sensory information on self-motion over time, resulting in improved discrimination performances. Observed trends in differential thresholds are consistent with predictions based on a drift diffusion model with leaky integration of sensory evidence.
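The drift-diffusion model with leaky integration invoked in this abstract predicts exactly this pattern: the leak bounds the accumulated evidence, so sensitivity grows with stimulus duration but converges to an asymptote. A closed-form sketch (drift, noise, and leak parameters are arbitrary, not fitted to the study's data):

```python
import math

def dprime_leaky(T, s=1.0, sigma=1.0, tau=2.0):
    """Sensitivity of a leaky integrator after accumulating for T seconds.

    For constant drift s, white-noise s.d. sigma, and leak time constant tau,
    the leaky-integrated evidence has closed-form moments:
      mean(T) = s * tau * (1 - exp(-T/tau))
      var(T)  = sigma**2 * tau/2 * (1 - exp(-2T/tau))
    """
    mean = s * tau * (1.0 - math.exp(-T / tau))
    var = sigma ** 2 * tau / 2.0 * (1.0 - math.exp(-2.0 * T / tau))
    return mean / math.sqrt(var)

# Differential threshold ~ 1/d': it falls with duration but converges to a
# positive asymptote (here sigma / (s * sqrt(2*tau)) = 0.5), because the
# leak limits how much evidence can accumulate.
thresholds = {T: 1.0 / dprime_leaky(T) for T in (1, 2, 3, 5)}
```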
Affiliation(s)
- Alessandro Nesti: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Ksander de Winkel: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
|
33
|
Gori M, Cappagli G, Baud-Bovy G, Finocchietti S. Shape Perception and Navigation in Blind Adults. Front Psychol 2017; 8:10. [PMID: 28144226] [PMCID: PMC5240028] [DOI: 10.3389/fpsyg.2017.00010]
Abstract
Different sensory systems interact to generate a representation of space and to navigate. Vision plays a critical role in the development of the representation of space. During navigation, vision is integrated with auditory and mobility cues. In blind individuals, visual experience is not available and navigation therefore lacks this important sensory signal. Compensatory mechanisms can be adopted to improve spatial and navigation skills, but the limitations of these compensatory mechanisms are not completely clear. Both enhanced and impaired reliance on auditory cues in blind individuals have been reported. Here, we develop a new paradigm to test both auditory perception and navigation skills in blind and sighted individuals and to investigate the effect that visual experience has on the ability to reproduce simple and complex paths. During the navigation task, early blind, late blind and sighted individuals were required first to listen to an audio shape and then to recognize and reproduce it by walking. After each audio shape was presented, a static sound was played and the participants were asked to reach it. Movements were recorded with a motion tracking system. Our results show three main impairments specific to early blind individuals. The first is the tendency to compress the shapes reproduced during navigation. The second is the difficulty to recognize complex audio stimuli, and finally, the third is the difficulty in reproducing the desired shape: early blind participants occasionally reported perceiving a square but they actually reproduced a circle during the navigation task. We discuss these results in terms of compromised spatial reference frames due to lack of visual input during the early period of development.
Affiliation(s)
- Monica Gori: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Gabriel Baud-Bovy: Robotics, Brain and Cognitive Science Department, Istituto Italiano di Tecnologia, Genoa, Italy; Unit of Experimental Psychology, Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Sara Finocchietti: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
|
34
|
Cuturi LF, Aggius-Vella E, Campus C, Parmiggiani A, Gori M. From science to technology: Orientation and mobility in blind children and adults. Neurosci Biobehav Rev 2016; 71:240-251. [DOI: 10.1016/j.neubiorev.2016.08.019]
|
35
|
Schurger A, Gale S, Gozel O, Blanke O. Performance monitoring for brain-computer-interface actions. Brain Cogn 2016; 111:44-50. [PMID: 27816779] [DOI: 10.1016/j.bandc.2016.09.009]
Abstract
When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body.
Affiliation(s)
- Aaron Schurger: Laboratory of Cognitive Neuroscience, Brain-Mind Institute, Department of Life Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Campus Biotech, 9 Chemin des Mines, 1202 Genève, Switzerland; Defitech Chair in Non-Invasive Brain-Machine Interface, Center for Neuroprosthetics, School of Engineering, EPFL, Genève, Switzerland
- Steven Gale: Laboratory of Cognitive Neuroscience, Brain-Mind Institute, and Center for Neuroprosthetics, EPFL, Campus Biotech, 1202 Genève, Switzerland
- Olivia Gozel: Laboratory of Cognitive Neuroscience, Brain-Mind Institute, and Center for Neuroprosthetics, EPFL, Campus Biotech, 1202 Genève, Switzerland
- Olaf Blanke: Laboratory of Cognitive Neuroscience, Brain-Mind Institute, and Center for Neuroprosthetics, EPFL, Campus Biotech, 1202 Genève, Switzerland; Department of Neurology, University Hospital of Geneva, Rue Micheli-du-Crest 24, 1205 Geneva, Switzerland
|
36
|
Ramkhalawansingh R, Keshavarz B, Haycock B, Shahab S, Campos JL. Examining the Effect of Age on Visual–Vestibular Self-Motion Perception Using a Driving Paradigm. Perception 2016; 46:566-585. [DOI: 10.1177/0301006616675883]
Abstract
Previous psychophysical research has examined how younger adults and non-human primates integrate visual and vestibular cues to perceive self-motion. However, there is much to be learned about how multisensory self-motion perception changes with age, and how these changes affect performance on everyday tasks involving self-motion. Evidence suggests that older adults display heightened multisensory integration compared with younger adults; however, few previous studies have examined this for visual–vestibular integration. To explore age differences in the way that visual and vestibular cues contribute to self-motion perception, we had younger and older participants complete a basic driving task containing visual and vestibular cues. We compared their performance against a previously established control group that experienced visual cues alone. Performance measures included speed, speed variability, and lateral position. Vestibular inputs resulted in more precise speed control among older adults, but not younger adults, when traversing curves. Older adults demonstrated more variability in lateral position when vestibular inputs were available versus when they were absent. These observations align with previous evidence of age-related differences in multisensory integration and demonstrate that they may extend to visual–vestibular integration. These findings may have implications for vehicle and simulator design when considering older users.
Affiliation(s)
- Robert Ramkhalawansingh: Department of Psychology, University of Toronto, Canada; Toronto Rehabilitation Institute, University Health Network, Canada
- Behrang Keshavarz: Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, Ryerson University
- Bruce Haycock: Toronto Rehabilitation Institute, University Health Network, Canada; Institute for Aerospace Studies, University of Toronto, Canada
- Saba Shahab: Faculty of Medicine, University of Toronto, Canada
- Jennifer L. Campos: Toronto Rehabilitation Institute, University Health Network, Canada; Department of Psychology, University of Toronto, Canada
|
37
|
Genzel D, Firzlaff U, Wiegrebe L, MacNeilage PR. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals. J Neurophysiol 2016; 116:765-775. [PMID: 27169504] [DOI: 10.1152/jn.00052.2016]
Abstract
Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating.
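The weighting step described in this abstract (perceived head rotation modeled as a linear combination of vestibular and proprioceptive/efference-copy signals) can be sketched as an ordinary least-squares fit across conditions. The condition coding and the perceived gains below are hypothetical illustrations, not the study's data:

```python
def fit_weights(X, y):
    """Ordinary least squares for two regressors via the normal equations.

    X: list of (vestibular, proprioceptive) signal magnitudes per condition.
    y: perceived rotation gain in each condition.
    Returns the fitted (w_vestibular, w_proprioceptive) weights.
    """
    sxx = sum(v * v for v, p in X)
    spp = sum(p * p for v, p in X)
    sxp = sum(v * p for v, p in X)
    sxy = sum(v * yi for (v, p), yi in zip(X, y))
    spy = sum(p * yi for (v, p), yi in zip(X, y))
    det = sxx * spp - sxp * sxp
    w_vest = (spp * sxy - sxp * spy) / det
    w_prop = (sxx * spy - sxp * sxy) / det
    return w_vest, w_prop

# Hypothetical coding: passive = vestibular signal only, cancellation =
# proprioception/efference copy only, active = both signals present.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [0.95, 0.30, 1.10]  # hypothetical perceived gains per condition
w_vest, w_prop = fit_weights(X, y)
```

With these made-up gains the vestibular weight dominates, mirroring the qualitative conclusion of the abstract.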
Affiliation(s)
- Daria Genzel: Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
- Uwe Firzlaff: Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Chair of Zoology, Technische Universität München, Freising-Weihenstephan, Germany
- Lutz Wiegrebe: Department Biology II, Ludwig-Maximilian University of Munich, Planegg-Martinsried, Germany; Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany
- Paul R MacNeilage: Bernstein Center for Computational Neuroscience Munich, Planegg-Martinsried, Germany; Deutsches Schwindel- und Gleichgewichtszentrum, University Hospital of Munich, Munich, Germany
|
38
|
Chancel M, Blanchard C, Guerraz M, Montagnini A, Kavounoudias A. Optimal visuotactile integration for velocity discrimination of self-hand movements. J Neurophysiol 2016; 116:1522-1535. [PMID: 27385802] [DOI: 10.1152/jn.00883.2015]
Abstract
Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition compared with each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did actually predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performances when a proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origins is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon pointing to the crucial importance of the omnipresent muscle proprioceptive cues with respect to other sensory cues for kinesthesia.
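The Bayesian model tested in this abstract (visuo-tactile fusion plus a zero-centered prior standing in for immobility information) has a compact Gaussian form: precisions add, and the prior pulls the fused velocity estimate toward zero. A sketch with hypothetical numbers:

```python
import math

def fuse_with_prior(cues, prior_sigma):
    """Gaussian fusion of several cues with a zero-centered prior.

    cues: list of (mean, sigma) likelihoods, e.g. visual and tactile
    velocity estimates. The prior is N(0, prior_sigma**2); its mean of
    zero contributes nothing to the weighted sum, only precision.
    """
    w_total = 1.0 / prior_sigma ** 2   # prior precision
    weighted = 0.0
    for mu, sigma in cues:
        w = 1.0 / sigma ** 2
        w_total += w
        weighted += w * mu
    return weighted / w_total, math.sqrt(1.0 / w_total)

# Hypothetical visual and tactile estimates of illusory hand velocity
# (both 6 deg/s, with different reliabilities) plus an immobility prior.
mu, sigma = fuse_with_prior([(6.0, 1.5), (6.0, 2.0)], prior_sigma=3.0)
```

Note that the zero-centered prior drags the fused estimate below both cue means while still sharpening it, which is the qualitative pattern the model is meant to capture.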
Affiliation(s)
- M Chancel: LNIA UMR 7260, Aix Marseille Université-Centre National de la Recherche Scientifique (CNRS), Marseille, France; LPNC UMR 5105, Université Savoie Mont Blanc-CNRS, Chambéry, France
- C Blanchard: School of Psychology, University of Nottingham, Nottingham, United Kingdom
- M Guerraz: LPNC UMR 5105, Université Savoie Mont Blanc-CNRS, Chambéry, France
- A Montagnini: INT UMR 7289, Aix Marseille Université-CNRS, Marseille, France
- A Kavounoudias: LNIA UMR 7260, Aix Marseille Université-CNRS, Marseille, France
|
39
|
Nash CJ, Cole DJ, Bigler RS. A review of human sensory dynamics for application to models of driver steering and speed control. Biol Cybern 2016; 110:91-116. [PMID: 27086133] [PMCID: PMC4903114] [DOI: 10.1007/s00422-016-0682-x]
Abstract
In comparison with the high level of knowledge about vehicle dynamics which exists nowadays, the role of the driver in the driver-vehicle system is still relatively poorly understood. A large variety of driver models exist for various applications; however, few of them take account of the driver's sensory dynamics, and those that do are limited in their scope and accuracy. A review of the literature has been carried out to consolidate information from previous studies which may be useful when incorporating human sensory systems into the design of a driver model. This includes information on sensory dynamics, delays, thresholds and integration of multiple sensory stimuli. This review should provide a basis for further study into sensory perception during driving.
Affiliation(s)
- Christopher J. Nash: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
- David J. Cole: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
- Robert S. Bigler: Cambridge University Engineering Department, Trumpington Street, Cambridge, CB2 1PZ, UK
|
40
|
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction. Sci Rep 2016; 6:26301. [PMID: 27198907] [PMCID: PMC4873743] [DOI: 10.1038/srep26301]
Abstract
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion.
41
Pfeiffer C, Grivaz P, Herbelin B, Serino A, Blanke O. Visual gravity contributes to subjective first-person perspective. Neurosci Conscious 2016; 2016:niw006. PMID: 30109127. PMCID: PMC6084587. DOI: 10.1093/nc/niw006.
Abstract
A fundamental component of conscious experience involves a first-person perspective (1PP), characterized by the experience of being a subject and of being directed at the world. Extending earlier work on multisensory perceptual mechanisms of 1PP, we here asked whether the experienced direction of the 1PP (i.e. the spatial direction of subjective experience of the world) depends on visual-tactile-vestibular conflicts, including the direction of gravity. Sixteen healthy subjects in supine position received visuo-tactile synchronous or asynchronous stroking to induce a full-body illusion. In the critical manipulation, we presented gravitational visual object motion directed toward or away from the participant’s body and thus congruent or incongruent with respect to the direction of vestibular and somatosensory gravitational cues. The results showed that multisensory gravitational conflict induced within-subject changes of the experienced direction of the 1PP that depended on the direction of visual gravitational cues. Participants experienced more often a downward direction of their 1PP (incongruent with respect to the participant’s physical body posture) when visual object motion was directed away rather than towards the participant’s body. These downward-directed 1PP experiences positively correlated with measures of elevated self-location. Together, these results show that visual gravitational cues contribute to the experienced direction of the 1PP, defining the subjective location and perspective from which humans experience and perceive the world.
Affiliation(s)
- Christian Pfeiffer, Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, EPFL, Switzerland; Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neuroscience, Lausanne University and University Hospital, Switzerland
- Petr Grivaz, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, EPFL, Switzerland
- Bruno Herbelin, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, EPFL, Switzerland
- Andrea Serino, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, EPFL, Switzerland
- Olaf Blanke, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Department of Neurology, University Hospital Geneva, Switzerland

42
Ramkhalawansingh R, Keshavarz B, Haycock B, Shahab S, Campos JL. Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task. Front Psychol 2016; 7:595. PMID: 27199829. PMCID: PMC4848465. DOI: 10.3389/fpsyg.2016.00595.
Abstract
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
Affiliation(s)
- Robert Ramkhalawansingh, Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Behrang Keshavarz, Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada
- Bruce Haycock, Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada
- Saba Shahab, Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Jennifer L. Campos, Research/iDAPT, Toronto Rehabilitation Institute, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada

44
Bedford R, Pellicano E, Mareschal D, Nardini M. Flexible integration of visual cues in adolescents with autism spectrum disorder. Autism Res 2016; 9:272-81. PMID: 26097109. PMCID: PMC4864758. DOI: 10.1002/aur.1509.
Abstract
Although children with autism spectrum disorder (ASD) show atypical sensory processing, evidence for impaired integration of multisensory information has been mixed. In this study, we took a Bayesian model-based approach to assess within-modality integration of congruent and incongruent texture and disparity cues to judge slant in typical and autistic adolescents. Human adults optimally combine multiple sources of sensory information to reduce perceptual variance but in typical development this ability to integrate cues does not develop until late childhood. While adults cannot help but integrate cues, even when they are incongruent, young children's ability to keep cues separate gives them an advantage in discriminating incongruent stimuli. Given that mature cue integration emerges in later childhood, we hypothesized that typical adolescents would show adult-like integration, combining both congruent and incongruent cues. For the ASD group there were three possible predictions (1) "no fusion": no integration of congruent or incongruent cues, like 6-year-old typical children; (2) "mandatory fusion": integration of congruent and incongruent cues, like typical adults; (3) "selective fusion": cues are combined when congruent but not incongruent, consistent with predictions of Enhanced Perceptual Functioning (EPF) theory. As hypothesized, typical adolescents showed significant integration of both congruent and incongruent cues. The ASD group showed results consistent with "selective fusion," integrating congruent but not incongruent cues. This allowed adolescents with ASD to make perceptual judgments which typical adolescents could not. In line with EPF, results suggest that perception in ASD may be more flexible and less governed by mandatory top-down feedback.
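For context, the Bayesian model-based approach referred to above is the standard maximum-likelihood cue-combination model; a standard statement of it (given here for reference, not an equation reproduced from the paper), with t denoting texture and d denoting disparity, is:

```latex
\hat{S}_{td} = w_t\,\hat{S}_t + w_d\,\hat{S}_d,
\qquad
w_i = \frac{1/\sigma_i^{2}}{1/\sigma_t^{2} + 1/\sigma_d^{2}},
\qquad
\sigma_{td}^{2} = \frac{\sigma_t^{2}\,\sigma_d^{2}}{\sigma_t^{2} + \sigma_d^{2}}
\;\le\; \min\!\left(\sigma_t^{2},\, \sigma_d^{2}\right)
```

Under this model, "no fusion" predicts that combined-cue variance tracks the better single cue, "mandatory fusion" predicts the reduced variance for both congruent and incongruent cues, and "selective fusion" predicts the reduced variance for congruent cues only.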
Affiliation(s)
- Rachael Bedford, Biostatistics Department, Institute of Psychiatry, King's College London, United Kingdom
- Elizabeth Pellicano, Centre for Research in Autism and Education (CRAE), Institute of Education, University of London, United Kingdom; School of Psychology, University of Western Australia, Perth, Australia
- Denis Mareschal, Centre for Brain and Cognitive Development, Birkbeck, University of London, United Kingdom
- Marko Nardini, Department of Psychology, Durham University, Durham, United Kingdom

45
Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object. J Neurosci 2016; 35:13599-607. PMID: 26446214. DOI: 10.1523/jneurosci.2267-15.2015.
Abstract
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion.
Significance statement: Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion.
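The "optimal cue integration strategy" against which the thresholds above were compared is the standard maximum-likelihood prediction. A minimal sketch of that prediction follows; the function name and the example threshold values are illustrative, not taken from the study:

```python
import math

def optimal_combined_threshold(t_visual, t_vestibular):
    """Predicted discrimination threshold under maximum-likelihood cue
    integration: combined variance equals the product of the unimodal
    variances divided by their sum, so thresholds combine the same way."""
    return math.sqrt((t_visual ** 2 * t_vestibular ** 2) /
                     (t_visual ** 2 + t_vestibular ** 2))

# Hypothetical unimodal heading thresholds, in degrees (illustrative only).
t_vis, t_ves = 2.0, 3.0
t_pred = optimal_combined_threshold(t_vis, t_ves)

# The optimal prediction always lies below the better single cue.
assert t_pred < min(t_vis, t_ves)
```

Comparing measured bimodal thresholds against `t_pred` is the usual test of whether observers integrate the two cues optimally.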
46
Greenlee M, Frank S, Kaliuzhna M, Blanke O, Bremmer F, Churan J, Cuturi LF, MacNeilage P, Smith A. Multisensory Integration in Self Motion Perception. Multisens Res 2016. DOI: 10.1163/22134808-00002527.
Abstract
Self motion perception involves the integration of visual, vestibular, somatosensory and motor signals. This article reviews the findings from single unit electrophysiology, functional and structural magnetic resonance imaging and psychophysics to present an update on how the human and non-human primate brain integrates multisensory information to estimate one’s position and motion in space. The results indicate that there is a network of regions in the non-human primate and human brain that processes self motion cues from the different sense modalities.
Affiliation(s)
- Mark W. Greenlee, Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany
- Sebastian M. Frank, Institute of Experimental Psychology, University of Regensburg, Regensburg, Germany; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mariia Kaliuzhna, Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Olaf Blanke, Center for Neuroprosthetics, Laboratory of Cognitive Neuroscience, EPFL, Switzerland
- Frank Bremmer, Department of Neurophysics, University of Marburg, Marburg, Germany
- Jan Churan, Department of Neurophysics, University of Marburg, Marburg, Germany
- Luigi F. Cuturi, German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Paul R. MacNeilage, German Center for Vertigo, University Hospital of Munich, LMU, Munich, Germany
- Andrew T. Smith, Department of Psychology, Royal Holloway, University of London, UK

47
Gale S, Prsa M, Schurger A, Gay A, Paillard A, Herbelin B, Guyot JP, Lopez C, Blanke O. Oscillatory neural responses evoked by natural vestibular stimuli in humans. J Neurophysiol 2015; 115:1228-42. PMID: 26683063. DOI: 10.1152/jn.00153.2015.
Abstract
While there have been numerous studies of the vestibular system in mammals, less is known about the brain mechanisms of vestibular processing in humans. In particular, of the studies that have been carried out in humans over the last 30 years, none has investigated how vestibular stimulation (VS) affects cortical oscillations. Here we recorded high-density electroencephalography (EEG) in healthy human subjects and a group of bilateral vestibular loss patients (BVPs) undergoing transient and constant-velocity passive whole body yaw rotations, focusing our analyses on the modulation of cortical oscillations in response to natural VS. The present approach overcame significant technical challenges associated with combining natural VS with human electrophysiology and reveals that both transient and constant-velocity VS are associated with a prominent suppression of alpha power (8-13 Hz). Alpha band suppression was localized over bilateral temporo-parietal scalp regions, and these alpha modulations were significantly smaller in BVPs. We propose that suppression of oscillations in the alpha band over temporo-parietal scalp regions reflects cortical vestibular processing, potentially comparable with alpha and mu oscillations in the visual and sensorimotor systems, respectively, opening the door to the investigation of human cortical processing under various experimental conditions during natural VS.
Affiliation(s)
- Steven Gale, Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland
- Mario Prsa, Center for Neuroprosthetics, EPFL, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland
- Aaron Schurger, Center for Neuroprosthetics, EPFL, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland
- Annietta Gay, Department of Otorhinolaryngology, University Hospital Geneva, Geneva, Switzerland
- Aurore Paillard, Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland
- Bruno Herbelin, Center for Neuroprosthetics, EPFL, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland
- Jean-Philippe Guyot, Department of Otorhinolaryngology, University Hospital Geneva, Geneva, Switzerland
- Christophe Lopez, Aix Marseille Université, CNRS, NIA UMR 7260, Marseille, France
- Olaf Blanke, Center for Neuroprosthetics, EPFL, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain-Mind Institute, EPFL, Lausanne, Switzerland; Department of Neurology, University Hospital Geneva, Geneva, Switzerland

48
Salomon R, Kaliuzhna M, Herbelin B, Blanke O. Balancing awareness: Vestibular signals modulate visual consciousness in the absence of awareness. Conscious Cogn 2015. DOI: 10.1016/j.concog.2015.07.009.
49
Pfeiffer C, van Elk M, Bernasconi F, Blanke O. Distinct vestibular effects on early and late somatosensory cortical processing in humans. Neuroimage 2015; 125:208-219. PMID: 26466979. DOI: 10.1016/j.neuroimage.2015.10.004.
Abstract
In non-human primates several brain areas contain neurons that respond to both vestibular and somatosensory stimulation. In humans, vestibular stimulation activates several somatosensory brain regions and improves tactile perception. However, less is known about the spatio-temporal dynamics of such vestibular-somatosensory interactions in the human brain. To address this issue, we recorded high-density electroencephalography during left median nerve electrical stimulation to obtain Somatosensory Evoked Potentials (SEPs). We analyzed SEPs during vestibular activation following sudden decelerations from constant-velocity (90°/s and 60°/s) earth-vertical axis yaw rotations and SEPs during a non-vestibular control period. SEP analysis revealed two distinct temporal effects of vestibular activation: An early effect (28-32ms post-stimulus) characterized by vestibular suppression of SEP response strength that depended on rotation velocity and a later effect (97-112ms post-stimulus) characterized by vestibular modulation of SEP topographical pattern that was rotation velocity-independent. Source estimation localized these vestibular effects, during both time periods, to activation differences in a distributed cortical network including the right postcentral gyrus, right insula, left precuneus, and bilateral secondary somatosensory cortex. These results suggest that vestibular-somatosensory interactions in humans depend on processing in specific time periods in somatosensory and vestibular cortical regions.
Affiliation(s)
- Christian Pfeiffer, Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neuroscience, Lausanne University and University Hospital, Lausanne, Switzerland; Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, EPFL, Switzerland
- Michiel van Elk, Department of Psychology, University of Amsterdam, Netherlands
- Fosco Bernasconi, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, EPFL, Switzerland
- Olaf Blanke, Center for Neuroprosthetics, School of Life Sciences, EPFL, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, EPFL, Switzerland; Department of Neurology, University Hospital Geneva, Switzerland

50
Blanke O, Slater M, Serino A. Behavioral, Neural, and Computational Principles of Bodily Self-Consciousness. Neuron 2015; 88:145-66. PMID: 26447578. DOI: 10.1016/j.neuron.2015.09.029.
Affiliation(s)
- Olaf Blanke, Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL), 9 Chemin des Mines, 1202 Geneva, Switzerland; Department of Neurology, University of Geneva, 24 rue Micheli-du-Crest, 1211 Geneva, Switzerland
- Mel Slater, ICREA-University of Barcelona, Campus de Mundet, 08035 Barcelona, Spain; Department of Computer Science, University College London, Malet Place Engineering Building, Gower Street, London, WC1E 6BT, UK
- Andrea Serino, Laboratory of Cognitive Neuroscience, Center for Neuroprosthetics and Brain Mind Institute, EPFL, 9 Chemin des Mines, 1202 Geneva, Switzerland