51
Pitkow X, Angelaki DE. Inference in the Brain: Statistics Flowing in Redundant Population Codes. Neuron 2017; 94:943-953. [PMID: 28595050; PMCID: PMC5543692; DOI: 10.1016/j.neuron.2017.05.028]
Abstract
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
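To make the message-passing concept concrete, here is a minimal sum-product sketch on a three-variable chain graph. This is a textbook illustration of the algorithmic idea the authors build on, not their proposed neural implementation; the potentials, coupling matrix, and variable names are invented for the example.

```python
import numpy as np

# Toy sum-product message passing on a chain x1 - x2 - x3 of binary variables.
# psi: unary potentials (local evidence); coupling: pairwise potential.
psi = [np.array([0.9, 0.1]),   # evidence at x1
       np.array([0.5, 0.5]),   # no evidence at x2
       np.array([0.2, 0.8])]   # evidence at x3
coupling = np.array([[0.8, 0.2],
                     [0.2, 0.8]])  # neighboring variables tend to agree

# Message from x1 into x2: sum over x1 of local evidence times coupling.
m_12 = coupling.T @ psi[0]
# Message from x3 into x2 (chain is symmetric, same coupling).
m_32 = coupling.T @ psi[2]

# Posterior marginal of x2 combines local evidence with both incoming messages.
b_2 = psi[1] * m_12 * m_32
b_2 /= b_2.sum()
print(b_2)  # approximate (here exact) posterior over x2
```

On a tree-structured graph this computation is exact; the review's proposal is that populations of neurons carry the statistics that play the role of these messages.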
Affiliation(s)
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA.
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
52
Smith AT, Greenlee MW, DeAngelis GC, Angelaki DE. Distributed Visual–Vestibular Processing in the Cerebral Cortex of Man and Macaque. Multisens Res 2017. [DOI: 10.1163/22134808-00002568]
Abstract
Recent advances in understanding the neurobiological underpinnings of visual–vestibular interactions underlying self-motion perception are reviewed with an emphasis on comparisons between the macaque and human brains. In both species, several distinct cortical regions have been identified that are active during both visual and vestibular stimulation and in some of these there is clear evidence for sensory integration. Several possible cross-species homologies between cortical regions are identified. A key feature of cortical organization is that the same information is apparently represented in multiple, anatomically diverse cortical regions, suggesting that information about self-motion is used for different purposes in different brain regions.
Affiliation(s)
- Andrew T. Smith
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
- Mark W. Greenlee
- Institute of Experimental Psychology, University of Regensburg, 93053 Regensburg, Germany
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas 77030, USA
53
Gu Y, Cheng Z, Yang L, DeAngelis GC, Angelaki DE. Multisensory Convergence of Visual and Vestibular Heading Cues in the Pursuit Area of the Frontal Eye Field. Cereb Cortex 2016; 26:3785-801. [PMID: 26286917; PMCID: PMC5004753; DOI: 10.1093/cercor/bhv183]
Abstract
Both visual and vestibular sensory cues are important for perceiving one's direction of heading during self-motion. Previous studies have identified multisensory, heading-selective neurons in the dorsal medial superior temporal area (MSTd) and the ventral intraparietal area (VIP). Both MSTd and VIP have strong recurrent connections with the pursuit area of the frontal eye field (FEFsem), but whether FEFsem neurons contribute to multisensory heading perception remains unknown. We characterized the tuning of macaque FEFsem neurons to visual, vestibular, and multisensory heading stimuli. About two-thirds of FEFsem neurons exhibited significant heading selectivity based on either vestibular or visual stimulation. These multisensory neurons shared many properties, including distributions of tuning strength and heading preferences, with MSTd and VIP neurons. Fisher information analysis also revealed that the average FEFsem neuron was almost as sensitive as MSTd or VIP cells. Visual and vestibular heading preferences in FEFsem tended to be either matched (congruent cells) or discrepant (opposite cells), such that combined stimulation strengthened heading selectivity for congruent cells but weakened heading selectivity for opposite cells. These findings demonstrate that, in addition to oculomotor functions, FEFsem neurons also exhibit properties that may allow them to contribute to a cortical network that processes multisensory heading cues.
Affiliation(s)
- Yong Gu
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Zhixian Cheng
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Lihua Yang
- Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Institute of Neuroscience, Shanghai, China
- Gregory C. DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Dora E. Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
54
Disentangling the External Reference Frames Relevant to Tactile Localization. PLoS One 2016; 11:e0158829. [PMID: 27391805; PMCID: PMC4938545; DOI: 10.1371/journal.pone.0158829]
Abstract
Different reference frames appear to be relevant for tactile spatial coding. When participants give temporal order judgments (TOJ) of two tactile stimuli, one on each hand, performance declines when the hands are crossed. This effect is attributed to a conflict between anatomical and external location codes: hand crossing places the anatomically right hand into the left side of external space. However, hand crossing alone does not specify the anchor of the external reference frame, such as gaze, trunk, or the stimulated limb. Experiments that used explicit localization responses, such as pointing to tactile stimuli rather than crossing manipulations, have consistently implicated gaze-centered coding for touch. To test whether crossing effects can be explained by gaze-centered coding alone, participants made TOJs while the position of the hands was manipulated relative to gaze and trunk. The two hands either lay on different sides of space relative to gaze or trunk, or they both lay on one side of the respective space. In the latter posture, one hand was on its "regular side of space" despite hand crossing, thus reducing overall conflict between anatomical and external codes. TOJ crossing effects were significantly reduced when the hands were both located on the same side of space relative to gaze, indicating gaze-centered coding. Evidence for trunk-centered coding was tentative, with an effect in reaction time but not in accuracy. These results link paradigms that use explicit localization and TOJ, and corroborate the relevance of gaze-related coding for touch. Yet, gaze- and trunk-centered coding did not account for the total size of crossing effects, suggesting that tactile localization relies on additional, possibly limb-centered, reference frames. Thus, tactile location appears to be estimated by integrating multiple anatomical and external reference frames.
55
Badde S, Heed T. Towards explaining spatial touch perception: Weighted integration of multiple location codes. Cogn Neuropsychol 2016; 33:26-47. [PMID: 27327353; PMCID: PMC4975087; DOI: 10.1080/02643294.2016.1168791]
Abstract
Touch is bound to the skin – that is, to the boundaries of the body. Yet, the activity of neurons in primary somatosensory cortex just mirrors the spatial distribution of the sensors across the skin. To determine the location of a tactile stimulus on the body, the body's spatial layout must be considered. Moreover, to relate touch to the external world, body posture has to be evaluated. In this review, we argue that posture is incorporated, by default, for any tactile stimulus. However, the relevance of the external location and, thus, its expression in behaviour, depends on various sensory and cognitive factors. Together, these factors imply that an external representation of touch dominates over the skin-based, anatomical one when our focus is on the world rather than on our own body. We conclude that touch localization is a reconstructive process that is adjusted to the context while maintaining all available spatial information.
Affiliation(s)
- Stephanie Badde
- Department of Psychology, New York University, New York, NY, USA
- Tobias Heed
- Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
56
Pfeiffer C, Grivaz P, Herbelin B, Serino A, Blanke O. Visual gravity contributes to subjective first-person perspective. Neurosci Conscious 2016; 2016:niw006. [PMID: 30109127; PMCID: PMC6084587; DOI: 10.1093/nc/niw006]
Abstract
A fundamental component of conscious experience involves a first-person perspective (1PP), characterized by the experience of being a subject and of being directed at the world. Extending earlier work on multisensory perceptual mechanisms of 1PP, we here asked whether the experienced direction of the 1PP (i.e. the spatial direction of subjective experience of the world) depends on visual-tactile-vestibular conflicts, including the direction of gravity. Sixteen healthy subjects in supine position received visuo-tactile synchronous or asynchronous stroking to induce a full-body illusion. In the critical manipulation, we presented gravitational visual object motion directed toward or away from the participant’s body and thus congruent or incongruent with respect to the direction of vestibular and somatosensory gravitational cues. The results showed that multisensory gravitational conflict induced within-subject changes of the experienced direction of the 1PP that depended on the direction of visual gravitational cues. Participants experienced more often a downward direction of their 1PP (incongruent with respect to the participant’s physical body posture) when visual object motion was directed away rather than towards the participant’s body. These downward-directed 1PP experiences positively correlated with measures of elevated self-location. Together, these results show that visual gravitational cues contribute to the experienced direction of the 1PP, defining the subjective location and perspective from where humans experience to perceive the world.
Affiliation(s)
- Christian Pfeiffer
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratoire de Recherche en Neuroimagerie (LREN), Department of Clinical Neuroscience, Lausanne University and University Hospital, Switzerland
- Petr Grivaz
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Bruno Herbelin
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Andrea Serino
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Laboratory of Cognitive Neuroscience, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland
- Olaf Blanke
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland; Department of Neurology, University Hospital Geneva, Switzerland
57
Üstün C. A Sensorimotor Model for Computing Intended Reach Trajectories. PLoS Comput Biol 2016; 12:e1004734. [PMID: 26985662; PMCID: PMC4795795; DOI: 10.1371/journal.pcbi.1004734]
Abstract
The presumed role of the primate sensorimotor system is to transform reach targets from retinotopic to joint coordinates for producing motor output. However, the interpretation of neurophysiological data within this framework is ambiguous, and has led to the view that the underlying neural computation may lack a well-defined structure. Here, I consider a model of sensorimotor computation in which temporal as well as spatial transformations generate representations of desired limb trajectories, in visual coordinates. This computation is suggested by behavioral experiments, and its modular implementation makes predictions that are consistent with those observed in monkey posterior parietal cortex (PPC). In particular, the model provides a simple explanation for why PPC encodes reach targets in reference frames intermediate between the eye and hand, and further explains why these reference frames shift during movement. Representations in PPC are thus consistent with the orderly processing of information, provided we adopt the view that sensorimotor computation manipulates desired movement trajectories, and not desired movement endpoints.

Does the brain explicitly plan entire movement trajectories or are these emergent properties of motor control? Although behavioral studies support the notion of trajectory planning for visually guided reaches, a neurobiologically plausible mechanism for this observation has been lacking. I discuss a model that generates representations of desired reach trajectories (i.e., paths and speed profiles) for point-to-point reaches. I show that the predictions of this model closely resemble the population responses of neurons in posterior parietal cortex, a visuomotor planning area of the monkey brain. Several aspects of population responses that are puzzling from the point of view of traditional sensorimotor models are coherently explained by this mechanism.
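Since the model's core claim is that planning manipulates desired trajectories (paths and speed profiles) rather than endpoints, a worked example helps fix ideas. The sketch below generates a minimum-jerk trajectory for a point-to-point reach in visual coordinates; minimum jerk is a standard smoothness assumption for reaches and is our illustrative choice, not necessarily the profile used in the paper's model.

```python
import numpy as np

def minimum_jerk(start, goal, duration, n_steps=100):
    """Desired position and speed profile for a point-to-point reach.

    Minimum-jerk polynomial: x(t) = x0 + (xf - x0)(10 s^3 - 15 s^4 + 6 s^5),
    with s = t / duration. Returns positions and speeds at n_steps time points.
    """
    t = np.linspace(0.0, duration, n_steps)
    s = t / duration
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5
    dshape = (30 * s**2 - 60 * s**3 + 30 * s**4) / duration
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    pos = start + np.outer(shape, goal - start)
    vel = np.outer(dshape, goal - start)
    return pos, np.linalg.norm(vel, axis=1)

# Desired trajectory, in visual (e.g., eye-centered) coordinates, in meters.
path, speed = minimum_jerk(start=[0.0, 0.0], goal=[0.2, 0.1], duration=0.8)
print(path[0], path[-1], speed.max())  # bell-shaped speed profile
```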
Affiliation(s)
- Cevat Üstün
- Division of Biology, California Institute of Technology, Pasadena, California, United States of America
58
Brandt T, Dieterich M. Vestibular contribution to three-dimensional dynamic (allocentric) and two-dimensional static (egocentric) spatial memory. J Neurol 2016; 263:1015-1016. [PMID: 26946497; DOI: 10.1007/s00415-016-8067-6]
Affiliation(s)
- Thomas Brandt
- Institute for Clinical Neuroscience, Ludwig-Maximilians University, Marchioninistr. 15, 81377 Munich, Germany; German Center for Vertigo and Balance Disorders, Ludwig-Maximilians University, Munich, Germany
- Marianne Dieterich
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians University, Munich, Germany; Department of Neurology, Ludwig-Maximilians University, Munich, Germany; Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
59
Brandes J, Heed T. Reach Trajectories Characterize Tactile Localization for Sensorimotor Decision Making. J Neurosci 2015; 35:13648-58. [PMID: 26446218; PMCID: PMC6605379; DOI: 10.1523/jneurosci.1873-14.2015]
Abstract
Spatial target information for movement planning appears to be coded in a gaze-centered reference frame. In touch, however, location is initially coded with reference to the skin. Therefore, the tactile spatial location must be derived by integrating skin location and posture. It has been suggested that this recoding is impaired when the limb is placed in the opposite hemispace, for example, by limb crossing. Here, human participants reached toward visual and tactile targets located at uncrossed and crossed feet in a sensorimotor decision task. We characterized stimulus recoding by analyzing the timing and spatial profile of hand reaches. For tactile targets at crossed feet, skin-based information implicates the incorrect side, and only recoded information points to the correct location. Participants initiated straight reaches and redirected the hand toward a target presented in midflight. Trajectories to visual targets were unaffected by foot crossing. In contrast, trajectories to tactile targets were redirected later with crossed than uncrossed feet. Reaches to crossed feet usually continued straight until they were directed toward the correct tactile target and were not biased toward the skin-based target location. Occasional, far deflections toward the incorrect target were most likely when this target was implicated by trial history. These results are inconsistent with the suggestion that spatial transformations in touch are impaired by limb crossing, but are consistent with tactile location being recoded rapidly and efficiently, followed by integration of skin-based and external information to specify the reach target. This process may be implemented in a bounded integrator framework. SIGNIFICANCE STATEMENT How do you touch yourself, for instance, to scratch an itch? The place you need to reach is defined by a sensation on the skin, but our bodies are flexible, so this skin location could be anywhere in 3D space. The movement toward the tactile sensation must therefore be specified by merging skin location and body posture. By investigating human hand reach trajectories toward tactile stimuli on the feet, we provide experimental evidence that this transformation process is quick and efficient, and that its output is integrated with the original skin location in a fashion consistent with bounded integrator decision-making frameworks.
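The "bounded integrator framework" invoked here is an evidence-accumulation scheme: noisy momentary evidence for each candidate target is summed until one accumulator reaches a bound. The race-model sketch below is our own illustrative reconstruction, with skin-based evidence driving the wrong-side accumulator before tactile recoding completes and recoded external evidence driving the correct one afterward; every parameter value is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def bounded_race(bound=1.0, dt=0.001, recode_delay=0.06, drift=6.0, noise=1.0):
    """Race between two accumulators (wrong/correct target) to a common bound.

    Before recode_delay (s), evidence follows the skin-based code (wrong side
    when the feet are crossed); afterwards, the recoded external location
    drives the correct accumulator. Returns (choice, decision_time).
    """
    acc = np.zeros(2)  # [wrong target, correct target]
    t = 0.0
    while acc.max() < bound:
        mean = np.array([drift, 0.0]) if t < recode_delay else np.array([0.0, drift])
        acc += mean * dt + noise * np.sqrt(dt) * rng.standard_normal(2)
        acc = np.maximum(acc, 0.0)  # accumulators stay non-negative
        t += dt
    return int(acc.argmax()), t

choices = [bounded_race()[0] for _ in range(1000)]
print("fraction correct:", np.mean(np.array(choices) == 1))
```

With these toy parameters the correct target usually wins, but the early skin-based head start occasionally drives a wrong-side choice, qualitatively matching the occasional far deflections toward the incorrect target reported above.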
Affiliation(s)
- Janina Brandes
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, 20146 Hamburg, Germany
- Tobias Heed
- Biological Psychology and Neuropsychology, Faculty of Psychology and Human Movement Science, University of Hamburg, 20146 Hamburg, Germany
60
Crane BT. Coordinates of Human Visual and Inertial Heading Perception. PLoS One 2015; 10:e0135539. [PMID: 26267865; PMCID: PMC4534459; DOI: 10.1371/journal.pone.0135539]
Abstract
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.
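The two-degree-of-freedom PVD model is not specified in detail in the abstract, so the sketch below shows only the generic population-vector read-out such a model extends: heading is decoded as the direction of the vector sum of each unit's preferred heading weighted by its response. The tuning curves, gains, and noise here are our illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of units with preferred headings spread around the circle.
pref = np.deg2rad(np.arange(0, 360, 5))

def responses(heading_deg, gain=10.0):
    """Cosine-tuned, Poisson-noisy responses to a heading stimulus."""
    h = np.deg2rad(heading_deg)
    rate = gain * (1.0 + np.cos(h - pref))
    return rng.poisson(rate)

def population_vector(r):
    """Decode heading as the angle of the response-weighted vector sum."""
    x = np.sum(r * np.cos(pref))
    y = np.sum(r * np.sin(pref))
    return np.rad2deg(np.arctan2(y, x)) % 360

r = responses(heading_deg=75.0)
print(population_vector(r))  # close to 75 for this population
```

In the paper's fits, additional parameters capture the relative sensitivity to lateral motion and the coordinate-system offset between visual and inertial cues; those refinements sit on top of this basic read-out.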
Affiliation(s)
- Benjamin Thomas Crane
- Department of Otolaryngology, University of Rochester, Rochester, NY, United States of America
- Department of Bioengineering, University of Rochester, Rochester, NY, United States of America
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
61
Oscillatory activity reflects differential use of spatial reference frames by sighted and blind individuals in tactile attention. Neuroimage 2015; 117:417-28. [DOI: 10.1016/j.neuroimage.2015.05.068]
62
Abstract
Much of our understanding of the neuronal mechanisms of spatial navigation is derived from chronic recordings in rodents in which head-direction, place, and grid cells have all been described. However, despite the proposed importance of self-reference information to these internal representations of space, their congruence with vestibular signaling remains unclear. Here we have undertaken brain-wide functional mapping using both fMRI and electrophysiological methods to directly determine the spatial extent, strength, and time course of vestibular signaling across the rat forebrain. We find distributed activity throughout thalamic, limbic, and particularly primary sensory cortical areas in addition to known head-direction pathways. We also observe activation of frontal regions, including infralimbic and cingulate cortices, indicating integration of vestibular information throughout functionally diverse cortical regions. These whole-brain activity maps therefore suggest a widespread contribution of vestibular signaling to a self-centered framework for multimodal sensorimotor integration in support of movement planning, execution, spatial navigation, and autonomic responses to gravito-inertial changes.
63
Heed T, Buchholz VN, Engel AK, Röder B. Tactile remapping: from coordinate transformation to integration in sensorimotor processing. Trends Cogn Sci 2015; 19:251-8. [DOI: 10.1016/j.tics.2015.03.001]
64
Cléry J, Guipponi O, Wardak C, Ben Hamed S. Neuronal bases of peripersonal and extrapersonal spaces, their plasticity and their dynamics: Knowns and unknowns. Neuropsychologia 2015; 70:313-26. [PMID: 25447371; DOI: 10.1016/j.neuropsychologia.2014.10.022]
Affiliation(s)
- Justine Cléry
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Olivier Guipponi
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Claire Wardak
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
- Suliann Ben Hamed
- Centre de Neuroscience Cognitive, UMR5229, CNRS-Université Claude Bernard Lyon I, 67 Boulevard Pinel, 69675 Bron, France
65
Abstract
Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an "earth-vertical" direction.
66
Chen X, DeAngelis GC, Angelaki DE. Eye-centered visual receptive fields in the ventral intraparietal area. J Neurophysiol 2014; 112:353-61. [PMID: 24790176; DOI: 10.1152/jn.00057.2014]
Abstract
The ventral intraparietal area (VIP) processes multisensory visual, vestibular, tactile, and auditory signals in diverse reference frames. We recently reported that visual heading signals in VIP are represented in an approximately eye-centered reference frame when measured using large-field optic flow stimuli. No VIP neuron was found to have head-centered visual heading tuning, and only a small proportion of cells had reference frames that were intermediate between eye- and head-centered. In contrast, previous studies using moving bar stimuli have reported that visual receptive fields (RFs) in VIP are head-centered for a substantial proportion of neurons. To examine whether these differences in previous findings might be due to the neuronal property examined (heading tuning vs. RF measurements) or the type of visual stimulus used (full-field optic flow vs. a single moving bar), we have quantitatively mapped visual RFs of VIP neurons using a large-field, multipatch, random-dot motion stimulus. By varying eye position relative to the head, we tested whether visual RFs in VIP are represented in head- or eye-centered reference frames. We found that the vast majority of VIP neurons have eye-centered RFs with only a single neuron classified as head-centered and a small minority classified as intermediate between eye- and head-centered. Our findings suggest that the spatial reference frames of visual responses in VIP may depend on the visual stimulation conditions used to measure RFs and might also be influenced by how attention is allocated during stimulus presentation.
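A standard way to operationalize the eye- versus head-centered distinction tested here is a displacement index: map the RF center at two eye positions and divide the RF shift (in screen coordinates) by the gaze shift. A value near 1 means the RF moved with the eyes (eye-centered); near 0 means it stayed fixed relative to the head. The helper below is a hypothetical reconstruction of that logic, not necessarily the paper's exact metric, and the example numbers are invented.

```python
import numpy as np

def displacement_index(rf_center_pos1, rf_center_pos2, eye_shift):
    """Classify a receptive field's reference frame from two eye positions.

    rf_center_pos1/2: RF center (deg, screen coordinates) measured with the
    eyes at two fixation points separated by eye_shift (deg).
    DI near 1 -> RF moved with the eyes (eye-centered);
    DI near 0 -> RF stayed fixed on the screen (head-centered).
    """
    return (rf_center_pos2 - rf_center_pos1) / eye_shift

# Hypothetical neuron: fixation moved 20 deg right, RF center moved 19 deg.
di = displacement_index(rf_center_pos1=5.0, rf_center_pos2=24.0, eye_shift=20.0)
label = "eye-centered" if di > 0.5 else "head-centered"
print(di, label)  # 0.95, eye-centered
```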
Affiliation(s)
- Xiaodong Chen
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Anatomy & Neurobiology, Washington University, St. Louis, Missouri
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas; Department of Anatomy & Neurobiology, Washington University, St. Louis, Missouri
67
Pfeiffer C, Serino A, Blanke O. The vestibular system: a spatial reference for bodily self-consciousness. Front Integr Neurosci 2014; 8:31. [PMID: 24860446; PMCID: PMC4028995; DOI: 10.3389/fnint.2014.00031]
Abstract
Self-consciousness is the remarkable human experience of being a subject: the "I". Self-consciousness is typically bound to a body, and particularly to the spatial dimensions of the body, as well as to its location and displacement in the gravitational field. Because the vestibular system encodes head position and movement in three-dimensional space, vestibular cortical processing likely contributes to spatial aspects of bodily self-consciousness. We review here recent data showing vestibular effects on first-person perspective (the feeling from where "I" experience the world) and self-location (the feeling where "I" am located in space). We compare these findings to data on mental spatial transformation, self-motion perception, and body representation, which show vestibular contributions to various spatial representations of the body with respect to the external world. Finally, we discuss the role for four posterior brain regions that process vestibular and other multisensory signals to encode spatial aspects of bodily self-consciousness: temporoparietal junction, parietoinsular vestibular cortex, ventral intraparietal region, and medial superior temporal region. We propose that vestibular processing in these cortical regions is critical in linking multisensory signals from the body (personal and peripersonal space) with external (extrapersonal) space. Therefore, the vestibular system plays a critical role in neural representations of spatial aspects of bodily self-consciousness.
Affiliation(s)
- Christian Pfeiffer
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Andrea Serino
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Psychology, Alma Mater Studiorum, University of Bologna, Bologna, Italy
- Olaf Blanke
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Neurology, University Hospital Geneva, Geneva, Switzerland
68
Shinder ME, Newlands SD. Sensory convergence in the parieto-insular vestibular cortex. J Neurophysiol 2014; 111:2445-64. [PMID: 24671533; DOI: 10.1152/jn.00731.2013]
Abstract
Vestibular signals are pervasive throughout the central nervous system, including the cortex, where they likely play different roles than they do in the better studied brainstem. Little is known about the parieto-insular vestibular cortex (PIVC), an area of the cortex with prominent vestibular inputs. Neural activity was recorded in the PIVC of rhesus macaques during combinations of head, body, and visual target rotations. Activity of many PIVC neurons was correlated with the motion of the head in space (vestibular), the twist of the neck (proprioceptive), and the motion of a visual target, but was not associated with eye movement. PIVC neurons responded most commonly to more than one stimulus, and responses to combined movements could often be approximated by a combination of the individual sensitivities to head, neck, and target motion. The pattern of visual, vestibular, and somatic sensitivities on PIVC neurons displayed a continuous range, with some cells strongly responding to one or two of the stimulus modalities while other cells responded to any type of motion equivalently. The PIVC contains multisensory convergence of self-motion cues with external visual object motion information, such that neurons do not represent a specific transformation of any one sensory input. Instead, the PIVC neuron population may define the movement of head, body, and external visual objects in space and relative to one another. This comparison of self and external movement is consistent with insular cortex functions related to monitoring and explains many disparate findings of previous studies.
Affiliation(s)
- Michael E Shinder
- Department of Otolaryngology, University of Texas Medical Branch, Galveston, Texas
- Shawn D Newlands
- Department of Otolaryngology, University of Texas Medical Branch, Galveston, Texas
69
Abstract
Reference frames are important for understanding sensory processing in the cortex. Previous work showed that vestibular heading signals in the ventral intraparietal area (VIP) are represented in body-centered coordinates. In contrast, vestibular heading tuning in the medial superior temporal area (MSTd) is approximately head centered. We considered the hypothesis that visual heading signals (from optic flow) in VIP might also be transformed into a body-centered representation, unlike visual heading tuning in MSTd, which is approximately eye centered. We distinguished among eye-centered, head-centered, and body-centered spatial reference frames by systematically varying both eye and head positions while rhesus monkeys viewed optic flow stimuli depicting various headings. We found that heading tuning of VIP neurons based on optic flow generally shifted with eye position, indicating an eye-centered spatial reference frame. This is similar to the representation of visual heading signals in MSTd, but contrasts sharply with the body-centered representation of vestibular heading signals in VIP. These findings demonstrate a clear dissociation between the spatial reference frames of visual and vestibular signals in VIP, and emphasize that frames of reference for neurons in parietal cortex can depend on the type of sensory stimulation.
70
Seilheimer RL, Rosenberg A, Angelaki DE. Models and processes of multisensory cue combination. Curr Opin Neurobiol 2013; 25:38-46. [PMID: 24709599; DOI: 10.1016/j.conb.2013.11.008]
Abstract
Fundamental to our perception of a unified and stable environment is the capacity to combine information across the senses. Although this process appears seamless as an adult, the brain's ability to successfully perform multisensory cue combination takes years to develop and relies on a number of complex processes including cue integration, cue calibration, causal inference, and reference frame transformations. Further complexities exist because multisensory cue combination is implemented across time by populations of noisy neurons. In this review, we discuss recent behavioral studies exploring how the brain combines information from different sensory systems, neurophysiological studies relating behavior to neuronal activity, and a theory of neural sensory encoding that can account for many of these experimental findings.
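The cue-integration process reviewed here is classically formalized as reliability-weighted averaging: under independent Gaussian noise, the statistically optimal combined estimate weights each cue by its inverse variance, and the combined variance is lower than either cue alone. A minimal worked example with hypothetical visual and vestibular heading estimates:

```python
def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
    """Optimal (maximum-likelihood) combination of two Gaussian cues.

    Each cue is weighted by its reliability (inverse variance); the
    combined variance is smaller than either single-cue variance.
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)
    return mu, var

# Hypothetical heading estimates (deg): reliable vision, noisier vestibular cue.
mu, var = integrate_cues(mu_vis=10.0, var_vis=4.0, mu_vest=20.0, var_vest=16.0)
print(mu, var)  # 12.0, 3.2 -- pulled toward the more reliable cue
```

The additional processes the review covers (cue calibration, causal inference, reference frame transformations) extend this basic rule, for instance by discounting a cue when the cues are unlikely to share a common cause.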
Affiliation(s)
- Ari Rosenberg
- Baylor College of Medicine, Houston, TX, United States
71
Szczepanski SM, Saalmann YB. Human fronto-parietal and parieto-hippocampal pathways represent behavioral priorities in multiple spatial reference frames. Bioarchitecture 2013; 3:147-52. [PMID: 24322829; PMCID: PMC3907462; DOI: 10.4161/bioa.27462]
Abstract
We represent behaviorally relevant information in different spatial reference frames in order to interact effectively with our environment. For example, we need an egocentric (e.g., body-centered) reference frame to specify limb movements and an allocentric (e.g., world-centered) reference frame to navigate from one location to another. Posterior parietal cortex (PPC) is vital for performing transformations between these different coordinate systems. Here, we review evidence for multiple pathways in the human brain, from PPC to motor, premotor, and supplementary motor areas, as well as to structures in the medial temporal lobe. These connections are important for transformations between egocentric reference frames to facilitate sensory-guided action, or from egocentric to allocentric reference frames to facilitate spatial navigation.
Affiliation(s)
- Sara M Szczepanski
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA
- Yuri B Saalmann
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA