1. Kong L, Zeng F, Zhang Y, Li L, Chen A. The influence of form on motion signal processing in the ventral intraparietal area of macaque monkeys. Heliyon 2024; 10:e36913. PMID: 39286089; PMCID: PMC11402950; DOI: 10.1016/j.heliyon.2024.e36913.
Abstract
The visual system relies on both motion and form signals to perceive the direction of self-motion, yet the coordination mechanisms between these two elements in this process remain elusive. In the current study, we employed heading perception as a model to delve into the interaction characteristics between form and motion signals. We recorded the responses of neurons in the ventral intraparietal area (VIP), an area with strong heading selectivity, to motion-only, form-only, and combined stimuli of simulated self-motion. Intriguingly, VIP neurons responded to form-only cues defined by Glass patterns, although they exhibited no tuning selectivity. In the combined condition, introducing a small offset between form and motion cues significantly enhanced neuronal sensitivity to motion cues. However, with a larger offset, the enhancement in sensitivity became comparatively smaller. Moreover, we observed that the influence of form cues on neuronal responses to motion cues is more pronounced in the later stage (1-2 s) of stimulation, with a relatively smaller effect in the early stage (0-1 s). This suggests a dynamic interaction between motion and form cues over time for heading perception. In summary, our study uncovered that in area VIP, form information plays a role in constructing accurate self-motion perception. This adds valuable insight into the complex dynamics of how the brain integrates motion and form cues for the perception of one's own movements.
Affiliation(s)
- Lingqi Kong
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Yingying Zhang
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- Li Li
- Faculty of Arts and Science, New York University Shanghai, Shanghai, 200122, China
- New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, 200062, China
- New York University-East China Normal University Joint Research Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, 200062, China
2. Sun Q, Wang SY, Zhan LZ, You FH, Sun Q. A Bayesian inference model can predict the effects of attention on the serial dependence in heading estimation from optic flow. J Vis 2024; 24:11. PMID: 39269364; PMCID: PMC11407482; DOI: 10.1167/jov.24.9.11.
Abstract
It has been demonstrated that observers can accurately estimate their self-motion direction (i.e., heading) from optic flow, and that this estimation can be affected by attention. However, it remains unclear how attention affects the serial dependence in the estimation. In the current study, participants completed two experiments. The results showed that estimation accuracy decreased when the attentional resources allocated to the heading estimation task were reduced. Additionally, estimates of the currently presented headings were biased toward previously seen headings, showing serial dependence. Notably, this effect decreased (increased) when the attentional resources allocated to the previously (currently) seen headings were reduced. Importantly, we developed a Bayesian inference model that incorporated attention-modulated likelihoods and qualitatively predicted the changes in estimation accuracy and serial dependence. In summary, the current study shows that attention affects the serial dependence in heading estimation from optic flow and reveals the Bayesian computational mechanism behind the heading estimation.
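As a rough illustration of the kind of model described in this abstract (not the authors' implementation; the Gaussian forms, parameter names, and values below are assumptions), serial dependence falls out of a Bayesian estimator in which the previously seen heading acts as a prior and attention sets the likelihood width:

    import numpy as np

    def bayes_heading_estimate(observed, prev_heading, sigma_like, sigma_prior):
        # Posterior mean of a Gaussian likelihood (current evidence) combined with a
        # Gaussian prior centered on the previously seen heading.
        w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
        return w * observed + (1 - w) * prev_heading

    # Withdrawing attention from the current stimulus widens the likelihood (larger
    # sigma_like), pulling the estimate toward the previous heading, i.e., stronger
    # serial dependence; widening the prior has the opposite effect.
    print(bayes_heading_estimate(10.0, 0.0, sigma_like=2.0, sigma_prior=5.0))  # attended
    print(bayes_heading_estimate(10.0, 0.0, sigma_like=6.0, sigma_prior=5.0))  # unattended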
Affiliation(s)
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, P. R. China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, P. R. China
- Si-Yu Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Lin-Zhe Zhan
- Department of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Fan-Huan You
- Department of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Qian Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, P. R. China
- Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, P. R. China
3. Layton OW, Steinmetz ST. Accuracy optimized neural networks do not effectively model optic flow tuning in brain area MSTd. Front Neurosci 2024; 18:1441285. PMID: 39286477; PMCID: PMC11403719; DOI: 10.3389/fnins.2024.1441285.
Abstract
Accuracy-optimized convolutional neural networks (CNNs) have emerged as highly effective models for predicting neural responses in brain areas along the primate ventral stream, but it is largely unknown whether they effectively model neurons in the complementary primate dorsal stream. We explored how well CNNs model the optic flow tuning properties of neurons in dorsal area MSTd and compared our results with the non-negative matrix factorization (NNMF) model, which successfully models many tuning properties of MSTd neurons. To better understand which computational properties of the NNMF model give rise to optic flow tuning that resembles that of MSTd neurons, we created additional CNN model variants that implement key NNMF constraints: non-negative weights and sparse coding of optic flow. While the CNNs and NNMF models both accurately estimate the observer's self-motion from purely translational or rotational optic flow, NNMF and the CNNs with non-negative weights yield substantially less accurate estimates than the other CNNs when tested on more complex optic flow that combines observer translation and rotation. Despite its poor accuracy, NNMF gives rise to tuning properties that align more closely with those observed in primate MSTd than any of the accuracy-optimized CNNs. This work offers a step toward a deeper understanding of the computational properties and constraints that describe the optic flow tuning of primate area MSTd.
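For readers unfamiliar with the NNMF approach referenced here, the sketch below shows the general idea of decomposing non-negative optic flow vectors into a small set of basis flow fields; the data, grid size, and component count are placeholders, not the authors' pipeline:

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_fields, n_dims = 500, 2 * 15 * 15          # 15x15 sampling grid, x/y flow components
    flow = rng.random((n_fields, n_dims))        # stand-in for rectified (non-negative) optic flow

    model = NMF(n_components=32, init="nndsvda", max_iter=500)
    weights = model.fit_transform(flow)          # per-stimulus activations (MSTd-like responses)
    basis_flow_fields = model.components_        # non-negative "basis flow fields"
    reconstruction = weights @ basis_flow_fields # linear superposition approximating the input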
Affiliation(s)
- Oliver W Layton
- Department of Computer Science, Colby College, Waterville, ME, United States
- Scott T Steinmetz
- Center for Computing Research, Sandia National Labs, Albuquerque, NM, United States
4. Sun Q, Wang JY, Gong XM. Conflicts between short- and long-term experiences affect visual perception through modulating sensory or motor response systems: Evidence from Bayesian inference models. Cognition 2024; 246:105768. PMID: 38479091; DOI: 10.1016/j.cognition.2024.105768.
Abstract
The independent effects of short- and long-term experiences on visual perception have been discussed for decades. However, no study has investigated whether and how these experiences simultaneously affect our visual perception. To address this question, we asked participants to estimate their self-motion directions (i.e., headings) simulated from optic flow, in which a long-term experience learned in everyday life (i.e., straight-forward motion being more common than lateral motion) plays an important role. The headings were selected from three distributions that resembled a peak, a hill, and a flat line, creating different short-term experiences. Importantly, the proportion of headings deviating from straight-forward motion gradually increased across the peak, hill, and flat distributions, leading to a greater conflict between long- and short-term experiences. The results showed that participants biased their heading estimates towards the straight-ahead direction and towards previously seen headings, and both biases increased with the growing experience conflict. This suggests that long- and short-term experiences simultaneously affect visual perception. Finally, we developed two Bayesian models based on the alternative assumptions that the experience conflict alters the likelihood distribution of the sensory representation (Model 1) or the motor response system (Model 2). Both models accurately predicted participants' estimation biases. However, Model 1 predicted a higher variance of serial dependence than Model 2, while Model 2 predicted a higher variance of the bias towards the straight-ahead direction than Model 1. This suggests that the experience conflict can influence visual perception by affecting both sensory and motor response systems. Taken together, the current study systematically revealed the effects of long- and short-term experiences on visual perception and the underlying Bayesian processing mechanisms.
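A hedged caricature of the two model classes contrasted above may help make the distinction concrete (this is not the paper's code; the functional forms, parameter names, and gains are assumptions): Model 1 lets the experience conflict widen the sensory likelihood, whereas Model 2 leaves the likelihood alone and shifts the response stage instead.

    def model1_estimate(obs, prior_mu, sigma_like, sigma_prior, conflict, k=1.0):
        # Conflict degrades the sensory representation by widening the likelihood.
        s = sigma_like * (1.0 + k * conflict)
        w = sigma_prior**2 / (sigma_prior**2 + s**2)
        return w * obs + (1 - w) * prior_mu

    def model2_estimate(obs, prior_mu, sigma_like, sigma_prior, conflict, g=0.2):
        # Conflict leaves perception unchanged but biases the motor response stage.
        w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
        percept = w * obs + (1 - w) * prior_mu
        return percept + g * conflict * (prior_mu - percept)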
Affiliation(s)
- Qi Sun
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China; Intelligent Laboratory of Zhejiang Province in Mental Health and Crisis Intervention for Children and Adolescents, Jinhua, PR China; Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, PR China.
- Jing-Yi Wang
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China
- Xiu-Mei Gong
- Department of Psychology, Zhejiang Normal University, Jinhua, PR China
5. Zheng Q, Gu Y. From Multisensory Integration to Multisensory Decision-Making. Adv Exp Med Biol 2024; 1437:23-35. PMID: 38270851; DOI: 10.1007/978-981-99-7611-9_2.
Abstract
Organisms live in a dynamic environment in which sensory information from multiple sources is ever changing. A conceptually complex task for an organism is to accumulate evidence across sensory modalities and over time, a process known as multisensory decision-making. This is a relatively new concept, in that previous research has largely been conducted in parallel disciplines: much effort has been devoted either to sensory integration across modalities using activity summed over a duration of time, or to decision-making with only one sensory modality that evolves over time. Recently, a few studies with neurophysiological measurements have emerged to study how information from different sensory modalities is processed, accumulated, and integrated over time in decision-related areas such as the parietal or frontal lobes in mammals. In this review, we summarize and comment on these studies, which combine the two long-parallel fields of multisensory integration and decision-making. We show how the new findings provide insight into the neural mechanisms mediating multisensory information processing in a more complete way.
Affiliation(s)
- Qihao Zheng
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
- Yong Gu
- Systems Neuroscience, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
6. Sun Q, Gong XM, Zhan LZ, Wang SY, Dong LL. Serial dependence bias can predict the overall estimation error in visual perception. J Vis 2023; 23:2. PMID: 37917052; PMCID: PMC10627302; DOI: 10.1167/jov.23.13.2.
Abstract
Although visual feature estimations are accurate and precise, overall estimation errors (i.e., the difference between estimates and actual values) tend to show systematic patterns. For example, estimates of orientations are systematically biased away from horizontal and vertical orientations, showing an oblique illusion. Additionally, many recent studies have demonstrated that estimates of current visual features are systematically biased toward previously seen features, showing a serial dependence. However, no study has examined whether the overall estimation errors are correlated with the serial dependence bias. To address this question, we enrolled three groups of participants to estimate orientation, motion speed, and point-light-walker direction. The results showed that the serial dependence bias explained over 20% of the overall estimation errors in the three tasks, indicating that the serial dependence bias can be used to predict the overall estimation errors. The current study is the first to demonstrate that the serial dependence bias is not independent of the overall estimation errors. This finding could inspire researchers to investigate the neural bases underlying visual feature estimation and serial dependence.
Affiliation(s)
- Qi Sun
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua, PR China
- Xiu-Mei Gong
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Lin-Zhe Zhan
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
- Si-Yu Wang
- School of Psychology, Zhejiang Normal University, Jinhua, PR China
7. Jerjian SJ, Harsch DR, Fetsch CR. Self-motion perception and sequential decision-making: where are we heading? Philos Trans R Soc Lond B Biol Sci 2023; 378:20220333. PMID: 37545301; PMCID: PMC10404932; DOI: 10.1098/rstb.2022.0333.
Abstract
To navigate and guide adaptive behaviour in a dynamic environment, animals must accurately estimate their own motion relative to the external world. This is a fundamentally multisensory process involving integration of visual, vestibular and kinesthetic inputs. Ideal observer models, paired with careful neurophysiological investigation, helped to reveal how visual and vestibular signals are combined to support perception of linear self-motion direction, or heading. Recent work has extended these findings by emphasizing the dimension of time, both with regard to stimulus dynamics and the trade-off between speed and accuracy. Both time and certainty (i.e., the degree of confidence in a multisensory decision) are essential to the ecological goals of the system: terminating a decision process is necessary for timely action, and predicting one's accuracy is critical for making multiple decisions in a sequence, as in navigation. Here, we summarize a leading model for multisensory decision-making, then show how the model can be extended to study confidence in heading discrimination. Lastly, we preview ongoing efforts to bridge self-motion perception and navigation per se, including closed-loop virtual reality and active self-motion. The design of unconstrained, ethologically inspired tasks, accompanied by large-scale neural recordings, holds promise for a deeper understanding of spatial perception and decision-making in the behaving animal. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Steven J. Jerjian
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Devin R. Harsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
- Center for Neuroscience and Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Christopher R. Fetsch
- Solomon H. Snyder Department of Neuroscience, Zanvyl Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
8. Kumano H, Uka T. Representation of Motion Direction in Visual Area MT Accounts for High Sensitivity to Centripetal Motion, Aligning with Efficient Coding of Retinal Motion Statistics. J Neurosci 2023; 43:5893-5904. PMID: 37495384; PMCID: PMC10436761; DOI: 10.1523/jneurosci.0451-23.2023.
Abstract
The overrepresentation of centrifugal motion in the middle temporal visual area (area MT) has long been thought to provide an efficient coding strategy for optic flow processing. However, this overrepresentation compromises the detection of approaching objects, which is essential for survival. In the present study, we revisited this long-held notion by reanalyzing motion selectivity in area MT of three macaque monkeys (two males, one female) using random-dot stimuli instead of spot stimuli. We found no differences in the number of neurons tuned to centrifugal versus centripetal motion; however, centrifugally tuned neurons showed stronger tuning than centripetally tuned neurons. This was attributed to the heightened suppression of responses in centrifugal neurons to centripetal motion compared with that of centripetal neurons to centrifugal motion. Our modeling implies that this intensified suppression accounts for superior detection performance for weak centripetal motion stimuli. Moreover, through Fisher information analysis, we establish that the population sensitivity to motion direction in peripheral vision corresponds well with retinal motion statistics during forward locomotion. While these results challenge established concepts, considering the interplay of logarithmic Gaussian receptive fields and spot stimuli can shed light on the previously documented overrepresentation of centrifugal motion. Significantly, our findings reconcile a previously found discrepancy between MT activity and human behavior, highlighting the proficiency of peripheral MT neurons in encoding motion direction efficiently. SIGNIFICANCE STATEMENT: The efficient coding hypothesis states that sensory neurons are tuned to specific, frequently experienced stimuli. Whereas previous work has found that neurons in the middle temporal (MT) area favor centrifugal motion, which results from forward locomotion, we show here that there is no such bias. Moreover, we found that the response of centrifugal neurons for centripetal motion was more suppressed than that of centripetal neurons for centrifugal motion. Combined with modeling, this provides a solution to a previously known discrepancy between reported centrifugal bias in MT and better detection of centripetal motion by human observers. Additionally, we show that population sensitivity in peripheral MT neurons conforms to an efficient code of retinal motion statistics during forward locomotion.
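The Fisher information analysis mentioned above can be illustrated with a toy calculation (not the paper's data or parameters; the von Mises tuning curves and independent-Poisson noise assumption below are placeholders):

    import numpy as np

    directions = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # stimulus motion directions
    preferred = np.linspace(0, 2 * np.pi, 64, endpoint=False)     # preferred directions of a model population

    def tuning(theta, pref, r_max=30.0, kappa=2.0, baseline=2.0):
        # von Mises (circular Gaussian) direction tuning curve, in spikes/s
        return baseline + r_max * np.exp(kappa * (np.cos(theta - pref) - 1))

    rates = tuning(directions[:, None], preferred[None, :])       # (n_directions, n_neurons)
    d_rates = np.gradient(rates, directions, axis=0)              # derivative of tuning w.r.t. direction
    fisher_info = (d_rates**2 / rates).sum(axis=1)                # population Fisher information under Poisson noise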
Affiliation(s)
- Hironori Kumano
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, Chuo-shi, Yamanashi 409-3898, Japan
- Takanori Uka
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, Chuo-shi, Yamanashi 409-3898, Japan
9. Park J, Kim S, Kim HR, Lee J. Prior expectation enhances sensorimotor behavior by modulating population tuning and subspace activity in sensory cortex. Sci Adv 2023; 9:eadg4156. PMID: 37418521; PMCID: PMC10328413; DOI: 10.1126/sciadv.adg4156.
Abstract
Prior knowledge facilitates our perception and goal-directed behaviors, particularly when sensory input is lacking or noisy. However, the neural mechanisms underlying the improvement in sensorimotor behavior by prior expectations remain unknown. In this study, we examine neural activity in the middle temporal (MT) area of visual cortex while monkeys perform a smooth pursuit eye movement task with a prior expectation of the visual target's motion direction. Prior expectations selectively reduce MT neural responses depending on the neurons' preferred directions when the sensory evidence is weak. This response reduction effectively sharpens neural population direction tuning. Simulations with a realistic MT population demonstrate that sharpening the tuning can explain the biases and variabilities in smooth pursuit, suggesting that neural computations in the sensory area alone can underpin the integration of prior knowledge and sensory evidence. State-space analysis further supports this by revealing neural signals of prior expectations in the MT population activity that correlate with behavioral changes.
Affiliation(s)
- JeongJun Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO 63110, United States of America
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- HyungGoo R. Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon 16419, Republic of Korea
- Department of Biomedical Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Department of Intelligent Precision Healthcare Convergence, Sungkyunkwan University, Suwon 16419, Republic of Korea
10. Zhao B, Wang R, Zhu Z, Yang Q, Chen A. The computational rules of cross-modality suppression in the visual posterior sylvian area. iScience 2023; 26:106973. PMID: 37378331; PMCID: PMC10291470; DOI: 10.1016/j.isci.2023.106973.
Abstract
The macaque visual posterior sylvian area (VPS) contains neurons that respond selectively to heading direction in both the visual and vestibular modalities, but how VPS neurons combine these two sensory signals is still unknown. In contrast to the subadditive characteristics of the medial superior temporal area (MSTd), responses in VPS were dominated by vestibular signals, showing approximately winner-take-all competition. A conditional Fisher information analysis shows that the VPS neural population encodes information from distinct sensory modalities under large and small offset conditions, which differs from MSTd, whose neural population contains more information about visual stimuli in both conditions. However, the combined responses of single neurons in both areas can be well fit by weighted linear sums of unimodal responses. Furthermore, a normalization model captured most vestibular and visual interaction characteristics for both VPS and MSTd, indicating that the divisive normalization mechanism is widespread in the cortex.
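To make the two descriptive fits mentioned here concrete, the sketch below shows a weighted linear sum and a drastically simplified single-neuron divisive-normalization form (parameter names and values are illustrative assumptions, not fits from the paper):

    def weighted_linear_sum(r_vest, r_vis, w_vest=0.8, w_vis=0.2, c=1.0):
        # Bimodal response modeled as a weighted sum of the two unimodal responses plus an offset.
        return w_vest * r_vest + w_vis * r_vis + c

    def divisive_normalization(e_vest, e_vis, w_vest=1.0, w_vis=0.4, n=2.0, alpha=1.0):
        # Caricature of a normalization model: the weighted driving input is divided
        # by a semisaturation constant plus the pooled activity of both modalities.
        drive = (w_vest * e_vest + w_vis * e_vis) ** n
        return drive / (alpha ** n + e_vest ** n + e_vis ** n)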
Affiliation(s)
- Bin Zhao
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Rong Wang
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
- Zhihua Zhu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Qianli Yang
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Aihua Chen
- Key Laboratory of Brain Functional Genomics, East China Normal University, Shanghai 200062, China
11. Muller KS, Matthis J, Bonnen K, Cormack LK, Huk AC, Hayhoe M. Retinal motion statistics during natural locomotion. eLife 2023; 12:82410. PMID: 37133442; PMCID: PMC10156169; DOI: 10.7554/elife.82410.
Abstract
Walking through an environment generates retinal motion, which humans rely on to perform a variety of visual tasks. Retinal motion patterns are determined by an interconnected set of factors, including gaze location, gaze stabilization, the structure of the environment, and the walker's goals. The characteristics of these motion signals have important consequences for neural organization and behavior. However, to date, there are no empirical in situ measurements of how combined eye and body movements interact with real 3D environments to shape the statistics of retinal motion signals. Here, we collect measurements of the eyes, the body, and the 3D environment during locomotion. We describe properties of the resulting retinal motion patterns. We explain how these patterns are shaped by gaze location in the world, as well as by behavior, and how they may provide a template for the way motion sensitivity and receptive field properties vary across the visual field.
Affiliation(s)
- Karl S Muller
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Jonathan Matthis
- Department of Biology, Northeastern University, Boston, United States
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, United States
- Lawrence K Cormack
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Alex C Huk
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
- Mary Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, United States
12. Zeng F, Zaidel A, Chen A. Contrary neuronal recalibration in different multisensory cortical areas. eLife 2023; 12:82895. PMID: 36877555; PMCID: PMC9988259; DOI: 10.7554/elife.82895.
Abstract
The adult brain demonstrates remarkable multisensory plasticity by dynamically recalibrating itself based on information from multiple sensory sources. After a systematic visual-vestibular heading offset is experienced, the unisensory perceptual estimates for subsequently presented stimuli are shifted toward each other (in opposite directions) to reduce the conflict. The neural substrate of this recalibration is unknown. Here, we recorded single-neuron activity from the dorsal medial superior temporal (MSTd), parietoinsular vestibular cortex (PIVC), and ventral intraparietal (VIP) areas in three male rhesus macaques during this visual-vestibular recalibration. Both visual and vestibular neuronal tuning curves in MSTd shifted, each according to its respective cue's perceptual shift. Tuning of vestibular neurons in PIVC also shifted in the same direction as vestibular perceptual shifts (cells were not robustly tuned to the visual stimuli). By contrast, VIP neurons demonstrated a unique phenomenon: both vestibular and visual tuning shifted in accordance with vestibular perceptual shifts, such that visual tuning shifted, surprisingly, contrary to visual perceptual shifts. Therefore, while unsupervised recalibration (to reduce cue conflict) occurs in early multisensory cortices, higher-level VIP reflects only a global shift, in vestibular space.
Affiliation(s)
- Fu Zeng
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
- Adam Zaidel
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Aihua Chen
- Key Laboratory of Brain Functional Genomics (Ministry of Education), East China Normal University, Shanghai, China
13. Rineau AL, Bringoux L, Sarrazin JC, Berberian B. Being active over one's own motion: Considering predictive mechanisms in self-motion perception. Neurosci Biobehav Rev 2023; 146:105051. PMID: 36669748; DOI: 10.1016/j.neubiorev.2023.105051.
Abstract
Self-motion perception is a key element guiding pilots' behavior. Its importance is most apparent when it is impaired, which in most cases leads to spatial disorientation, still a major factor in accidents today. Self-motion perception is based mainly on visuo-vestibular integration and can be modulated by the physical properties of the environment with which humans interact. For instance, several studies have shown that the respective weights of visual and vestibular information depend on their reliability. More recently, it has been suggested that the internal state of an operator can also modulate multisensory integration. Interestingly, system automation can interfere with this internal state through the loss of the intentional nature of movements (i.e., loss of agency) and the modulation of associated predictive mechanisms. In this context, one of the new challenges is to better understand the relationship between automation and self-motion perception. The present review explains how linking the concepts of agency and self-motion is a first approach to addressing this issue.
Affiliation(s)
- Anne-Laure Rineau
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France.
- Bruno Berberian
- Information Processing and Systems, ONERA, Salon de Provence, Base Aérienne 701, France
14. Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023; 23:8. PMID: 36786748; PMCID: PMC9932552; DOI: 10.1167/jov.23.2.8.
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence of the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) later in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism that produces an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change and support the hypothesis that heading perception evolves over time.
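The notion of an evolving heading representation can be caricatured with a leaky integrator in which the stored heading drifts toward the heading implied by the current optic flow; this is a drastic simplification of the Competitive Dynamics Model, with an arbitrary time constant, offered only as intuition:

    def update_heading_estimate(current_estimate, flow_heading, tau=0.5, dt=0.05):
        # Larger tau -> more stable but more sluggish heading estimate;
        # smaller tau -> faster tracking of heading changes.
        return current_estimate + (dt / tau) * (flow_heading - current_estimate)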
Affiliation(s)
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA (https://sites.google.com/colby.edu/owlab)
15. Campagner D, Vale R, Tan YL, Iordanidou P, Pavón Arocas O, Claudi F, Stempel AV, Keshavarzi S, Petersen RS, Margrie TW, Branco T. A cortico-collicular circuit for orienting to shelter during escape. Nature 2023; 613:111-119. PMID: 36544025; PMCID: PMC7614651; DOI: 10.1038/s41586-022-05553-9.
Abstract
When faced with predatory threats, escape towards shelter is an adaptive action that offers long-term protection against the attacker. Animals rely on knowledge of safe locations in the environment to instinctively execute rapid shelter-directed escape actions1,2. Although previous work has identified neural mechanisms of escape initiation3,4, it is not known how the escape circuit incorporates spatial information to execute rapid flights along the most efficient route to shelter. Here we show that the mouse retrosplenial cortex (RSP) and superior colliculus (SC) form a circuit that encodes the shelter-direction vector and is specifically required for accurately orienting to shelter during escape. Shelter direction is encoded in RSP and SC neurons in egocentric coordinates and SC shelter-direction tuning depends on RSP activity. Inactivation of the RSP-SC pathway disrupts the orientation to shelter and causes escapes away from the optimal shelter-directed route, but does not lead to generic deficits in orientation or spatial navigation. We find that the RSP and SC are monosynaptically connected and form a feedforward lateral inhibition microcircuit that strongly drives the inhibitory collicular network because of higher RSP input convergence and synaptic integration efficiency in inhibitory SC neurons. This results in broad shelter-direction tuning in inhibitory SC neurons and sharply tuned excitatory SC neurons. These findings are recapitulated by a biologically constrained spiking network model in which RSP input to the local SC recurrent ring architecture generates a circular shelter-direction map. We propose that this RSP-SC circuit might be specialized for generating collicular representations of memorized spatial goals that are readily accessible to the motor system during escape, or more broadly, during navigation when the goal must be reached as fast as possible.
Affiliation(s)
- Dario Campagner
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- UCL Gatsby Computational Neuroscience Unit, London, UK
- Ruben Vale
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- MRC Laboratory of Molecular Biology, Cambridge, UK
- Yu Lin Tan
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- Oriol Pavón Arocas
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- Federico Claudi
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- A Vanessa Stempel
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- Troy W Margrie
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
- Tiago Branco
- UCL Sainsbury Wellcome Centre for Neural Circuits and Behaviour, London, UK
16. Abekawa N, Doya K, Gomi H. Body and visual instabilities functionally modulate implicit reaching corrections. iScience 2022; 26:105751. PMID: 36590158; PMCID: PMC9800534; DOI: 10.1016/j.isci.2022.105751.
Abstract
Hierarchical schemes of brain information processing have frequently assumed that flexible but slow voluntary action modulates a direct sensorimotor process that can quickly generate a reaction during dynamic interaction. Here we show that the quick visuomotor process for manual movement is modulated by postural and visual instability contexts, states that are related to manual movements but remote from and prior to them. A preceding unstable postural context significantly enhanced the reflexive manual response induced by large-field visual motion during hand reaching, whereas the response was clearly weakened by a preceding random-visual-motion context. These modulations are successfully explained by a Bayesian optimal formulation in which the manual response elicited by visual motion is ascribed to a compensatory response to the estimated self-motion affected by the preceding contextual situations. Our findings suggest an implicit and functional mechanism that links the variability and uncertainty of remote states to the quick sensorimotor transformation.
Affiliation(s)
- Naotoshi Abekawa
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, 243-0198, Japan
- Kenji Doya
- Okinawa Institute of Science and Technology Graduate University, Okinawa 904-0495, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, 243-0198, Japan (corresponding author)
17. Zhou Y, Mohan K, Freedman DJ. Abstract Encoding of Categorical Decisions in Medial Superior Temporal and Lateral Intraparietal Cortices. J Neurosci 2022; 42:9069-9081. PMID: 36261285; PMCID: PMC9732825; DOI: 10.1523/jneurosci.0017-22.2022.
Abstract
Categorization is an essential cognitive and perceptual process for decision-making and recognition. The posterior parietal cortex, particularly the lateral intraparietal (LIP) area, has been suggested to transform visual feature encoding into abstract categorical representations. By contrast, areas closer to sensory input, such as the middle temporal (MT) area, encode stimulus features but not more abstract categorical information during categorization tasks. Here, we compare the contributions of the medial superior temporal (MST) and LIP areas in category computation by recording neuronal activity in both areas from two male rhesus macaques trained to perform a visual motion categorization task. MST is a core motion-processing region interconnected with MT and is often considered an intermediate processing stage between MT and LIP. We show that MST exhibits robust decision-correlated motion category encoding and working memory encoding similar to LIP, suggesting that MST plays a substantial role in cognitive computation, extending beyond its widely recognized role in visual motion processing. SIGNIFICANCE STATEMENT: Categorization requires assigning incoming sensory stimuli into behaviorally relevant groups. Previous work found that parietal area LIP shows a strong encoding of the learned category membership of visual motion stimuli, while visual area MT shows strong direction tuning but not category tuning during a motion direction categorization task. Here we show that the medial superior temporal (MST) area, a visual motion-processing region interconnected with both LIP and MT, shows strong visual category encoding similar to that observed in LIP. This suggests that MST plays a greater role in abstract cognitive functions, extending beyond its well known role in visual motion processing.
Affiliation(s)
- Yang Zhou
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- PKU-IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, People's Republic of China
- Krithika Mohan
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- David J Freedman
- Department of Neurobiology, The University of Chicago, Chicago, Illinois 60637
- The University of Chicago Neuroscience Institute, The University of Chicago, Chicago, Illinois 60637
18. Thurley K. Naturalistic neuroscience and virtual reality. Front Syst Neurosci 2022; 16:896251. PMID: 36467978; PMCID: PMC9712202; DOI: 10.3389/fnsys.2022.896251.
Abstract
Virtual reality (VR) is one of the techniques that became particularly popular in neuroscience over the past few decades. VR experiments feature a closed-loop between sensory stimulation and behavior. Participants interact with the stimuli and not just passively perceive them. Several senses can be stimulated at once, large-scale environments can be simulated as well as social interactions. All of this makes VR experiences more natural than those in traditional lab paradigms. Compared to the situation in field research, a VR simulation is highly controllable and reproducible, as required of a laboratory technique used in the search for neural correlates of perception and behavior. VR is therefore considered a middle ground between ecological validity and experimental control. In this review, I explore the potential of VR in eliciting naturalistic perception and behavior in humans and non-human animals. In this context, I give an overview of recent virtual reality approaches used in neuroscientific research.
Affiliation(s)
- Kay Thurley
- Faculty of Biology, Ludwig-Maximilians-Universität München, Munich, Germany
- Bernstein Center for Computational Neuroscience Munich, Munich, Germany
19. Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. PMID: 36123363; PMCID: PMC9485245; DOI: 10.1038/s41467-022-33245-5.
Abstract
Optic flow is a powerful cue for inferring self-motion status, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in 3D spiral coordinates, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines coded by the artificially stimulated neurons in either context, with spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, despite often being mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravity vertical.
20. Perceptual Biases as the Side Effect of a Multisensory Adaptive System: Insights from Verticality and Self-Motion Perception. Vision (Basel) 2022; 6(3):53. PMID: 36136746; PMCID: PMC9502132; DOI: 10.3390/vision6030053.
Abstract
Perceptual biases can be interpreted as adverse consequences of optimal processes which otherwise improve system performance. The review presented here examines inaccuracies in multisensory perception, focusing on the perception of verticality and self-motion, where the vestibular sensory modality has a prominent role. Perception of verticality indicates how the system processes gravity; thus, it represents an indirect measurement of vestibular perception. Head tilts can lead to biases in perceived verticality, interpreted as the influence of a vestibular prior set at the most common orientation relative to gravity (i.e., upright), useful for improving precision when upright (e.g., fall avoidance). Studies on the perception of verticality across development and in the presence of blindness show that prior acquisition is mediated by visual experience, thus unveiling the fundamental role of visuo-vestibular interconnections across development. Such multisensory interactions can be behaviorally tested with cross-modal aftereffect paradigms, which test whether adaptation in one sensory modality induces biases in another, thereby revealing an interconnection between the tested sensory modalities. Such phenomena indicate the presence of multisensory neural mechanisms that constantly function to calibrate self-motion-dedicated sensory modalities with each other as well as with the environment. Thus, biases in vestibular perception reveal how the brain optimally adapts to environmental requests, such as spatial navigation and steady changes in the surroundings.
21. Layton OW, Fajen BR. Distributed encoding of curvilinear self-motion across spiral optic flow patterns. Sci Rep 2022; 12:13393. PMID: 35927277; PMCID: PMC9352735; DOI: 10.1038/s41598-022-16371-4.
Abstract
Self-motion along linear paths without eye movements creates optic flow that radiates from the direction of travel (heading). Optic flow-sensitive neurons in primate brain area MSTd have been linked to linear heading perception, but the neural basis of more general curvilinear self-motion perception is unknown. The optic flow in this case is more complex and depends on the gaze direction and curvature of the path. We investigated the extent to which signals decoded from a neural model of MSTd predict the observer's curvilinear self-motion. Specifically, we considered the contributions of MSTd-like units that were tuned to radial, spiral, and concentric optic flow patterns in "spiral space". Self-motion estimates decoded from units tuned to the full set of spiral space patterns were substantially more accurate and precise than those decoded from units tuned to radial expansion. Decoding only from units tuned to spiral subtypes closely approximated the performance of the full model. Only the full decoding model could account for human judgments when path curvature and gaze covaried in self-motion stimuli. The most predictive units exhibited bias in center-of-motion tuning toward the periphery, consistent with neurophysiology and prior modeling. Together, findings support a distributed encoding of curvilinear self-motion across spiral space.
Affiliation(s)
- Oliver W Layton
- Department of Computer Science, Colby College, Waterville, ME, USA
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
- Brett R Fajen
- Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY, USA
22. Chen K, Beyeler M, Krichmar JL. Cortical Motion Perception Emerges from Dimensionality Reduction with Evolved Spike-Timing-Dependent Plasticity Rules. J Neurosci 2022; 42:5882-5898. PMID: 35732492; PMCID: PMC9337611; DOI: 10.1523/jneurosci.0384-22.2022.
Abstract
The nervous system is under tight energy constraints and must represent information efficiently. This is particularly relevant in the dorsal part of the medial superior temporal area (MSTd) in primates where neurons encode complex motion patterns to support a variety of behaviors. A sparse decomposition model based on a dimensionality reduction principle known as non-negative matrix factorization (NMF) was previously shown to account for a wide range of monkey MSTd visual response properties. This model resulted in sparse, parts-based representations that could be regarded as basis flow fields, a linear superposition of which accurately reconstructed the input stimuli. This model provided evidence that the seemingly complex response properties of MSTd may be a by-product of MSTd neurons performing dimensionality reduction on their input. However, an open question is how a neural circuit could carry out this function. In the current study, we propose a spiking neural network (SNN) model of MSTd based on evolved spike-timing-dependent plasticity and homeostatic synaptic scaling (STDP-H) learning rules. We demonstrate that the SNN model learns compressed and efficient representations of the input patterns similar to the patterns that emerge from NMF, resulting in MSTd-like receptive fields observed in monkeys. This SNN model suggests that STDP-H observed in the nervous system may be performing a similar function as NMF with sparsity constraints, which provides a test bed for mechanistic theories of how MSTd may efficiently encode complex patterns of visual motion to support robust self-motion perception. SIGNIFICANCE STATEMENT: The brain may use dimensionality reduction and sparse coding to efficiently represent stimuli under metabolic constraints. Neurons in monkey area MSTd respond to complex optic flow patterns resulting from self-motion. We developed a spiking neural network model that showed MSTd-like response properties can emerge from evolving the spike-timing-dependent plasticity and homeostasis (STDP-H) parameters of the connections between the middle temporal (MT) area and MSTd. Simulated MSTd neurons formed a sparse, reduced population code capable of encoding perceptual variables important for self-motion perception. This model demonstrates that complex neuronal responses observed in MSTd may emerge from efficient coding and suggests that neurobiological plasticity, like STDP-H, may contribute to reducing the dimensions of input stimuli and allowing spiking neurons to learn sparse representations.
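As a minimal caricature of the STDP-H learning rules named above (constants are placeholders, not the evolved parameters from the paper), pair-based STDP can be combined with multiplicative homeostatic synaptic scaling roughly as follows:

    import numpy as np

    def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        # dt = t_post - t_pre in ms: potentiate when the presynaptic spike precedes
        # the postsynaptic spike, depress otherwise.
        if dt > 0:
            return w + a_plus * np.exp(-dt / tau)
        return w - a_minus * np.exp(dt / tau)

    def homeostatic_scaling(weights, target_rate, observed_rate, beta=0.1):
        # Multiplicatively scale all incoming weights toward a target firing rate.
        return weights * (1.0 + beta * (target_rate - observed_rate) / target_rate)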
Affiliation(s)
- Michael Beyeler
- Departments of Computer Science and Psychological & Brain Sciences, University of California, Santa Barbara, California 93106
- Jeffrey L Krichmar
- Departments of Cognitive Sciences and Computer Science, University of California, Irvine, California 92697
23. Cortical Mechanisms of Multisensory Linear Self-motion Perception. Neurosci Bull 2022; 39:125-137. PMID: 35821337; PMCID: PMC9849545; DOI: 10.1007/s12264-022-00916-8.
Abstract
Accurate self-motion perception, which is critical for organisms to survive, is a process involving multiple sensory cues. The two most powerful cues are visual (optic flow) and vestibular (inertial motion). Psychophysical studies have indicated that humans and nonhuman primates integrate the two cues to improve the estimation of self-motion direction, often in a statistically Bayesian-optimal way. In the last decade, single-unit recordings in awake, behaving animals have provided valuable neurophysiological data with high spatial and temporal resolution, giving insight into possible neural mechanisms underlying multisensory self-motion perception. Here, we review these findings, along with new evidence from the most recent studies focusing on the temporal dynamics of signals in different modalities. We show that, in light of the new data, conventional views about the cortical mechanisms underlying visuo-vestibular integration for linear self-motion are challenged. We propose that different temporal component signals may mediate different functions, a possibility that requires future studies.
24. Zhang LQ, Stocker AA. Prior Expectations in Visual Speed Perception Predict Encoding Characteristics of Neurons in Area MT. J Neurosci 2022; 42:2951-2962. PMID: 35169018; PMCID: PMC8985856; DOI: 10.1523/jneurosci.1920-21.2022.
Abstract
Bayesian inference provides an elegant theoretical framework for understanding the characteristic biases and discrimination thresholds in visual speed perception. However, the framework is difficult to validate because of its flexibility and the fact that suitable constraints on the structure of the sensory uncertainty have been missing. Here, we demonstrate that a Bayesian observer model constrained by efficient coding not only well explains human visual speed perception but also provides an accurate quantitative account of the tuning characteristics of neurons known for representing visual speed. Specifically, we found that the population coding accuracy for visual speed in area MT ("neural prior") is precisely predicted by the power-law, slow-speed prior extracted from fitting the Bayesian observer model to psychophysical data ("behavioral prior"), to the point that the two priors are indistinguishable in a cross-validation model comparison. Our results demonstrate a quantitative validation of the Bayesian observer model constrained by efficient coding at both the behavioral and neural levels. SIGNIFICANCE STATEMENT: Statistical regularities of the environment play an important role in shaping both neural representations and perceptual behavior. Most previous work addressed these two aspects independently. Here we present a quantitative validation of a theoretical framework that makes joint predictions for neural coding and behavior, based on the assumption that neural representations of sensory information are efficient but also optimally used in generating a percept. Specifically, we demonstrate that the neural tuning characteristics for visual speed in brain area MT are precisely predicted by the statistical prior expectations extracted from psychophysical data. As such, our results provide a normative link between perceptual behavior and the neural representation of sensory information in the brain.
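As a hedged sketch of the model class described here (the notation and the exact form of the prior are assumptions, not taken from the paper), an efficient-coding-constrained Bayesian observer ties a slow-speed power-law prior to the precision of the neural encoding and to the resulting (MAP) estimate:

    % slow-speed prior; efficient-coding constraint linking encoding precision
    % (Fisher information J) to the prior; resulting maximum-a-posteriori estimate
    p(v) \propto (v + v_0)^{-c}, \qquad
    \sqrt{J(v)} \propto p(v), \qquad
    \hat{v}_{\mathrm{MAP}}(m) = \arg\max_{v} \; p(m \mid v)\, p(v)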
25
Rosenblum L, Grewe E, Churan J, Bremmer F. Influence of Tactile Flow on Visual Heading Perception. Multisens Res 2022; 35:291-308. [PMID: 35263712 DOI: 10.1163/22134808-bja10071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Accepted: 02/10/2022] [Indexed: 11/19/2022]
Abstract
The integration of information from different sensory modalities is crucial for successful navigation through an environment. Among other signals, self-motion induces distinct optic flow patterns on the retina, vestibular signals, and tactile flow, which help determine traveled distance (path integration) or movement direction (heading). While the processing of combined visual-vestibular information is subject to a growing body of literature, the processing of visuo-tactile signals in the context of self-motion has received comparatively little attention. Here, we investigated whether visual heading perception is influenced by behaviorally irrelevant tactile flow. In the visual modality, we simulated an observer's self-motion across a horizontal ground plane (optic flow). Tactile self-motion stimuli were delivered by air flow from head-mounted nozzles (tactile flow). In blocks of trials, we presented only visual or tactile stimuli and subjects had to report their perceived heading. In another block of trials, tactile and visual stimuli were presented simultaneously, with the tactile flow within ±40° of the visual heading (bimodal condition). Here, importantly, participants had to report their perceived visual heading. Perceived self-motion direction in all conditions revealed a centripetal bias, i.e., heading directions were perceived as compressed toward straight ahead. In the bimodal condition, we found a small but systematic influence of task-irrelevant tactile flow on visually perceived headings as a function of their directional offset. We conclude that tactile flow is more tightly linked to self-motion perception than previously thought.
Affiliation(s)
- Lisa Rosenblum: Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
- Elisa Grewe: Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany
- Jan Churan: Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
- Frank Bremmer: Department of Neurophysics, Philipps-Universität Marburg, Karl-von-Frisch-Straße 8a, 35043 Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, 35032 Marburg, Germany
26
Zheng Q, Zhou L, Gu Y. Temporal synchrony effects of optic flow and vestibular inputs on multisensory heading perception. Cell Rep 2021; 37:109999. [PMID: 34788608 DOI: 10.1016/j.celrep.2021.109999] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 08/21/2021] [Accepted: 10/21/2021] [Indexed: 11/25/2022] Open
Abstract
Precise heading perception requires integration of optic flow and vestibular cues, yet the two cues often carry distinct temporal dynamics that may confound the benefit of cue integration. Here, we varied the temporal offset between the two sensory inputs while macaques discriminated headings around straight ahead. We find that the best heading performance does not occur under the natural condition of synchronous inputs with zero offset, but rather when visual stimuli are artificially adjusted to lead the vestibular stimuli by a few hundred milliseconds. This amount exactly matches the lag between the vestibular acceleration and visual speed signals as measured from single-unit activity in frontal and posterior parietal cortices. Manually aligning cues in these areas best facilitates integration, with some nonlinear gain modulation effects. These findings are consistent with predictions from a model in which the brain integrates optic flow speed with a faster vestibular acceleration signal to sense instantaneous heading direction during self-motion in the environment.
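One intuition for why an artificial visual lead can align the two signals is that, for a smooth translation, acceleration (the quantity transduced by the otoliths) peaks earlier than speed (the quantity carried by optic flow). Below is a minimal numerical sketch using a hypothetical Gaussian velocity profile, not the motion profile used in the study:

import numpy as np

t = np.linspace(0.0, 2.0, 2001)                       # 2-s stimulus (s)
speed = np.exp(-(t - 1.0)**2 / (2 * 0.3**2))          # Gaussian speed profile (a.u.)
acceleration = np.gradient(speed, t)                  # its time derivative

lead = t[np.argmax(speed)] - t[np.argmax(acceleration)]
print(lead)   # ~0.3 s: acceleration peaks a few hundred milliseconds before speed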
Affiliation(s)
- Qihao Zheng: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
- Luxin Zhou: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China
- Yong Gu: CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, 200031 Shanghai, China; University of Chinese Academy of Sciences, 100049 Beijing, China; Shanghai Center for Brain Science and Brain-Inspired Intelligence Technology, 201210 Shanghai, China
27
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
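For readers who want the tuning-based quantity in concrete form, here is a minimal sketch of population Fisher information for hypothetical von Mises direction-tuned units with independent Poisson spiking (an illustrative toy, not a model of any specific data set):

import numpy as np

def fisher_information(stimulus_deg, preferred_deg, gain=20.0, kappa=2.0):
    """J(s) = sum_i f_i'(s)^2 / f_i(s) for independent Poisson units with
    von Mises tuning curves f_i(s) = gain * exp(kappa * (cos(s - p_i) - 1))."""
    s = np.deg2rad(stimulus_deg)
    p = np.deg2rad(preferred_deg)
    f = gain * np.exp(kappa * (np.cos(s - p) - 1.0))            # mean spike counts
    fprime = -gain * kappa * np.sin(s - p) * np.exp(kappa * (np.cos(s - p) - 1.0))
    return np.sum(fprime**2 / f)                                 # per radian^2

preferred = np.arange(0.0, 360.0, 10.0)   # hypothetical uniform population
print(fisher_information(45.0, preferred))

The same population response matrix, viewed across many stimuli, defines the representational geometry that the review relates to this quantity.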
28
Modeling Physiological Sources of Heading Bias from Optic Flow. eNeuro 2021; 8:ENEURO.0307-21.2021. [PMID: 34642226 PMCID: PMC8607907 DOI: 10.1523/eneuro.0307-21.2021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2021] [Revised: 09/01/2021] [Accepted: 09/20/2021] [Indexed: 11/21/2022] Open
Abstract
Human heading perception from optic flow is accurate for directions close to the straight-ahead and systematic biases emerge in the periphery (Cuturi and Macneilage, 2013; Sun et al., 2020). In pursuit of the underlying neural mechanisms, primate brain dorsal medial superior temporal (MSTd) area has been a focus because of its causal link with heading perception (Gu et al., 2012). Computational models generally explain heading sensitivity in individual MSTd neurons as a feedforward integration of motion signals from medial temporal (MT) area that resemble full-field optic flow patterns consistent with the preferred heading direction (Britten, 2008; Mineault et al., 2012). In the present simulation study, we quantified within the structure of this feedforward model how physiological properties of MT and MSTd shape heading signals. We found that known physiological tuning characteristics generally supported the accuracy of heading estimation, but not always. A weak-to-moderate overrepresentation of peripheral headings in MSTd garnered the highest accuracy and precision out of the models that we tested. The model also performed well when noise corrupted high proportions of the optic flow vectors. Such a peripheral MSTd model performed well when units possessed a range of receptive field (RF) sizes and were strongly direction tuned. Physiological biases in MT direction tuning toward the radial direction also supported heading estimation, but the tendency for MT preferred speed and RF size to scale with eccentricity did not. Our findings help elucidate the extent to which different physiological tuning properties influence the accuracy and precision of neural heading signals.
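The feedforward pooling idea examined in this simulation study can be caricatured as template matching: compare the observed flow field against the translational flow expected for each candidate heading and pool the agreement. The sketch below is purely illustrative (pinhole geometry, a random dot cloud, and additive noise are assumptions; it is not the authors' MT-MSTd model):

import numpy as np

rng = np.random.default_rng(0)

def translational_flow(heading_deg, points):
    """Image-plane flow (focal length 1) for unit-speed translation toward
    azimuth heading_deg; points is an (N, 3) array of scene points with Z > 0."""
    h = np.deg2rad(heading_deg)
    tx, ty, tz = np.sin(h), 0.0, np.cos(h)
    X, Y, Z = points.T
    x, y = X / Z, Y / Z                      # image coordinates
    u = (-tx + x * tz) / Z                   # standard pinhole flow equations
    v = (-ty + y * tz) / Z
    return np.stack([u, v], axis=1)

points = rng.uniform([-5.0, -5.0, 2.0], [5.0, 5.0, 20.0], size=(300, 3))
observed = translational_flow(10.0, points) + rng.normal(0.0, 0.002, (300, 2))

candidates = np.arange(-40.0, 41.0, 1.0)
# "Pooling": correlate the observed flow with each unit-norm heading template.
scores = []
for c in candidates:
    template = translational_flow(c, points)
    scores.append(np.sum(observed * template) / np.linalg.norm(template))

print(candidates[int(np.argmax(scores))])    # close to the true 10-degree heading

Biases of the kind modeled in the paper can then be explored by making the templates, receptive-field sampling, or tuning distributions non-uniform.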
29
Rodriguez R, Crane BT. Effect of timing delay between visual and vestibular stimuli on heading perception. J Neurophysiol 2021; 126:304-312. [PMID: 34191637 DOI: 10.1152/jn.00351.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Heading direction is perceived based on visual and inertial cues. The current study examined the effect of their relative timing on the ability of offset visual headings to influence inertial perception. Seven healthy human subjects experienced 2 s of translation along a heading of 0°, ±35°, ±70°, ±105°, or ±140°. These inertial headings were paired with 2-s duration visual headings that were presented at relative offsets of 0°, ±30°, ±60°, ±90°, or ±120°. The visual stimuli were also presented at 17 temporal delays ranging from -500 ms (visual lead) to 2,000 ms (visual delay) relative to the inertial stimulus. After each stimulus, subjects reported the direction of the inertial stimulus using a dial. The bias of the inertial heading toward the visual heading was robust at ±250 ms when examined across subjects during this period: 8.0° ± 0.5° with a 30° offset, 12.2° ± 0.5° with a 60° offset, 11.7° ± 0.6° with a 90° offset, and 9.8° ± 0.7° with a 120° offset (mean bias toward visual ± SE). The mean bias was much diminished with temporal misalignments of ±500 ms, and there was no longer any visual influence on the inertial heading when the visual stimulus was delayed by 1,000 ms or more. Although the amount of bias varied between subjects, the effect of delay was similar.
NEW & NOTEWORTHY: The effect of timing on visual-inertial integration in heading perception has not been previously examined. This study finds that visual direction influences inertial heading perception when timing differences are within 250 ms. This suggests that visual-inertial stimuli can be integrated over a wider range than reported for visual-auditory integration, which may be due to the unique nature of inertial sensation: it can only sense acceleration, while the visual system senses position but encodes velocity.
Affiliation(s)
- Raul Rodriguez: Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane: Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester, Rochester, New York
30
Liu B, Tian Q, Gu Y. Robust vestibular self-motion signals in macaque posterior cingulate region. eLife 2021; 10:e64569. [PMID: 33827753 PMCID: PMC8032402 DOI: 10.7554/elife.64569] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Accepted: 03/29/2021] [Indexed: 11/13/2022] Open
Abstract
Self-motion signals, distributed ubiquitously across the parietal-temporal lobes, propagate to the limbic hippocampal system for vector-based navigation via hubs including the posterior cingulate cortex (PCC) and retrosplenial cortex (RSC). Although numerous studies have indicated that posterior cingulate areas are involved in spatial tasks, it is unclear how their neurons represent self-motion signals. Providing translation and rotation stimuli to macaques on a 6-degree-of-freedom motion platform, we discovered robust vestibular responses in PCC. A combined three-dimensional spatiotemporal model captured the data well and revealed multiple temporal components including velocity, acceleration, jerk, and position. Compared to PCC, RSC contained moderate vestibular temporal modulations and lacked significant spatial tuning. Visual self-motion signals were much weaker in both regions compared to the vestibular signals. We conclude that the macaque posterior cingulate region carries vestibular-dominant self-motion signals with plentiful temporal components that could be useful for path integration.
Affiliation(s)
- Bingyu Liu: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Qingyang Tian: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
- Yong Gu: CAS Center for Excellence in Brain Science and Intelligence Technology, Key Laboratory of Primate Neurobiology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing, China
31
Wild B, Treue S. Primate extrastriate cortical area MST: a gateway between sensation and cognition. J Neurophysiol 2021; 125:1851-1882. [PMID: 33656951 DOI: 10.1152/jn.00384.2020] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023] Open
Abstract
Primate visual cortex consists of dozens of distinct brain areas, each providing a highly specialized component to the sophisticated task of encoding the incoming sensory information and creating a representation of our visual environment that underlies our perception and action. One such area is the medial superior temporal cortex (MST), a motion-sensitive, direction-selective part of the primate visual cortex. It receives most of its input from the middle temporal (MT) area, but MST cells have larger receptive fields and respond to more complex motion patterns. The finding that MST cells are tuned for optic flow patterns has led to the suggestion that the area plays an important role in the perception of self-motion. This hypothesis has received further support from studies showing that some MST cells also respond selectively to vestibular cues. Furthermore, the area is part of a network that controls the planning and execution of smooth pursuit eye movements and its activity is modulated by cognitive factors, such as attention and working memory. This review of more than 90 studies focuses on providing clarity of the heterogeneous findings on MST in the macaque cortex and its putative homolog in the human cortex. From this analysis of the unique anatomical and functional position in the hierarchy of areas and processing steps in primate visual cortex, MST emerges as a gateway between perception, cognition, and action planning. Given this pivotal role, this area represents an ideal model system for the transition from sensation to cognition.
Affiliation(s)
- Benedict Wild: Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Goettingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), University of Goettingen, Goettingen, Germany
- Stefan Treue: Cognitive Neuroscience Laboratory, German Primate Center, Leibniz Institute for Primate Research, Goettingen, Germany; Faculty of Biology and Psychology, University of Goettingen, Goettingen, Germany; Leibniz-ScienceCampus Primate Cognition, Goettingen, Germany; Bernstein Center for Computational Neuroscience, Goettingen, Germany
32
Chow HM, Knöll J, Madsen M, Spering M. Look where you go: Characterizing eye movements toward optic flow. J Vis 2021; 21:19. [PMID: 33735378 PMCID: PMC7991960 DOI: 10.1167/jov.21.3.19] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 02/08/2021] [Indexed: 11/24/2022] Open
Abstract
When we move through our environment, objects in the visual scene create optic flow patterns on the retina. Even though optic flow is ubiquitous in everyday life, it is not well understood how our eyes naturally respond to it. In small groups of human and non-human primates, optic flow triggers intuitive, uninstructed eye movements to the focus of expansion of the pattern (Knöll, Pillow, & Huk, 2018). Here, we investigate whether such intuitive oculomotor responses to optic flow are generalizable to a larger group of human observers and how eye movements are affected by motion signal strength and task instructions. Observers (N = 43) viewed expanding or contracting optic flow constructed by a cloud of moving dots radiating from or converging toward a focus of expansion that could randomly shift. Results show that 84% of observers tracked the focus of expansion with their eyes without being explicitly instructed to track. Intuitive tracking was tuned to motion signal strength: Saccades landed closer to the focus of expansion, and smooth tracking was more accurate when dot contrast, motion coherence, and translational speed were high. Under explicit tracking instruction, the eyes aligned with the focus of expansion more closely than without instruction. Our results highlight the sensitivity of intuitive eye movements as indicators of visual motion processing in dynamic contexts.
Affiliation(s)
- Hiu Mei Chow: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Jonas Knöll: Institute of Animal Welfare and Animal Husbandry, Friedrich-Loeffler-Institut, Celle, Germany
- Matthew Madsen: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering: Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada; Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
33
The Effects of Depth Cues and Vestibular Translation Signals on the Rotation Tolerance of Heading Tuning in Macaque Area MSTd. eNeuro 2020; 7:ENEURO.0259-20.2020. [PMID: 33127626 PMCID: PMC7688306 DOI: 10.1523/eneuro.0259-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2020] [Revised: 10/17/2020] [Accepted: 10/22/2020] [Indexed: 12/03/2022] Open
Abstract
When the eyes rotate during translational self-motion, the focus of expansion (FOE) in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but require different visual depth cues. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a 3D cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation tolerant heading tuning typically are less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
34
Murphy AJ, Shaw L, Hasse JM, Goris RLT, Briggs F. Optogenetic activation of corticogeniculate feedback stabilizes response gain and increases information coding in LGN neurons. J Comput Neurosci 2020; 49:259-271. [PMID: 32632511 DOI: 10.1007/s10827-020-00754-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 06/10/2020] [Accepted: 06/24/2020] [Indexed: 11/24/2022]
Abstract
In spite of their anatomical robustness, it has been difficult to establish the functional role of corticogeniculate circuits connecting primary visual cortex with the lateral geniculate nucleus of the thalamus (LGN) in the feedback direction. Growing evidence suggests that corticogeniculate feedback does not directly shape the spatial receptive field properties of LGN neurons, but rather regulates the timing and precision of LGN responses and the information coding capacity of LGN neurons. We propose that corticogeniculate feedback specifically stabilizes the response gain of LGN neurons, thereby increasing their information coding capacity. Inspired by early work by McClurkin et al. (1994), we manipulated the activity of corticogeniculate neurons to test this hypothesis. We used optogenetic methods to selectively and reversibly enhance the activity of corticogeniculate neurons in anesthetized ferrets while recording responses of LGN neurons to drifting gratings and white noise stimuli. We found that optogenetic activation of corticogeniculate feedback systematically reduced LGN gain variability and increased information coding capacity among LGN neurons. Optogenetic activation of corticogeniculate neurons generated similar increases in information encoded in LGN responses to drifting gratings and white noise stimuli. Together, these findings suggest that the influence of corticogeniculate feedback on LGN response precision and information coding capacity could be mediated through reductions in gain variability.
Affiliation(s)
- Allison J Murphy: Neuroscience Graduate Program, University of Rochester, Rochester, NY, 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY, 14642, USA
- Luke Shaw: Neuroscience Graduate Program, University of Rochester, Rochester, NY, 14642, USA
- J Michael Hasse: Ernest J. Del Monte Institute for Neuroscience, University of Rochester School of Medicine, 601 Elmwood Ave., Box 603, Rochester, NY, 14642, USA; Center for Neural Science, New York University, New York, NY, 10003, USA
- Robbe L T Goris: Institute for Neuroscience, University of Texas at Austin, Austin, TX, 78712, USA; Department of Psychology, University of Texas at Austin, Austin, TX, 78712, USA
- Farran Briggs: Neuroscience Graduate Program, University of Rochester, Rochester, NY, 14642, USA; Center for Visual Science, University of Rochester, Rochester, NY, 14642, USA; Ernest J. Del Monte Institute for Neuroscience, University of Rochester School of Medicine, 601 Elmwood Ave., Box 603, Rochester, NY, 14642, USA; Department of Neuroscience, University of Rochester School of Medicine, Rochester, NY, 14642, USA; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, 14642, USA
35
Gibson ME, Kim JJJ, McManus M, Harris LR. The effect of training on the perceived approach angle in visual vertical heading judgements in a virtual environment. Exp Brain Res 2020; 238:1861-1869. [PMID: 32514713 PMCID: PMC7438363 DOI: 10.1007/s00221-020-05841-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 05/25/2020] [Indexed: 11/29/2022]
Abstract
Past studies have found poorer accuracy in vertical heading judgements compared to horizontal heading judgements. In everyday life, precise vertical heading judgements are used less often than horizontal heading judgements as we cannot usually control our vertical direction. However, pilots judging a landing approach need to consistently discriminate vertical heading angles to land safely. This study addresses the impact of training on participants' ability to judge their touchdown point relative to a target in a virtual environment with a clearly defined ground plane and horizon. Thirty-one participants completed a touchdown point estimation task twice, using three angles of descent (3°, 6° and 9°). In between the two test sessions, half of the participants completed a flight simulator landing training task which provided feedback on their vertical heading performance, while the other half completed a two-dimensional puzzle game as a control. Overall, participants were more precise in their responses in the second test session compared to the first (from a SD of ± 0.91° to ± 0.67°), but only the experimental group showed improvement in accuracy (from a mean error of - 2.1° to - 0.6°). Our results suggest that with training, vertical heading judgments can be as accurate as horizontal heading judgments. This study is the first to show the effectiveness of training in vertical heading judgement in naïve individuals. The results are applicable in the field of aviation, informing possible strategies for pilot training.
Affiliation(s)
- Molly E Gibson: Centre for Vision Research, York University, Toronto, ON, Canada
- John J-J Kim: Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
- Meaghan McManus: Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
- Laurence R Harris: Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, 4700 Keele St, Toronto, ON, M3J 1P3, Canada
36
Romo R, Rossi-Pool R. Turning Touch into Perception. Neuron 2020; 105:16-33. [PMID: 31917952 DOI: 10.1016/j.neuron.2019.11.033] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Revised: 11/16/2019] [Accepted: 11/27/2019] [Indexed: 12/27/2022]
Abstract
Many brain areas modulate their activity during vibrotactile tasks. The activity from these areas may code the stimulus parameters, stimulus perception, or perceptual reports. Here, we discuss findings obtained in behaving monkeys aimed to understand these processes. In brief, neurons from the somatosensory thalamus and primary somatosensory cortex (S1) only code the stimulus parameters during the stimulation periods. In contrast, areas downstream of S1 code the stimulus parameters during not only the task components but also perception. Surprisingly, the midbrain dopamine system is an actor not considered before in perception. We discuss the evidence that it codes the subjective magnitude of a sensory percept. The findings reviewed here may help us to understand where and how sensation transforms into perception in the brain.
Affiliation(s)
- Ranulfo Romo: Instituto de Fisiología Celular - Neurociencias, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico; El Colegio Nacional, 06020 Mexico City, Mexico
- Román Rossi-Pool: Instituto de Fisiología Celular - Neurociencias, Universidad Nacional Autónoma de México, 04510 Mexico City, Mexico
37
Rodriguez R, Crane BT. Common causation and offset effects in human visual-inertial heading direction integration. J Neurophysiol 2020; 123:1369-1379. [PMID: 32130052 DOI: 10.1152/jn.00019.2020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Movement direction can be determined from a combination of visual and inertial cues. Visual motion (optic flow) can represent self-motion through a fixed environment or environmental motion relative to an observer. Simultaneous visual and inertial heading cues present the question of whether the cues have a common cause (i.e., should be integrated) or whether they should be considered independent. This was studied in eight healthy human subjects who experienced 12 visual and inertial headings in the horizontal plane, spaced at 30° increments. The headings were estimated in two unisensory and six multisensory trial blocks. Each unisensory block included 72 stimulus presentations, while each multisensory block included 144 stimulus presentations, including every possible combination of visual and inertial headings in random order. After each multisensory stimulus, subjects reported their perception of visual and inertial headings as congruous (i.e., having common causation) or not. In the multisensory trial blocks, subjects also reported visual or inertial heading direction (3 trial blocks for each). For aligned visual-inertial headings, the rate of common causation was higher during alignment in cardinal than noncardinal directions. When visual and inertial stimuli were separated by 30°, the rate of reported common causation remained >50%, but it decreased to 15% or less for separations of ≥90°. The inertial heading was biased toward the visual heading by 11-20° for separations of 30-120°. Thus there was sensory integration even in conditions without reported common causation. The visual heading was minimally influenced by inertial direction. When trials with common causation perception were compared with those without, inertial heading perception had a stronger bias toward the visual stimulus direction.
NEW & NOTEWORTHY: Optic flow ambiguously represents self-motion or environmental motion. When these are in different directions, it is uncertain whether they are integrated into a common perception or not. This study addresses that issue by determining whether the two modalities are perceived as consistent and by measuring their perceived directions to obtain a degree of influence. The visual stimulus can have a significant influence on the inertial stimulus even when the two are perceived as inconsistent.
Affiliation(s)
- Raul Rodriguez: Department of Biomedical Engineering, University of Rochester, Rochester, New York
- Benjamin T Crane: Department of Biomedical Engineering, University of Rochester, Rochester, New York; Department of Otolaryngology, University of Rochester, Rochester, New York; Department of Neuroscience, University of Rochester, Rochester, New York
38
Hou H, Zheng Q, Zhao Y, Pouget A, Gu Y. Neural Correlates of Optimal Multisensory Decision Making under Time-Varying Reliabilities with an Invariant Linear Probabilistic Population Code. Neuron 2019; 104:1010-1021.e10. [DOI: 10.1016/j.neuron.2019.08.038] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2019] [Revised: 07/21/2019] [Accepted: 08/22/2019] [Indexed: 12/27/2022]
39
A model of how depth facilitates scene-relative object motion perception. PLoS Comput Biol 2019; 15:e1007397. [PMID: 31725723 PMCID: PMC6879150 DOI: 10.1371/journal.pcbi.1007397] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 11/26/2019] [Accepted: 09/12/2019] [Indexed: 12/02/2022] Open
Abstract
Many everyday interactions with moving objects benefit from an accurate perception of their movement. Self-motion, however, complicates object motion perception because it generates a global pattern of motion on the observer's retina and radically influences an object's retinal motion. There is strong evidence that the brain compensates by suppressing the retinal motion due to self-motion; however, this requires estimates of depth relative to the object, because otherwise the appropriate self-motion component to remove cannot be determined. The underlying neural mechanisms are unknown, but neurons in brain areas MT and MST may contribute given their sensitivity to motion parallax and depth through joint direction, speed, and disparity tuning. We developed a neural model to investigate whether cells in areas MT and MST with well-established neurophysiological properties can account for human object motion judgments during self-motion. We tested the model by comparing simulated object motion signals to human object motion judgments in environments with monocular, binocular, and ambiguous depth. Our simulations show how precise depth information, such as that from binocular disparity, may improve estimates of the retinal motion pattern due to self-motion through increased selectivity among units that respond to the global self-motion pattern. The enhanced self-motion estimates emerged from recurrent feedback connections in MST and allowed the model to better suppress the appropriate direction, speed, and disparity signals from the object's retinal motion, improving the accuracy of the object's movement direction represented by motion signals. Research has shown that the accuracy with which humans perceive object motion during self-motion improves in the presence of stereo cues. Using a neural modelling approach, we explore whether this finding can be explained through improved estimation of the retinal motion induced by self-motion. Our results show that depth cues that provide information about scene structure may have a large effect on the specificity with which the neural mechanisms for motion perception represent the visual self-motion signal. This in turn enables effective removal of the retinal motion due to self-motion when the goal is to perceive object motion relative to the stationary world. These results reveal a hitherto unknown critical function of stereo tuning in the MT-MST complex, and shed important light on how the brain may recruit signals from upstream and downstream brain areas to simultaneously perceive self-motion and object motion.
40
Mackrous I, Carriot J, Simoneau M. Learning to use vestibular sense for spatial updating is context dependent. Sci Rep 2019; 9:11154. [PMID: 31371770 PMCID: PMC6671975 DOI: 10.1038/s41598-019-47675-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 07/22/2019] [Indexed: 11/09/2022] Open
Abstract
As we move, perceptual stability is crucial to successfully interact with our environment. Notably, the brain must update the locations of objects in space using extra-retinal signals. The vestibular system is a strong candidate as a source of information for spatial updating as it senses head motion. The ability to use this cue is not innate but must be learned. To date, the mechanisms of vestibular spatial updating generalization are unknown or at least controversial. In this paper we examine generalization patterns within and between different conditions of vestibular spatial updating. Participants were asked to update the position of a remembered target following (offline) or during (online) passive body rotation. After being trained on a single spatial target position within a given task, we tested generalization of performance for different spatial targets and an unpracticed spatial updating task. The results demonstrated different patterns of generalization across the workspace depending on the task. Further, no transfer was observed from the practiced to the unpracticed task. We found that the type of mechanism involved during learning governs generalization. These findings provide new knowledge about how the brain uses vestibular information to preserve its spatial updating ability.
Affiliation(s)
- Jérôme Carriot: Department of Physiology, McGill University, Montreal, QC, Canada
- Martin Simoneau: Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada; Département de kinésiologie, Faculté de médecine, Université Laval, Québec, QC, Canada
41
de Winkel KN, Kurtz M, Bülthoff HH. Effects of visual stimulus characteristics and individual differences in heading estimation. J Vis 2019; 18:9. [PMID: 30347100 DOI: 10.1167/18.11.9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual heading estimation is subject to periodic patterns of constant (bias) and variable (noise) error. The nature of the errors, however, appears to differ between studies, showing underestimation in some, but overestimation in others. We investigated whether field of view (FOV), the availability of binocular disparity cues, motion profile, and visual scene layout can account for error characteristics, with a potential mediating effect of vection. Twenty participants (12 females) reported heading and rated vection for visual horizontal motion stimuli with headings spanning the full circle, while we systematically varied the above factors. Overall, the results show constant errors away from the fore-aft axis. Error magnitude was affected by FOV, disparity, and scene layout. Variable errors varied with heading angle, and depended on scene layout. Higher vection ratings were associated with smaller variable errors. Vection ratings depended on FOV, motion profile, and scene layout, with the highest ratings for a large FOV, a cosine-bell velocity profile, and a ground plane scene rather than a dot cloud scene. Although the factors did affect error magnitude, differences in its direction were observed only between participants. We show that the observations are consistent with prior beliefs that headings align with the cardinal axes, where the attraction of each axis is an idiosyncratic property.
Affiliation(s)
- Ksander N de Winkel: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Max Kurtz: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Human Factors and Engineering Psychology, University of Twente, Enschede, The Netherlands
- Heinrich H Bülthoff: Department of Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
42
Gu Y. Vestibular signals in primate cortex for self-motion perception. Curr Opin Neurobiol 2018; 52:10-17. [DOI: 10.1016/j.conb.2018.04.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Revised: 03/12/2018] [Accepted: 04/07/2018] [Indexed: 10/17/2022]
43
Yu X, Gu Y. Probing Sensory Readout via Combined Choice-Correlation Measures and Microstimulation Perturbation. Neuron 2018; 100:715-727.e5. [PMID: 30244884 DOI: 10.1016/j.neuron.2018.08.034] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2017] [Revised: 01/19/2018] [Accepted: 08/22/2018] [Indexed: 12/18/2022]
Abstract
It is controversial whether covariation between neuronal activity and perceptual choice (i.e., choice correlation) reflects the functional readout of sensory signals. Here, we combined choice-correlation measures and electrical microstimulation on a site-to-site basis in the medial superior temporal area (MST), middle temporal area (MT), and ventral intraparietal area (VIP) when macaques discriminated between motion directions in both fine and coarse tasks. Microstimulation generated comparable effects between tasks but heterogeneous effects across and within brain regions. Within the MST and MT, microstimulation significantly biased an animal's choice toward the sensory preference instead of choice-related signals of the stimulated units. This was particularly evident for sites with conflict preference of sensory and choice-related signals. In the VIP, microstimulation failed to produce significant effects in either task despite strong choice correlations presented in this area. Our results suggest that sensory readout may not be inferred from choice-related signals during perceptual decision-making tasks.
Affiliation(s)
- Xuefei Yu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yong Gu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
44
Hinterecker T, Pretto P, de Winkel KN, Karnath HO, Bülthoff HH, Meilinger T. Body-relative horizontal-vertical anisotropy in human representations of traveled distances. Exp Brain Res 2018; 236:2811-2827. [PMID: 30030590 PMCID: PMC6153888 DOI: 10.1007/s00221-018-5337-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2018] [Accepted: 07/17/2018] [Indexed: 01/14/2023]
Abstract
A growing number of studies investigated anisotropies in representations of horizontal and vertical spaces. In humans, compelling evidence for such anisotropies exists for representations of multi-floor buildings. In contrast, evidence regarding open spaces is indecisive. Our study aimed at further enhancing the understanding of horizontal and vertical spatial representations in open spaces utilizing a simple traveled distance estimation paradigm. Blindfolded participants were moved along various directions in the sagittal plane. Subsequently, participants passively reproduced the traveled distance from memory. Participants performed this task in an upright and in a 30° backward-pitch orientation. The accuracy of distance estimates in the upright orientation showed a horizontal–vertical anisotropy, with higher accuracy along the horizontal axis compared with the vertical axis. The backward-pitch orientation enabled us to investigate whether this anisotropy was body or earth-centered. The accuracy patterns of the upright condition were positively correlated with the body-relative (not the earth-relative) coordinate mapping of the backward-pitch condition, suggesting a body-centered anisotropy. Overall, this is consistent with findings on motion perception. It suggests that the distance estimation sub-process of path integration is subject to horizontal–vertical anisotropy. Based on the previous studies that showed isotropy in open spaces, we speculate that real physical self-movements or categorical versus isometric encoding are crucial factors for (an)isotropies in spatial representations.
Affiliation(s)
- Thomas Hinterecker: Max-Planck-Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany; Graduate Training Centre of Neuroscience, Tübingen University, Tübingen, Germany
- Paolo Pretto: Max-Planck-Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
- Ksander N de Winkel: Max-Planck-Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
- Hans-Otto Karnath: Division of Neuropsychology, Center of Neurology, Tübingen University, Tübingen, Germany
- Heinrich H Bülthoff: Max-Planck-Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
- Tobias Meilinger: Max-Planck-Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076, Tübingen, Germany
45
Acerbi L, Dokka K, Angelaki DE, Ma WJ. Bayesian comparison of explicit and implicit causal inference strategies in multisensory heading perception. PLoS Comput Biol 2018; 14:e1006110. [PMID: 30052625 PMCID: PMC6063401 DOI: 10.1371/journal.pcbi.1006110] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2017] [Accepted: 03/28/2018] [Indexed: 11/18/2022] Open
Abstract
The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers' performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
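For orientation, the sketch below writes out the generic Bayesian causal-inference computation that this class of models builds on: the posterior probability that two noisy heading measurements share a common cause, given Gaussian measurement noise and a Gaussian prior over headings. It is a textbook-style toy with hypothetical numbers, not the authors' full framework (which adds alternative observer assumptions and formal model comparison):

import numpy as np
from scipy.stats import norm

def p_common(x_vis, x_vest, sigma_vis, sigma_vest, sigma_prior, prior_common=0.5):
    """Posterior probability of a common cause for two heading measurements."""
    var_v, var_b, var_p = sigma_vis**2, sigma_vest**2, sigma_prior**2
    # C = 1: one shared heading generated both measurements.
    fused_mean = (x_vis * var_b + x_vest * var_v) / (var_v + var_b)
    like_c1 = (norm.pdf(x_vis - x_vest, 0.0, np.sqrt(var_v + var_b)) *
               norm.pdf(fused_mean, 0.0,
                        np.sqrt(var_v * var_b / (var_v + var_b) + var_p)))
    # C = 2: two independent headings generated the measurements.
    like_c2 = (norm.pdf(x_vis, 0.0, np.sqrt(var_v + var_p)) *
               norm.pdf(x_vest, 0.0, np.sqrt(var_b + var_p)))
    return like_c1 * prior_common / (like_c1 * prior_common +
                                     like_c2 * (1.0 - prior_common))

# Hypothetical trial: visual and vestibular measurements 3 degrees apart.
print(p_common(x_vis=5.0, x_vest=2.0, sigma_vis=2.0, sigma_vest=4.0, sigma_prior=15.0))

A full observer then mixes the fused and segregated heading estimates according to this posterior (model averaging) or picks one (model selection), which is the kind of strategy difference the paper's model comparison is designed to distinguish.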
Affiliation(s)
- Luigi Acerbi: Center for Neural Science, New York University, New York, NY, United States of America
- Kalpana Dokka: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Dora E. Angelaki: Department of Neuroscience, Baylor College of Medicine, Houston, TX, United States of America
- Wei Ji Ma: Center for Neural Science, New York University, New York, NY, United States of America; Department of Psychology, New York University, New York, NY, United States of America
46
Effect of vibration during visual-inertial integration on human heading perception during eccentric gaze. PLoS One 2018; 13:e0199097. [PMID: 29902253 PMCID: PMC6002115 DOI: 10.1371/journal.pone.0199097] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Accepted: 05/31/2018] [Indexed: 11/21/2022] Open
Abstract
Heading direction is determined from visual and inertial cues. Visual headings use retinal coordinates while inertial headings use body coordinates. Thus, during eccentric gaze, the same heading may be perceived differently by the visual and inertial modalities. Stimulus weights depend on the relative reliability of these stimuli, but previous work suggests that the inertial heading may be given more weight than predicted. Those experiments varied only the visual stimulus reliability, and it is unclear what occurs with variation in inertial reliability. Five human subjects completed a heading discrimination task using 2 s of translation with a peak velocity of 16 cm/s. Eye position was ±25° left/right with visual, inertial, or combined motion. The visual motion coherence was 50%. Inertial stimuli included 6 Hz vertical vibration with 0, 0.10, 0.15, or 0.20 cm amplitude. Subjects reported perceived heading relative to the midline. With an inertial heading, perception was biased 3.6° towards the gaze direction. Visual headings biased perception 9.6° opposite gaze. The inertial threshold without vibration was 4.8°, which increased significantly to 8.8° with vibration, but the amplitude of vibration did not influence reliability. With visual-inertial headings, empirical stimulus weights were calculated from the bias and compared with the optimal weight calculated from the threshold. In 2 subjects, empirical weights were near optimal, while in the remaining 3 subjects the inertial stimuli were weighted more heavily than optimal predictions. On average, the inertial stimulus was weighted more heavily than predicted. These results indicate that multisensory integration may not be a function of stimulus reliability when inertial stimulus reliability is varied.
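The weight comparison described here reduces to two simple formulas; the sketch below restates them with invented numbers (they are not the study's data):

import numpy as np

# Optimal visual weight predicted from hypothetical single-cue thresholds (deg).
sigma_visual, sigma_inertial = 5.0, 9.0
w_visual_optimal = sigma_inertial**2 / (sigma_visual**2 + sigma_inertial**2)

# Empirical visual weight recovered from the bias of the combined percept:
# perceived = w*visual + (1 - w)*inertial, so bias toward visual = w * offset.
offset = 30.0                 # visual-inertial separation (deg)
bias_toward_visual = 8.0      # observed bias (deg), hypothetical
w_visual_empirical = bias_toward_visual / offset

print(w_visual_optimal, w_visual_empirical)
# w_visual_empirical < w_visual_optimal means the inertial cue is weighted
# more heavily than the reliability-based prediction, as reported on average.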
47
Noel JP, Blanke O, Serino A. From multisensory integration in peripersonal space to bodily self-consciousness: from statistical regularities to statistical inference. Ann N Y Acad Sci 2018; 1426:146-165. [PMID: 29876922 DOI: 10.1111/nyas.13867] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2017] [Revised: 04/24/2018] [Accepted: 05/02/2018] [Indexed: 01/09/2023]
Abstract
Integrating information across sensory systems is a critical step toward building a cohesive representation of the environment and one's body, and as illustrated by numerous illusions, scaffolds subjective experience of the world and self. In recent years, classic principles of multisensory integration elucidated in the subcortex have been translated into the language of statistical inference understood by the neocortical mantle. Most importantly, a mechanistic systems-level description of multisensory computations via probabilistic population coding and divisive normalization is actively being put forward. In parallel, by describing and understanding bodily illusions, researchers have suggested multisensory integration of bodily inputs within the peripersonal space as a key mechanism in bodily self-consciousness. Importantly, certain aspects of bodily self-consciousness, although still a minority, have recently been cast in the light of modern computational understandings of multisensory integration. In doing so, we argue, the field of bodily self-consciousness may borrow mechanistic descriptions regarding the neural implementation of inference computations outlined by the multisensory field. This computational approach, grounded in the broader understanding of multisensory processes, promises to advance scientific comprehension regarding one of the most mysterious questions puzzling humankind, that is, how our brain creates the experience of a self in interaction with the environment.
Affiliation(s)
- Jean-Paul Noel: Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee
- Olaf Blanke: Laboratory of Cognitive Neuroscience (LNCO), Center for Neuroprosthetics (CNP), Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland; Department of Neurology, University of Geneva, Geneva, Switzerland
- Andrea Serino: MySpace Lab, Department of Clinical Neuroscience, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Lausanne, Switzerland
48
Yu X, Hou H, Spillmann L, Gu Y. Causal Evidence of Motion Signals in Macaque Middle Temporal Area Weighted-Pooled for Global Heading Perception. Cereb Cortex 2018; 28:612-624. [PMID: 28057722 DOI: 10.1093/cercor/bhw402] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 12/13/2016] [Indexed: 11/14/2022] Open
Abstract
Accurate heading perception relies on visual information integrated across a wide field, that is, optic flow. Numerous computational studies have speculated how local visual information might be pooled by the brain to compute heading, but these hypotheses lack direct neurophysiological support. In the current study, we instructed human and monkey subjects to judge heading directions based on global optic flow. We showed that a local perturbation cue applied within only a small part of the visual field could bias the subjects' heading judgments, and shift the neuronal tuning in the macaque middle temporal (MT) area at the same time. Electrical microstimulation in MT significantly biased the animals' heading judgments predictable from the tuning of the stimulated neurons. Masking the visual stimuli within these neurons' receptive fields could not remove the stimulation effect, indicating a sufficient role of the MT signals pooled by downstream neurons for global heading estimation. Interestingly, this pooling is not homogeneous because stimulating neurons with excitatory surrounds produced relatively larger effects than stimulating neurons with inhibitory surrounds. Thus our data not only provide direct causal evidence, but also new insights into the neural mechanisms of pooling local motion information for global heading estimation.
Affiliation(s)
- Xuefei Yu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Han Hou: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lothar Spillmann: On leave of absence from Department of Neurology, University of Freiburg, Freiburg 79110, Germany
- Yong Gu: Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
49
Cortical Neural Activity Predicts Sensory Acuity Under Optogenetic Manipulation. J Neurosci 2018; 38:2094-2105. [PMID: 29367406 DOI: 10.1523/jneurosci.2457-17.2017] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2017] [Revised: 11/14/2017] [Accepted: 12/15/2017] [Indexed: 11/21/2022] Open
Abstract
Excitatory and inhibitory neurons in the mammalian sensory cortex form interconnected circuits that control cortical stimulus selectivity and sensory acuity. Theoretical studies have predicted that suppression of inhibition in such excitatory-inhibitory networks can lead to either an increase or, paradoxically, a decrease in excitatory neuronal firing, with consequent effects on stimulus selectivity. We tested whether modulation of inhibition or excitation in the auditory cortex of male mice could evoke such a variety of effects in tone-evoked responses and in behavioral frequency discrimination acuity. We found that, indeed, the effects of optogenetic manipulation on stimulus selectivity and behavior varied in both magnitude and sign across subjects, possibly reflecting differences in circuitry or expression of optogenetic factors. Changes in neural population responses consistently predicted behavioral changes for individual subjects, including both improvement and impairment in acuity. This correlation between cortical and behavioral change demonstrates that, despite the complex and varied effects that these manipulations can have on neuronal dynamics, the resulting changes in cortical activity account for accompanying changes in behavioral acuity. SIGNIFICANCE STATEMENT: Excitatory and inhibitory interactions determine stimulus specificity and tuning in sensory cortex, thereby controlling perceptual discrimination acuity. Modeling has predicted that suppressing the activity of inhibitory neurons can lead to increased or, paradoxically, decreased excitatory activity depending on the architecture of the network. Here, we capitalized on differences between subjects to test whether suppressing/activating inhibition and excitation can in fact exhibit such paradoxical effects for both stimulus sensitivity and behavioral discriminability. Indeed, the same optogenetic manipulation in the auditory cortex of different mice could improve or impair frequency discrimination acuity, predictable from the effects on cortical responses to tones. The same manipulations sometimes produced opposite changes in the behavior of different individuals, supporting theoretical predictions for inhibition-stabilized networks.
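As a rough illustration of the inhibition-stabilized-network (ISN) prediction the abstract alludes to, here is a minimal two-population linear rate-model sketch; the weights and inputs are arbitrary illustrative values, not parameters estimated in this study. In the ISN regime (recurrent excitation strong enough that inhibition is required for stability), extra external drive to the inhibitory population paradoxically lowers inhibitory firing at steady state, whereas the sign of that change flips in the non-ISN regime.

```python
import numpy as np

# Minimal two-population (E, I) linear rate model. Steady-state rates solve
# (I - W) r = h, with W the recurrent weight matrix and h the external drive.
def steady_state(w_ee, w_ei, w_ie, w_ii, h_e, h_i):
    W = np.array([[w_ee, -w_ei],
                  [w_ie, -w_ii]])
    A = np.eye(2) - W
    return np.linalg.solve(A, np.array([h_e, h_i]))

# Non-ISN regime: weak recurrent excitation (w_ee < 1).
# ISN regime: strong recurrent excitation (w_ee > 1), stabilized by inhibition.
for label, w_ee in [("non-ISN (w_ee=0.5)", 0.5), ("ISN (w_ee=1.5)", 1.5)]:
    r0 = steady_state(w_ee, 1.0, 1.0, 0.5, h_e=1.0, h_i=1.0)
    r1 = steady_state(w_ee, 1.0, 1.0, 0.5, h_e=1.0, h_i=1.2)  # extra drive to I
    d_e, d_i = r1 - r0
    print(f"{label}: extra inhibitory drive changes rE by {d_e:+.3f}, rI by {d_i:+.3f}")
```

With these illustrative weights, the inhibitory rate rises with extra inhibitory drive in the non-ISN case but falls in the ISN case, which is the standard paradoxical signature; the model is far simpler than the circuits manipulated in the paper and is included only to make the concept concrete.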
Collapse
|
50
|
Dissociation of Self-Motion and Object Motion by Linear Population Decoding That Approximates Marginalization. J Neurosci 2017; 37:11204-11219. [PMID: 29030435 DOI: 10.1523/jneurosci.1177-17.2017] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2017] [Revised: 10/02/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
We use visual image motion to judge the movement of objects, as well as our own movements through the environment. Generally, image motion components caused by object motion and self-motion are confounded in the retinal image. Thus, to estimate heading, the brain would ideally marginalize out the effects of object motion (or vice versa), but little is known about how this is accomplished neurally. Behavioral studies suggest that vestibular signals play a role in dissociating object motion and self-motion, and recent computational work suggests that a linear decoder can approximate marginalization by taking advantage of diverse multisensory representations. By measuring responses of MSTd neurons in two male rhesus monkeys and by applying a recently developed method to approximate marginalization by linear population decoding, we tested the hypothesis that vestibular signals help to dissociate self-motion and object motion. We show that vestibular signals stabilize tuning for heading in neurons with congruent visual and vestibular heading preferences, whereas they stabilize tuning for object motion in neurons with discrepant preferences. Thus, vestibular signals enhance the separability of joint tuning for object motion and self-motion. We further show that a linear decoder, designed to approximate marginalization, allows the population to represent either self-motion or object motion with good accuracy. Decoder weights are broadly consistent with a readout strategy, suggested by recent computational work, in which responses are decoded according to the vestibular preferences of multisensory neurons. These results demonstrate, at both single neuron and population levels, that vestibular signals help to dissociate self-motion and object motion. SIGNIFICANCE STATEMENT: The brain often needs to estimate one property of a changing environment while ignoring others. This can be difficult because multiple properties of the environment may be confounded in sensory signals. The brain can solve this problem by marginalizing over irrelevant properties to estimate the property-of-interest. We explore this problem in the context of self-motion and object motion, which are inherently confounded in the retinal image. We examine how diversity in a population of multisensory neurons may be exploited to decode self-motion and object motion from the population activity of neurons in macaque area MSTd.
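To make the decoding idea concrete, the following is a minimal, hypothetical numpy sketch of a linear readout that approximately marginalizes: a simulated population with mixed tuning for heading and object direction is fit with a least-squares linear decoder for heading, and the resulting estimates stay roughly constant as object direction varies. The tuning curves, gains, and noise level are invented for illustration and are not the paper's model of MSTd responses or its decoding procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population with joint (heading, object-direction) tuning.
n_neurons = 200
headings = np.linspace(-90, 90, 37)          # deg
obj_dirs = np.linspace(-90, 90, 37)          # deg
pref_head = rng.uniform(-90, 90, n_neurons)
pref_obj = rng.uniform(-90, 90, n_neurons)
gain_head = rng.uniform(0.5, 1.5, n_neurons)
gain_obj = rng.uniform(0.2, 1.0, n_neurons)  # object motion also modulates responses

def responses(h, o):
    """Noisy tuned responses to a (heading h, object direction o) combination."""
    r = (gain_head * np.exp(-0.5 * ((h - pref_head) / 30.0) ** 2)
         + gain_obj * np.exp(-0.5 * ((o - pref_obj) / 30.0) ** 2))
    return r + 0.05 * rng.standard_normal(n_neurons)

# Training set over all heading x object-direction combinations.
X, y = [], []
for h in headings:
    for o in obj_dirs:
        X.append(responses(h, o))
        y.append(h)                           # decode heading, marginalizing over o
X, y = np.array(X), np.array(y)

# Linear readout (least squares with an offset term).
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Test: heading estimates should track heading and stay flat across object directions.
test_h, test_o = 30.0, np.array([-60.0, 0.0, 60.0])
est = [np.hstack([responses(test_h, o), 1.0]) @ w for o in test_o]
print("heading estimates at h=30 deg, varying object motion:", np.round(est, 1))
```

The point of the sketch is only that a fixed linear combination of diverse, mixed-selectivity responses can read out one variable while remaining approximately invariant to another; the paper's decoder and neural data are considerably richer than this toy example.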
Collapse
|