1
Gorko B, Siwanowicz I, Close K, Christoforou C, Hibbard KL, Kabra M, Lee A, Park JY, Li SY, Chen AB, Namiki S, Chen C, Tuthill JC, Bock DD, Rouault H, Branson K, Ihrke G, Huston SJ. Motor neurons generate pose-targeted movements via proprioceptive sculpting. Nature 2024; 628:596-603. PMID: 38509371. DOI: 10.1038/s41586-024-07222-5.
Abstract
Motor neurons are the final common pathway1 through which the brain controls movement of the body, forming the basic elements from which all movement is composed. Yet how a single motor neuron contributes to control during natural movement remains unclear. Here we anatomically and functionally characterize the individual roles of the motor neurons that control head movement in the fly, Drosophila melanogaster. Counterintuitively, we find that activity in a single motor neuron rotates the head in different directions, depending on the starting posture of the head, such that the head converges towards a pose determined by the identity of the stimulated motor neuron. A feedback model predicts that this convergent behaviour results from motor neuron drive interacting with proprioceptive feedback. We identify and genetically2 suppress a single class of proprioceptive neuron3 that changes the motor neuron-induced convergence as predicted by the feedback model. These data suggest a framework for how the brain controls movements: instead of directly generating movement in a given direction by activating a fixed set of motor neurons, the brain controls movements by adding bias to a continuing proprioceptive-motor loop.
Affiliation(s)
- Benjamin Gorko
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Molecular, Cellular and Developmental Biology, University of California, Santa Barbara, CA, USA
- Igor Siwanowicz
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Kari Close
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Karen L Hibbard
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Mayank Kabra
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Allen Lee
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Jin-Yong Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Si Ying Li
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- The Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Alex B Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Program in Neuroscience, Harvard Medical School, Boston, MA, USA
- Shigehiro Namiki
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Research Center for Advanced Science and Technology, University of Tokyo, Tokyo, Japan
- Chenghao Chen
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- John C Tuthill
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Davi D Bock
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Neurological Sciences, University of Vermont, Burlington, VT, USA
- Hervé Rouault
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Turing Centre for Living Systems, Aix-Marseille University, Université de Toulon, CNRS, CPT (UMR 7332), Marseille, France
- Kristin Branson
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Gudrun Ihrke
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Stephen J Huston
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
2
Garau C, Hayes J, Chiacchierini G, McCutcheon JE, Apergis-Schoute J. Involvement of A13 dopaminergic neurons in prehensile movements but not reward in the rat. Curr Biol 2023; 33:4786-4797.e4. PMID: 37816347. DOI: 10.1016/j.cub.2023.09.044.
Abstract
Tyrosine hydroxylase (TH)-containing neurons of the dopamine (DA) cell group A13 are well positioned to impact known DA-related functions, as their descending projections innervate target regions that regulate vigilance, sensory integration, and motor execution. Despite this connectivity, little is known regarding the functionality of A13-DA circuits. Using TH-specific loss-of-function methodology and techniques to monitor population activity in transgenic rats in vivo, we investigated the contribution of A13-DA neurons to reward and movement-related actions. Our work demonstrates a role for A13-DA neurons in grasping and handling of objects but not reward. A13-DA neurons responded strongly when animals grabbed and manipulated food items, whereas their inactivation or degeneration prevented animals from successfully doing so, a deficit partially attributed to a reduction in grip strength. By contrast, there was no relation between A13-DA activity and food-seeking behavior when animals were tested on a reward-based task that did not include a reaching/grasping response. Motivation for food was unaffected, as goal-directed behavior for food items was in general intact following A13 neuronal inactivation/degeneration. An anatomical investigation confirmed that A13-DA neurons project to the superior colliculus (SC) and also demonstrated a novel A13-DA projection to the reticular formation (RF). These results establish a functional role for A13-DA neurons in prehensile actions that are uncoupled from the motivational factors contributing to the initiation of forelimb movements, and they help position A13-DA circuits within the functional framework of centrally located DA populations and their ability to coordinate movement.
Affiliation(s)
- Celia Garau
- Department of Neuroscience, Psychology & Behaviour, University of Leicester, University Road, Leicester LE1 9HN, UK
- Jessica Hayes
- Department of Neuroscience, Psychology & Behaviour, University of Leicester, University Road, Leicester LE1 9HN, UK
- Giulia Chiacchierini
- Department of Neuroscience, Psychology & Behaviour, University of Leicester, University Road, Leicester LE1 9HN, UK; Department of Physiology and Pharmacology, La Sapienza University of Rome, 00185 Rome, Italy; Laboratory of Neuropsychopharmacology, Santa Lucia Foundation, 00143 Rome, Italy
- James E McCutcheon
- Department of Neuroscience, Psychology & Behaviour, University of Leicester, University Road, Leicester LE1 9HN, UK; Department of Psychology, UiT The Arctic University of Norway, Huginbakken 32, 9037 Tromsø, Norway
- John Apergis-Schoute
- Department of Neuroscience, Psychology & Behaviour, University of Leicester, University Road, Leicester LE1 9HN, UK; Department of Biological and Experimental Psychology, Queen Mary University of London, London E1 4NS, UK
3
Schütz A, Bharmauria V, Yan X, Wang H, Bremmer F, Crawford JD. Integration of landmark and saccade target signals in macaque frontal cortex visual responses. Commun Biol 2023; 6:938. PMID: 37704829. PMCID: PMC10499799. DOI: 10.1038/s42003-023-05291-2.
Abstract
Visual landmarks influence spatial cognition and behavior, but their influence on visual codes for action is poorly understood. Here, we test landmark influence on the visual response to saccade targets recorded from 312 frontal and 256 supplementary eye field neurons in rhesus macaques. Visual response fields are characterized by recording neural responses to various target-landmark combinations, and then we test against several candidate spatial models. Overall, frontal/supplementary eye fields response fields preferentially code either saccade targets (40%/40%) or landmarks (30%/4.5%) in gaze fixation-centered coordinates, but most cells show multiplexed target-landmark coding within intermediate reference frames (between fixation-centered and landmark-centered). Further, these coding schemes interact: neurons with near-equal target and landmark coding show the biggest shift from fixation-centered toward landmark-centered target coding. These data show that landmark information is preserved and influences target coding in prefrontal visual responses, likely to stabilize movement goals in the presence of noisy egocentric signals.
Affiliation(s)
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- Vishal Bharmauria
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Xiaogang Yan
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Hongying Wang
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior - CMBB, Philipps-Universität Marburg, Marburg, Germany & Justus-Liebig-Universität Giessen, Giessen, Germany
- J Douglas Crawford
- York Centre for Vision Research and Vision: Science to Applications Program, York University, Toronto, Canada
- Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, Canada
4
Dombrovski M, Peek MY, Park JY, Vaccari A, Sumathipala M, Morrow C, Breads P, Zhao A, Kurmangaliyev YZ, Sanfilippo P, Rehan A, Polsky J, Alghailani S, Tenshaw E, Namiki S, Zipursky SL, Card GM. Synaptic gradients transform object location to action. Nature 2023; 613:534-542. PMID: 36599984. PMCID: PMC9849133. DOI: 10.1038/s41586-022-05562-8.
Abstract
To survive, animals must convert sensory information into appropriate behaviours1,2. Vision is a common sense for locating ethologically relevant stimuli and guiding motor responses3-5. How circuitry converts object location in retinal coordinates to movement direction in body coordinates remains largely unknown. Here we show through behaviour, physiology, anatomy and connectomics in Drosophila that visuomotor transformation occurs by conversion of topographic maps formed by the dendrites of feature-detecting visual projection neurons (VPNs)6,7 into synaptic weight gradients of VPN outputs onto central brain neurons. We demonstrate how this gradient motif transforms the anteroposterior location of a visual looming stimulus into the fly's directional escape. Specifically, we discover that two neurons postsynaptic to a looming-responsive VPN type promote opposite takeoff directions. Opposite synaptic weight gradients onto these neurons from looming VPNs in different visual field regions convert localized looming threats into correctly oriented escapes. For a second looming-responsive VPN type, we demonstrate graded responses along the dorsoventral axis. We show that this synaptic gradient motif generalizes across all 20 primary VPN cell types and most often arises without VPN axon topography. Synaptic gradients may thus be a general mechanism for conveying spatial features of sensory information into directed motor outputs.
Affiliation(s)
- Mark Dombrovski
- Department of Biological Chemistry, Howard Hughes Medical Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Martin Y Peek
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Jin-Yong Park
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Andrea Vaccari
- Department of Computer Science, Middlebury College, Middlebury, VT, USA
- Carmen Morrow
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Patrick Breads
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Arthur Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Yerbol Z Kurmangaliyev
- Department of Biological Chemistry, Howard Hughes Medical Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Piero Sanfilippo
- Department of Biological Chemistry, Howard Hughes Medical Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Aadil Rehan
- Department of Biological Chemistry, Howard Hughes Medical Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Jason Polsky
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Shada Alghailani
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Emily Tenshaw
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Shigehiro Namiki
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Research Center for Advanced Science and Technology, University of Tokyo, Tokyo, Japan
- S Lawrence Zipursky
- Department of Biological Chemistry, Howard Hughes Medical Institute, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
- Gwyneth M Card
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Department of Neuroscience, Howard Hughes Medical Institute, The Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
5
Cruz KG, Leow YN, Le NM, Adam E, Huda R, Sur M. Cortical-subcortical interactions in goal-directed behavior. Physiol Rev 2023; 103:347-389. PMID: 35771984. PMCID: PMC9576171. DOI: 10.1152/physrev.00048.2021.
Abstract
Flexibly selecting appropriate actions in response to complex, ever-changing environments requires both cortical and subcortical regions, which are typically described as participating in a strict hierarchy. In this traditional view, highly specialized subcortical circuits allow for efficient responses to salient stimuli, at the cost of adaptability and context specificity, which are attributed to the neocortex. Their interactions are often described as the cortex providing top-down command signals for subcortical structures to implement; however, as available technologies develop, studies increasingly demonstrate that behavior is represented by brainwide activity and that even subcortical structures contain early signals of choice, suggesting that behavioral functions emerge as a result of different regions interacting as truly collaborative networks. In this review, we discuss the field's evolving understanding of how cortical and subcortical regions in placental mammals interact cooperatively, not only via top-down cortical-subcortical inputs but through bottom-up interactions, especially via the thalamus. We describe our current understanding of the circuitry of both the cortex and two exemplar subcortical structures, the superior colliculus and striatum, to identify which information is prioritized by which regions. We then describe the functional circuits these regions form with one another, and the thalamus, to create parallel loops and complex networks for brainwide information flow. Finally, we challenge the classic view that functional modules are contained within specific brain regions; instead, we propose that certain regions prioritize specific types of information over others, but the subnetworks they form, defined by their anatomical connections and functional dynamics, are the basis of true specialization.
Affiliation(s)
- K Guadalupe Cruz
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Yi Ning Leow
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nhat Minh Le
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Elie Adam
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Rafiq Huda
- W. M. Keck Center for Collaborative Neuroscience, Department of Cell Biology and Neuroscience, Rutgers University, Piscataway, New Jersey
- Mriganka Sur
- Picower Institute for Learning and Memory, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
6
Ventral premotor cortex encodes task relevant features during eye and head movements. Sci Rep 2022; 12:22093. PMID: 36543870. PMCID: PMC9772313. DOI: 10.1038/s41598-022-26479-2.
Abstract
Visual exploration of the environment is achieved through gaze shifts, or coordinated movements of the eyes and the head. The kinematics and contributions of each component can be decoupled to fit the context of the required behavior, such as redirecting the visual axis without moving the head or rotating the head without changing the line of sight. A neural controller of these effectors must therefore carry a code relating to multiple muscle groups, and it must also differentiate its code based on context. In this study we tested whether the ventral premotor cortex (PMv) in the monkey exhibits a population code relating to various features of eye and head movements. We constructed three different behavioral tasks, or contexts, each with four variables, to explore whether PMv modulates its activity in accordance with these factors. We found that the task-related population code in PMv differentiates between all task-related features, and we conclude that PMv carries information about task-relevant features during eye and head movements. Furthermore, this code represents both lower-level (effector and movement direction) and higher-level (context) information.
7
Head Orientation Influences Saccade Directions during Free Viewing. eNeuro 2022; 9:ENEURO.0273-22.2022. PMID: 36351820. PMCID: PMC9787809. DOI: 10.1523/eneuro.0273-22.2022.
Abstract
When looking around a visual scene, humans make saccadic eye movements to fixate objects of interest. While the extraocular muscles can execute saccades in any direction, not all saccade directions are equally likely: saccades in horizontal and vertical directions are most prevalent. Here, we asked whether head orientation plays a role in determining saccade direction biases. Study participants (n = 14) viewed natural scenes and abstract fractals (radially symmetric patterns) through a virtual reality headset equipped with eye tracking. Participants' heads were stabilized and tilted at -30°, 0°, or 30° while viewing the images, which could also be tilted by -30°, 0°, and 30° relative to the head. To determine whether the biases in saccade direction changed with head tilt, we calculated polar histograms of saccade directions and cross-correlated pairs of histograms to find the angular displacement resulting in the maximum correlation. During free viewing of fractals, saccade biases largely followed the orientation of the head with an average displacement value of 24° when comparing head upright to head tilt in world-referenced coordinates (t (13) = 17.63, p < 0.001). There was a systematic offset of 2.6° in saccade directions, likely reflecting ocular counter roll (OCR; t (13) = 3.13, p = 0.008). When participants viewed an Earth upright natural scene during head tilt, we found that the orientation of the head still influenced saccade directions (t (13) = 3.7, p = 0.001). These results suggest that nonvisual information about head orientation, such as that acquired by vestibular sensors, likely plays a role in saccade generation.
8
Abedi Khoozani P, Bharmauria V, Schütz A, Wildes RP, Crawford JD. Integration of allocentric and egocentric visual information in a convolutional/multilayer perceptron network model of goal-directed gaze shifts. Cereb Cortex Commun 2022; 3:tgac026. PMID: 35909704. PMCID: PMC9334293. DOI: 10.1093/texcom/tgac026.
Abstract
Allocentric (landmark-centered) and egocentric (eye-centered) visual codes are fundamental for spatial cognition, navigation, and goal-directed movement. Neuroimaging and neurophysiology suggest these codes are initially segregated, but then reintegrated in frontal cortex for movement control. We created and validated a theoretical framework for this process using physiologically constrained inputs and outputs. To implement a general framework, we integrated a convolutional neural network (CNN) of the visual system with a multilayer perceptron (MLP) model of the sensorimotor transformation. The network was trained on a task where a landmark shifted relative to the saccade target. These visual parameters were input to the CNN, the CNN output and initial gaze position to the MLP, and a decoder transformed MLP output into saccade vectors. Decoded saccade output replicated idealized training sets with various allocentric weightings and actual monkey data where the landmark shift had a partial influence (R2 = 0.8). Furthermore, MLP output units accurately simulated prefrontal response field shifts recorded from monkeys during the same paradigm. In summary, our model replicated both the general properties of the visuomotor transformations for gaze and specific experimental results obtained during allocentric–egocentric integration, suggesting it can provide a general framework for understanding these and other complex visuomotor behaviors.
Affiliation(s)
- Parisa Abedi Khoozani
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Vishal Bharmauria
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Adrian Schütz
- Department of Neurophysics, Philipps-Universität Marburg, Marburg 35037, Germany
- Richard P Wildes
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON M3J 1P3, Canada
- J Douglas Crawford
- Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, Ontario M3J 1P3, Canada
- Departments of Psychology, Biology and Kinesiology & Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
9
Gerb J, Brandt T, Dieterich M. Different strategies in pointing tasks and their impact on clinical bedside tests of spatial orientation. J Neurol 2022; 269:5738-5745. PMID: 35258851. PMCID: PMC9553832. DOI: 10.1007/s00415-022-11015-z.
Abstract
Deficits in spatial memory, orientation, and navigation are often early or neglected signs of degenerative and vestibular neurological disorders. A simple and reliable bedside test of these functions would be extremely relevant for diagnostic routine. Pointing at targets in the 3D environment is a basic, well-trained sensorimotor ability that provides a suitable measure. We here describe a smartphone-based pointing device that uses the built-in inertial sensors to analyze pointing performance in azimuth and polar spatial coordinates. Interpretation of the vectors measured in this way is not trivial, since the individuals tested may use at least two different strategies: first, they may perform the task in an egocentric, eye-based reference system by aligning the fingertip with the target retinotopically; or second, they may align the stretched arm and the index finger with the visual line of sight in allocentric, world-based coordinates, similar to aiming a rifle. The two strategies result in considerable differences in target coordinates. A pilot test of a further developed design of the device and an app for standardized bedside use in five healthy volunteers revealed an overall mean deviation of less than 5° between the measured and the true coordinates. Future investigations of neurological patients comparing their performance before and after changes in body position (chair rotation) may allow differentiation of distinct orientational deficits in peripheral (vestibulopathy) or central (hippocampal or cortical) disorders.
Affiliation(s)
- J Gerb
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- T Brandt
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Hertie Senior Professor for Clinical Neuroscience, Ludwig-Maximilians University, Munich, Germany
- M Dieterich
- Department of Neurology, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Graduate School of Systemic Neuroscience, Ludwig-Maximilians University, Munich, Germany
- German Center for Vertigo and Balance Disorders, University Hospital, Ludwig-Maximilians University, Marchioninistrasse 15, 81377, Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), Munich, Germany
10
Abstract
Blindsight is the residual visuo-motor ability without subjective awareness observed after lesions of the primary visual cortex (V1). Various visual functions are retained; however, instrumental visual associative learning remains to be investigated. Here we examined the secondary reinforcing properties of visual cues presented to the hemianopic field of macaque monkeys with unilateral V1 lesions. Our aim was to test the potential role of visual pathways bypassing V1 in reinforcing visual instrumental learning. When the monkeys learned the location of a hidden area in an oculomotor search task, conditioned visual cues presented to the lesion-affected hemifield operated as an effective secondary reinforcer. We noted that not only the hidden area location but also the vector of the saccade entering the target area was reinforced. Importantly, when the visual reinforcement signal was presented in the lesion-affected field, the monkeys continued searching, as opposed to stopping when the cue was presented in the intact field. This suggests the monkeys were less confident that the target location had been discovered when the reinforcement cue was presented in the affected field. These results indicate that the visual signals mediated by the residual visual pathways after V1 lesions can access fundamental reinforcement mechanisms, but with impaired visual awareness.
11
Caruso VC, Pages DS, Sommer MA, Groh JM. Compensating for a shifting world: evolving reference frames of visual and auditory signals across three multimodal brain areas. J Neurophysiol 2021; 126:82-94. PMID: 33852803. DOI: 10.1152/jn.00385.2020.
Abstract
Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e. not anchored uniquely to head- or eye-orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities during most epochs/brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time.NEW & NOTEWORTHY Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. 
We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus) visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
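The coordinate arithmetic these models posit can be sketched in a few lines (a minimal illustration with hypothetical angles, not the authors' analysis code): a head-centered auditory location is converted to eye-centered coordinates by subtracting current eye-in-head position, and a hybrid code can be modeled as a weighted mix of the two frames.

```python
def head_to_eye(target_head_az, eye_in_head_az):
    """Convert a head-centered azimuth (deg) to eye-centered coordinates."""
    return target_head_az - eye_in_head_az

def hybrid_code(target_head_az, eye_in_head_az, w_eye):
    """Weighted mix of frames: w_eye = 1 is fully eye-centered,
    w_eye = 0 fully head-centered; intermediate values are hybrid."""
    eye = head_to_eye(target_head_az, eye_in_head_az)
    return w_eye * eye + (1.0 - w_eye) * target_head_az

# A sound 20 deg right of the head midline, with the eyes fixating 10 deg
# right, lies 10 deg right of the fovea in eye-centered coordinates.
assert head_to_eye(20.0, 10.0) == 10.0
assert hybrid_code(20.0, 10.0, 0.5) == 15.0
```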
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Psychiatry, University of Michigan, Ann Arbor, Michigan
| | - Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina
| | - Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina; Department of Psychology and Neuroscience, Duke University, Durham, North Carolina; Department of Neurobiology, Duke University, Durham, North Carolina; Department of Biomedical Engineering, Duke University, Durham, North Carolina
| |
|
12
|
Spatiotemporal Coding in the Macaque Supplementary Eye Fields: Landmark Influence in the Target-to-Gaze Transformation. eNeuro 2021; 8:ENEURO.0446-20.2020. [PMID: 33318073 PMCID: PMC7877461 DOI: 10.1523/eneuro.0446-20.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Accepted: 11/24/2020] [Indexed: 11/21/2022] Open
Abstract
Eye-centered (egocentric) and landmark-centered (allocentric) visual signals influence spatial cognition, navigation, and goal-directed action, but the neural mechanisms that integrate these signals for motor control are poorly understood. A likely candidate for egocentric/allocentric integration in the gaze control system is the supplementary eye fields (SEF), a mediofrontal structure with high-level “executive” functions, spatially tuned visual/motor response fields, and reciprocal projections with the frontal eye fields (FEF). To test this hypothesis, we trained two head-unrestrained monkeys (Macaca mulatta) to saccade toward a remembered visual target in the presence of a visual landmark that shifted during the delay, causing gaze end points to shift partially in the same direction. A total of 256 SEF neurons were recorded, including 68 with spatially tuned response fields. Model fits to the latter established that, like the FEF and superior colliculus (SC), spatially tuned SEF responses primarily showed an egocentric (eye-centered) target-to-gaze position transformation. However, the landmark shift influenced this default egocentric transformation: during the delay, motor neurons (with no visual response) showed a transient but unintegrated shift (i.e., not correlated with the target-to-gaze transformation), whereas during the saccade-related burst visuomotor (VM) neurons showed an integrated shift (i.e., correlated with the target-to-gaze transformation). This differed from our simultaneous FEF recordings (Bharmauria et al., 2020), which showed a transient shift in VM neurons, followed by an integrated response in all motor responses. Based on these findings and past literature, we propose that prefrontal cortex incorporates landmark-centered information into a distributed, eye-centered target-to-gaze transformation through a reciprocal prefrontal circuit.
|
13
|
Abstract
To achieve visual space constancy, our brain remaps eye-centered projections of visual objects across saccades. Here, we measured saccade trajectory curvature following the presentation of visual, auditory, and audiovisual distractors in a double-step saccade task to investigate whether this stability mechanism also accounts for localized sounds. We found that saccade trajectories systematically curved away from the position at which either a light or a sound was presented, suggesting that both modalities are represented in eye-centered oculomotor centers. Importantly, the same effect was observed when the distractor preceded the execution of the first saccade. These results suggest that oculomotor centers keep track of visual, auditory and audiovisual objects by remapping their eye-centered representations across saccades. Furthermore, they argue for the existence of a supra-modal map that keeps track of multi-sensory object locations across our movements to create an impression of space constancy.
|
14
|
Sajad A, Sadeh M, Crawford JD. Spatiotemporal transformations for gaze control. Physiol Rep 2020; 8:e14533. [PMID: 32812395 PMCID: PMC7435051 DOI: 10.14814/phy2.14533] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/30/2020] [Accepted: 07/01/2020] [Indexed: 12/13/2022] Open
Abstract
Sensorimotor transformations require spatiotemporal coordination of signals, that is, through both time and space. For example, the gaze control system employs signals that are time-locked to various sensorimotor events, but the spatial content of these signals is difficult to assess during ordinary gaze shifts. In this review, we describe the various models and methods that have been devised to test this question, and their limitations. We then describe a new method that can (a) simultaneously test between all of these models during natural, head-unrestrained conditions, and (b) track the evolving spatial continuum from target (T) to future gaze coding (G, including errors) through time. We then summarize some applications of this technique, comparing spatiotemporal coding in the primate frontal eye field (FEF) and superior colliculus (SC). The results confirm that these areas preferentially encode eye-centered, effector-independent parameters, and show, for the first time in ordinary gaze shifts, a spatial transformation between visual and motor responses from T to G coding. We introduce a new set of spatial models (T-G continuum) that revealed task-dependent timing of this transformation: progressive during a memory delay between vision and action, and almost immediate without such a delay. We synthesize the results from our studies and supplement them with previous knowledge of anatomy and physiology to propose a conceptual model where cumulative transformation noise is realized as inaccuracies in gaze behavior. We conclude that the spatiotemporal transformation for gaze is both local (observed within and across neurons in a given area) and distributed (with common signals shared across remote but interconnected structures).
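The T-G continuum described above can be illustrated with a toy fit (a hedged sketch with made-up numbers, not the authors' model-fitting code): each intermediate spatial model is the point T + alpha*(G - T), and the alpha minimizing prediction error locates a response between pure target coding (alpha = 0) and pure gaze coding (alpha = 1), with gaze errors (G differing from T) making the two separable.

```python
def fit_tg_alpha(T, G, coded, steps=1001):
    """Grid-search the alpha in [0, 1] whose intermediate model
    T + alpha*(G - T) best predicts the coded positions (least squares)."""
    best, best_err = 0.0, float("inf")
    for i in range(steps):
        a = i / (steps - 1)
        err = sum((t + a * (g - t) - c) ** 2
                  for t, g, c in zip(T, G, coded))
        if err < best_err:
            best, best_err = a, err
    return best

# Hypothetical trials where the coded positions sit 30% of the way
# from target to final gaze:
T = [0.0, 10.0, -5.0]
G = [2.0, 14.0, -9.0]
coded = [t + 0.3 * (g - t) for t, g in zip(T, G)]
assert abs(fit_tg_alpha(T, G, coded) - 0.3) < 0.01
```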
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, York University, Toronto, ON, Canada
- Psychology Department, Vanderbilt University, Nashville, TN, USA
| | - Morteza Sadeh
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Neurosurgery, University of Illinois at Chicago, Chicago, IL, USA
| | - John Douglas Crawford
- Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications Program (VISTA), Neuroscience Graduate Diploma Program, Departments of Psychology, Biology, Kinesiology & Health Sciences, York University, Toronto, ON, Canada
| |
|
15
|
Timing Determines Tuning: A Rapid Spatial Transformation in Superior Colliculus Neurons during Reactive Gaze Shifts. eNeuro 2020; 7:ENEURO.0359-18.2019. [PMID: 31792117 PMCID: PMC6944480 DOI: 10.1523/eneuro.0359-18.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Revised: 10/12/2019] [Accepted: 10/14/2019] [Indexed: 11/21/2022] Open
Abstract
Gaze saccades, rapid shifts of the eyes and head toward a goal, have provided fundamental insights into the neural control of movement. For example, it has been shown that the superior colliculus (SC) transforms a visual target (T) code to future gaze (G) location commands after a memory delay. However, this transformation has not been observed in "reactive" saccades made directly to a stimulus, so its contribution to normal gaze behavior is unclear. Here, we tested this using a quantitative measure of the intermediate codes between T and G, based on variable errors in gaze endpoints. We demonstrate that a rapid spatial transformation occurs within the primate's SC (Macaca mulatta) during reactive saccades, involving a shift in coding from T, through intermediate codes, to G. This spatial shift progressed continuously both across and within cell populations [visual, visuomotor (VM), motor], rather than relaying discretely between populations with fixed spatial codes. These results suggest that the SC produces a rapid, noisy, and distributed transformation that contributes to variable errors in reactive gaze shifts.
|
16
|
Marma V, Bulatov A, Bulatova N. Dependence of the filled-space illusion on the size and location of contextual distractors. Acta Neurobiol Exp (Wars) 2020. [DOI: 10.21307/ane-2020-014] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
17
|
Arora HK, Bharmauria V, Yan X, Sun S, Wang H, Crawford JD. Eye-head-hand coordination during visually guided reaches in head-unrestrained macaques. J Neurophysiol 2019; 122:1946-1961. [PMID: 31533015 DOI: 10.1152/jn.00072.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Nonhuman primates have been used extensively to study eye-head coordination and eye-hand coordination, but the combination, eye-head-hand coordination, has not been studied. Our goal was to determine whether reaching influences eye-head coordination (and vice versa) in rhesus macaques. Eye, head, and hand motion were recorded in two animals with search coil and touch screen technology, respectively. Animals were seated in a customized "chair" that allowed unencumbered head motion and reaching in depth. In the reach condition, animals were trained to touch a central LED at waist level while maintaining central gaze and were then rewarded if they touched a target appearing at 1 of 15 locations in a 40° × 20° (visual angle) array. In other variants, initial hand or gaze position was varied in the horizontal plane. In similar control tasks, animals were rewarded for gaze accuracy in the absence of reach. In the Reach task, animals made eye-head gaze shifts toward the target followed by reaches that were accompanied by prolonged head motion toward the target. This resulted in significantly higher head velocities and amplitudes (and lower eye-in-head ranges) compared with the gaze control condition. Gaze shifts had shorter latencies and higher velocities and were more precise, despite the lack of gaze reward. Initial hand position did not influence gaze, but initial gaze position influenced reach latency. These results suggest that eye-head coordination is optimized for visually guided reach, first by quickly and accurately placing gaze at the target to guide reach transport and then by centering the eyes in the head, likely to improve depth vision as the hand approaches the target. NEW & NOTEWORTHY Eye-head and eye-hand coordination have been studied in nonhuman primates but not the combination of all three effectors. Here we examined the timing and kinematics of eye-head-hand coordination in rhesus macaques during a simple reach-to-touch task.
Our most novel finding was that (compared with hand-restrained gaze shifts) reaching produced prolonged, increased head rotation toward the target, tending to center the binocular field of view on the target/hand.
Affiliation(s)
- Harbandhan Kaur Arora
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada
| | - Vishal Bharmauria
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
| | - Xiaogang Yan
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
| | - Saihong Sun
- Centre for Vision Research, York University, Toronto, Ontario, Canada
| | - Hongying Wang
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada
| | - John Douglas Crawford
- Centre for Vision Research, York University, Toronto, Ontario, Canada; Vision: Science to Applications (VISTA), York University, Toronto, Ontario, Canada; Department of Biology, York University, Toronto, Ontario, Canada; Department of Psychology, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, York University, Toronto, Ontario, Canada
| |
|
18
|
Helmbrecht TO, dal Maschio M, Donovan JC, Koutsouli S, Baier H. Topography of a Visuomotor Transformation. Neuron 2018; 100:1429-1445.e4. [DOI: 10.1016/j.neuron.2018.10.021] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2018] [Revised: 08/31/2018] [Accepted: 10/09/2018] [Indexed: 01/07/2023]
|
19
|
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. The Influence of a Memory Delay on Spatial Coding in the Superior Colliculus: Is Visual Always Visual and Motor Always Motor? Front Neural Circuits 2018; 12:74. [PMID: 30405361 PMCID: PMC6204359 DOI: 10.3389/fncir.2018.00074] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2018] [Accepted: 08/29/2018] [Indexed: 11/13/2022] Open
Abstract
The memory-delay saccade task is often used to separate visual and motor responses in oculomotor structures such as the superior colliculus (SC), with the assumption that these same responses would sum with a short delay during immediate "reactive" saccades to visual stimuli. However, it is also possible that additional signals (suppression, delay) alter visual and/or motor response in the memory delay task. Here, we compared the spatiotemporal properties of visual and motor responses of the same SC neurons recorded during both the reactive and memory-delay tasks in two head-unrestrained monkeys. Comparing tasks, visual (aligned with target onset) and motor (aligned on saccade onset) responses were highly correlated across neurons, but the peak response of visual neurons and peak motor responses (of both visuomotor (VM) and motor neurons) were significantly higher in the reactive task. Receptive field organization was generally similar in both tasks. Spatial coding (along a Target-Gaze (TG) continuum) was also similar, with the exception that pure motor cells showed a stronger tendency to code future gaze location in the memory delay task, suggesting a more complete transformation. These results suggest that the introduction of a trained memory delay alters both the vigor and spatial coding of SC visual and motor responses, likely due to a combination of saccade suppression signals and greater signal noise accumulation during the delay in the memory delay task.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - Amirsaman Sajad
- York Centre for Vision Research, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
| | - Hongying Wang
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
| | - Xiaogang Yan
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
| | - John Douglas Crawford
- York Centre for Vision Research, York University, Toronto, ON, Canada
- Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, Canada
- York Neuroscience Graduate Diploma Program, York University, Toronto, ON, Canada
- Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada
- Departments of Psychology, Biology and Kinesiology and Health Science, York University, Toronto, ON, Canada
| |
|
20
|
Schut MJ, Van der Stoep N, Van der Stigchel S. Auditory spatial attention is encoded in a retinotopic reference frame across eye-movements. PLoS One 2018; 13:e0202414. [PMID: 30125311 PMCID: PMC6101386 DOI: 10.1371/journal.pone.0202414] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2018] [Accepted: 08/02/2018] [Indexed: 11/21/2022] Open
Abstract
The retinal location of visual information changes each time we move our eyes. Although it is now known that visual information is remapped in retinotopic coordinates across eye-movements (saccades), it is currently unclear how head-centered auditory information is remapped across saccades. Keeping track of the location of a sound source in retinotopic coordinates requires a rapid multi-modal reference frame transformation when making saccades. To reveal this reference frame transformation, we designed an experiment where participants attended an auditory or visual cue and executed a saccade. After the saccade had landed, an auditory or visual target could be presented either at the prior retinotopic location or at an uncued location. We observed that both auditory and visual targets presented at prior retinotopic locations elicited faster responses than targets at other locations. In a second experiment, we observed that spatial attention pointers obtained via audition are available in retinotopic coordinates immediately after an eye-movement is made. In a third experiment, we found evidence for an asymmetric cross-modal facilitation of information that is presented at the retinotopic location. In line with prior single cell recording studies, this study provides the first behavioral evidence for immediate auditory and cross-modal transsaccadic updating of spatial attention. These results indicate that our brain has efficient solutions for solving the challenges in localizing sensory input that arise in a dynamic context.
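The updating step this implies can be written down directly (a minimal sketch with hypothetical coordinates): keeping an attended location in retinotopic coordinates across a saccade amounts to subtracting the saccade vector from the pre-saccadic retinotopic location, regardless of whether the location was cued visually or auditorily.

```python
def remap(retinotopic, saccade):
    """Trans-saccadic update of a retinotopic (x, y) location, in degrees:
    subtract the saccade vector to stay anchored to the same world point."""
    return (retinotopic[0] - saccade[0], retinotopic[1] - saccade[1])

# A cue 8 deg right of fixation, followed by a 10-deg rightward saccade,
# ends up 2 deg left of the new fixation point.
assert remap((8.0, 0.0), (10.0, 0.0)) == (-2.0, 0.0)
```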
Affiliation(s)
- Martijn Jan Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| | - Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
| |
|
21
|
Wacker D, Ludwig M. The role of vasopressin in olfactory and visual processing. Cell Tissue Res 2018; 375:201-215. [PMID: 29951699 PMCID: PMC6335376 DOI: 10.1007/s00441-018-2867-1] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2018] [Accepted: 05/31/2018] [Indexed: 12/23/2022]
Abstract
Neural vasopressin is a potent modulator of behaviour in vertebrates. It acts at both sensory processing regions and within larger regulatory networks to mediate changes in social recognition, affiliation, aggression, communication and other social behaviours. There are multiple populations of vasopressin neurons within the brain, including groups in olfactory and visual processing regions. Some of these vasopressin neurons, such as those in the main and accessory olfactory bulbs, anterior olfactory nucleus, piriform cortex and retina, were recently identified using an enhanced green fluorescent protein-vasopressin (eGFP-VP) transgenic rat. Based on the interconnectivity of vasopressin-producing and sensitive brain areas and in consideration of autocrine, paracrine and neurohormone-like actions associated with somato-dendritic release, we discuss how these different neuronal populations may interact to impact behaviour.
Affiliation(s)
- Douglas Wacker
- School of STEM (Division of Biological Sciences), University of Washington Bothell, Bothell, WA, USA.
| | - Mike Ludwig
- Centre for Discovery Brain Sciences, University of Edinburgh, Edinburgh, UK; Centre for Neuroendocrinology, University of Pretoria, Pretoria, South Africa
| |
|
22
|
Wilson JJ, Alexandre N, Trentin C, Tripodi M. Three-Dimensional Representation of Motor Space in the Mouse Superior Colliculus. Curr Biol 2018; 28:1744-1755.e12. [PMID: 29779875 PMCID: PMC5988568 DOI: 10.1016/j.cub.2018.04.021] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2018] [Revised: 03/16/2018] [Accepted: 04/05/2018] [Indexed: 11/23/2022]
Abstract
From the act of exploring an environment to that of grasping a cup of tea, animals must put in register their motor acts with their surrounding space. In the motor domain, this is likely to be defined by a register of three-dimensional (3D) displacement vectors, whose recruitment allows motion in the direction of a target. One such spatially targeted action is seen in the head reorientation behavior of mice, yet the neural mechanisms underlying these 3D behaviors remain unknown. Here, by developing a head-mounted inertial sensor for studying 3D head rotations and combining it with electrophysiological recordings, we show that neurons in the mouse superior colliculus are either individually or conjunctively tuned to the three Eulerian components of head rotation. The average displacement vectors associated with motor-tuned colliculus neurons remain stable over time and are unaffected by changes in firing rate or the duration of spike trains. Finally, we show that the motor tuning of collicular neurons is largely independent from visual or landmark cues. By describing the 3D nature of motor tuning in the superior colliculus, we contribute to a long-standing debate on the dimensionality of collicular motor decoding; furthermore, by providing an experimental paradigm for the study of the metric of motor tuning in mice, this study also paves the way to the genetic dissection of the circuits underlying spatially targeted motion.
Highlights:
- Development of inertial sensor system for monitoring 3D head movements in real time
- Neurons in the superior colliculus code for the full dimensionality of head rotations
- Firing rate correlates with velocity, but not head displacement angle
- The spatial tuning of collicular units is largely independent of visual or landmark cues
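The three Eulerian components referred to above compose into a full 3D head rotation. The following sketch is illustrative only (it assumes a z-y-x yaw-pitch-roll convention, one of several possible orderings, and is not the authors' analysis code); it builds the three elemental rotation matrices, composes them, and checks the result on a pure yaw.

```python
import math

def rot_x(roll):
    c, s = math.cos(roll), math.sin(roll)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(pitch):
    c, s = math.cos(pitch), math.sin(pitch)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(yaw):
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def head_rotation(yaw, pitch, roll):
    """Compose the three Eulerian components (z-y-x order assumed here)."""
    return matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))

forward = [1.0, 0.0, 0.0]
turned = apply(head_rotation(math.pi / 2, 0.0, 0.0), forward)
# A pure 90-degree yaw maps the forward axis onto the y axis,
# up to floating-point error.
```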
|
23
|
Caruso VC, Pages DS, Sommer MA, Groh JM. Beyond the labeled line: variation in visual reference frames from intraparietal cortex to frontal eye fields and the superior colliculus. J Neurophysiol 2018; 119:1411-1421. [PMID: 29357464 PMCID: PMC5966730 DOI: 10.1152/jn.00584.2017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2017] [Revised: 12/16/2017] [Accepted: 12/18/2017] [Indexed: 11/22/2022] Open
Abstract
We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was assessed in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out. NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| | - Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
| | - Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
| |
|
24
|
Hoke KL, Hebets EA, Shizuka D. Neural Circuitry for Target Selection and Action Selection in Animal Behavior. Integr Comp Biol 2017; 57:808-819. [DOI: 10.1093/icb/icx109] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023] Open
|
25
|
Fracasso A, Koenraads Y, Porro GL, Dumoulin SO. Bilateral population receptive fields in congenital hemihydranencephaly. Ophthalmic Physiol Opt 2017; 36:324-34. [PMID: 27112226 DOI: 10.1111/opo.12294] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Accepted: 02/22/2016] [Indexed: 12/16/2022]
Abstract
PURPOSE Congenital hemihydranencephaly (HH) is a very rare disorder characterised by prenatal near-complete unilateral loss of the cerebral cortex. We investigated a patient affected by congenital right HH whose visual field extended significantly into both visual hemifields, suggesting a reorganisation of the remaining left visual hemisphere. We examined the early visual cortex reorganisation using functional MRI (7T) and population receptive field (pRF) modelling. METHODS Data were acquired by means of a 7T MRI while the patient affected by HH viewed conventional population receptive field mapping stimuli. Two possible pRF reorganisation schemes were evaluated: where every cortical location processed information from either (i) a single region of the visual field or (ii) from two bilateral regions of the visual field. RESULTS In the patient affected by HH, bilateral pRFs in single cortical locations of the remaining hemisphere were found. In addition, using this specific pRF reorganisation scheme, the biologically known relationship between pRF size and eccentricity was found. CONCLUSIONS Bilateral pRFs were found in the remaining left hemisphere of the patient affected by HH, indicating reorganisation of intra-cortical wiring of the early visual cortex and confirming brain plasticity and reorganisation after early cerebral damage in humans.
Affiliation(s)
- Alessio Fracasso
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Department of Radiology, Imaging Division, University Medical Centre, Utrecht, The Netherlands; Spinoza Centre for Neuroimaging, Amsterdam, The Netherlands
| | - Yvonne Koenraads
- Department of Ophthalmology, University Medical Centre Utrecht, Utrecht, The Netherlands
| | - Giorgio L Porro
- Department of Ophthalmology, University Medical Centre Utrecht, Utrecht, The Netherlands
| | - Serge O Dumoulin
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Spinoza Centre for Neuroimaging, Amsterdam, The Netherlands
| |
|
26
|
Chen Y, Crawford JD. Cortical Activation during Landmark-Centered vs. Gaze-Centered Memory of Saccade Targets in the Human: An FMRI Study. Front Syst Neurosci 2017; 11:44. [PMID: 28690501 PMCID: PMC5481872 DOI: 10.3389/fnsys.2017.00044] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2017] [Accepted: 06/06/2017] [Indexed: 11/13/2022] Open
Abstract
A remembered saccade target could be encoded in egocentric coordinates such as gaze-centered, or relative to some external allocentric landmark that is independent of the target or gaze (landmark-centered). In comparison to egocentric mechanisms, very little is known about such landmark-centered representations. Here, we used an event-related fMRI design to identify brain areas supporting these two types of spatial coding (landmark-centered vs. gaze-centered) for target memory during the Delay phase, where only target location, not saccade direction, was specified. The paradigm included three tasks with identical visual displays but different auditory instructions: Landmark Saccade (remember target location relative to a visual landmark, independent of gaze), Control Saccade (remember original target location relative to gaze fixation, independent of the landmark), and a non-spatial control, Color Report (report target color). During the Delay phase, the Control and Landmark Saccade tasks activated overlapping areas in posterior parietal cortex (PPC) and frontal cortex compared with the color control, but with higher activation in PPC for target coding in the Control Saccade task and higher activation in temporal and occipital cortex for target coding in the Landmark Saccade task. Gaze-centered directional selectivity was observed in superior and inferior occipital gyri, whereas landmark-centered directional selectivity was observed in precuneus and midposterior intraparietal sulcus. During the Response phase, after saccade direction was specified, the parietofrontal network in the left hemisphere showed higher activation for rightward than leftward saccades. Our results suggest that cortical activation for coding saccade target direction relative to a visual landmark differs from gaze-centered directional selectivity for target memory, from the mechanisms for other types of allocentric tasks, and from the directionally selective mechanisms for saccade planning and execution.
Affiliation(s)
- Ying Chen
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J D Crawford
- Center for Vision Research, York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Vision: Science to Applications Program, York University, Toronto, ON, Canada
27
Abstract
The superior colliculus is one of the most well-studied structures in the brain, and with each new report, its proposed role in behavior seems to increase in complexity. Forty years of evidence show that the colliculus is critical for reorienting an organism toward objects of interest. In monkeys, this involves saccadic eye movements. Recent work in the monkey colliculus and in the homologous optic tectum of the bird extends our understanding of the role of the colliculus in higher mental functions, such as attention and decision making. In this review, we highlight some of these recent results, as well as those capitalizing on circuit-based methodologies using transgenic mouse models, to understand the contribution of the colliculus to attention and decision making. The wealth of information we have about the colliculus, together with new tools, provides a unique opportunity to obtain a detailed accounting of the neurons, circuits, and computations that underlie complex behavior.
Affiliation(s)
- Michele A Basso
- Fuster Laboratory of Cognitive Neuroscience, Department of Psychiatry and Biobehavioral Sciences and Neurobiology, Semel Institute for Neuroscience and Human Behavior, Brain Research Institute, David Geffen School of Medicine, University of California, Los Angeles, California 90095
- Paul J May
- Department of Neurobiology and Anatomical Sciences, University of Mississippi Medical Center, Jackson, Mississippi 39216
28
Piserchia V, Breveglieri R, Hadjidimitrakis K, Bertozzi F, Galletti C, Fattori P. Mixed Body/Hand Reference Frame for Reaching in 3D Space in Macaque Parietal Area PEc. Cereb Cortex 2017; 27:1976-1990. [PMID: 26941385] [DOI: 10.1093/cercor/bhw039]
Abstract
The neural correlates of coordinate transformations from vision to action are expressed in the activity of posterior parietal cortex (PPC). It has been demonstrated that among the medial-most areas of the PPC, reaching targets are represented mainly in hand-centered coordinates in area PE, and in eye-centered, body-centered, and mixed body/hand-centered coordinates in area V6A. Here, we assessed whether neurons of area PEc, located between V6A and PE in the medial PPC, encode targets in body-centered, hand-centered, or mixed frames of reference during planning and execution of reaching. We studied 104 PEc cells in three Macaca fascicularis monkeys. The animals performed a reaching task toward foveated targets located at different depths and directions in darkness, starting from two hand positions located at different depths, one next to the trunk and the other far from it. We show that most PEc neurons encoded targets in a mixed body/hand-centered frame of reference. Although the effect of hand position was often rather strong, it was not as strong as reported previously in area PE. Our results suggest that area PEc represents an intermediate node in the gradual transformation from vision to action that takes place in the reaching network of the dorsomedial PPC.
Affiliation(s)
- Valentina Piserchia
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Rossella Breveglieri
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Kostas Hadjidimitrakis
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy; Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
- Federica Bertozzi
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Claudio Galletti
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
- Patrizia Fattori
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy
29
Sadeh M, Sajad A, Wang H, Yan X, Crawford JD. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts. Eur J Neurosci 2016; 42:2934-51. [PMID: 26448341] [DOI: 10.1111/ejn.13093]
Abstract
We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models using a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure.
Affiliation(s)
- Morteza Sadeh
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Amirsaman Sajad
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Hongying Wang
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- Xiaogang Yan
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
- John Douglas Crawford
- York Centre for Vision Research, Room 0009A LAS, 4700 Keele Street, Toronto, ON, M3J 1P3, Canada; York Neuroscience Graduate Diploma Program, Toronto, ON, Canada; Canadian Action and Perception Network (CAPnet), York University, Toronto, ON, Canada; Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, ON, Canada
30
Lu KH, Hung SC, Wen H, Marussich L, Liu Z. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli. PLoS One 2016; 11:e0161797. [PMID: 27564573] [PMCID: PMC5001718] [DOI: 10.1371/journal.pone.0161797]
Abstract
Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision.
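Response reproducibility across repeated presentations of the same movie is typically quantified by correlating a voxel's time series between repeats. A minimal sketch of that computation (illustrative only; the synthetic "voxel" data and parameter values are our own, not from the study):

```python
import math

def pearson_r(a, b):
    """Pearson correlation between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Two "repeats" of a voxel's response: a shared movie-driven component
# plus repeat-specific noise (arbitrary frequencies for illustration).
signal = [math.sin(0.3 * t) for t in range(200)]
noise1 = [0.3 * math.sin(1.7 * t + 1.0) for t in range(200)]
noise2 = [0.3 * math.sin(2.3 * t + 2.0) for t in range(200)]
rep1 = [s + n for s, n in zip(signal, noise1)]
rep2 = [s + n for s, n in zip(signal, noise2)]

# Intra-subject reliability: high when the stimulus-locked component
# dominates the repeat-specific variability.
reliability = pearson_r(rep1, rep2)
```

The same correlation applied across different subjects' time series gives inter-subject reliability; the study's point is that this number depends not only on the stimulus-locked signal but also on confounds such as gaze variability and scene cuts.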
Affiliation(s)
- Kun-Han Lu
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, United States of America
- Shao-Chin Hung
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States of America
- Haiguang Wen
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, United States of America
- Lauren Marussich
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States of America
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, United States of America
- Zhongming Liu
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States of America
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, United States of America
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, United States of America
31
Muhammad W, Spratling MW. A Neural Model of Coordinated Head and Eye Movement Control. J Intell Robot Syst 2016. [DOI: 10.1007/s10846-016-0410-8]
32
Daemi M, Harris LR, Crawford JD. Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework. Front Comput Neurosci 2016; 10:62. [PMID: 27445780] [PMCID: PMC4917558] [DOI: 10.3389/fncom.2016.00062]
Abstract
Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral experimental results in tasks where participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities, (2) predict behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features, and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
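As a rough illustration of two ingredients the model describes, a leaky integrator retaining each unimodal signal and a similarity criterion for inferring a common cause, consider the following sketch (our own simplification, with invented names, time constants, and threshold; not the authors' implementation):

```python
import math

def leaky_integrate(signal, dt=0.01, tau=0.2):
    """Leaky integrator acting as a short-term working-memory trace."""
    trace, out = 0.0, []
    for s in signal:
        trace += dt * (-trace / tau + s)  # decay toward 0, driven by input
        out.append(trace)
    return out

def similarity(a, b):
    """Cosine similarity between two memory traces."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def infer_common_cause(visual, auditory, threshold=0.8):
    """Report a common source when the integrated traces are similar enough."""
    v = leaky_integrate(visual)
    a = leaky_integrate(auditory)
    return similarity(v, a) > threshold

# Temporally congruent pulses -> common cause; a large temporal
# disparity -> separate causes.
pulse = [1.0 if 20 <= t < 40 else 0.0 for t in range(200)]
late_pulse = [1.0 if 120 <= t < 140 else 0.0 for t in range(200)]
```

Here the similarity score stands in for the paper's spatiotemporal similarity measure feeding the decision stage; in the full model the decision is made by competing saliency-map plan units rather than a fixed threshold.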
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada
- Laurence R Harris
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; NSERC Brain and Action Program, York University, Toronto, Canada
33
Hafed Z, Chen CY. Sharper, Stronger, Faster Upper Visual Field Representation in Primate Superior Colliculus. Curr Biol 2016; 26:1647-1658. [DOI: 10.1016/j.cub.2016.04.059]
34
Abstract
How, why, and when consciousness evolved remain hotly debated topics. Addressing these issues requires considering the distribution of consciousness across the animal phylogenetic tree. Here we propose that at least one invertebrate clade, the insects, has a capacity for the most basic aspect of consciousness: subjective experience. In vertebrates the capacity for subjective experience is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space. This integrated and egocentric representation of the world from the animal's perspective is sufficient for subjective experience. Structures in the insect brain perform analogous functions. Therefore, we argue the insect brain also supports a capacity for subjective experience. In both vertebrates and insects this form of behavioral control system evolved as an efficient solution to basic problems of sensory reafference and true navigation. The brain structures that support subjective experience in vertebrates and insects are very different from each other, but in both cases they are basal to each clade. Hence we propose the origins of subjective experience can be traced to the Cambrian.
35
Reference frames for reaching when decoupling eye and target position in depth and direction. Sci Rep 2016; 6:21646. [PMID: 26876496] [PMCID: PMC4753502] [DOI: 10.1038/srep21646]
Abstract
Spatial representations in cortical areas involved in reaching movements were traditionally studied in a frontoparallel plane where the two-dimensional target location and the movement direction were the only variables to consider in neural computations. No studies so far have characterized the reference frames for reaching considering both depth and directional signals. Here we recorded from single neurons of the medial posterior parietal area V6A during a reaching task where fixation point and reaching targets were decoupled in direction and depth. We found a prevalent mixed encoding of target position, with eye-centered and spatiotopic representations differently balanced in the same neuron. Depth was stronger in defining the reference frame of eye-centered cells, while direction was stronger in defining that of spatiotopic cells. The predominant presence of various typologies of mixed encoding suggests that depth and direction signals are processed on the basis of flexible coordinate systems to ensure optimal motor response.
36
Godfroy-Cooper M, Sandor PMB, Miller JD, Welch RB. The interaction of vision and audition in two-dimensional space. Front Neurosci 2015; 9:311. [PMID: 26441492] [PMCID: PMC4585004] [DOI: 10.3389/fnins.2015.00311]
Abstract
Using a mouse-driven visual pointer, 10 participants made repeated open-loop egocentric localizations of memorized visual, auditory, and combined visual-auditory targets projected randomly across the two-dimensional (2D) frontal field. The results are reported in terms of variable error, constant error, and local distortion. The results confirmed that auditory and visual maps of the egocentric space differ in their precision (variable error) and accuracy (constant error), both from one another and as a function of eccentricity and direction within a given modality. These differences were used, in turn, to make predictions about the precision and accuracy with which spatially and temporally congruent bimodal visual-auditory targets are localized. Overall, the improvement in precision for bimodal relative to the best unimodal target revealed the presence of optimal integration well-predicted by the Maximum Likelihood Estimation (MLE) model. Conversely, the hypothesis that accuracy in localizing the bimodal visual-auditory targets would represent a compromise between auditory and visual performance in favor of the most precise modality was rejected. Instead, the bimodal accuracy was found to be equivalent to or to exceed that of the best unimodal condition. Finally, we described how the different types of errors could be used to identify properties of the internal representations and coordinate transformations within the central nervous system (CNS). The results provide some insight into the structure of the underlying sensorimotor processes employed by the brain and confirm the usefulness of capitalizing on naturally occurring differences between vision and audition to better understand their interaction and their contribution to multimodal perception.
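The MLE prediction tested here is the standard inverse-variance weighting rule: the bimodal estimate weights each cue by its reliability, and the combined variance, sigma_VA^2 = sigma_V^2 * sigma_A^2 / (sigma_V^2 + sigma_A^2), is never larger than the best unimodal variance. A minimal sketch (the numbers are hypothetical, chosen only to make vision the more reliable cue; this is not the authors' code):

```python
def mle_combine(x_v, var_v, x_a, var_a):
    """Maximum-likelihood (inverse-variance weighted) cue combination."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # reliability weight, vision
    w_a = 1 - w_v                                # reliability weight, audition
    x_hat = w_v * x_v + w_a * x_a                # combined position estimate
    var_hat = (var_v * var_a) / (var_v + var_a)  # combined variance
    return x_hat, var_hat

# Vision more precise than audition (hypothetical values, deg and deg^2):
x_hat, var_hat = mle_combine(x_v=0.0, var_v=1.0, x_a=4.0, var_a=9.0)
```

With these numbers the combined estimate sits close to the visual one (weight 0.9) and the combined variance (0.9) is below the better unimodal variance (1.0), which is the precision improvement the study tested against its bimodal data.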
Affiliation(s)
- Martine Godfroy-Cooper
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA; San Jose State University Research Foundation, San José, CA, USA
- Patrick M B Sandor
- Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle, Brétigny-sur-Orge, France; Aix Marseille Université, Centre National de la Recherche Scientifique, ISM UMR 7287, Marseille, France
- Joel D Miller
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA; San Jose State University Research Foundation, San José, CA, USA
- Robert B Welch
- Advanced Controls and Displays Group, Human Systems Integration Division, NASA Ames Research Center, Moffett Field, CA, USA
37
Daemi M, Crawford JD. A kinematic model for 3-D head-free gaze-shifts. Front Comput Neurosci 2015; 9:72. [PMID: 26113816] [PMCID: PMC4461827] [DOI: 10.3389/fncom.2015.00072]
Abstract
Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision.
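The non-commutativity the model must respect is easy to demonstrate: composing the same two 3-D rotations in opposite orders yields different final orientations. A minimal sketch using unit quaternions (our illustration, not the model's internal representation; convention (w, x, y, z), with the Hamilton product applying the right-hand factor first):

```python
import math

def quat_from_axis_angle(axis, angle_deg):
    """Unit quaternion for a rotation of angle_deg about a unit axis."""
    half = math.radians(angle_deg) / 2
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(q, r):
    """Hamilton product q * r: rotation r applied first, then q."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# 90 deg yaw (about the vertical axis) vs 90 deg pitch (about the
# horizontal axis): order of composition matters.
yaw = quat_from_axis_angle((0.0, 0.0, 1.0), 90.0)
pitch = quat_from_axis_angle((0.0, 1.0, 0.0), 90.0)

yaw_then_pitch = quat_mul(pitch, yaw)
pitch_then_yaw = quat_mul(yaw, pitch)
```

The two composite quaternions differ (their x components even have opposite signs), which is why a gaze-control model cannot simply add rotation vectors and instead needs the kind of full 3-D reference-frame transformations described here.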
Affiliation(s)
- Mehdi Daemi
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada
- J Douglas Crawford
- Department of Biology and Neuroscience Graduate Diploma, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada; CAN-ACT NSERC CREATE Program, Toronto, ON, Canada; Canadian Action and Perception Network, Toronto, ON, Canada; Department of Psychology, York University, Toronto, ON, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, Canada; Brain in Action NSERC CREATE/DFG IRTG Program, Canada/Germany
38
Bianco IH, Engert F. Visuomotor transformations underlying hunting behavior in zebrafish. Curr Biol 2015; 25:831-46. [PMID: 25754638] [PMCID: PMC4386024] [DOI: 10.1016/j.cub.2015.01.042]
Abstract
Visuomotor circuits filter visual information and determine whether or not to engage downstream motor modules to produce behavioral outputs. However, the circuit mechanisms that mediate and link perception of salient stimuli to execution of an adaptive response are poorly understood. We combined a virtual hunting assay for tethered larval zebrafish with two-photon functional calcium imaging to simultaneously monitor neuronal activity in the optic tectum during naturalistic behavior. Hunting responses showed mixed selectivity for combinations of visual features, specifically stimulus size, speed, and contrast polarity. We identified a subset of tectal neurons with similar highly selective tuning, which show non-linear mixed selectivity for visual features and are likely to mediate the perceptual recognition of prey. By comparing neural dynamics in the optic tectum during response versus non-response trials, we discovered premotor population activity that specifically preceded initiation of hunting behavior and exhibited anatomical localization that correlated with motor variables. In summary, the optic tectum contains non-linear mixed selectivity neurons that are likely to mediate reliable detection of ethologically relevant sensory stimuli. Recruitment of small tectal assemblies appears to link perception to action by providing the premotor commands that release hunting responses. These findings allow us to propose a model circuit for the visuomotor transformations underlying a natural behavior.
Highlights:
- Zebrafish hunting responses are triggered by conjunctions of visual features
- Tectal neurons show non-linear mixed selectivity for prey-like visual stimuli
- Tectal assemblies show premotor activity specifically preceding hunting responses
Affiliation(s)
- Isaac H Bianco
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
- Florian Engert
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
39
Sajad A, Sadeh M, Keith GP, Yan X, Wang H, Crawford JD. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey. Cereb Cortex 2014; 25:3932-52. [PMID: 25491118] [PMCID: PMC4585524] [DOI: 10.1093/cercor/bhu279]
Abstract
A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas.
Affiliation(s)
- Amirsaman Sajad
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet), Neuroscience Graduate Diploma Program, Department of Biology
- Morteza Sadeh
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet), Neuroscience Graduate Diploma Program, School of Kinesiology and Health Sciences
- Gerald P Keith
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet), Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Xiaogang Yan
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet)
- Hongying Wang
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet)
- John Douglas Crawford
- Centre for Vision Research, Canadian Action and Perception Network (CAPnet), Neuroscience Graduate Diploma Program, Department of Biology, School of Kinesiology and Health Sciences, Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
40
Takahashi M, Sugiuchi Y, Shinoda Y. Convergent synaptic inputs from the caudal fastigial nucleus and the superior colliculus onto pontine and pontomedullary reticulospinal neurons. J Neurophysiol 2013; 111:849-67. PMID: 24285869; DOI: 10.1152/jn.00634.2013.
Abstract
The caudal fastigial nucleus (FN) is known to be involved in the control of eye movements and projects mainly to the contralateral reticular nuclei that contain the excitatory and inhibitory burst neurons for saccades [the caudal portion of the nucleus reticularis pontis caudalis (NRPc) and the rostral portion of the nucleus reticularis gigantocellularis (NRG), respectively]. However, the exact reticular neurons targeted by caudal fastigioreticular cells remain unknown. We sought to determine the target reticular neurons of the caudal FN and superior colliculus (SC) by recording intracellular potentials from neurons in the NRPc and NRG of anesthetized cats. Neurons in the rostral NRG received bilateral, monosynaptic excitation from the caudal FNs, with contralateral predominance. They also received strong monosynaptic excitation from the rostral and caudal contralateral SC, and disynaptic excitation from the rostral ipsilateral SC. These reticular neurons with caudal fastigial monosynaptic excitation were not activated antidromically from the contralateral abducens nucleus, but most of them were reticulospinal neurons (RSNs) that were activated antidromically from the cervical cord. RSNs in the caudal NRPc received very weak monosynaptic excitation from only the contralateral caudal FN, and received either monosynaptic excitation only from the contralateral caudal SC, or monosynaptic and disynaptic excitation from the contralateral caudal and ipsilateral rostral SC, respectively. These results suggest that the caudal FN also helps to control head movements via RSNs targeted by the SC, and that these RSNs with topographic SC input play different functional roles in head movements.
Affiliation(s)
- Mayu Takahashi
- Department of Systems Neurophysiology, Graduate School of Medicine, Tokyo Medical and Dental University, Tokyo, Japan
41
Larsson M. The optic chiasm: a turning point in the evolution of eye/hand coordination. Front Zool 2013; 10:41. PMID: 23866932; PMCID: PMC3729728; DOI: 10.1186/1742-9994-10-41.
Abstract
The primate visual system has a uniquely high proportion of ipsilateral retinal projections: retinal ganglion cells whose axons do not cross the midline in the optic chiasm. The general assumption is that this arrangement developed because of the selective advantage of accurate depth perception through stereopsis. Here, the hypothesis that the need for accurate eye-forelimb coordination substantially influenced the evolution of the primate visual system is presented. Evolutionary processes may change the direction taken by retinal ganglion cell axons. Crossing, or non-crossing, in the optic chiasm determines which hemisphere receives visual feedback in reaching tasks. Each hemisphere receives little tactile and proprioceptive information about the ipsilateral hand. The eye-forelimb hypothesis proposes that abundant ipsilateral retinal projections developed in the primate brain to synthesize, in a single hemisphere, visual, tactile, proprioceptive, and motor information about a given hand, and that this improved eye-hand coordination and optimized the size of the brain. If accurate eye-hand coordination was a major factor in the evolution of stereopsis, stereopsis is likely to be highly developed for activity in the area where the hands most often operate. The primate visual system is ideally suited for tasks within arm's length and in the inferior visual field, where most manual activity takes place. Alterations of ocular dominance in reaching tasks, reduced cross-modal cuing effects when the arms are crossed, responses of neurons in the primary motor cortex to viewed actions of a hand, multimodal neuron responses to tactile as well as visual events, and the extensive use of multimodal sensory information in reaching maneuvers all support the premise that the benefits of accurate limb control influenced the evolution of the primate visual system.
The eye-forelimb hypothesis implies that evolutionary change toward hemidecussation in the optic chiasm provided parsimonious neural pathways in animals developing frontal vision and visually guided forelimbs, and it also suggests a new perspective on the convergence of vision in prey and predatory animals.
Affiliation(s)
- Matz Larsson
- The Cardiology Clinic, Örebro University Hospital, SE - 701 85, Örebro, Sweden.
42
A biologically constrained architecture for developmental learning of eye–head gaze control on a humanoid robot. Auton Robots 2013. DOI: 10.1007/s10514-013-9335-2.
43
Monteon JA, Wang H, Martinez-Trujillo J, Crawford JD. Frames of reference for eye-head gaze shifts evoked during frontal eye field stimulation. Eur J Neurosci 2013; 37:1754-65. PMID: 23489744; DOI: 10.1111/ejn.12175.
Abstract
The frontal eye field (FEF), in the prefrontal cortex, participates in the transformation of visual signals into saccade motor commands and in eye-head gaze control. The FEF is thought to show eye-fixed visual codes in head-restrained monkeys, but it is not known how it transforms these inputs into spatial codes for head-unrestrained gaze commands. Here, we tested if the FEF influences desired gaze commands within a simple eye-fixed frame, like the superior colliculus (SC), or in more complex egocentric frames like the supplementary eye fields (SEFs). We electrically stimulated 95 FEF sites in two head-unrestrained monkeys to evoke 3D eye-head gaze shifts and then mathematically rotated these trajectories into various reference frames. In theory, each stimulation site should specify a specific spatial goal when the evoked gaze shifts are plotted in the appropriate frame. We found that these motor output frames varied site by site, mainly within the eye-to-head frame continuum. Thus, consistent with the intermediate placement of the FEF within the high-level circuits for gaze control, its stimulation-evoked output showed an intermediate trend between the multiple reference frame codes observed in SEF-evoked gaze shifts and the simpler eye-fixed reference frame observed in SC-evoked movements. These results suggest that, although the SC, FEF and SEF carry eye-fixed information at the level of their unit response fields, this information is transformed differently in their output projections to the eye and head controllers.
Affiliation(s)
- Jachin A Monteon
- Centre for Vision Research, York University, Toronto, ON, Canada
44
Damasio A, Carvalho GB. The nature of feelings: evolutionary and neurobiological origins. Nat Rev Neurosci 2013; 14:143-52. PMID: 23329161; DOI: 10.1038/nrn3403.
Abstract
Feelings are mental experiences of body states. They signify physiological need (for example, hunger), tissue injury (for example, pain), optimal function (for example, well-being), threats to the organism (for example, fear or anger) or specific social interactions (for example, compassion, gratitude or love). Feelings constitute a crucial component of the mechanisms of life regulation, from simple to complex. Their neural substrates can be found at all levels of the nervous system, from individual neurons to subcortical nuclei and cortical regions.
Affiliation(s)
- Antonio Damasio
- Brain and Creativity Institute, University of Southern California, 3620 A McClintock Avenue, Suite 265, Los Angeles, California 90089-2921, USA.
45
Monteon JA, Avillac M, Yan X, Wang H, Crawford JD. Neural mechanisms for predictive head movement strategies during sequential gaze shifts. J Neurophysiol 2012; 108:2689-707. PMID: 22933720; DOI: 10.1152/jn.00222.2012.
Abstract
Humans adopt very different head movement strategies for different gaze behaviors, for example, when playing sports versus watching sports on television. Such strategy switching appears to depend on both context and expectation of future gaze positions. Here, we explored the neural mechanisms for such behaviors by training three monkeys to make head-unrestrained gaze shifts toward eccentric radial targets. A randomized color cue provided predictive information about whether that target would be followed by either a return gaze shift to center or another, more eccentric gaze shift, but otherwise animals were allowed to develop their own eye-head coordination strategy. In the first two animals we then stimulated the frontal eye fields (FEF) in conjunction with the color cue, and in the third animal we recorded from neurons in the superior colliculus (SC). Our results show that 1) monkeys can optimize eye-head coordination strategies from trial to trial, based on learned associations between color cues and future gaze sequences, 2) these cue-dependent coordination strategies were preserved in gaze saccades evoked during electrical stimulation of the FEF, and 3) two types of SC responses (the saccade burst and a more prolonged response related to head movement) modulated with these cue-dependent strategies, although only one (the saccade burst) varied in a predictive fashion. These data show that from one moment to the next, the brain can use contextual sensory cues to set up internal "coordination states" that convert fixed cortical gaze commands into the brain stem signals required for predictive head motion.
Affiliation(s)
- Jachin A Monteon
- York Centre for Vision Research, York University, Toronto, Ontario, Canada
46
Inhibition of return: a "depth-blind" mechanism? Acta Psychol (Amst) 2012; 140:75-80. PMID: 22465912; DOI: 10.1016/j.actpsy.2012.02.011.
Abstract
When attention is oriented to a peripheral visual event, observers respond faster to stimuli presented at the cued location than at an uncued location. Following this initial facilitation of reaction times, responses become slower to stimuli subsequently displayed at the cued location, an effect known as inhibition of return (IOR). Both facilitatory and inhibitory effects have been extensively investigated in two-dimensional space. Facilitation has also been documented in three-dimensional space; however, the presence of IOR in 3D space is unclear, possibly because IOR has not been evaluated in an empty 3D space. Determining whether IOR is sensitive to the depth plane of stimuli, or whether only their two-dimensional location is inhibited, may clarify the nature of IOR. To address this issue, we used an attentional cueing paradigm in three-dimensional (3D) space. Results obtained from fourteen participants showed IOR components in 3D space when binocular disparity was used to induce depth. We conclude that attentional orienting in depth operates as efficiently as in two-dimensional space.
47
Lee J, Groh JM. Auditory signals evolve from hybrid- to eye-centered coordinates in the primate superior colliculus. J Neurophysiol 2012; 108:227-42. PMID: 22514295; DOI: 10.1152/jn.00706.2011.
Abstract
Visual and auditory spatial signals initially arise in different reference frames. It has been postulated that auditory signals are translated from a head-centered to an eye-centered frame of reference compatible with the visual spatial maps, but, to date, only various forms of hybrid reference frames for sound have been identified. Here, we show that the auditory representation of space in the superior colliculus involves a hybrid reference frame immediately after the sound onset but evolves to become predominantly eye centered, and more similar to the visual representation, by the time of a saccade to that sound. Specifically, during the first 500 ms after the sound onset, auditory response patterns (N = 103) were usually neither head nor eye centered: 64% of neurons showed such a hybrid pattern, whereas 29% were more eye centered and 8% were more head centered. This differed from the pattern observed for visual targets (N = 156): 86% were eye centered, <1% were head centered, and only 13% exhibited a hybrid of both reference frames. For auditory-evoked activity observed within 20 ms of the saccade (N = 154), the proportion of eye-centered response patterns increased to 69%, whereas the hybrid and head-centered response patterns dropped to 30% and <1%, respectively. This pattern approached, although did not quite reach, that observed for saccade-related activity for visual targets: 89% were eye centered, 11% were hybrid, and <1% were head centered (N = 162). The plainly eye-centered visual response patterns and predominantly eye-centered auditory motor response patterns lie in marked contrast to our previous study of the intraparietal cortex, where both visual and auditory sensory and motor-related activity used a predominantly hybrid reference frame (Mullette-Gillman et al. 2005, 2009). 
Our present findings indicate that auditory signals are ultimately translated into a reference frame roughly similar to that used for vision, but suggest that such signals might emerge only in motor areas responsible for directing gaze to visual and auditory stimuli.
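The eye- versus head-centered classification described in this abstract can be illustrated with a toy sketch. The function below, our own simplification and not the authors' analysis (which compared response patterns across fixation positions rather than fitting linear tuning), classifies a simulated neuron by asking whether its firing rate is better predicted by target position relative to the head or relative to the eye:

```python
import numpy as np

def frame_preference(rate, target_re_head, eye_in_head):
    """Classify a cell as 'eye', 'head', or 'hybrid' from linear fit quality.

    Illustrative only: assumes linear tuning and an arbitrary 2x criterion
    for preferring one frame over the other.
    """
    target_re_eye = target_re_head - eye_in_head  # eye-centered target location

    def sse(x):
        # Least-squares linear fit of firing rate to one candidate predictor.
        A = np.column_stack([x, np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(A, rate, rcond=None)
        return float(np.sum((rate - A @ coef) ** 2))

    sse_head, sse_eye = sse(target_re_head), sse(target_re_eye)
    if sse_eye < 0.5 * sse_head:
        return "eye"
    if sse_head < 0.5 * sse_eye:
        return "head"
    return "hybrid"

# Simulated eye-centered cell: rate depends on target-relative-to-eye.
rng = np.random.default_rng(0)
head_pos = rng.uniform(-20, 20, 200)   # target relative to head (deg)
eye_pos = rng.uniform(-15, 15, 200)    # eye-in-head position (deg)
rate = 2.0 * (head_pos - eye_pos) + rng.normal(0, 1, 200)
print(frame_preference(rate, head_pos, eye_pos))  # prints "eye"
```

A head-centered cell (rate driven by `head_pos` alone) would come out as "head", and a cell driven by both would fall in the "hybrid" band, mirroring the three categories reported in the abstract.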
Affiliation(s)
- Jungah Lee
- Center for Cognitive Neuroscience, Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA.
48
Intrinsic reference frames of superior colliculus visuomotor receptive fields during head-unrestrained gaze shifts. J Neurosci 2012; 31:18313-26. PMID: 22171035; DOI: 10.1523/jneurosci.0990-11.2011.
Abstract
A sensorimotor neuron's receptive field and its frame of reference are easily conflated within the natural variability of spatial behavior. Here, we capitalized on such natural variations in 3-D eye and head positions during head-unrestrained gaze shifts to visual targets in two monkeys to determine whether intermediate/deep-layer superior colliculus (SC) receptive fields code visual targets or gaze kinematics, within four different frames of reference. Visuomotor receptive fields were either characterized during gaze shifts to visual targets from a central fixation position (32 units) or partially characterized from each of three initial fixation points (31 units). Natural variations of initial 3-D gaze and head orientation (including torsion) provided spatial separation between four different coordinate-frame models (space, head, eye, and fixed-vector relative to fixation), whereas natural saccade errors provided spatial separation between target and gaze positions. Using a new statistical method based on predictive sum-of-squares, we found that in our population of 63 neurons (1) receptive field fits to target positions were significantly better than fits to actual gaze shift locations and (2) eye-centered models gave significantly better fits than the head or space frames. An intermediate-frames analysis confirmed that individual neuron fits were distributed in target-in-eye coordinates. Gaze-position "gain" effects with the spatial tuning required for a 3-D reference frame transformation were significant in 23% (7/31) of the neurons tested. We conclude that the SC primarily represents gaze targets relative to the eye but also carries early signatures of the 3-D sensorimotor transformation.
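The predictive sum-of-squares idea mentioned in this abstract amounts to out-of-sample model comparison. The sketch below is our simplification: the study fit nonparametric response fields, whereas here a plain linear fit stands in, and all variable names are hypothetical. Each candidate model (e.g. target position versus actual gaze endpoint) is scored by its held-out squared prediction error; the smaller score wins:

```python
import numpy as np

def predictive_ss(predictor, rate, n_folds=5):
    """Out-of-fold sum of squared prediction errors for a linear fit."""
    folds = np.arange(len(predictor)) % n_folds
    total = 0.0
    for k in range(n_folds):
        train, test = folds != k, folds == k
        # Fit slope and intercept on the training folds only.
        A = np.column_stack([predictor[train], np.ones(train.sum())])
        coef, *_ = np.linalg.lstsq(A, rate[train], rcond=None)
        # Score on the held-out fold.
        pred = coef[0] * predictor[test] + coef[1]
        total += float(np.sum((rate[test] - pred) ** 2))
    return total

# Simulated neuron whose rate encodes target position; gaze endpoints are
# the target plus saccade error, as in the paper's target-vs-gaze contrast.
rng = np.random.default_rng(2)
target_in_eye = rng.uniform(-30, 30, 150)
gaze_in_eye = target_in_eye + rng.normal(0, 5, 150)   # gaze = target + error
rate = 1.5 * target_in_eye + rng.normal(0, 1, 150)
print(predictive_ss(target_in_eye, rate) < predictive_ss(gaze_in_eye, rate))
```

For this target-coding cell the target model yields the smaller predictive sum-of-squares, which is the sense in which the paper's target fits were "significantly better" than gaze fits.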
49
Saeb S, Weber C, Triesch J. Learning the optimal control of coordinated eye and head movements. PLoS Comput Biol 2011; 7:e1002253. PMID: 22072953; PMCID: PMC3207939; DOI: 10.1371/journal.pcbi.1002253.
Abstract
Various optimality principles have been proposed to explain the characteristics of coordinated eye and head movements during visual orienting behavior. At the same time, researchers have suggested several neural models of saccade generation, but these do not include online learning as a mechanism of optimization. Here, we suggest an open-loop neural controller with a local adaptation mechanism that minimizes a proposed cost function. Simulations show that the characteristics of coordinated eye and head movements generated by this model match the experimental data in many aspects, including the relationship between amplitude, duration, and peak velocity in head-restrained conditions and the relative contribution of eye and head to the total gaze shift in head-free conditions. Our model is a first step towards bringing together an optimality principle and an incremental local learning mechanism into a unified control scheme for coordinated eye and head movements. Human beings and many other species redirect their gaze towards targets of interest through rapid gaze shifts known as saccades. These are made approximately three to four times every second, and larger saccades result from fast and concurrent movement of the animal's eyes and head. Experimental studies have revealed that during saccades, the motor system follows certain principles, such as respecting a specific relationship between the relative contributions of the eye and head motor systems to the total gaze shift. Various researchers have hypothesized that these principles are implications of some optimality criteria in the brain, but it remains unclear how the brain could learn such optimal behavior. We propose a new model that uses a plausible learning mechanism to satisfy an optimality criterion. We show that after learning, the model is able to reproduce motor behavior with biologically plausible properties. In addition, it predicts the nature of the learning signals.
Further experimental research is necessary to test the validity of our model.
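One classic way to pose the eye-head contribution problem described above as an optimality criterion is to split a desired gaze shift G between eye (E) and head (H) so as to minimize a quadratic effort cost c_e*E^2 + c_h*H^2 subject to E + H = G. This is only an illustration: the paper's cost function and its local learning rule are richer, and the cost weights here are hypothetical.

```python
def split_gaze(gaze_deg, c_eye, c_head):
    """Closed-form minimizer of c_eye*E**2 + c_head*H**2 with E + H = gaze.

    Setting the derivative to zero gives E = c_head/(c_eye + c_head) * gaze:
    the cheaper effector takes the larger share of the gaze shift.
    """
    eye = c_head / (c_eye + c_head) * gaze_deg
    return eye, gaze_deg - eye

eye, head = split_gaze(40.0, 1.0, 3.0)  # head motion three times as costly
print(eye, head)  # prints 30.0 10.0
```

Under this criterion the head's contribution grows with gaze amplitude once eye cost (or an eye-range penalty) dominates, qualitatively matching the head-free coordination data the model is fit to.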
Affiliation(s)
- Sohrab Saeb
- Frankfurt Institute for Advanced Studies (FIAS), Goethe University Frankfurt, Germany.
50
Crawford JD, Henriques DYP, Medendorp WP. Three-dimensional transformations for goal-directed action. Annu Rev Neurosci 2011; 34:309-31. PMID: 21456958; DOI: 10.1146/annurev-neuro-061010-113749.
Abstract
Much of the central nervous system is involved in visuomotor transformations for goal-directed gaze and reach movements. These transformations are often described in terms of stimulus location, gaze fixation, and reach endpoints, as viewed through the lens of translational geometry. Here, we argue that the intrinsic (primarily rotational) 3-D geometry of the eye-head-reach systems determines the spatial relationship between extrinsic goals and effector commands, and therefore the required transformations. This approach provides a common theoretical framework for understanding both gaze and reach control. Combined with an assessment of the behavioral, neurophysiological, imaging, and neuropsychological literature, this framework leads us to conclude that (a) the internal representation and updating of visual goals are dominated by gaze-centered mechanisms, but (b) these representations must then be transformed as a function of eye and head orientation signals into effector-specific 3-D movement commands.
Affiliation(s)
- J Douglas Crawford
- York Centre for Vision Research, Canadian Action and Perception Network, and Departments of Psychology, Toronto, Ontario, Canada, M3J 1P3.