1. Bayer M, Zimmermann E. Serial dependencies in visual stability during self-motion. J Neurophysiol 2023;130:447-457. PMID: 37465870. DOI: 10.1152/jn.00157.2023.
Abstract
Every time we move our head, the brain must decide whether the displacement of the visual scene is the result of external or self-produced motion. Gaze shifts generate the biggest and most frequent disturbance of vision. Visual stability during gaze shifts is necessary both for dissociating self-produced from external motion and for retaining bodily balance. Here, we asked participants to perform an eye-head gaze shift to a target that was briefly presented in a head-mounted display. We manipulated the velocity of the scene displacement across trials such that the background moved either too fast or too slow in relation to the head movement speed. Participants were required to report whether they perceived the gaze-contingent visual motion as faster or slower than what they would expect from their head movement velocity. We found that the point of visual stability was attracted to the velocity presented in the previous trial. Our data reveal that serial dependencies in visual stability calibrate the mapping between motor-related signals coding head movement velocity and visual motion velocity. This process is likely to aid visual stability, as the accuracy of this mapping is crucial to maintain visual stability during self-motion. NEW & NOTEWORTHY We report that visual stability during self-motion is maintained by serial dependencies between the current and the previous gaze-contingent visual velocity that was experienced during a head movement. The gaze-contingent scene displacement velocity that appears normal to us thus depends on what we have registered in the recent history of gaze shifts. Serial dependencies provide an efficient means to maintain visual stability during self-motion.
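To illustrate the serial-dependence idea described in this abstract, the minimal sketch below applies a toy update rule in which the point of visual stability is pulled toward the gaze-contingent gain experienced on the previous trial. It is not the authors' analysis; the attraction strength w and the simulated gains are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gaze-contingent scene velocities, expressed as a gain relative to
# head velocity (1.0 = the scene counter-moves exactly as fast as the head).
presented_gain = rng.uniform(0.7, 1.3, size=200)

# Toy serial-dependence rule: the point of visual stability (PVS) on trial t is
# pulled a fraction w of the way toward the gain experienced on trial t-1.
w = 0.3                      # illustrative attraction strength (assumption)
pvs = np.empty_like(presented_gain)
pvs[0] = 1.0                 # start at veridical stability
for t in range(1, len(presented_gain)):
    pvs[t] = (1 - w) * 1.0 + w * presented_gain[t - 1]

# Signature of serial dependence: the PVS tracks the previous trial's gain,
# not the current one.
r_prev = np.corrcoef(pvs[1:], presented_gain[:-1])[0, 1]
r_curr = np.corrcoef(pvs[1:], presented_gain[1:])[0, 1]
print(f"correlation with previous-trial gain: {r_prev:.2f}")
print(f"correlation with current-trial gain:  {r_curr:.2f}")
```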
Affiliation(s)
- Manuel Bayer
  - Institute for Experimental Psychology, Heinrich-Heine-University Düsseldorf, Germany
- Eckart Zimmermann
  - Institute for Experimental Psychology, Heinrich-Heine-University Düsseldorf, Germany
2. Roth MJ, Lindner A, Hesse K, Wildgruber D, Wong HY, Buehner MJ. Impaired perception of temporal contiguity between action and effect is associated with disorders of agency in schizophrenia. Proc Natl Acad Sci U S A 2023;120:e2214327120. PMID: 37186822. PMCID: PMC10214164. DOI: 10.1073/pnas.2214327120.
Abstract
Delusions of control in schizophrenia are characterized by the striking feeling that one's actions are controlled by external forces. We here tested qualitative predictions inspired by Bayesian causal inference models, which suggest that such misattributions of agency should lead to decreased intentional binding. Intentional binding refers to the phenomenon that subjects perceive a compression of time between their intentional actions and consequent sensory events. We demonstrate that patients with delusions of control perceived less self-agency in our intentional binding task. This effect was accompanied by significant reductions of intentional binding as compared to healthy controls and patients without delusions. Furthermore, the strength of delusions of control tightly correlated with decreases in intentional binding. Our study validated a critical prediction of Bayesian accounts of intentional binding, namely that a pathological reduction of the prior likelihood of a causal relation between one's actions and consequent sensory events (here captured by delusions of control) should lead to lesser intentional binding. Moreover, our study highlights the importance of an intact perception of temporal contiguity between actions and their effects for the sense of agency.
Affiliation(s)
- Manuel J. Roth
  - Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
  - International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Otfried-Müller-Str. 27, 72076 Tübingen, Germany
  - Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
  - Dynamic Cognition Group, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 11, 72076 Tübingen, Germany
- Axel Lindner
  - Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
  - Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
  - Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Klaus Hesse
  - Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Dirk Wildgruber
  - Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Hong Yu Wong
  - Philosophy of Neuroscience, Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, Otfried-Müller-Str. 25, 72076 Tübingen, Germany
  - Department of Philosophy, University of Tübingen, Bursagasse 1, 72070 Tübingen, Germany
- Marc J. Buehner
  - School of Psychology, Cardiff University, Park Place, Cardiff CF10 3AT, Wales, United Kingdom
3. Luna R, Serrano-Pedraza I, Gegenfurtner KR, Schütz AC, Souto D. Achieving visual stability during smooth pursuit eye movements: Directional and confidence judgements favor a recalibration model. Vision Res 2021;184:58-73. PMID: 33873123. DOI: 10.1016/j.visres.2021.03.003.
Abstract
During smooth pursuit eye movements, the visual system is faced with the task of telling apart reafferent retinal motion from motion in the world. While an efference copy signal can be used to predict the amount of reafference to subtract from the image, an image-based adaptive mechanism can ensure the continued accuracy of this computation. Indeed, repeatedly exposing observers to background motion with a fixed direction relative to that of the target that is pursued leads to a shift in their point of subjective stationarity (PSS). We asked whether the effect of exposure reflects adaptation to motion contingent on pursuit direction, recalibration of a reference signal or both. A recalibration account predicts a shift in reference signal (i.e. predicted reafference), resulting in a shift of PSS, but no change in sensitivity. Results show that both directional judgements and confidence judgements about them favor a recalibration account, whereby there is an adaptive shift in the reference signal caused by the prevailing retinal motion during pursuit. We also found that the recalibration effect is specific to the exposed visual hemifield.
Affiliation(s)
- Raúl Luna
  - Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain
  - School of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- Ignacio Serrano-Pedraza
  - Department of Experimental Psychology, Faculty of Psychology, Universidad Complutense de Madrid, Madrid, Spain
- Alexander C Schütz
  - Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- David Souto
  - Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
4. Gallivan JP, Chapman CS, Gale DJ, Flanagan JR, Culham JC. Selective Modulation of Early Visual Cortical Activity by Movement Intention. Cereb Cortex 2020;29:4662-4678. PMID: 30668674. DOI: 10.1093/cercor/bhy345.
Abstract
The primate visual system contains myriad feedback projections from higher- to lower-order cortical areas, an architecture that has been implicated in the top-down modulation of early visual areas during working memory and attention. Here we tested the hypothesis that these feedback projections also modulate early visual cortical activity during the planning of visually guided actions. We show, across three separate human functional magnetic resonance imaging (fMRI) studies involving object-directed movements, that information related to the motor effector to be used (i.e., limb, eye) and the action goal to be performed (i.e., grasp, reach) can be selectively decoded, prior to movement, from the retinotopic representation of the target object(s) in early visual cortex. We also find that, during the planning of sequential actions involving objects in two different spatial locations, motor-related information can be decoded from both locations in retinotopic cortex. Together, these findings indicate that movement planning selectively modulates early visual cortical activity patterns in an effector-specific, target-centric, and task-dependent manner. These findings offer a neural account of how motor-relevant target features are enhanced during action planning and suggest a possible role for early visual cortex in instituting a sensorimotor estimate of the visual consequences of movement.
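As a rough illustration of the kind of pre-movement pattern decoding described here (not the authors' pipeline, which used fMRI data and standard multivariate analysis tools), the sketch below runs a leave-one-trial-out nearest-centroid classifier on simulated voxel patterns; the trial counts, voxel counts, and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated pre-movement activity patterns from a retinotopic region of interest:
# 40 trials x 50 voxels, two planned actions (0 = grasp, 1 = reach).
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)
signal = rng.normal(0, 1, size=(2, n_voxels))      # class-specific pattern
patterns = signal[labels] + rng.normal(0, 2.0, size=(n_trials, n_voxels))

def nearest_centroid_cv(X, y):
    """Leave-one-trial-out decoding with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in (0, 1)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

acc = nearest_centroid_cv(patterns, labels)
print(f"decoding accuracy (chance = 0.50): {acc:.2f}")
```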
Affiliation(s)
- Jason P Gallivan
  - Department of Psychology, Queen's University, Kingston, Ontario, Canada
  - Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
  - Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Craig S Chapman
  - Faculty of Physical Education and Recreation, University of Alberta, Alberta, Canada
- Daniel J Gale
  - Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan
  - Department of Psychology, Queen's University, Kingston, Ontario, Canada
  - Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Jody C Culham
  - Department of Psychology, University of Western Ontario, London, Ontario, Canada
  - Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
5. McNamee D, Wolpert DM. Internal Models in Biological Control. Annual Review of Control, Robotics, and Autonomous Systems 2019;2:339-364. PMID: 31106294. PMCID: PMC6520231. DOI: 10.1146/annurev-control-060117-105206.
Abstract
Rationality principles such as optimal feedback control and Bayesian inference underpin a probabilistic framework that has accounted for a range of empirical phenomena in biological sensorimotor control. To facilitate the optimization of flexible and robust behaviors consistent with these theories, the ability to construct internal models of the motor system and environmental dynamics can be crucial. In the context of this theoretic formalism, we review the computational roles played by such internal models and the neural and behavioral evidence for their implementation in the brain.
Affiliation(s)
- Daniel McNamee
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
  - Institute of Neurology, University College London, London WC1E 6BT, United Kingdom
- Daniel M. Wolpert
  - Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, United Kingdom
  - Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027, United States
6. Souto D, Chudasama J, Kerzel D, Johnston A. Motion integration is anisotropic during smooth pursuit eye movements. J Neurophysiol 2019;121:1787-1797. PMID: 30840536. DOI: 10.1152/jn.00591.2018.
Abstract
Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements. NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object's motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object's global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
Affiliation(s)
- David Souto
  - Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
- Jayesha Chudasama
  - Department of Neuroscience, Psychology and Behaviour, University of Leicester, Leicester, United Kingdom
- Dirk Kerzel
  - Faculté de Psychologie et des Sciences de l'Education, University of Geneva, Geneva, Switzerland
- Alan Johnston
  - School of Psychology, University of Nottingham, Nottingham, United Kingdom
7. Kumar N, Mutha PK. Adaptive reliance on the most stable sensory predictions enhances perceptual feature extraction of moving stimuli. J Neurophysiol 2016;115:1654-63. PMID: 26823516. PMCID: PMC4808085. DOI: 10.1152/jn.00850.2015.
Abstract
The prediction of the sensory outcomes of action is thought to be useful for distinguishing self- vs. externally generated sensations, correcting movements when sensory feedback is delayed, and learning predictive models for motor behavior. Here, we show that aspects of another fundamental function, perception, are enhanced when they entail the contribution of predicted sensory outcomes and that this enhancement relies on the adaptive use of the most stable predictions available. We combined a motor-learning paradigm that imposes new sensory predictions with a dynamic visual search task to first show that perceptual feature extraction of a moving stimulus is poorer when it is based on sensory feedback that is misaligned with those predictions. This was possible because our novel experimental design allowed us to override the “natural” sensory predictions present when any action is performed and separately examine the influence of these two sources on perceptual feature extraction. We then show that if the new predictions induced via motor learning are unreliable, rather than just relying on sensory information for perceptual judgments, as is conventionally thought, subjects adaptively transition to using other stable sensory predictions to maintain greater accuracy in their perceptual judgments. Finally, we show that when sensory predictions are not modified at all, these judgments are sharper when subjects combine their natural predictions with sensory feedback. Collectively, our results highlight the crucial contribution of sensory predictions to perception and also suggest that the brain intelligently integrates the most stable predictions available with sensory information to maintain high fidelity in perceptual decisions.
Affiliation(s)
- Neeraj Kumar
  - Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
- Pratik K Mutha
  - Centre for Cognitive Science, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
  - Department of Biological Engineering, Indian Institute of Technology Gandhinagar, Ahmedabad, Gujarat, India
8. Gallivan JP, Johnsrude IS, Flanagan JR. Planning Ahead: Object-Directed Sequential Actions Decoded from Human Frontoparietal and Occipitotemporal Networks. Cereb Cortex 2015;26:708-30. PMID: 25576538. DOI: 10.1093/cercor/bhu302.
Abstract
Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions, those that occur after an object has been grasped, are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements.
Affiliation(s)
- Jason P Gallivan
  - Centre for Neuroscience Studies, Department of Psychology, Queen's University, Kingston, ON, Canada K7L 3N6
- Ingrid S Johnsrude
  - Brain and Mind Institute, School of Communication Sciences and Disorders, University of Western Ontario, London, ON, Canada N6A 5B7
- J Randall Flanagan
  - Centre for Neuroscience Studies, Department of Psychology, Queen's University, Kingston, ON, Canada K7L 3N6
9. Brooks JX, Cullen KE. Early vestibular processing does not discriminate active from passive self-motion if there is a discrepancy between predicted and actual proprioceptive feedback. J Neurophysiol 2014;111:2465-78. PMID: 24671531. DOI: 10.1152/jn.00600.2013.
Abstract
Most of our sensory experiences are gained by active exploration of the world. While the ability to distinguish sensory inputs resulting from our own actions (termed reafference) from those produced externally (termed exafference) is well established, the neural mechanisms underlying this distinction are not fully understood. We have previously proposed that vestibular signals arising from self-generated movements are inhibited by a mechanism that compares the internal prediction of the proprioceptive consequences of self-motion to the actual feedback. Here we directly tested this proposal by recording from single neurons in monkeys during vestibular stimulation that was externally produced and/or self-generated. We show for the first time that vestibular reafference is equivalently canceled for self-generated sensory stimulation produced by activation of the neck musculature (head-on-body motion) or axial musculature (combined head and body motion), when there is no discrepancy between the predicted and actual proprioceptive consequences of self-motion. However, if a discrepancy does exist, central vestibular neurons no longer preferentially encode vestibular exafference. Specifically, when simultaneous active and passive motion resulted in activation of the same muscle proprioceptors, neurons robustly encoded the total vestibular input (i.e., responses to vestibular reafference and exafference were equally strong), rather than exafference alone. Taken together, our results show that the cancellation of vestibular reafference in early vestibular processing requires an explicit match between expected and actual proprioceptive feedback. We propose that this vital neuronal computation, necessary for both accurate sensory perception and motor control, has important implications for a variety of sensory systems that suppress self-generated signals.
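The comparator idea summarized in this abstract can be sketched as a simple gate: the predicted reafference is subtracted only when predicted and actual proprioceptive feedback agree. The toy function below is illustrative only; the signal values and the matching tolerance are arbitrary assumptions, not quantities from the study.

```python
def central_vestibular_response(vestibular_afference,
                                predicted_proprioception,
                                actual_proprioception,
                                predicted_reafference,
                                tolerance=0.1):
    """Toy comparator: reafference is subtracted only when the predicted and
    actual proprioceptive feedback match (within `tolerance`)."""
    match = abs(predicted_proprioception - actual_proprioception) <= tolerance
    if match:
        # Active movement with the expected proprioceptive feedback:
        # the self-generated (reafferent) component is cancelled.
        return vestibular_afference - predicted_reafference
    # Discrepancy: the neuron encodes the total vestibular input.
    return vestibular_afference

# Purely active head turn: reafference cancelled, response ~ exafference (0).
print(central_vestibular_response(1.0, 1.0, 1.0, 1.0))   # -> 0.0
# Combined active + passive motion driving the same proprioceptors:
# the prediction no longer matches the feedback, so the total input is encoded.
print(central_vestibular_response(1.5, 1.0, 1.4, 1.0))   # -> 1.5
```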
Affiliation(s)
- Jessica X Brooks
  - Aerospace Medical Research Unit, Department of Physiology, McGill University, Montreal, Quebec, Canada
- Kathleen E Cullen
  - Aerospace Medical Research Unit, Department of Physiology, McGill University, Montreal, Quebec, Canada
10. Caligiore D, Pezzulo G, Miall RC, Baldassarre G. The contribution of brain sub-cortical loops in the expression and acquisition of action understanding abilities. Neurosci Biobehav Rev 2013;37:2504-15. PMID: 23911926. PMCID: PMC3878436. DOI: 10.1016/j.neubiorev.2013.07.016.
Highlights
- Focusing on cortical areas is too restrictive to explain action understanding ability.
- We propose that sub-cortical areas support action understanding ability.
- Cortical and sub-cortical processes allow acquisition of action understanding ability.
Abstract
Research on action understanding in cognitive neuroscience has led to the identification of a wide “action understanding network” mainly encompassing parietal and premotor cortical areas. Within this cortical network, mirror neurons are critically involved, implementing a neural mechanism according to which, during action understanding, observed actions are reflected in the motor patterns for the same actions of the observer. We suggest that focusing only on cortical areas and processes could be too restrictive to explain important facets of action understanding regarding, for example, the influence of the observer's motor experience, the multiple levels at which an observed action can be understood, and the acquisition of action understanding ability. In this respect, we propose that aside from the cortical action understanding network, sub-cortical processes pivoting on cerebellar and basal ganglia cortical loops could crucially support both the expression and the acquisition of action understanding abilities. Within the paper we will discuss how this extended view can overcome some limitations of the “pure” cortical perspective, supporting new theoretical predictions on the brain mechanisms underlying action understanding that could be tested by future empirical investigations.
Affiliation(s)
- Daniele Caligiore
  - Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche (ISTC-CNR), Via San Martino della Battaglia 44, I-00185, Rome, Italy
11. The Cerebellum Optimizes Perceptual Predictions about External Sensory Events. Curr Biol 2013;23:930-5. PMID: 23664970. DOI: 10.1016/j.cub.2013.04.027.
12. Gallivan JP, Chapman CS, McLean DA, Flanagan JR, Culham JC. Activity patterns in the category-selective occipitotemporal cortex predict upcoming motor actions. Eur J Neurosci 2013;38:2408-24. PMID: 23581683. DOI: 10.1111/ejn.12215.
Abstract
Converging lines of evidence point to the occipitotemporal cortex (OTC) as a critical structure in visual perception. For instance, human functional magnetic resonance imaging (fMRI) has revealed a modular organisation of object-selective, face-selective, body-selective and scene-selective visual areas in the OTC, and disruptions to the processing within these regions, either in neuropsychological patients or through transcranial magnetic stimulation, can produce category-specific deficits in visual recognition. Here we show, using fMRI and pattern classification methods, that the activity in the OTC also represents how stimuli will be interacted with by the body, a level of processing more traditionally associated with the preparatory activity in sensorimotor circuits of the brain. Combining functional mapping of different OTC areas with a real object-directed delayed movement task, we found that the pre-movement spatial activity patterns across the OTC could be used to predict both the action of an upcoming hand movement (grasping vs. reaching) and the effector (left hand vs. right hand) to be used. Interestingly, we were able to extract this wide range of predictive movement information even though nearly all OTC areas showed either baseline-level or below baseline-level activity prior to action onset. Our characterisation of different OTC areas according to the features of upcoming movements that they could predict also revealed a general gradient of effector-to-action-dependent movement representations along the posterior-anterior OTC axis. These findings suggest that the ventral visual pathway, which is well known to be involved in object recognition and perceptual processing, plays a larger than previously expected role in preparing object-directed hand actions.
Affiliation(s)
- Jason P Gallivan
  - Centre for Neuroscience Studies, Department of Psychology, Queen's University, Kingston, ON, Canada
13. Dunkley BT, Freeman TC, Muthukumaraswamy SD, Singh KD. Cortical oscillatory changes in human middle temporal cortex underlying smooth pursuit eye movements. Hum Brain Mapp 2013;34:837-51. PMID: 22110021. PMCID: PMC6869956. DOI: 10.1002/hbm.21478.
Abstract
Extra-striate regions are thought to receive non-retinal signals from the pursuit system to maintain perceptual stability during eye movements. Here, we used magnetoencephalography (MEG) to study changes in oscillatory power related to smooth pursuit in extra-striate visual areas under three conditions: 'pursuit' of a small target, 'retinal motion' of a large background and 'pursuit + retinal motion' combined. All stimuli moved sinusoidally. MEG source reconstruction was performed using synthetic aperture magnetometry. Broadband alpha-beta suppression (5-25 Hz) was observed over bilateral extra-striate cortex (consistent with middle temporal cortex (MT+)) during all conditions. A functional magnetic resonance imaging study using the same experimental protocols confirmed an MT+ localisation of this extra-striate response. The alpha-beta envelope power in the 'pursuit' condition showed a hemifield-dependent eye-position signal, such that the global minimum in the alpha-beta suppression recorded in extra-striate cortex was greatest when the eyes were at maximum contralateral eccentricity. The 'retinal motion' condition produced sustained alpha-beta power decreases for the duration of stimulus motion, while the 'pursuit + retinal motion' condition revealed a double-dip 'W' shaped alpha-beta envelope profile with the peak suppression contiguous with eye position when at opposing maximum eccentricity. These results suggest that MT+ receives retinal as well as extra-retinal signals from the pursuit system as part of the process that enables the visual system to compensate for retinal motion during eye movement. We speculate that the suppression of the alpha-beta rhythm reflects either the integration of an eye position-dependent signal or one that lags the peak velocity of the sinusoidally moving target.
Affiliation(s)
- Benjamin T. Dunkley
  - Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Tom C.A. Freeman
  - Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Suresh D. Muthukumaraswamy
  - Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
- Krish D. Singh
  - Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Park Place, Cardiff, United Kingdom
14. Synofzik M, Vosgerau G, Voss M. The experience of agency: an interplay between prediction and postdiction. Front Psychol 2013;4:127. PMID: 23508565. PMCID: PMC3597983. DOI: 10.3389/fpsyg.2013.00127.
Abstract
The experience of agency, i.e., the registration that I am the initiator of my actions, is a basic and constant underpinning of our interaction with the world. Whereas several accounts have underlined predictive processes as the central mechanism (e.g., the comparator model by C. Frith), others emphasized postdictive inferences (e.g., the post-hoc inference account by D. Wegner). Based on increasing evidence that both predictive and postdictive processes contribute to the experience of agency, we here present a unifying but at the same time parsimonious approach that reconciles these accounts: predictive and postdictive processes are both integrated by the brain according to the principles of optimal cue integration. According to this framework, predictive and postdictive processes each serve as authorship cues that are continuously integrated and weighted depending on their availability and reliability in a given situation. Both sensorimotor and cognitive signals can serve as predictive cues (e.g., internal predictions based on an efference copy of the motor command or cognitive anticipations based on priming). Similarly, other sensorimotor and cognitive cues can each serve as post-hoc cues (e.g., visual feedback of the action or the affective valence of the action outcome). Integration and weighting of these cues might not only differ between contexts and individuals, but also between different subject and disease groups. For example, schizophrenia patients with delusions of influence seem to rely less on (probably imprecise) predictive motor signals of the action and more on post-hoc action cues, e.g., visual feedback and, possibly, the affective valence of the action outcome. Thus, the framework of optimal cue integration offers a promising approach that directly stimulates a wide range of experimentally testable hypotheses on agency processing in different subject groups.
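A minimal sketch of the optimal cue integration scheme proposed here is an inverse-variance weighting of a predictive and a postdictive authorship cue. The cue values and variances below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def integrate_agency_cues(cue_values, cue_variances):
    """Reliability-weighted (inverse-variance) combination of authorship cues.

    Each cue is a noisy estimate of 'authorship' (1 = I caused it, 0 = external
    cause); less reliable cues get proportionally less weight.
    """
    values = np.asarray(cue_values, dtype=float)
    weights = 1.0 / np.asarray(cue_variances, dtype=float)
    weights /= weights.sum()
    return float(weights @ values), weights

# Predictive cue (efference-copy based) and postdictive cue (visual feedback).
# With a precise prediction, the predictive cue dominates the agency judgement.
est, w = integrate_agency_cues([0.9, 0.4], [0.05, 0.20])
print(f"agency estimate {est:.2f}, weights {np.round(w, 2)}")

# If the predictive cue becomes imprecise (larger variance), the judgement
# shifts toward the postdictive, feedback-based cue.
est, w = integrate_agency_cues([0.9, 0.4], [0.50, 0.20])
print(f"agency estimate {est:.2f}, weights {np.round(w, 2)}")
```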
Affiliation(s)
- Matthis Synofzik
  - Department of Neurodegenerative Diseases, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
  - German Research Center for Neurodegenerative Diseases (DZNE), Tübingen, Germany
- Martin Voss
  - Department of Psychiatry and Psychotherapy, Charité University Hospital and St. Hedwig Hospital, Berlin, Germany
15. Wilke C, Synofzik M, Lindner A. Sensorimotor recalibration depends on attribution of sensory prediction errors to internal causes. PLoS One 2013;8:e54925. PMID: 23359818. PMCID: PMC3554678. DOI: 10.1371/journal.pone.0054925.
Abstract
Sensorimotor learning critically depends on error signals. Learning usually tries to minimise these error signals to guarantee optimal performance. Errors can, however, have both internal causes, resulting from one’s sensorimotor system, and external causes, resulting from external disturbances. Does learning take into account the perceived cause of error information? Here, we investigated the recalibration of internal predictions about the sensory consequences of one’s actions. Since these predictions underlie the distinction of self- and externally produced sensory events, we assumed them to be recalibrated only by prediction errors attributed to internal causes. When subjects were confronted with experimentally induced visual prediction errors about their pointing movements in virtual reality, they recalibrated the predicted visual consequences of their movements. Recalibration was not proportional to the externally generated prediction error, but correlated with the error component which subjects attributed to internal causes. We also revealed adaptation in subjects’ motor performance which reflected their recalibrated sensory predictions. Thus, causal attribution of error information is essential for sensorimotor learning.
Affiliation(s)
- Carlo Wilke
  - Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Matthis Synofzik
  - Department of Neurodegeneration, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
  - German Centre for Neurodegenerative Diseases, University of Tübingen, Tübingen, Germany
- Axel Lindner
  - Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
16. Kranick SM, Hallett M. Neurology of volition. Exp Brain Res 2013;229:313-27. PMID: 23329204. DOI: 10.1007/s00221-013-3399-2.
Abstract
Neurological disorders of volition may be characterized by deficits in willing and/or agency. When we move our bodies through space, it is the sense that we intended to move (willing) and that our actions were a consequence of this intention (self-agency) that gives us the sense of voluntariness and a general feeling of being "in control." While it is possible to have movements that share executive machinery ordinarily used for voluntary movement but lack a sense of voluntariness, such as psychogenic movement disorders, it is also possible to claim volition for presumed involuntary movements (early chorea) or even when no movement is produced (anosognosia). The study of such patients should enlighten traditional models of how the percepts of volition are generated in the brain with regard to movement. We discuss volition and its components as multi-leveled processes with feedforward and feedback information flow, and dependence on prior expectations as well as external and internal cues.
Affiliation(s)
- Sarah M Kranick
  - Human Motor Control Section, Medical Neurology Branch, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Building 10/6-5700, 10 Center Drive, MSC 1430, Bethesda, MD 20892-1430, USA
17. Prsa M, Gale S, Blanke O. Self-motion leads to mandatory cue fusion across sensory modalities. J Neurophysiol 2012;108:2282-91. DOI: 10.1152/jn.00439.2012.
Abstract
When perceiving properties of the world, we effortlessly combine multiple sensory cues into optimal estimates. Estimates derived from the individual cues are generally retained once the multisensory estimate is produced and discarded only if the cues stem from the same sensory modality (i.e., mandatory fusion). Does multisensory integration differ in that respect when the object of perception is one's own body, rather than an external variable? We quantified how humans combine visual and vestibular information for perceiving own-body rotations and specifically tested whether such idiothetic cues are subjected to mandatory fusion. Participants made extensive size comparisons between successive whole body rotations using only visual, only vestibular, and both senses together. Probabilistic descriptions of the subjects' perceptual estimates were compared with a Bayes-optimal integration model. Similarity between model predictions and experimental data echoed a statistically optimal mechanism of multisensory integration. Most importantly, size discrimination data for rotations composed of both stimuli was best accounted for by a model in which only the bimodal estimator is accessible for perceptual judgments as opposed to an independent or additive use of all three estimators (visual, vestibular, and bimodal). Indeed, subjects' thresholds for detecting two multisensory rotations as different from one another were, in pertinent cases, larger than those measured using either single-cue estimate alone. Rotations different in terms of the individual visual and vestibular inputs but quasi-identical in terms of the integrated bimodal estimate became perceptual metamers. This reveals an exceptional case of mandatory fusion of cues stemming from two different sensory modalities.
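The mandatory-fusion argument can be illustrated with the standard maximum-likelihood integration formula: two rotations that differ in their visual and vestibular components but share the same fused estimate become indistinguishable ("metamers") if only the bimodal estimate is accessible. The noise values and rotation sizes below are illustrative assumptions, not the study's data.

```python
import numpy as np

def fused_estimate(visual_deg, vestib_deg, sigma_vis, sigma_vest):
    """Maximum-likelihood fusion of visual and vestibular rotation estimates."""
    w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
    mean = w_vis * visual_deg + (1 - w_vis) * vestib_deg
    sigma = np.sqrt((sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2))
    return mean, sigma

sigma_vis, sigma_vest = 4.0, 6.0     # illustrative single-cue noise (deg)

# Two whole-body rotations that differ in their visual and vestibular
# components but yield nearly identical fused estimates ("metamers").
rot_a = fused_estimate(20.0, 29.0, sigma_vis, sigma_vest)
rot_b = fused_estimate(24.6, 18.7, sigma_vis, sigma_vest)
print(f"rotation A fused: {rot_a[0]:.1f} deg, rotation B fused: {rot_b[0]:.1f} deg")

# Under mandatory fusion only the bimodal estimate is available, so A and B
# become hard to tell apart even though each single cue clearly differs.
print(f"bimodal sigma: {rot_a[1]:.2f} deg (smaller than either single-cue sigma)")
```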
Affiliation(s)
- Mario Prsa
  - Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Steven Gale
  - Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Olaf Blanke
  - Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  - Department of Neurology, University Hospital Geneva, Geneva, Switzerland
18. Unravelling cerebellar pathways with high temporal precision targeting motor and extensive sensory and parietal networks. Nat Commun 2012;3:924. PMID: 22735452. DOI: 10.1038/ncomms1912.
Abstract
Increasing evidence has implicated the cerebellum in providing forward models of motor plants predicting the sensory consequences of actions. Assuming that cerebellar input to the cerebral cortex contributes to cerebro-cortical processing by adding forward model signals, we would expect to find projections emphasising motor and sensory cortical areas. However, this expectation is only partially met by studies of cerebello-cerebral connections. Here we show that by electrically stimulating the cerebellar output and imaging responses with functional magnetic resonance imaging, evoked blood oxygen level-dependent activity is observed not only in the classical cerebellar projection target, the primary motor cortex, but also in a number of additional areas in insular, parietal and occipital cortex, including sensory cortical representations. Further probing of the responses reveals a projection system that has been optimized to mediate fast and temporally precise information. In conclusion, both the topography of the stimulation effects and its emphasis on temporal precision are in full accordance with the concept of cerebellar forward model information modulating cerebro-cortical processing.
19. Davies JR, Freeman TCA. Simultaneous adaptation to non-collinear retinal motion and smooth pursuit eye movement. Vision Res 2011;51:1637-47. PMID: 21605588. DOI: 10.1016/j.visres.2011.05.004.
Abstract
Simultaneously adapting to retinal motion and non-collinear pursuit eye movement produces a motion aftereffect (MAE) that moves in a different direction to either of the individual adapting motions. Mack, Hill and Kahn (1989, Perception, 18, 649-655) suggested that the MAE was determined by the perceived motion experienced during adaptation. We tested the perceived-motion hypothesis by having observers report perceived direction during simultaneous adaptation. For both central and peripheral retinal motion adaptation, perceived direction did not predict the direction of the subsequent MAE. To explain the findings we propose that the MAE is based on the vector sum of two components, one corresponding to a retinal MAE opposite to the adapting retinal motion and the other corresponding to an extra-retinal MAE opposite to the eye movement. A vector model of this component hypothesis showed that the MAE directions reported in our experiments were the result of an extra-retinal component that was substantially larger in magnitude than the retinal component when the adapting retinal motion was positioned centrally. However, when retinal adaptation was peripheral, the model suggested the magnitude of the components should be about the same. These predictions were tested in a final experiment that used a magnitude estimation technique. Contrary to the predictions, the results showed no interaction between type of adaptation (retinal or pursuit) and the location of adapting retinal motion. Possible reasons for the failure of the component hypothesis to fully explain the data are discussed.
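The component (vector-sum) hypothesis tested here lends itself to a short worked example: the MAE direction is computed as the sum of a retinal component opposite the adapting retinal motion and an extra-retinal component opposite the pursuit direction. The gains in the sketch below are arbitrary illustrations, not fitted values from the paper.

```python
import numpy as np

def mae_direction(retinal_dir_deg, pursuit_dir_deg, k_retinal, k_extra):
    """Component (vector-sum) account of the motion aftereffect (MAE).

    The MAE is the sum of a retinal component opposite the adapting retinal
    motion and an extra-retinal component opposite the adapting pursuit,
    with gains k_retinal and k_extra.
    """
    def unit(angle_deg):
        a = np.deg2rad(angle_deg)
        return np.array([np.cos(a), np.sin(a)])

    vec = k_retinal * unit(retinal_dir_deg + 180) + k_extra * unit(pursuit_dir_deg + 180)
    return np.rad2deg(np.arctan2(vec[1], vec[0])) % 360

# Adapting retinal motion upward (90 deg), pursuit rightward (0 deg).
# A larger extra-retinal gain pulls the MAE closer to leftward (180 deg),
# i.e. opposite the eye movement.
print(f"{mae_direction(90, 0, k_retinal=1.0, k_extra=1.0):.0f} deg")
print(f"{mae_direction(90, 0, k_retinal=1.0, k_extra=2.0):.0f} deg")
```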
Affiliation(s)
- J Rhys Davies
  - School of Psychology, Tower Building, Park Place, Cardiff University, CF10 3AT, UK
20. Wilke C, Synofzik M, Lindner A. The valence of action outcomes modulates the perception of one's actions. Conscious Cogn 2011;21:18-29. PMID: 21757377. DOI: 10.1016/j.concog.2011.06.004.
Abstract
When interacting with the world, we need to distinguish whether sensory information results from external events or from our own actions. The nervous system most likely draws this distinction by comparing the actual sensory input with an internal prediction about the sensory consequences of one's actions. However, interacting with the world also requires an evaluation of the outcomes of self-action, e.g. in terms of their affective valence. Here we show that subjects' perceived pointing direction does not only depend on predictive and sensory signals related to the performed action itself, but also on the affective valence of the action outcome: subjects perceived their movements as directed towards positive and away from negative outcomes. Our findings suggest that the non-conceptual perception of the sensory consequences of self-action builds on both sensorimotor information related directly to self-action and a post hoc evaluation of the affective action outcome.
Affiliation(s)
- Carlo Wilke
  - Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
21. Jain A, Backus BT. Experience affects the use of ego-motion signals during 3D shape perception. J Vis 2010;10(14):30. PMID: 21191132. DOI: 10.1167/10.14.30.
Abstract
Experience has long-term effects on perceptual appearance (Q. Haijiang, J. A. Saunders, R. W. Stone, & B. T. Backus, 2006). We asked whether experience affects the appearance of structure-from-motion stimuli when the optic flow is caused by observer ego-motion. Optic flow is an ambiguous depth cue: a rotating object and its oppositely rotating, depth-inverted dual generate similar flow. However, the visual system exploits ego-motion signals to prefer the percept of an object that is stationary over one that rotates (M. Wexler, F. Panerai, I. Lamouret, & J. Droulez, 2001). We replicated this finding and asked whether this preference for stationarity, the "stationarity prior," is modulated by experience. During training, two groups of observers were exposed to objects with identical flow, but that were either stationary or moving as determined by other cues. The training caused identical test stimuli to be seen preferentially as stationary or moving by the two groups, respectively. We then asked whether different priors can exist independently at different locations in the visual field. Observers were trained to see objects either as stationary or as moving at two different locations. Observers' stationarity bias at the two respective locations was modulated in the directions consistent with training. Thus, the utilization of extraretinal ego-motion signals for disambiguating optic flow signals can be updated as the result of experience, consistent with the updating of a Bayesian prior for stationarity.
Affiliation(s)
- Anshul Jain
  - SUNY Eye Institute and Graduate Center for Vision Research, SUNY College of Optometry, New York, NY 10036, USA
22. Lee B, Pesaran B, Andersen RA. Area MSTd neurons encode visual stimuli in eye coordinates during fixation and pursuit. J Neurophysiol 2010;105:60-8. PMID: 20980545. DOI: 10.1152/jn.00495.2009.
Abstract
Visual signals generated by self-motion are initially represented in retinal coordinates in the early parts of the visual system. Because this information can be used by an observer to navigate through the environment, it must be transformed into body or world coordinates at later stations of the visual-motor pathway. Neurons in the dorsal aspect of the medial superior temporal area (MSTd) are tuned to the focus of expansion (FOE) of the visual image. We performed experiments to determine whether focus tuning curves in area MSTd are represented in eye coordinates or in screen coordinates (which could be head, body, or world-centered in the head-fixed paradigm used). Because MSTd neurons adjust their FOE tuning curves during pursuit eye movements to compensate for changes in pursuit and translation speed that distort the visual image, the coordinate frame was determined while the eyes were stationary (fixed gaze or simulated pursuit conditions) and while the eyes were moving (real pursuit condition). We recorded extracellular responses from 80 MSTd neurons in two rhesus monkeys (Macaca mulatta). We found that the FOE tuning curves of the overwhelming majority of neurons were aligned in an eye-centered coordinate frame in each of the experimental conditions [fixed gaze: 77/80 (96%); real pursuit: 77/80 (96%); simulated pursuit 74/80 (93%); t-test, P < 0.05]. These results indicate that MSTd neurons represent heading in an eye-centered coordinate frame both when the eyes are stationary and when they are moving. We also found that area MSTd demonstrates significant eye position gain modulation of response fields much like its posterior parietal neighbors.
Affiliation(s)
- Brian Lee
  - Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA
23. O'Connor E, Margrain TH, Freeman TCA. Age, eye movement and motion discrimination. Vision Res 2010;50:2588-99. PMID: 20732343. DOI: 10.1016/j.visres.2010.08.015.
Abstract
Age is known to affect sensitivity to retinal motion. However, little is known about how age might affect sensitivity to motion during pursuit. We therefore investigated direction discrimination and speed discrimination when moving stimuli were either fixated or pursued. Our experiments showed: (1) age influences direction discrimination at slow speeds but has little effect on speed discrimination; (2) the faster eye movements made in the pursuit conditions produced poorer direction discrimination at slower speeds, and poorer speed discrimination at all speeds; (3) regardless of eye-movement condition, observers always combined retinal and extra-retinal motion signals to make their judgements. Our results support the idea that performance in these tasks is limited by the internal noise associated with retinal and extra-retinal motion signals, both of which feed into a stage responsible for estimating head-centred motion. Imprecise eye movement, or later noise introduced at the combination stage, could not explain the results.
Affiliation(s)
- Emer O'Connor
  - School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff CF10 3YT, UK
24. Freeman TCA, Champion RA, Warren PA. A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement. Curr Biol 2010;20:757-62. PMID: 20399096. PMCID: PMC2861164. DOI: 10.1016/j.cub.2010.02.059.
Abstract
During smooth pursuit eye movement, observers often misperceive velocity. Pursued stimuli appear slower (Aubert-Fleischl phenomenon [1, 2]), stationary objects appear to move (Filehne illusion [3]), the perceived direction of moving objects is distorted (trajectory misperception [4]), and self-motion veers away from its true path (e.g., the slalom illusion [5]). Each illusion demonstrates that eye speed is underestimated with respect to image speed, a finding that has been taken as evidence of early sensory signals that differ in accuracy [4, 6-11]. Here we present an alternative Bayesian account, based on the idea that perceptual estimates are increasingly influenced by prior expectations as signals become more uncertain [12-15]. We show that the speeds of pursued stimuli are more difficult to discriminate than fixated stimuli. Observers are therefore less certain about motion signals encoding the speed of pursued stimuli, a finding we use to quantify the Aubert-Fleischl phenomenon based on the assumption that the prior for motion is centered on zero [16-20]. In doing so, we reveal an important property currently overlooked by Bayesian models of motion perception. Two Bayes estimates are needed at a relatively early stage in processing, one for pursued targets and one for image motion.
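The core of this Bayesian account can be written in a few lines: with a Gaussian likelihood and a zero-centered ("slow motion") prior, the noisier pursuit-related signal is shrunk more strongly toward zero, so a pursued stimulus appears slower than the same stimulus viewed during fixation. The sketch below uses invented noise and prior widths, not the parameters estimated in the paper.

```python
def bayes_speed_estimate(true_speed, sigma_like, sigma_prior):
    """MAP speed estimate with a Gaussian likelihood and a zero-centred
    ('slow motion') Gaussian prior: the noisier the measurement, the more the
    estimate is pulled toward zero."""
    shrink = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
    return shrink * true_speed

speed = 10.0                 # deg/s, physically identical stimulus
sigma_prior = 8.0            # width of the slow-motion prior (assumption)
sigma_fixation = 1.0         # retinal motion signal during fixation is precise
sigma_pursuit = 4.0          # pursuit-related motion signal is less certain

v_fix = bayes_speed_estimate(speed, sigma_fixation, sigma_prior)
v_pur = bayes_speed_estimate(speed, sigma_pursuit, sigma_prior)
print(f"perceived speed, fixation: {v_fix:.1f} deg/s")
print(f"perceived speed, pursuit:  {v_pur:.1f} deg/s  (Aubert-Fleischl slowing)")
```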
Affiliation(s)
- Tom C A Freeman
  - School of Psychology, Cardiff University, Park Place, Cardiff CF10 3AT, UK
25.
Abstract
The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions.
Affiliation(s)
- Richard A Andersen
  - Division of Biology, California Institute of Technology, Pasadena, California 91125, USA
26. Synofzik M, Thier P, Leube DT, Schlotterbeck P, Lindner A. Misattributions of agency in schizophrenia are based on imprecise predictions about the sensory consequences of one's actions. Brain 2010;133:262-71. PMID: 19995870. DOI: 10.1093/brain/awp291.
Abstract
The experience of being the initiator of one's own actions seems to be infallible at first glance. Misattributions of agency of one's actions in certain neurological or psychiatric patients reveal, however, that the central mechanisms underlying this experience can go astray. In particular, delusions of influence in schizophrenia might result from deficits in an inferential mechanism that allows distinguishing whether or not a sensory event has been self-produced. This distinction is made by comparing the actual sensory information with the consequences of one's action as predicted on the basis of internal action-related signals such as efference copies. If this internal prediction matches the actual sensory event, an action is registered as self-caused; in case of a mismatch, the difference is interpreted as externally produced. We tested the hypothesis that delusions of influence are based on deficits in this comparator mechanism. In particular, we tested whether patients' impairments in action attribution tasks are caused by imprecise predictions about the sensory consequences of self-action. Schizophrenia patients and matched controls performed pointing movements in a virtual-reality setup in which the visual consequences of movements could be rotated with respect to the actual movement. Experiment 1 revealed higher thresholds for detecting experimental feedback rotations in the patient group. The size of these thresholds correlated positively with patients' delusions of influence. Experiment 2 required subjects to estimate their direction of pointing visually in the presence of constantly rotated visual feedback. When compared to controls, patients' estimates were significantly better adapted to the feedback rotation and exhibited an increased variability. In interleaved trials without visual feedback, i.e. when pointing estimates relied solely on internal action-related signals, this variability was likewise increased and correlated with both delusions of influence and the size of patients' detection thresholds as assessed in the first experiment. These findings support the notion that delusions of influence are based on imprecise internal predictions about the sensory consequences of one's actions. Moreover, we suggest that such imprecise predictions prompt patients to rely more strongly on (and thus adapt to) external agency cues, in this case vision. Such context-dependent weighted integration of imprecise internal predictions and alternative agency cues might thus reflect the common basis for the various misattributions of agency in schizophrenia patients.
Affiliation(s)
- Matthis Synofzik
- Department of Neurodegeneration, Hertie-Institute for Clinical Brain Research, Hoppe-Seyler-Strasse 3, 72076 Tübingen, Germany.

27
Andersen RA, Cui H. Intention, action planning, and decision making in parietal-frontal circuits. Neuron 2009; 63:568-83. [PMID: 19755101 DOI: 10.1016/j.neuron.2009.08.028] [Citation(s) in RCA: 464] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2009] [Revised: 08/26/2009] [Accepted: 08/26/2009] [Indexed: 10/20/2022]
Abstract
The posterior parietal cortex and frontal cortical areas to which it connects are responsible for sensorimotor transformations. This review covers new research on four components of this transformation process: planning, decision making, forward state estimation, and relative-coordinate representations. These sensorimotor functions can be harnessed for neural prosthetic operations by decoding intended goals (planning) and trajectories (forward state estimation) of movements as well as higher cortical functions related to decision making and potentially the coordination of multiple body parts (relative-coordinate representations).
Affiliation(s)
- Richard A Andersen
- Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA.

28
Ilg UJ, Thier P. The neural basis of smooth pursuit eye movements in the rhesus monkey brain. Brain Cogn 2008; 68:229-40. [DOI: 10.1016/j.bandc.2008.08.014] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/26/2008] [Indexed: 12/28/2022]
29
Synofzik M, Lindner A, Thier P. The cerebellum updates predictions about the visual consequences of one's behavior. Curr Biol 2008; 18:814-8. [PMID: 18514520 DOI: 10.1016/j.cub.2008.04.071] [Citation(s) in RCA: 159] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2008] [Revised: 04/23/2008] [Accepted: 04/23/2008] [Indexed: 11/18/2022]
Abstract
Each action has sensory consequences that need to be distinguished from sensations arising from the environment. This is accomplished by comparing internal predictions about these consequences with the actual afference, thereby isolating the afferent component that is self-produced. Because the sensory consequences of actions vary as a result of changes in the effector's efficacy, internal predictions need to be updated continuously and on a short time scale. Here, we tested the hypothesis that this updating of predictions about the sensory consequences of actions is mediated by the cerebellum, a notion that parallels the cerebellum's role in motor learning. Patients with cerebellar lesions and their matched controls were equally able to detect experimental modifications of visual feedback about their pointing movements. When such feedback was constantly rotated, both groups instantly attributed the visual feedback to their own actions. However, in interleaved trials without actual feedback, patients no longer accounted for this feedback rotation, neither perceptually nor with respect to motor performance. Both deficits can be explained by an impaired updating of internal predictions about the sensory consequences of actions caused by cerebellar pathology. Thus, the cerebellum guarantees both precise performance and veridical perceptual interpretation of actions.
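To make the updating idea concrete, here is a toy delta-rule sketch (my illustration with invented learning rates, not the paper's model): the predicted visual direction of a pointing movement is nudged toward the rotated feedback on every feedback trial, so that in interleaved no-feedback probes an intact learner has internalized the rotation while a "lesioned" learner with a near-zero update rate has not.

    # Toy trial-by-trial update of the predicted visual consequence of pointing.
    def adapt(feedback_rotation_deg, learning_rate, n_trials=60):
        predicted_rotation = 0.0            # what the system expects to see
        for _ in range(n_trials):           # feedback trials with a constant rotation
            error = feedback_rotation_deg - predicted_rotation
            predicted_rotation += learning_rate * error
        return predicted_rotation           # what a no-feedback probe would reveal

    print("intact updating :", round(adapt(30.0, learning_rate=0.1), 1), "deg")
    print("cerebellar-like :", round(adapt(30.0, learning_rate=0.001), 1), "deg")
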
Affiliation(s)
- Matthis Synofzik
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, 72076 Tübingen, Germany

30
Dicke PW, Chakraborty S, Thier P. Neuronal correlates of perceptual stability during eye movements. Eur J Neurosci 2008; 27:991-1002. [PMID: 18333969 DOI: 10.1111/j.1460-9568.2008.06054.x] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
We are usually unaware of retinal image motion resulting from our own movement. For instance, during slow-tracking eye movements the world around us remains perceptually stable despite the retinal image slip induced by the eye movement. It is commonly held that this example of perceptual invariance is achieved by subtracting an internal reference signal, reflecting the eye movement, from the retinal motion signal. If the two cancel each other, visual objects that do not move will also be perceived as non-moving. If, however, the reference signal is too small or too large, an illusory eye movement-induced motion of the external world, the Filehne illusion, will be perceived. We have exploited our ability to manipulate the size of the reference signal in an attempt to identify neurons in the visual cortex of monkeys that are influenced by the percept of self-induced visual motion, or by the reference signal, rather than by the retinal motion signal. We report here that such 'percept-related' neurons can already be found in the primary visual cortex, although they are few in number. They become more frequent in the middle temporal and medial superior temporal areas in the superior temporal sulcus, and comprise almost 50% of all neurons in the visual posterior sylvian area (VPS) in the posterior part of the lateral sulcus. In summary, our findings suggest that our ability to perceive a visual world that is stable despite self-motion is based on a neuronal network that culminates in the VPS, located in the lateral sulcus below the classical dorsal stream of visual processing.
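The subtraction the authors refer to can be written out in a few lines (a sketch with assumed gains, not the study's quantitative model): perceived object motion in the world is retinal image motion minus an internal reference signal that estimates the eye-movement-induced slip; when the reference signal under- or overestimates the eye velocity, a stationary background appears to move (the Filehne illusion).

    def perceived_world_motion(object_velocity, eye_velocity, reference_gain=1.0):
        """Perceived object velocity in the world during smooth tracking.

        retinal_slip     = object motion relative to the moving eye
        reference_signal = internal estimate of the eye-induced slip
        """
        retinal_slip = object_velocity - eye_velocity
        reference_signal = -reference_gain * eye_velocity
        return retinal_slip - reference_signal

    # Pursuing at 10 deg/s across a stationary background (object_velocity = 0):
    for gain in (1.0, 0.8, 1.2):
        v = perceived_world_motion(0.0, 10.0, gain)
        print(f"reference gain {gain}: background appears to move at {v:+.1f} deg/s")
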
Affiliation(s)
- Peter W Dicke
- Center for Neurology, Hertie Institute for Clinical Brain Research, Department of Cognitive Neurology, University of Tuebingen, Otfried-Mueller-Str. 27, 72076 Tuebingen, Germany.

31
Trenner MU, Fahle M, Fasold O, Heekeren HR, Villringer A, Wenzel R. Human cortical areas involved in sustaining perceptual stability during smooth pursuit eye movements. Hum Brain Mapp 2008; 29:300-11. [PMID: 17415782 PMCID: PMC6870627 DOI: 10.1002/hbm.20387] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Because both eye movements and object movements induce image motion on the retina, eye movements must be compensated for to allow a coherent and stable perception of our surroundings. The inferential theory of perception postulates that retinal image motion is compared with an internal reference signal related to eye movements. This mechanism allows the potential sources of retinal image motion to be distinguished. Referring to this theory, we investigated referential calculation during smooth pursuit eye movements (SPEM) in humans using event-related functional magnetic resonance imaging (fMRI). The blood oxygenation level dependent (BOLD) response related to SPEM in front of a stable background was measured for different parametric steps of preceding motion stimuli, and hence for different assumed states of the referential system. To achieve optimally accurate anatomy and more detectable fMRI signal changes in the group analysis, we applied cortex-based statistics both to all brain volumes and to defined regions of interest. Our analysis revealed that the activity in a temporal region as well as in the posterior parietal cortex (PPC) depended on the velocity of the preceding stimuli. Previous single-cell recordings in monkeys demonstrated that the visual posterior sylvian area (VPS) is relevant for perceptual stability. The activation apparent in our study may thus represent a human analogue of this area. The PPC is known to be strongly involved in goal-directed eye movements. In conclusion, temporal and parietal cortical areas may be involved in referential calculation and thereby in sustaining visual perceptual stability during eye movements.
Affiliation(s)
- Maja U Trenner
- Berlin NeuroImaging Center, Neurologische Klinik und Poliklinik, Charité Universitätsmedizin Berlin, Berlin, Germany.

32
Abstract
During goal-directed movements, primates are able to rapidly and accurately control an online trajectory despite substantial delay times incurred in the sensorimotor control loop. To address the problem of large delays, it has been proposed that the brain uses an internal forward model of the arm to estimate current and upcoming states of a movement, which are more useful for rapid online control. To study online control mechanisms in the posterior parietal cortex (PPC), we recorded from single neurons while monkeys performed a joystick task. Neurons encoded the static target direction and the dynamic movement angle of the cursor. The dynamic encoding properties of many movement angle neurons reflected a forward estimate of the state of the cursor that is neither directly available from passive sensory feedback nor compatible with outgoing motor commands and is consistent with PPC serving as a forward model for online sensorimotor control. In addition, we found that the space-time tuning functions of these neurons were largely separable in the angle-time plane, suggesting that they mostly encode straight and approximately instantaneous trajectories.
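A minimal sketch of the forward-estimation idea invoked here (illustrative only; the dynamics, step size, and delay are invented, not the recorded animals' model): sensory feedback about the cursor arrives with a delay, so the controller propagates its last delayed observation forward using its own recent motor commands, yielding a usable estimate of the current cursor state.

    import numpy as np

    DT, DELAY_STEPS = 0.01, 10          # 10 ms steps, 100 ms sensory delay (assumed)

    def forward_estimate(delayed_position, recent_commands):
        """Propagate a delayed observation to 'now' by replaying the stored
        motor commands (velocity commands) issued since that observation."""
        return delayed_position + DT * np.sum(recent_commands, axis=0)

    rng = np.random.default_rng(2)
    true_pos = np.zeros(2)
    history = []                        # commands not yet reflected in feedback
    for step in range(50):
        command = rng.normal(0.0, 5.0, 2)          # some joystick command
        true_pos += DT * command                   # cursor actually moves
        history.append(command)
        if len(history) > DELAY_STEPS:
            history.pop(0)

    delayed_obs = true_pos - DT * np.sum(history, axis=0)   # what vision reports now
    print("delayed feedback :", np.round(delayed_obs, 3))
    print("forward estimate :", np.round(forward_estimate(delayed_obs, history), 3))
    print("true position    :", np.round(true_pos, 3))
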
33
Synofzik M, Vosgerau G, Newen A. I move, therefore I am: a new theoretical framework to investigate agency and ownership. Conscious Cogn 2008; 17:411-24. [PMID: 18411059 DOI: 10.1016/j.concog.2008.03.008] [Citation(s) in RCA: 145] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2008] [Accepted: 03/04/2008] [Indexed: 10/22/2022]
Abstract
The neurocognitive structure of the acting self has recently been widely studied, yet it remains perplexing and an often confounded issue in cognitive neuroscience, psychopathology and philosophy. We provide a new systematic account of two of its main features, the sense of agency and the sense of ownership, demonstrating that although both features appear phenomenally uniform, each is in fact a complex crossmodal phenomenon comprising largely heterogeneous functional and (self-)representational levels. These levels can be arranged within a gradually evolving, onto- and phylogenetically plausible framework which proceeds from basic non-conceptual sensorimotor processes to more complex conceptual and meta-representational processes of agency and ownership, respectively. In particular, three fundamental levels of agency and ownership processing have to be distinguished: the levels of feeling, thinking and social interaction. This naturalistic account will not only allow us to "ground the self in action", but also provide an empirically testable taxonomy for cognitive neuroscience and a new tool for disentangling agency and ownership disturbances in psychopathology (e.g. alien hand, anarchic hand, anosognosia for one's own hemiparesis).
Affiliation(s)
- Matthis Synofzik
- Centre for Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler Strasse 3, 72076 Tübingen, Germany.

34
Simultaneous adaptation of retinal and extra-retinal motion signals. Vision Res 2007; 47:3373-84. [PMID: 18006036 DOI: 10.1016/j.visres.2007.10.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2006] [Revised: 09/20/2007] [Accepted: 10/03/2007] [Indexed: 11/23/2022]
Abstract
A number of models of motion perception include estimates of eye velocity to help compensate for the incidental retinal motion produced by smooth pursuit. The 'classical' model uses extra-retinal motor command signals to obtain the estimate. More recent 'reference-signal' models use retinal motion information to enhance the extra-retinal signal. The consequence of simultaneously adapting to pursuit and retinal motion is thought to favour the reference-signal model, largely because the perception of motion during pursuit ('perceived stability') changes despite the absence of a standard motion aftereffect. The current experiments investigated whether the classical model could also account for these findings. Experiment 1 replicated the changes to perceived stability and then showed how simultaneous motion adaptation changes perceived retinal speed (a velocity aftereffect). Contrary to claims made by proponents of the reference-signal model, adapting simultaneously to pursuit and retinal motion therefore alters the retinal motion inputs to the stability computation. Experiment 2 tested the idea that simultaneous motion adaptation sets up a competitive interaction between two types of velocity aftereffect, one retinal and one extra-retinal. The results showed that pursuit adaptation by itself drove perceived stability in one direction and that adding adapting retinal motion drove perceived stability in the other. Moreover, perceived stability changed in conditions that contained no mismatch between adapting pursuit and adapting retinal motion, contrary to the reference-signal account. Experiment 3 investigated whether the effects of simultaneous motion adaptation were directionally tuned. Surprisingly no tuning was found, but this was true for both perceived stability and retinal velocity aftereffect. The three experiments suggest that simultaneous motion adaptation alters perceived stability based on separable changes to retinal and extra-retinal inputs. Possible mechanisms underlying the extra-retinal velocity aftereffect are discussed.
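The arithmetic behind "perceived stability" in such experiments can be sketched as follows (assumed gains and speeds, not the authors' fits): head-centred motion is estimated as retinal motion plus an extra-retinal eye-velocity signal, so adapting either input, modelled here as a gain change, shifts the retinal speed at which the world appears stationary, and in opposite directions for the two inputs.

    def stationary_retinal_speed(eye_speed, retinal_gain, extra_retinal_gain):
        """Retinal speed r at which the perceived head-centred motion
        retinal_gain * r + extra_retinal_gain * e  equals zero."""
        return -extra_retinal_gain * eye_speed / retinal_gain

    eye_speed = 10.0                                    # deg/s pursuit
    print("baseline                 :", stationary_retinal_speed(eye_speed, 1.0, 1.0))
    print("adapted extra-retinal in :", stationary_retinal_speed(eye_speed, 1.0, 0.8))
    print("adapted retinal input    :", stationary_retinal_speed(eye_speed, 0.8, 1.0))
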
35
Synofzik M, Vosgerau G, Newen A. Beyond the comparator model: a multifactorial two-step account of agency. Conscious Cogn 2007; 17:219-39. [PMID: 17482480 DOI: 10.1016/j.concog.2007.03.010] [Citation(s) in RCA: 502] [Impact Index Per Article: 27.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2006] [Revised: 03/21/2007] [Accepted: 03/22/2007] [Indexed: 11/17/2022]
Abstract
There is an increasing amount of empirical work investigating the sense of agency, i.e. the registration that we are the initiators of our own actions. Many studies try to relate the sense of agency to an internal feed-forward mechanism, called the "comparator model". In this paper, we draw a sharp distinction between a non-conceptual level of feeling of agency and a conceptual level of judgement of agency. By analyzing recent empirical studies, we show that the comparator model is able to explain neither. Rather, we argue for a two-step account: a multifactorial weighting process of different agency indicators accounts for the feeling of agency, which is, in a second step, further processed by conceptual modules to form an attribution judgement. This new framework is then applied to disruptions of agency in schizophrenia, for which the comparator model also fails. Two further extensions are discussed: we show that the comparator model cannot be extended to account for the sense of ownership (which also has to be differentiated into a feeling and a judgement of ownership), nor for the sense of agency for thoughts. Our framework, however, is able to provide a unified account of the sense of agency for both actions and thoughts.
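As a concrete and deliberately simplified reading of the two-step account, the sketch below combines several agency cues into a graded "feeling of agency" and then thresholds it into a categorical attribution judgement; the cue names, weights, and threshold are all assumptions made for illustration.

    # Hypothetical agency cues in [0, 1]: higher = more consistent with self-causation.
    def feeling_of_agency(cues, weights):
        """Step 1: multifactorial weighting of non-conceptual agency indicators."""
        total = sum(weights.values())
        return sum(weights[name] * cues[name] for name in weights) / total

    def judgement_of_agency(feeling, threshold=0.5):
        """Step 2: conceptual, all-or-none attribution built on the feeling."""
        return "self-caused" if feeling >= threshold else "externally caused"

    cues = {"efference_match": 0.9, "proprioception": 0.8, "context": 0.3}
    weights = {"efference_match": 2.0, "proprioception": 1.0, "context": 1.0}

    feeling = feeling_of_agency(cues, weights)
    print(round(feeling, 2), "->", judgement_of_agency(feeling))
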
Affiliation(s)
- Matthis Synofzik
- Department of Cognitive Neurology, Hertie Institute of Clinical Brain Research, University of Tübingen, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany.

36
Angelaki DE, Hess BJM. Self-motion-induced eye movements: effects on visual acuity and navigation. Nat Rev Neurosci 2007; 6:966-76. [PMID: 16340956 DOI: 10.1038/nrn1804] [Citation(s) in RCA: 73] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Self-motion disturbs the stability of retinal images by inducing optic flow. Objects of interest need to be fixated or tracked, yet these eye movements can infringe on the experienced retinal flow that is important for visual navigation. Separating the components of optic flow caused by an eye movement from those due to self-motion, as well as using optic flow for visual navigation while simultaneously maintaining visual acuity on near targets, represent key challenges for the visual system. Here we summarize recent advances in our understanding of how the visuomotor and vestibulomotor systems function and interact, given the complex task of compensating for instabilities of retinal images, which typically vary as a function of retinal location and differ for each eye.
Affiliation(s)
- Dora E Angelaki
- Department of Neurobiology, Washington University School of Medicine, 660 South Euclid Avenue, St. Louis, Missouri 63110, USA.

37
Lindner A, Haarmeier T, Erb M, Grodd W, Thier P. Cerebrocerebellar circuits for the perceptual cancellation of eye-movement-induced retinal image motion. J Cogn Neurosci 2006; 18:1899-912. [PMID: 17069480 DOI: 10.1162/jocn.2006.18.11.1899] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Despite smooth pursuit eye movements, we are unaware of resultant retinal image motion. This example of perceptual invariance is achieved by comparing retinal image slip with an internal reference signal predicting the sensory consequences of the eye movement. This prediction can be manipulated experimentally, allowing one to vary the amount of self-induced image motion for which the reference signal compensates and, accordingly, the resulting percept of motion. Here we were able to map regions in CRUS I within the lateral cerebellar hemispheres that exhibited a significant correlation between functional magnetic resonance imaging signal amplitudes and the amount of motion predicted by the reference signal. The fact that these cerebellar regions were found to be functionally coupled with the left parieto-insular cortex and the supplementary eye fields points to these cortical areas as the sites of interaction between predicted and experienced sensory events, ultimately giving rise to the perception of a stable world despite self-induced retinal motion.
Affiliation(s)
- Axel Lindner
- Hertie Institute for Clinical Brain Research, Department of Cognitive Neurology, Tübingen, Germany.

38
Synofzik M, Thier P, Lindner A. Internalizing agency of self-action: perception of one's own hand movements depends on an adaptable prediction about the sensory action outcome. J Neurophysiol 2006; 96:1592-601. [PMID: 16738220 DOI: 10.1152/jn.00104.2006] [Citation(s) in RCA: 85] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Extensive work on learning in reaching and pointing tasks has demonstrated high degrees of plasticity in our ability to optimize goal-directed motor behavior. However, studies focusing on the perceptual awareness of our own actions during motor adaptation are still rare. Here we present the first simultaneous investigation of sensorimotor adaptation on both levels, i.e., action and action perception. We hypothesized that self-action perception relies on internal predictions about the sensory action outcome that are updated in a way similar to that of motor control. Twenty human subjects performed out-and-back pointing movements that were fed back visually. Feedback was initially presented in spatiotemporal correspondence with respect to the actual finger position, but later rotated by a constant angle. When distorted feedback was applied repetitively, subjects' perceived pointing direction shifted in the direction of the trajectory rotation. A comparable perceptual reinterpretation was observed in control trials without visual feedback, indicating that subjects learned to predict the new visual outcome of their actions based on nonvisual, internal information. The perception of the world, however, remained unchanged. The changes in perception of one's own movements were accompanied by adaptive changes in motor performance of the same amount, i.e., a secondary motor compensation opposite to the direction of the imposed visual rotation. Our results show that the perception of one's own actions depends on adaptable internal predictions about the sensory action outcome, allowing us to attribute new sensory consequences of our actions to our own agency. Furthermore, they indicate that the updated sensory prediction can be used to optimize motor control.
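The co-shift of perception and motor output can be mimicked with a simple delta-rule update of the predicted visual outcome (a toy sketch with invented values, not the authors' model): the single updated prediction both rotates the perceived pointing direction and supplies a motor correction of the same size in the opposite direction.

    def adapt_prediction(imposed_rotation_deg, learning_rate=0.1, n_trials=80):
        """Trial-by-trial update of the predicted visual shift of the finger."""
        predicted_visual_shift = 0.0
        for _ in range(n_trials):
            error = imposed_rotation_deg - predicted_visual_shift
            predicted_visual_shift += learning_rate * error
        return predicted_visual_shift

    shift = adapt_prediction(15.0)
    print("perceptual shift (toward rotation) :", round(shift, 1), "deg")
    print("motor compensation (opposite)      :", round(-shift, 1), "deg")
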
Affiliation(s)
- Matthis Synofzik
- Department of Cognitive Neurology, Hertie Institute of Clinical Brain Research, University of Tübingen, Tübingen, Germany.

39
Freeman TCA, Sumnall JH. Extra-retinal adaptation of cortical motion-processing areas during pursuit eye movements. Proc Biol Sci 2006; 272:2127-32. [PMID: 16191625 PMCID: PMC1559950 DOI: 10.1098/rspb.2005.3198] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Repetitive eye movement produces a compelling motion aftereffect (MAE). One mechanism thought to contribute to the illusory movement is an extra-retinal motion signal generated after adaptation. However, extra-retinal signals are also generated during pursuit. They modulate activity within cortical motion-processing area MST, helping transform retinal motion into motion in the world during an eye movement. Given the evidence that MST plays a key role in generating MAE, it may also become indirectly adapted by prolonged pursuit. To differentiate between these two extra-retinal mechanisms we examined storage of the MAE across a period of darkness. In one condition observers were told to stare at a moving pattern, an instruction that induces a more reflexive type of eye movement. In another they were told to deliberately pursue it. We found equally long MAEs when testing immediately after adaptation but not when the test was delayed by 40 s. In the case of the reflexive eye movement the delay almost completely extinguished the MAE, whereas the illusory motion following pursuit remained intact. This suggests pursuit adapts cortical motion-processing areas whereas unintentional eye movement does not. A second experiment showed that cortical mechanisms cannot be the sole determinant of pursuit-induced MAE. Following oblique pursuit, we found MAE direction changes from oblique to vertical. Perceived MAE direction appears to be influenced by a subcortical mechanism as well, one based on the relative recovery rate of horizontal and vertical eye-movement processes recruited during oblique pursuit.
Affiliation(s)
- Tom C A Freeman
- School of Psychology, Cardiff University, Tower Building, Park Place CF10 3AT, UK.

40
Thier P, Ilg UJ. The neural basis of smooth-pursuit eye movements. Curr Opin Neurobiol 2005; 15:645-52. [PMID: 16271460 DOI: 10.1016/j.conb.2005.10.013] [Citation(s) in RCA: 115] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2005] [Accepted: 10/21/2005] [Indexed: 11/26/2022]
Abstract
Smooth-pursuit eye movements are used to stabilize the image of a moving object of interest on the fovea, thus guaranteeing its high-acuity scrutiny. Such movements are based on a phylogenetically recent cerebro-ponto-cerebellar pathway that has evolved in parallel with foveal vision. Recent work has shown that a network of several cerebrocortical areas directs attention to objects of interest moving in three dimensions and reconstructs the trajectory of the target in extrapersonal space, thereby integrating various sources of multimodal sensory and efference copy information, as well as cognitive influences such as prediction. This cortical network is the starting point of a set of parallel cerebrofugal projections that use different parts of the dorsal pontine nuclei and the neighboring rostral nucleus reticularis tegmenti pontis as intermediate stations to feed two areas of the cerebellum, the flocculus-paraflocculus and the posterior vermis, which make mainly complementary contributions to the control of smooth pursuit.
Affiliation(s)
- Peter Thier
- Department of Cognitive Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Hoppe-Seyler Strasse 3, 72076 Tuebingen, Germany.

41
Lindner A, Thier P, Kircher TTJ, Haarmeier T, Leube DT. Disorders of agency in schizophrenia correlate with an inability to compensate for the sensory consequences of actions. Curr Biol 2005; 15:1119-24. [PMID: 15964277 DOI: 10.1016/j.cub.2005.05.049] [Citation(s) in RCA: 147] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2005] [Revised: 05/03/2005] [Accepted: 05/04/2005] [Indexed: 11/26/2022]
Abstract
Psychopathological symptoms in schizophrenia patients suggest that the concept of self might be disturbed in these individuals [1]. Delusions of influence make them feel that someone else is guiding their actions, and certain kinds of their hallucinations seem to be misinterpretations of their own inner voice as an external voice, the common denominator being that self-produced information is perceived as if coming from outside. If this interpretation were correct, we might expect that schizophrenia patients might also attribute the sensory consequences of their own eye movements to the environment rather than to themselves, challenging the percept of a stable world. Indeed, this seems to be the case because we found a clear correlation between the strength of delusions of influence and the ability of schizophrenia patients to cancel out such self-induced retinal information in motion perception. This correlation reflects direct experimental evidence supporting the view that delusions of influence in schizophrenia might be due to a specific deficit in the perceptual compensation of the sensory consequences of one's own actions [1, 2, 3, 4, 5 and 6].
Affiliation(s)
- Axel Lindner
- Department of Cognitive Neurology, Hertie Institute for Clinical Brain Research, Hoppe-Seyler-Str. 3, D72076 Tübingen, Germany.

42
Lencer R, Nagel M, Sprenger A, Zapf S, Erdmann C, Heide W, Binkofski F. Cortical mechanisms of smooth pursuit eye movements with target blanking. An fMRI study. Eur J Neurosci 2004; 19:1430-6. [PMID: 15016102 DOI: 10.1111/j.1460-9568.2004.03229.x] [Citation(s) in RCA: 74] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Smooth pursuit eye movements are evoked by retinal image motion of visible moving objects and can also be driven by the internal representation of a target due to extraretinal mechanisms (e.g. efference copy). To delineate the corresponding neuronal correlates, functional magnetic resonance imaging at 1.5 T was applied in 16 right-handed healthy males during smooth pursuit at 10 degrees/s, both with continuous target presentation and with target blanking for 1 s. Eye movements were assessed during scanning sessions by infra-red reflection oculography. Smooth pursuit performance was optimal when the target was visible but decreased to a residual velocity of about 30% of the velocity observed during continuous target presentation. Random effects analysis of the imaging data yielded an activation pattern for smooth pursuit in the absence of a visual target (in contrast to continuous target presentation) which included a number of cortical areas in which extraretinal information is available, such as the frontal eye field, the superior parietal lobe, the anterior and the posterior intraparietal sulcus and the premotor cortex, as well as the supplementary and the presupplementary eye field, the supramarginal gyrus, the dorsolateral prefrontal cortex, cerebellar areas and the basal ganglia. We suggest that cortical mechanisms such as prediction, visuo-spatial attention and transformation, multimodal visuomotor control and working memory are of special importance for maintaining smooth pursuit eye movements in the absence of a visible target.
Affiliation(s)
- Rebekka Lencer
- Department of Psychiatry and Psychotherapy, University of Luebeck, Ratzeburger Allee 160, 23538 Luebeck, Germany.

43
Tikhonov A, Haarmeier T, Thier P, Braun C, Lutzenberger W. Neuromagnetic activity in medial parietooccipital cortex reflects the perception of visual motion during eye movements. Neuroimage 2004; 21:593-600. [PMID: 14980561 DOI: 10.1016/j.neuroimage.2003.09.045] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2003] [Revised: 09/17/2003] [Accepted: 09/17/2003] [Indexed: 11/22/2022] Open
Abstract
We usually perceive a stationary, stable world despite coherent visual motion induced by eye movements. This astonishing example of perceptual invariance results from a comparison of visual information with internal reference signals (nonretinal signals) predicting the visual consequences of an eye movement. The important consequence of this concept is that our subjective percept of visual motion reflects the outcome of this comparison rather than retinal image slip. To localize the cortical networks underlying this comparison, we compared magnetoencephalography (MEG) responses under two conditions of pursuit-induced retinal image motion that were physically identical but, owing to the different calibration states of the nonretinal signal induced by our experimental conditions, gave rise to different percepts of visual motion. This approach allowed us to demonstrate that our perception of self-induced visual motion resides in comparatively "late" parts of the cortical hierarchy of motion processing, sparing the early stages up to cortical area MT/V5 but including cortex in and around the medial aspect of the parietooccipital cortex as one of its core elements.
Affiliation(s)
- Alexander Tikhonov
- Department of Cognitive Neurology, University of Tübingen, D-72076 Tuebingen, Germany

44
Goltz HC, DeSouza JFX, Menon RS, Tweed DB, Vilis T. Interaction of retinal image and eye velocity in motion perception. Neuron 2003; 39:569-76. [PMID: 12895428 DOI: 10.1016/s0896-6273(03)00460-4] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
When we move our eyes, why does the world look stable even as its image flows across our retinas, and why do afterimages, which are stationary on the retinas, appear to move? Current theories say this is because we perceive motion by summation: if an object slips across the retina at r degrees/s while the eye turns at e degrees/s, the object's perceived velocity in space should be r + e. We show that activity in MT+, the visual-motion complex in human cortex, does reflect a mix of r and e rather than r alone. But we show also that, for optimal perception, r and e should not summate; rather, the signals coding e interact multiplicatively with the spatial gradient of illumination.
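The two accounts contrasted here can be written in a couple of lines (an illustrative sketch with made-up numbers, and my own parameterisation of the interaction): the additive account predicts a perceived velocity of r + e regardless of the stimulus, whereas letting the eye-velocity signal act multiplicatively with the spatial luminance gradient makes the same r and e yield different percepts for sharp versus shallow gradients.

    def perceived_velocity_additive(r, e):
        """Classical summation account: retinal slip plus eye velocity."""
        return r + e

    def perceived_velocity_gradient(r, e, spatial_gradient, k=1.0):
        """One way to write a multiplicative interaction (illustrative only):
        the weight of the extra-retinal signal scales with the local
        luminance gradient, so uniform fields contribute little."""
        weight = k * spatial_gradient / (1.0 + k * spatial_gradient)
        return r + weight * e

    r, e = 0.0, 10.0            # afterimage: no retinal slip, eye moving at 10 deg/s
    print("additive account       :", perceived_velocity_additive(r, e))
    print("sharp gradient (1.0)   :", round(perceived_velocity_gradient(r, e, 1.0), 2))
    print("shallow gradient (0.1) :", round(perceived_velocity_gradient(r, e, 0.1), 2))
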
Affiliation(s)
- Herbert C Goltz
- CIHR Group on Action and Perception, University of Western Ontario, London N6A 5C1, Canada

45
Abstract
The eyes are always moving, even during fixation, making the retinal image move concomitantly. While these motions activate early visual stages, they are excluded from one's perception. A striking illusion reported here renders them visible: a static pattern surrounded by a synchronously flickering pattern appears to move coherently in random directions. There was a positive correlation between the illusion and fixational eye movements. A simulation revealed that motion computation artificially creates a motion difference between center and surround, which is normally a cue to object motion but here becomes a spurious cue that makes one's own eye movements visible online. This novel illusion therefore indicates that the visual system normally counteracts shaky visual inputs due to small eye movements by using retinal, as opposed to extraretinal, motion signals. As long as the inputs share common image motions across space, they are interpreted as coming from a static outer world viewed through moving eyes. Such visual stability fails under artificial flicker, because common image motions due to eye movements are registered differently in flickering and non-flickering regions.
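A toy version of the simulation logic described in the abstract (my sketch, with invented numbers): small fixational eye movements add the same jitter to every region of the image, so the center-surround velocity difference is near zero and the jitter is discounted; if the surround's motion signal is disrupted, as with the flickering surround, the common-motion check fails and the center's self-induced jitter shows up as illusory motion.

    import numpy as np

    rng = np.random.default_rng(3)
    eye_jitter = rng.normal(0.0, 0.5, 200)      # common image motion from fixational eye movements

    def perceived_center_motion(surround_registers_motion):
        center = eye_jitter                      # motion signal measured in the center patch
        surround = eye_jitter if surround_registers_motion else np.zeros_like(eye_jitter)
        return center - surround                 # differential motion attributed to the object

    print("static surround     -> mean |illusory motion| =",
          round(float(np.mean(np.abs(perceived_center_motion(True)))), 3))
    print("flickering surround -> mean |illusory motion| =",
          round(float(np.mean(np.abs(perceived_center_motion(False)))), 3))
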
Affiliation(s)
- Ikuya Murakami
- Human and Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, 3-1 Morinosato Wakamiya, Atsugi, 243-0198, Kanagawa, Japan.

46
Tadin D, Lappin JS, Blake R, Grossman ED. What constitutes an efficient reference frame for vision? Nat Neurosci 2002; 5:1010-5. [PMID: 12219092 PMCID: PMC4613799 DOI: 10.1038/nn914] [Citation(s) in RCA: 43] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2002] [Accepted: 08/06/2002] [Indexed: 11/09/2022]
Abstract
Vision requires a reference frame. To what extent does this reference frame depend on the structure of the visual input, rather than just on retinal landmarks? This question is particularly relevant to the perception of dynamic scenes, when keeping track of external motion relative to the retina is difficult. We tested human subjects' ability to discriminate the motion and temporal coherence of changing elements that were embedded in global patterns and whose perceptual organization was manipulated in a way that caused only minor changes to the retinal image. Coherence discriminations were always better when local elements were perceived to be organized as a global moving form than when they were perceived to be unorganized, individually moving entities. Our results indicate that perceived form influences the neural representation of its component features, and from this, we propose a new method for studying perceptual organization.
Affiliation(s)
- Duje Tadin
- Vanderbilt Vision Research Center, 301 Wilson Hall, Vanderbilt University, 111 21st Avenue South, Nashville, Tennessee 37203, USA.

47
Abstract
Stimulus motion is a prominent feature that is used by the visual system to segment figure from ground and perceptually bind widely separated objects. Pursuit eye movements can be influenced by such perceptual grouping processes. We have examined the subjects' ability to detect small amounts of coherent motion in random dot kinematograms during pursuit. We compared performance on tests of coherent motion perception while subjects fixated a stationary spot or while they tracked a moving target. The results indicate that smooth pursuit can improve subjects' ability to detect the presence of coherent motion. We tentatively propose that an efference copy of the eye movement signal can enhance the ability of the visual system to detect correlations between sparsely placed targets among noisy distractors.
Affiliation(s)
- Mark W Greenlee
- Institute of Cognitive Science, University of Oldenburg, Ammerländer Heerstrasse 114, 26111 Oldenburg, Germany