1
Tsunada J, Eliades SJ. Frontal-auditory cortical interactions and sensory prediction during vocal production in marmoset monkeys. Curr Biol 2025:S0960-9822(25)00393-8. PMID: 40250436. DOI: 10.1016/j.cub.2025.03.077.
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. This model has been supported by evidence that the auditory cortex (AC) is suppressed immediately before and during vocal production yet remains sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to result from top-down signals about the intended vocal output, potentially originating from frontal cortical (FC) areas. However, whether FC is the source of suppressive and predictive signaling to AC during vocalization remains unknown. Here, we simultaneously recorded neural activity from both AC and FC of marmoset monkeys during self-initiated vocalizations. We found increases in neural activity in both brain areas from 1 to 0.5 s before vocal production (the early pre-vocal period), specifically changes in both multi-unit activity and theta-band power. Connectivity analysis using Granger causality demonstrated that FC sends directed signaling to AC during this early pre-vocal period. Importantly, early pre-vocal activity correlated both with vocalization-induced suppression in AC and with the structure and acoustics of subsequent calls, such as fundamental frequency. Furthermore, bidirectional auditory-frontal interactions emerged during experimentally altered vocal feedback and predicted subsequent compensatory vocal behavior. These results suggest that FC communicates with AC during vocal production: frontal-to-auditory signals before vocalization may reflect the transmission of sensory prediction information, while bidirectional signaling during vocalization is suggestive of error detection that could drive feedback-dependent vocal control.
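The directed-connectivity measure named in this abstract, Granger causality, asks whether the past of one signal improves prediction of another beyond that signal's own past. A minimal sketch on synthetic signals (the lag order, coupling strength, and the "frontal"/"auditory" labels below are illustrative assumptions, not the study's data or code):

```python
import numpy as np

def granger_stat(x, y, lag=2):
    """F-like statistic for 'the past of y improves prediction of x'
    beyond x's own past (bivariate Granger causality via OLS)."""
    T = len(x)
    target = x[lag:]
    own = np.column_stack([x[lag - i:T - i] for i in range(1, lag + 1)])
    other = np.column_stack([y[lag - i:T - i] for i in range(1, lag + 1)])
    ones = np.ones((T - lag, 1))

    def rss(A):
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        r = target - A @ beta
        return float(r @ r)

    rss_r = rss(np.hstack([ones, own]))          # restricted: x's own past only
    rss_f = rss(np.hstack([ones, own, other]))   # full: plus y's past
    dof = T - lag - 2 * lag - 1
    return ((rss_r - rss_f) / lag) / (rss_f / dof)

# Hypothetical 'frontal' signal y driving an 'auditory' signal x at lag 1.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
x[1:] = 0.8 * y[:-1] + 0.1 * rng.standard_normal(499)

print(granger_stat(x, y) > granger_stat(y, x))  # y -> x influence dominates
```

With the coupling built in one direction only, the statistic is large for y predicting x and near chance for the reverse, which is the asymmetry a directed-connectivity analysis looks for.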
Affiliation(s)
- Joji Tsunada
- Beijing Institute for Brain Research, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 102206, China; Chinese Institute for Brain Research, Beijing 102206, China; Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka 0208550, Iwate, Japan.
- Steven J Eliades
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
2
Zhang Y, Wu X, Zheng C, Zhao Y, Gao J, Deng Z, Zhang X, Chen J. Effects of vergence eye movement planning on size perception and early visual processing. J Cogn Neurosci 2024; 36:2793-2806. PMID: 38940732. DOI: 10.1162/jocn_a_02207.
Abstract
Our perception of objects depends on non-oculomotor depth cues, such as pictorial distance cues and binocular disparity, and oculomotor depth cues, such as vergence and accommodation. Although vergence eye movements are always involved in perceiving real distance, previous studies have mainly focused on the effect of oculomotor state via "proprioception" on distance and size perception. It remains unclear whether the oculomotor command of vergence eye movement would also influence visual processing. To address this question, we placed a light at 28.5 cm and a screen for stimulus presentation at 57 cm from the participants. In the NoDivergence condition, participants were asked to maintain fixation on the light regardless of stimulus presentation throughout the trial. In the WithDivergence condition, participants were instructed to initially maintain fixation on the near light and then turn their two eyes outward to look at the stimulus on the far screen. The stimulus was presented for 100 msec, entirely within the preparation stage of the divergence eye movement. We found that participants perceived the stimulus as larger but were less sensitive to stimulus sizes in the WithDivergence condition than in the NoDivergence condition. The earliest visual evoked component C1 (peak latency 80 msec), which varied with stimulus size in the NoDivergence condition, showed similar amplitudes for larger and smaller stimuli in the WithDivergence condition. These results show that vergence eye movement planning affects the earliest visual processing and size perception, and demonstrate an example of the effect of motor command on sensory processing.
Affiliation(s)
- Jie Gao
- South China Normal University
3
Kim J, Yoshida T. Sense of agency at a temporally-delayed gaze-contingent display. PLoS One 2024; 19:e0309998. PMID: 39241025. DOI: 10.1371/journal.pone.0309998.
Abstract
The subjective feeling of being the author of one's actions and the subsequent consequences is referred to as a sense of agency. Such a feeling is crucial for usability in human-computer interactions, where eye movement has been adopted, yet this area has been scarcely investigated. We examined how the temporal action-feedback discrepancy affects the sense of agency concerning eye movement. Participants conducted a visual search for an array of nine Chinese characters within a temporally-delayed gaze-contingent display, blurring the peripheral view. The relative delay between each eye movement and the subsequent window movement varied from 0 to 4,000 ms. In the control condition, the window played a recorded gaze behavior. The mean authorship rating and the proportion of "self" responses in the categorical authorship report ("self," "delayed self," and "other") gradually decreased as the temporal discrepancy increased, with "other" being rarely reported, except in the control condition. These results generally mirror those of prior studies on hand actions, suggesting that sense of agency extends beyond the effector body parts to other modalities, and two different types of sense of agency that have different temporal characteristics are simultaneously operating. The mode of fixation duration shifted as the delay increased under 200-ms delays and was divided into two modes at 200-500 ms delays. The frequency of 0-1.5° saccades exhibited an increasing trend as the delay increased. These results demonstrate the influence of perceived action-effect discrepancy on action refinement and task strategy.
Affiliation(s)
- Junhui Kim
- School of Engineering, Tokyo Institute of Technology, Meguro City, Tokyo, Japan
- Takako Yoshida
- School of Engineering, Tokyo Institute of Technology, Meguro City, Tokyo, Japan
4
Aizenman AM, Gegenfurtner KR, Goettker A. Oculomotor routines for perceptual judgments. J Vis 2024; 24:3. PMID: 38709511. PMCID: PMC11078167. DOI: 10.1167/jov.24.5.3.
Abstract
In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide is a certain object? Our goal was to test whether there are also task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of the two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we could compare the spatial characteristics of exploratory gaze behavior and quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with larger vertical spread. These results suggest specific strategies in gaze behavior that are presumably used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study observers either gazed freely at the object or viewed it in a gaze-contingent setup that forced them to fixate specific positions on the object. Discrimination performance was similar between the free-gaze and gaze-contingent conditions for both width and height judgments. These results suggest that although gaze is adapted to different tasks, performance seems to be based on a perceptual strategy, independent of potential cues that could be provided by the oculomotor system.
Affiliation(s)
- Avi M Aizenman
- Psychology Department, Giessen University, Giessen, Germany
- http://aviaizenman.com/
- Karl R Gegenfurtner
- Psychology Department, Giessen University, Giessen, Germany
- https://www.allpsych.uni-giessen.de/karl/
- Alexander Goettker
- Psychology Department, Giessen University, Giessen, Germany
- https://alexgoettker.com/
5
Tsunada J, Eliades SJ. Frontal-auditory cortical interactions and sensory prediction during vocal production in marmoset monkeys. bioRxiv [Preprint] 2024:2024.01.28.577656. PMID: 38352422. PMCID: PMC10862695. DOI: 10.1101/2024.01.28.577656.
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. Consistent with this model, recent evidence has demonstrated that the auditory cortex is suppressed immediately before and during vocal production, yet is still sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to be the result of top-down signals containing information about the intended vocal output, potentially originating from motor or other frontal cortical areas. However, whether such frontal areas are the source of suppressive and predictive signaling to the auditory cortex during vocalization is unknown. Here, we simultaneously recorded neural activity from both the auditory and frontal cortices of marmoset monkeys while they produced self-initiated vocalizations. We found increases in neural activity in both brain areas preceding the onset of vocal production, notably changes in both multi-unit activity and local field potential theta-band power. Connectivity analysis using Granger causality demonstrated that frontal cortex sends directed signaling to the auditory cortex during this pre-vocal period. Importantly, this pre-vocal activity predicted both vocalization-induced suppression of the auditory cortex as well as the acoustics of subsequent vocalizations. These results suggest that frontal cortical areas communicate with the auditory cortex preceding vocal production, with frontal-auditory signals that may reflect the transmission of sensory prediction information. This interaction between frontal and auditory cortices may contribute to mechanisms that calculate errors between intended and actual vocal outputs during vocal communication.
Affiliation(s)
- Joji Tsunada
- Chinese Institute for Brain Research, Beijing, China
- Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Iwate, Japan
- Steven J. Eliades
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
6
de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970. PMCID: PMC10444871. DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future, and to accurately move towards them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired by the use of second-order motion stimuli, saccades directed towards moving targets land at positions where the targets were ~100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the moving target is visible for longer before the saccade is made, saccades become accurate. In line with this, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
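The position-versus-velocity account in this abstract can be made concrete with a toy computation: aiming at a target's last seen position leaves the movement lagging by the sensorimotor delay, while adding a velocity-based extrapolation cancels that lag. The speed and latency values are hypothetical, chosen only to echo the ~100 ms figure above.

```python
# Toy illustration: position-only aiming lags a moving target by
# speed * latency; a velocity-based extrapolation compensates.
target_speed = 10.0   # deg/s, hypothetical target velocity
latency = 0.1         # s of sensorimotor delay (~100 ms, as in the abstract)

seen_position = 5.0   # deg, target position when the visual snapshot was taken
true_position = seen_position + target_speed * latency  # position at landing

position_only_aim = seen_position
extrapolated_aim = seen_position + target_speed * latency

print(true_position - position_only_aim)  # 1.0 deg behind the target
print(true_position - extrapolated_aim)   # 0.0 deg: velocity signal corrects
```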
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain.
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany.
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany.
7
Le Naour T, Papinutto M, Lobier M, Bresciani JP. Controlling the trajectory of a moving object substantially shortens the latency of motor responses to visual stimuli. iScience 2023; 26:106838. PMID: 37250785. PMCID: PMC10212987. DOI: 10.1016/j.isci.2023.106838.
Abstract
Motor responses to visual stimuli have shorter latencies for controlling than for initiating movement. The shorter latencies observed for movement control are believed to reflect the involvement of forward models when controlling moving limbs. We assessed whether controlling a moving limb is a "requisite" for observing shortened response latencies. The latency of button-press responses to a visual stimulus was compared between conditions that did or did not involve the control of a moving object, but never involved any actual control of a body segment. When the motor response controlled a moving object, response latencies were significantly shorter and less variable, probably reflecting faster sensorimotor processing (as assessed by fitting a LATER model to our data). These results suggest that when the task at hand entails a control component, the sensorimotor processing of visual information is hastened, even if the task does not require actually controlling a moving limb.
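The LATER model mentioned here assumes a decision signal that rises linearly to a threshold at a rate drawn fresh from a normal distribution on each trial, so reciprocal latency is itself (approximately) normally distributed. A minimal simulation of that assumption, with entirely hypothetical threshold and rate parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0            # decision threshold (arbitrary units), hypothetical
mu, sigma = 6.0, 1.0   # mean and spread of the rise rate (1/s), hypothetical

rates = rng.normal(mu, sigma, 10_000)
rates = rates[rates > 0]        # a non-positive rate never reaches threshold
latencies = theta / rates       # LATER: latency = threshold / rise rate

# Taking reciprocals of the latencies recovers the normal rate distribution,
# which is why LATER analyses plot 1/latency on a probit scale.
recip = 1.0 / latencies
print(recip.mean(), recip.std())
```

A faster mean rate of rise (larger `mu`) shifts the whole latency distribution earlier, which is the kind of change the abstract's control conditions are interpreted as producing.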
Affiliation(s)
- Thibaut Le Naour
- Department of Neuroscience, University of Fribourg, Fribourg, Switzerland
- Motion-up, Vannes, France
- Michael Papinutto
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Jean-Pierre Bresciani
- Department of Neuroscience, University of Fribourg, Fribourg, Switzerland
- Grenoble-Alpes University, Grenoble, France
8
Developmental coordination disorder: state of the art and future directions from a neurophysiological perspective. Children 2022; 9:945. PMID: 35883929. PMCID: PMC9318843. DOI: 10.3390/children9070945.
Abstract
Developmental coordination disorder (DCD) is a common neurodevelopmental condition characterized by disabling motor impairments that are visible from the first years of life. Over recent decades, research in this field has yielded important results, showing alterations in several processes involved in the regulation of motor behavior (e.g., planning and monitoring of actions, motor learning, action imitation). However, these studies have mostly pursued a behavioral approach, leaving relevant questions open concerning the neural correlates of this condition. In this narrative review, we first survey the literature on motor control and sensorimotor impairments in DCD. Then, we illustrate the contributions to the field that may be achieved using transcranial magnetic stimulation (TMS) of the motor cortex. While still rarely employed in DCD research, this approach offers several opportunities, ranging from the clarification of low-level cortical electrophysiology to the assessment of the motor commands transmitted through the corticospinal system. We propose that TMS may help to investigate the neural correlates of the motor impairments reported in behavioral studies, thus guiding DCD research toward a brain-oriented understanding of this condition. This effort would also help translational research to provide novel diagnostic and therapeutic tools.
9
Doss MK, Madden MB, Gaddis A, Nebel MB, Griffiths RR, Mathur BN, Barrett FS. Models of psychedelic drug action: modulation of cortical-subcortical circuits. Brain 2022; 145:441-456. PMID: 34897383. PMCID: PMC9014750. DOI: 10.1093/brain/awab406.
Abstract
Classic psychedelic drugs such as psilocybin and lysergic acid diethylamide (LSD) have recaptured the imagination of both science and popular culture, and may have efficacy in treating a wide range of psychiatric disorders. Human and animal studies of psychedelic drug action in the brain have demonstrated the involvement of the serotonin 2A (5-HT2A) receptor and the cerebral cortex in acute psychedelic drug action, but different models have evolved to try to explain the impact of 5-HT2A activation on neural systems. Two prominent models of psychedelic drug action (the cortico-striatal thalamo-cortical, or CSTC, model and relaxed beliefs under psychedelics, or REBUS, model) have emphasized the role of different subcortical structures as crucial in mediating psychedelic drug effects. We describe these models and discuss gaps in knowledge, inconsistencies in the literature and extensions of both models. We then introduce a third circuit-level model involving the claustrum, a thin strip of grey matter between the insula and the external capsule that densely expresses 5-HT2A receptors (the cortico-claustro-cortical, or CCC, model). In this model, we propose that the claustrum entrains canonical cortical network states, and that psychedelic drugs disrupt 5-HT2A-mediated network coupling between the claustrum and the cortex, leading to attenuation of canonical cortical networks during psychedelic drug effects. Together, these three models may explain many phenomena of the psychedelic experience, and using this framework, future research may help to delineate the functional specificity of each circuit to the action of both serotonergic and non-serotonergic hallucinogens.
Affiliation(s)
- Manoj K Doss
- Center for Psychedelic and Consciousness Research, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Maxwell B Madden
- Department of Pharmacology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Andrew Gaddis
- Center for Psychedelic and Consciousness Research, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Mary Beth Nebel
- Center for Neurodevelopmental and Imaging Research, Kennedy Krieger Institute, Baltimore, MD 21205, USA
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Roland R Griffiths
- Center for Psychedelic and Consciousness Research, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Brian N Mathur
- Department of Pharmacology, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Frederick S Barrett
- Center for Psychedelic and Consciousness Research, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21224, USA
- Department of Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
10
Nicholas S, Nordström K. Efference copies: context matters when ignoring self-induced motion. Curr Biol 2021; 31:R1388-R1390. PMID: 34699803. DOI: 10.1016/j.cub.2021.09.016.
Abstract
Across the animal kingdom, efference copies of neuronal motor commands are used to ensure our senses ignore stimuli generated by our own actions. New work shows that the underlying motivation for an action affects whether visual neurons are responsive to self-generated stimuli.
Affiliation(s)
- Sarah Nicholas
- Neuroscience, Flinders Health and Medical Research Institute, Flinders University, Adelaide, Australia.
- Karin Nordström
- Neuroscience, Flinders Health and Medical Research Institute, Flinders University, Adelaide, Australia; Department of Neuroscience, Uppsala University, Uppsala, Sweden.
11
Fenk LM, Kim AJ, Maimon G. Suppression of motion vision during course-changing, but not course-stabilizing, navigational turns. Curr Biol 2021; 31:4608-4619.e3. PMID: 34644548. DOI: 10.1016/j.cub.2021.09.068.
Abstract
From mammals to insects, locomotion has been shown to strongly modulate visual-system physiology. Does the manner in which a locomotor act is initiated change the modulation observed? We performed patch-clamp recordings from motion-sensitive visual neurons in tethered, flying Drosophila. We observed motor-related signals in flies performing flight turns in rapid response to looming discs and also during spontaneous turns, but motor-related signals were weak or non-existent in the context of turns made in response to brief pulses of unidirectional visual motion (i.e., optomotor responses). Thus, the act of a locomotor turn is variably associated with modulation of visual processing. These results can be understood via the following principle: suppress visual responses during course-changing, but not course-stabilizing, navigational turns. This principle is likely to apply broadly, even to mammals, whenever visual cells whose activity helps to stabilize a locomotor trajectory or the visual gaze angle are targeted for motor modulation.
Affiliation(s)
- Lisa M Fenk
- Laboratory of Integrative Brain Function and Howard Hughes Medical Institute, The Rockefeller University, New York, NY, USA; Active Sensing, Max Planck Institute of Neurobiology, Martinsried, Germany.
- Anmo J Kim
- Laboratory of Integrative Brain Function and Howard Hughes Medical Institute, The Rockefeller University, New York, NY, USA; Department of Biomedical Engineering, Hanyang University, Seoul, South Korea; Department of Electronic Engineering, Hanyang University, Seoul, South Korea.
- Gaby Maimon
- Laboratory of Integrative Brain Function and Howard Hughes Medical Institute, The Rockefeller University, New York, NY, USA.
12
Numasawa K, Miyamoto T, Kizuka T, Ono S. The relationship between the implicit visuomotor control and the motor planning accuracy. Exp Brain Res 2021; 239:2151-2158. PMID: 33977362. DOI: 10.1007/s00221-021-06120-w.
Abstract
It has been well established that an implicit motor response can be elicited by a target perturbation or a visual background motion during a reaching movement. Computational studies have suggested that the mechanism of this response is based on the error signal between the efference copy and the actual sensory feedback. If the implicit motor response is based on the efference copy, the accuracy of the motor command would affect the amount of modulation of the motor response. Therefore, the purpose of the current study was to investigate the relationship between the implicit motor response and motor planning accuracy. We used a memory-guided reaching task and a manual following response (MFR), which is induced by visual grating motion. Participants performed reaching movements toward a memorized target location, with a beep cue presented 0 or 3 s after the target disappeared (0-s delay and 3-s delay conditions). Leftward or rightward visual grating motion was applied 400 ms after the cue. In addition, an event-related potential (ERP), which reflects motor command accuracy, was recorded during the reaching task. Our results showed that the N170 ERP amplitude in the parietal electrodes and the MFR amplitude were significantly larger for the 3-s delay condition than for the 0-s delay condition. These results suggest that motor planning accuracy affects the amount of the implicit visuomotor response. Furthermore, there was a significant within-subjects correlation between the MFR and the N170 amplitude, which corroborates the relationship between the implicit motor response and motor planning accuracy.
Affiliation(s)
- Kosuke Numasawa
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki, 305-8574, Japan
- Takeshi Miyamoto
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki, 305-8574, Japan
- Tomohiro Kizuka
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki, 305-8574, Japan
- Seiji Ono
- Faculty of Health and Sport Sciences, University of Tsukuba, 1-1-1, Tennodai, Tsukuba, Ibaraki, 305-8574, Japan.
13
Sensory attenuation is modulated by the contrasting effects of predictability and control. Neuroimage 2021; 237:118103. PMID: 33957233. DOI: 10.1016/j.neuroimage.2021.118103.
Abstract
Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
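The N1 comparison described here rests on standard ERP methodology: average many time-locked epochs so noise cancels, then read out amplitude in a post-stimulus window. A schematic version on synthetic data; the sampling rate, window, component shape, and attenuation factor are all hypothetical stand-ins, not the study's parameters.

```python
import numpy as np

fs = 1000                          # Hz, hypothetical sampling rate
t = np.arange(-0.1, 0.4, 1 / fs)   # epoch: -100 to 400 ms around the tone

def simulate_epochs(n1_gain, n_trials=200, seed=0):
    """Synthetic epochs: a negative deflection near 100 ms plus trial noise."""
    rng = np.random.default_rng(seed)
    n1 = -n1_gain * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
    return n1 + rng.standard_normal((n_trials, t.size)) * 2.0

def n1_amplitude(epochs):
    """Average epochs into an ERP; take the minimum in an 80-140 ms window."""
    erp = epochs.mean(axis=0)
    window = (t >= 0.08) & (t <= 0.14)
    return erp[window].min()

external = n1_amplitude(simulate_epochs(n1_gain=5.0))  # externally generated
self_gen = n1_amplitude(simulate_epochs(n1_gain=3.0))  # attenuated N1
print(self_gen > external)  # attenuated N1 is less negative
```

Trial averaging is what makes the component visible at all: single epochs here are dominated by noise, but the noise shrinks with the square root of the trial count while the deflection does not.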
14
Fooken J, Kreyenmeier P, Spering M. The role of eye movements in manual interception: a mini-review. Vision Res 2021; 183:81-90. PMID: 33743442. DOI: 10.1016/j.visres.2021.02.007.
Abstract
When we catch a moving object in mid-flight, our eyes and hands are directed toward the object. Yet, the functional role of eye movements in guiding interceptive hand movements is not yet well understood. This review synthesizes emergent views on the importance of eye movements during manual interception with an emphasis on laboratory studies published since 2015. We discuss the role of eye movements in forming visual predictions about a moving object, and for enhancing the accuracy of interceptive hand movements through feedforward (extraretinal) and feedback (retinal) signals. We conclude by proposing a framework that defines the role of human eye movements for manual interception accuracy as a function of visual certainty and object motion predictability.
Affiliation(s)
- Jolande Fooken
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada.
| | - Philipp Kreyenmeier
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada.
- Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada; Graduate Program in Neuroscience, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada; Institute for Computing, Information, and Cognitive Systems, University of British Columbia, Vancouver, Canada
15
Abstract
A number of notions in the fields of motor control and kinesthetic perception have been used without clear definitions. In this review, we consider definitions for efference copy, percept, and sense of effort based on recent studies within the physical approach, which assumes that the neural control of movement is based on principles of parametric control and involves defining time-varying profiles of spatial referent coordinates for the effectors. The apparent redundancy in both motor and perceptual processes is reconsidered based on the principle of abundance. Abundance of efferent and afferent signals is viewed as the means of stabilizing both salient action characteristics and salient percepts formalized as stable manifolds in high-dimensional spaces of relevant elemental variables. This theoretical scheme has led recently to a number of novel predictions and findings. These include, in particular, lower accuracy in perception of variables produced by elements involved in a multielement task compared with the same elements in single-element tasks, dissociation between motor and perceptual effects of muscle coactivation, force illusions induced by muscle vibration, and errors in perception of unintentional drifts in performance. Taken together, these results suggest that participation of efferent signals in perception frequently involves distorted copies of actual neural commands, particularly those to antagonist muscles. Sense of effort is associated with such distorted efferent signals. Distortions in efference copy happen spontaneously and can also be caused by changes in sensory signals, e.g., those produced by muscle vibration.
Affiliation(s)
- Mark L Latash
- Department of Kinesiology, The Pennsylvania State University, University Park, Pennsylvania
16
Godon JM, Argentieri S, Gas B. A Formal Account of Structuring Motor Actions With Sensory Prediction for a Naive Agent. Front Robot AI 2020;7:561660. [PMID: 33501325] [PMCID: PMC7805968] [DOI: 10.3389/frobt.2020.561660]
Abstract
For naive robots to become truly autonomous, they need a means of developing their perceptive capabilities instead of relying on hand-crafted models. The sensorimotor contingency theory asserts that such a means resides in learning invariants of the sensorimotor flow. We propose a formal framework inspired by this theory for describing the sensorimotor experiences of a naive agent, extending previous related work. We then use this formalism to conduct a theoretical study in which we isolate sufficient conditions for the determination of a sensory prediction function. Furthermore, we show that algebraic structure found in this prediction can be taken as a proxy for structure on the motor displacements, allowing the combinatorial structure of those displacements to be discovered. Both claims are illustrated in simulations in which a toy naive agent determines the sensory predictions of its spatial displacements from its uninterpreted sensory flow, and then uses those predictions to infer the combinatorics of the displacements.
Affiliation(s)
- Sylvain Argentieri
- Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique, ISIR, Paris, France
17
Xie M, Niehorster DC, Lappe M, Li L. Roles of visual and non-visual information in the perception of scene-relative object motion during walking. J Vis 2020;20(10):15. [PMID: 33052410] [PMCID: PMC7571284] [DOI: 10.1167/jov.20.10.15]
Abstract
Perceiving object motion during self-movement is an essential ability of humans. Previous studies have reported that the visual system can use both visual information (such as optic flow) and non-visual information (such as vestibular, somatosensory, and proprioceptive information) to identify and globally subtract the retinal motion component due to self-movement to recover scene-relative object motion. In this study, we used a motion-nulling method to directly measure and quantify the contribution of visual and non-visual information to the perception of scene-relative object motion during walking. We found that about 50% of the retinal motion component of the probe due to translational self-movement was removed with non-visual information alone and about 80% with visual information alone. With combined visual and non-visual information, the self-movement component was removed almost completely. Although non-visual information played an important role in the removal of self-movement-induced retinal motion, it was associated with decreased precision of probe motion estimates. We conclude that neither non-visual nor visual information alone is sufficient for the accurate perception of scene-relative object motion during walking, which instead requires the integration of both sources of information.
Affiliation(s)
- Mingyang Xie
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China.
- Markus Lappe
- Institute for Psychology, University of Muenster, Muenster, Germany.
- Li Li
- School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; New York University-East China Normal University Institute of Brain and Cognitive Science at New York University Shanghai, Shanghai, China; Faculty of Arts and Science, New York University Shanghai, Shanghai, China.
18
Comparison of Postural Sway, Plantar Cutaneous Sensation According to Saccadic Eye Movement Frequency in Young Adults. Int J Environ Res Public Health 2020;17(19):7067. [PMID: 32992570] [PMCID: PMC7579430] [DOI: 10.3390/ijerph17197067]
Abstract
This crossover trial aimed to identify the saccadic eye movement (SEM) frequency that improves postural sway (PS) and plantar cutaneous sensation (PUS) in young adults. Seventeen participants performed 0.5-, 2-, and 3-Hz SEM in random order. The SEM frequency was set so that the target appeared once per 2 s (0.5 Hz), twice per second (2 Hz), or three times per second (3 Hz). Each SEM bout lasted 3 min, with a 5-min washout period. PS and PUS were measured at baseline and during the 0.5-, 2-, and 3-Hz SEMs using a Zebris FDM 1.5 force plate. PS was quantified by the sway area, path length, and speed of center of pressure (COP) displacement, and PUS by the plantar surface area (PSA). Among the PS parameters, the COP sway area, left-foot PSA, and right-foot PSA differed significantly across SEM frequencies. Compared with baseline, the COP sway area decreased at 0.5 and 2 Hz, while the left- and right-foot PSA increased at 2 Hz. These results suggest that 2-Hz SEM may improve PS and PSA.
19
Park ASY, Metha AB, Bedggood PA, Anderson AJ. The influence of retinal image motion on the perceptual grouping of temporally asynchronous stimuli. J Vis 2019;19(4):2. [PMID: 30943528] [PMCID: PMC6450642] [DOI: 10.1167/19.4.2]
Abstract
Briefly presented stimuli can reveal the lower limit of retinal-based perceptual stabilization mechanisms. This is demonstrated in perceptual grouping of temporally asynchronous stimuli, in which alternate row or column elements of a regular grid are presented over two successive display frames with an imperceptible temporal offset. The grouping phenomenon results from a subtle shift between alternate grid elements due to incomplete compensation of small, fixational eye movements occurring between the two presentation frames. This suggests that larger retinal shifts should amplify the introduced shifts between alternate grid elements and improve grouping performance. However, large shifts are necessarily absent in small eye movements. Furthermore, shifts follow a random walk, making the relationship between shift magnitude and performance difficult to explore systematically. Here, we established a systematic relationship between retinal image motion and perceptual grouping by presenting alternate grid elements (untracked) during smooth pursuit of known velocities. Our results show grouping performance to improve in direct proportion to pursuit velocity. Any potential compensation by extraretinal signals (e.g., efference copy) does not seem to occur.
Affiliation(s)
- Adela S Y Park
- Department of Optometry & Vision Sciences, The University of Melbourne, Melbourne, Australia
- Andrew B Metha
- Department of Optometry & Vision Sciences, The University of Melbourne, Melbourne, Australia
- Phillip A Bedggood
- Department of Optometry & Vision Sciences, The University of Melbourne, Melbourne, Australia
- Andrew J Anderson
- Department of Optometry & Vision Sciences, The University of Melbourne, Melbourne, Australia
20
Kolev OI. Self-Motion Versus Environmental-Motion Perception Following Rotational Vestibular Stimulation and Factors Modifying Them. Front Neurol 2019;10:162. [PMID: 30873110] [PMCID: PMC6400846] [DOI: 10.3389/fneur.2019.00162]
Abstract
Motion perception following rotational vestibular stimulation is described either as self-motion or as environmental motion. The purpose of the present study was to establish the frequency of occurrence of both sensations in healthy humans, what other sensations they experience, and how the factors of insinuation and visual cues modify them. Twenty-four healthy subjects were rotated at a constant velocity of 80°/s in four combinations of opened and closed eyes during the rotation and after a sudden stop. After the cessation of the rotation, they reported their spontaneous or insinuated illusory motion. During spontaneous perception after sudden cessation of rotation with the subject's eyes open, illusory sensations of self-motion and environmental motion occurred almost equally often. There was no simultaneous illusory perception of self-motion and environmental motion. Insinuation modified the perception of motion, and the presence or absence of visual cues before, and immediately after, the cessation of the rotation changed the motion sensation. There is also a gender effect in motion perception, a finding that may help in further exploring the gender difference in susceptibility to motion sickness.
Affiliation(s)
- Ognyan I Kolev
- University Hospital of Neurology and Psychiatry, Sofia, Bulgaria; Institute of Neurobiology, Bulgarian Academy of Sciences, Sofia, Bulgaria
21
Calub CA, Furtak SC, Brown TH. Revisiting the autoconditioning hypothesis for acquired reactivity to ultrasonic alarm calls. Physiol Behav 2018;194:380-386. [DOI: 10.1016/j.physbeh.2018.06.029]
22
Gaveau V, Priot AE, Pisella L, Havé L, Prablanc C, Rossetti Y. Paradoxical adaptation of successful movements: The crucial role of internal error signals. Conscious Cogn 2018;64:135-145. [DOI: 10.1016/j.concog.2018.06.011]
23
Spivey MJ, Batzloff BJ. Bridgemanian space constancy as a precursor to extended cognition. Conscious Cogn 2018;64:164-175. [PMID: 29709438] [DOI: 10.1016/j.concog.2018.04.003]
Abstract
A few decades ago, cognitive psychologists generally took for granted that the reason we perceive our visual environment as one contiguous stable whole (i.e., space constancy) is because we have an internal mental representation of the visual environment as one contiguous stable whole. They supposed that the non-contiguous visual images that are gathered during the brief fixations that intervene between pairs of saccadic eye movements (a few times every second) are somehow stitched together to construct this contiguous internal mental representation. Determining how exactly the brain does this proved to be a vexing puzzle for vision researchers. Bruce Bridgeman's research career is the story of how meticulous psychophysical experimentation, and a genius theoretical insight, eventually solved this puzzle. The reason that it was so difficult for researchers to figure out how the brain stitches together these visual snapshots into one accurately-rendered mental representation of the visual environment is that it doesn't do that. Bruce discovered that the brain couldn't do that if it tried. The neural information that codes for saccade amplitude and direction is simply too inaccurate to determine exact relative locations of each fixation. Rather than the perception of space constancy being the result of an internal representation, Bruce determined that it is the result of a brain that simply assumes that external space remains constant, and it rarely checks to verify this assumption. In our extension of Bridgeman's formulation, we suggest that objects in the world often serve as their own representations, and cognitive operations can be performed on those objects themselves, rather than on mental representations of them.
Affiliation(s)
- Michael J Spivey
- Cognitive and Information Sciences, University of California, Merced, United States.
- Brandon J Batzloff
- Cognitive and Information Sciences, University of California, Merced, United States
24
Bansal S, Ford JM, Spering M. The function and failure of sensory predictions. Ann N Y Acad Sci 2018;1426:199-220. [PMID: 29683518] [DOI: 10.1111/nyas.13686]
Abstract
Humans and other primates are equipped with neural mechanisms that allow them to automatically make predictions about future events, facilitating processing of expected sensations and actions. Prediction-driven control and monitoring of perceptual and motor acts are vital to normal cognitive functioning. This review provides an overview of corollary discharge mechanisms involved in predictions across sensory modalities and discusses consequences of predictive coding for cognition and behavior. Converging evidence now links impairments in corollary discharge mechanisms to neuropsychiatric symptoms such as hallucinations and delusions. We review studies supporting a prediction-failure hypothesis of perceptual and cognitive disturbances. We also outline neural correlates underlying prediction function and failure, highlighting similarities across the visual, auditory, and somatosensory systems. In linking basic psychophysical and psychophysiological evidence of visual, auditory, and somatosensory prediction failures to neuropsychiatric symptoms, our review furthers our understanding of disease mechanisms.
Affiliation(s)
- Sonia Bansal
- Maryland Psychiatric Research Center, University of Maryland, Catonsville, Maryland
- Judith M Ford
- University of California and Veterans Affairs Medical Center, San Francisco, California
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
25
The reference frame for encoding and retention of motion depends on stimulus set size. Atten Percept Psychophys 2017;79:888-910. [PMID: 28092077] [DOI: 10.3758/s13414-016-1258-5]
Abstract
The goal of this study was to investigate the reference frames used in perceptual encoding and storage of visual motion information. In our experiments, observers viewed multiple moving objects and reported the direction of motion of a randomly selected item. Using a vector-decomposition technique, we computed performance during smooth pursuit with respect to a spatiotopic (nonretinotopic) and to a retinotopic component and compared them with performance during fixation, which served as the baseline. For the stimulus encoding stage, which precedes memory, we found that the reference frame depends on the stimulus set size. For a single moving target, the spatiotopic reference frame had the most significant contribution with some additional contribution from the retinotopic reference frame. When the number of items increased (Set Sizes 3 to 7), the spatiotopic reference frame was able to account for the performance. Finally, when the number of items became larger than 7, the distinction between reference frames vanished. We interpret this finding as a switch to a more abstract nonmetric encoding of motion direction. We found that the retinotopic reference frame was not used in memory. Taken together with other studies, our results suggest that, whereas a retinotopic reference frame may be employed for controlling eye movements, perception and memory use primarily nonretinotopic reference frames. Furthermore, the use of nonretinotopic reference frames appears to be capacity limited. In the case of complex stimuli, the visual system may use perceptual grouping in order to simplify the complexity of stimuli or resort to a nonmetric abstract coding of motion information.
26
Herrmann CJJ, Metzler R, Engbert R. A self-avoiding walk with neural delays as a model of fixational eye movements. Sci Rep 2017;7:12958. [PMID: 29021548] [PMCID: PMC5636902] [DOI: 10.1038/s41598-017-13489-8]
Abstract
Fixational eye movements show scaling behaviour of the positional mean-squared displacement with a characteristic transition from persistence to antipersistence for increasing time-lag. These statistical patterns were found to be mainly shaped by microsaccades (fast, small-amplitude movements). However, our re-analysis of fixational eye-movement data provides evidence that the slow component (physiological drift) of the eyes exhibits scaling behaviour of the mean-squared displacement that varies across human participants. These results suggest that drift is a correlated movement that interacts with microsaccades. Moreover, on the long time scale, the mean-squared displacement of the drift shows oscillations, which are also present in the displacement auto-correlation function. This finding lends support to the presence of time-delayed feedback in the control of drift movements. Based on an earlier non-linear delayed feedback model of fixational eye movements, we propose and discuss different versions of a new model that combines a self-avoiding walk with time delay. As a result, we identify a model that reproduces oscillatory correlation functions, the transition from persistence to antipersistence, and microsaccades.
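As a rough illustration of the class of model this abstract describes, a self-avoiding walk on a lattice with a confining potential and a delayed activation "footprint" can be simulated in a few lines, together with its mean-squared displacement. This is a toy sketch, not the authors' published model; the lattice size, decay rate `eps`, potential strength `lam`, and delay `tau` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 51        # lattice size (odd, so there is a central site)
eps = 1e-3    # decay rate of the activation field
lam = 0.1     # strength of the quadratic confining potential
tau = 10      # neural delay (steps) before a visited site is tagged
steps = 5000

# Quadratic potential keeps the walker near the lattice center.
ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
c = (L - 1) // 2
potential = lam * ((ii - c) ** 2 + (jj - c) ** 2)

h = np.zeros((L, L))                 # self-generated activation ("footprints")
pos = np.array([c, c], dtype=int)
history = [pos.copy()]
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

traj = np.zeros((steps, 2))
for t in range(steps):
    h *= 1.0 - eps                   # activation decays everywhere
    if t >= tau:                     # delayed self-avoidance: tag the
        old = history[t - tau]       # position occupied tau steps ago
        h[old[0], old[1]] += 1.0
    # Move to the neighboring site with the lowest activation + potential.
    cand = np.clip(pos + moves, 0, L - 1)
    scores = h[cand[:, 0], cand[:, 1]] + potential[cand[:, 0], cand[:, 1]]
    best = np.flatnonzero(scores == scores.min())
    pos = cand[rng.choice(best)]     # break ties at random
    history.append(pos.copy())
    traj[t] = pos

def msd(x, lags):
    """Mean-squared displacement as a function of time lag."""
    return np.array([np.mean(np.sum((x[lag:] - x[:-lag]) ** 2, axis=1))
                     for lag in lags])

print(msd(traj, [1, 10, 100, 1000]))
```

Plotting `msd(traj, lags)` against lag on log-log axes is the standard way to inspect the persistence-to-antipersistence transition the abstract refers to.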
Affiliation(s)
- Carl J J Herrmann
- Institute of Physics and Astronomy, University of Potsdam, Potsdam, D-14476, Germany
- Ralf Metzler
- Institute of Physics and Astronomy, University of Potsdam, Potsdam, D-14476, Germany.
- Ralf Engbert
- Department of Psychology, University of Potsdam, Potsdam, D-14476, Germany
27
Schut MJ, Fabius JH, Van der Stoep N, Van der Stigchel S. Object files across eye movements: Previous fixations affect the latencies of corrective saccades. Atten Percept Psychophys 2017;79:138-153. [PMID: 27743259] [PMCID: PMC5179592] [DOI: 10.3758/s13414-016-1220-6]
Abstract
One of the factors contributing to a seamless visual experience is object correspondence-that is, the integration of pre- and postsaccadic visual object information into one representation. Previous research had suggested that before the execution of a saccade, a target object is loaded into visual working memory and subsequently is used to locate the target object after the saccade. Until now, studies on object correspondence have not taken previous fixations into account. In the present study, we investigated the influence of previously fixated information on object correspondence. To this end, we adapted a gaze correction paradigm in which a saccade was executed toward either a previously fixated or a novel target. During the saccade, the stimuli were displaced such that the participant's gaze landed between the target stimulus and a distractor. Participants then executed a corrective saccade to the target. The results indicated that these corrective saccades had lower latencies toward previously fixated than toward nonfixated targets, indicating object-specific facilitation. In two follow-up experiments, we showed that presaccadic spatial and object (surface feature) information can contribute separately to the execution of a corrective saccade, as well as in conjunction. Whereas the execution of a corrective saccade to a previously fixated target object at a previously fixated location is slowed down (i.e., inhibition of return), corrective saccades toward either a previously fixated target object or a previously fixated location are facilitated. We concluded that corrective saccades are executed on the basis of object files rather than of unintegrated feature information.
Affiliation(s)
- Martijn J Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands.
- Jasper H Fabius
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Stefan Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
28
Mruczek REB, Blair CD, Strother L, Caplovitz GP. The Dynamic Ebbinghaus: motion dynamics greatly enhance the classic contextual size illusion. Front Hum Neurosci 2015;9:77. [PMID: 25741271] [PMCID: PMC4332331] [DOI: 10.3389/fnhum.2015.00077]
Abstract
The Ebbinghaus illusion is a classic example of the influence of a contextual surround on the perceived size of an object. Here, we introduce a novel variant of this illusion called the Dynamic Ebbinghaus illusion in which the size and eccentricity of the surrounding inducers modulates dynamically over time. Under these conditions, the size of the central circle is perceived to change in opposition with the size of the inducers. Interestingly, this illusory effect is relatively weak when participants are fixating a stationary central target, less than half the magnitude of the classic static illusion. However, when the entire stimulus translates in space requiring a smooth pursuit eye movement to track the target, the illusory effect is greatly enhanced, almost twice the magnitude of the classic static illusion. A variety of manipulations including target motion, peripheral viewing, and smooth pursuit eye movements all lead to dramatic illusory effects, with the largest effect nearly four times the strength of the classic static illusion. We interpret these results in light of the fact that motion-related manipulations lead to uncertainty in the image size representation of the target, specifically due to added noise at the level of the retinal input. We propose that the neural circuits integrating visual cues for size perception, such as retinal image size, perceived distance, and various contextual factors, weight each cue according to the level of noise or uncertainty in their neural representation. Thus, more weight is given to the influence of contextual information in deriving perceived size in the presence of stimulus and eye motion. Biologically plausible models of size perception should be able to account for the reweighting of different visual cues under varying levels of certainty.
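The cue-reweighting proposal at the end of this abstract is in the spirit of standard reliability-weighted cue combination, in which each cue's weight is proportional to its inverse variance, so noisier cues contribute less. A minimal sketch (the cue values and variances below are hypothetical illustrations, not data from the study):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Reliability-weighted average: each cue's weight is its inverse
    variance (precision), normalized to sum to 1. The combined variance
    is the reciprocal of the summed precisions."""
    estimates = np.asarray(estimates, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    weights = precisions / precisions.sum()
    return np.dot(weights, estimates), weights, 1.0 / precisions.sum()

# Hypothetical size cues (deg of visual angle): retinal image size
# and a contextual (surround-based) estimate.
fix_est, fix_w, _ = combine_cues([2.0, 2.4], [0.1, 0.4])  # fixation: retinal cue reliable
pur_est, pur_w, _ = combine_cues([2.0, 2.4], [0.4, 0.4])  # pursuit: retinal cue noisier

print(fix_w)  # weights lean toward the retinal cue
print(pur_w)  # weights equalize, so context pulls the estimate more
```

Under this scheme, adding retinal noise during stimulus or eye motion shifts weight toward the contextual inducers, which is one way to formalize why the illusion grows under those conditions.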
Affiliation(s)
- Ryan E B Mruczek
- Department of Psychology, University of Nevada Reno, Reno, NV, USA
- Lars Strother
- Department of Psychology, University of Nevada Reno, Reno, NV, USA
29
Ghazanfar AA, Eliades SJ. The neurobiology of primate vocal communication. Curr Opin Neurobiol 2014;28:128-135. [PMID: 25062473] [PMCID: PMC4177356] [DOI: 10.1016/j.conb.2014.06.015]
Abstract
Recent investigations of non-human primate communication revealed vocal behaviors far more complex than previously appreciated. Understanding the neural basis of these communicative behaviors is important as it has the potential to reveal the basic underpinnings of the still more complex human speech. The latest work revealed vocalization-sensitive regions both within and beyond the traditional boundaries of the central auditory system. The importance and mechanisms of multi-sensory face-voice integration in vocal communication are also increasingly apparent. Finally, studies on the mechanisms of vocal production demonstrated auditory-motor interactions that may allow for self-monitoring and vocal control. We review the current work in these areas of primate communication research.
Affiliation(s)
- Asif A Ghazanfar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Department of Ecology & Evolutionary Biology, Princeton University, Princeton, NJ 08544, USA.
- Steven J Eliades
- Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, 3400 Spruce Street, 5 Ravdin, Philadelphia, PA 19104, USA
30
Rieser JJ, Erdemir A, Khuu NT, Beck S. Knowing the Results of One's Own Actions Without Visual or Auditory Feedback When Walking, Throwing, and Singing. Ecol Psychol 2014. [DOI: 10.1080/10407413.2014.875318]
31
Gaveau V, Pisella L, Priot AE, Fukui T, Rossetti Y, Pélisson D, Prablanc C. Automatic online control of motor adjustments in reaching and grasping. Neuropsychologia 2013;55:25-40. [PMID: 24334110] [DOI: 10.1016/j.neuropsychologia.2013.12.005]
Abstract
Following the seminal investigations of Marc Jeannerod on action and perception, specifically goal-directed movement, this review article addresses visual and non-visual processes involved in guiding the hand in reaching or grasping tasks. The contributions of different sources of correction of ongoing movements are considered; these include visual feedback of the hand, as well as the often-neglected but important spatial updating and sharpening of goal localization following gaze-saccade orientation. The existence of an automatic online process guiding limb trajectory toward its goal is highlighted by a series of seminal experiments on goal-directed pointing movements. We then review psychophysical, electrophysiological, neuroimaging, and clinical studies that have explored the properties of these automatic corrective mechanisms and their neural bases, and established their generality. Finally, the functional significance of automatic corrective mechanisms, referred to as motor flexibility, and their potential use in rehabilitation are discussed.
Affiliation(s)
- Valérie Gaveau
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Laure Pisella
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
- Anne-Emmanuelle Priot
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Institut de recherche biomédicale des armées (IRBA), BP 73, 91223 Brétigny-sur-Orge cedex, France
- Takao Fukui
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France
- Yves Rossetti
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
| | - Denis Pélisson
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France
| | - Claude Prablanc
- INSERM, U1028, CNRS, UMR5292, Lyon Neurosciences Research Center, ImpAct, 16 avenue du doyen Lépine, 69676 Bron cedex, France; Université Lyon 1, Villeurbanne, France.
| |
32
Dokka K, MacNeilage PR, DeAngelis GC, Angelaki DE. Multisensory self-motion compensation during object trajectory judgments. Cereb Cortex 2013;25:619-30. [PMID: 24062317] [DOI: 10.1093/cercor/bht247]
Abstract
Judging object trajectory during self-motion is a fundamental ability for mobile organisms interacting with their environment. It requires the nervous system to compensate for the visual consequences of self-motion in order to make accurate judgments, but the mechanisms of this compensation are poorly understood. We comprehensively examined both the accuracy and precision of observers' ability to judge object trajectory in the world when self-motion was defined by vestibular, visual, or combined visual-vestibular cues. Without decision feedback, subjects demonstrated no compensation for self-motion defined solely by vestibular cues, partial compensation (47%) for visually defined self-motion, and significantly greater compensation (58%) during combined visual-vestibular self-motion. With decision feedback, subjects learned to accurately judge object trajectory in the world, and this learning generalized to novel self-motion speeds. Across conditions, greater compensation for self-motion was associated with decreased precision of object trajectory judgments, indicating that self-motion compensation comes at the cost of reduced discriminability. Our findings suggest that the brain can flexibly represent object trajectory relative to either the observer or the world, but a world-centered representation comes at the cost of decreased precision due to the inclusion of noisy self-motion signals.
Collapse
Affiliation(s)
- Kalpana Dokka
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
- Paul R MacNeilage
- German Center for Vertigo and Balance Disorders, University Hospital of Munich, Munich, Germany
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
33
Bedi H, Goltz HC, Wong AMF, Chandrakumar M, Niechwiej-Szwedo E. Error correcting mechanisms during antisaccades: contribution of online control during primary saccades and offline control via secondary saccades. PLoS One 2013;8:e68613. [PMID: 23936308] [PMCID: PMC3735558] [DOI: 10.1371/journal.pone.0068613]
Abstract
Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control of antisaccades is not affected by the presence of visual feedback; that is, whether or not visual feedback was present, the duration of the deceleration interval was extended and correlated significantly with reduced antisaccade endpoint error. We postulate that this extended deceleration is a feature of online control during volitional saccades that improves their endpoint accuracy. We also found that secondary saccades were generated more frequently in the antisaccade task than in the reflexive saccade task, and that extraretinal sources of feedback contributed more to programming the secondary “corrective” saccades in the antisaccade task. Nonetheless, secondary saccades corrected more of the remaining antisaccade amplitude error when visual feedback of the target was present. Taken together, our results reveal a distinctive online error control strategy, an extension of the deceleration interval, in the antisaccade task. Target feedback does not improve online control; rather, it improves the accuracy of secondary saccades in the antisaccade task.
Affiliation(s)
- Harleen Bedi
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Herbert C. Goltz
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Agnes M. F. Wong
- The Hospital for Sick Children, Toronto, Ontario, Canada
- Department of Ophthalmology and Vision Sciences, University of Toronto, Toronto, Ontario, Canada
- Ewa Niechwiej-Szwedo
- Department of Kinesiology, University of Waterloo, Waterloo, Ontario, Canada
34
Tsujita M, Ichikawa M. Non-retinotopic motor-visual recalibration to temporal lag. Front Psychol 2013;3:487. [PMID: 23293610] [PMCID: PMC3536266] [DOI: 10.3389/fpsyg.2012.00487]
Abstract
Temporal order judgment (TOJ) between a voluntary motor action and its perceptual feedback is important for distinguishing sensory feedback caused by the observer's own action from other stimuli that are irrelevant to that action. Prolonged exposure to a fixed temporal lag between motor action and visual feedback recalibrates the motor-visual temporal relationship and consequently shifts the point of subjective simultaneity (PSS). Previous studies on audio-visual temporal recalibration without voluntary action revealed that low-level processing is involved. However, it is not clear how low- and high-level processing contribute to recalibration to a constant temporal lag between voluntary action and visual feedback. This study examined the retinotopic specificity of motor-visual temporal recalibration. During the adaptation phase, observers repeatedly pressed a key, and a visual stimulus was presented in the left or right visual field with a fixed temporal lag (0 or 200 ms). In the test phase, observers performed a TOJ between their voluntary keypress and a test stimulus presented either in the same visual field as in the adaptation phase or in the opposite one. We found that the PSS shifted toward the exposed lag in both visual fields. These results suggest that low-level visual processing, which is retinotopically specific, makes only a minor contribution to motor-visual temporal lag adaptation, and that the adaptive PSS shift depends mainly on high-level processing such as attention to specific properties of the stimulus.
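The PSS in studies like this one is typically estimated by fitting a psychometric function to the temporal order judgments; the sketch below shows that step in Python, with made-up lag values and response proportions rather than the study's data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(lag, pss, slope):
    """Probability of reporting 'visual first' as a function of action-visual lag (ms)."""
    return 1.0 / (1.0 + np.exp(-(lag - pss) / slope))

# Hypothetical TOJ data: action-to-flash lags (ms) and proportion of 'visual first' responses
lags = np.array([-200, -100, -50, 0, 50, 100, 200], dtype=float)
p_visual_first = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

# The PSS is the lag at which the fitted curve crosses 50% 'visual first'
params, _ = curve_fit(logistic, lags, p_visual_first, p0=[0.0, 50.0])
pss, slope = params
print(f"PSS = {pss:.1f} ms, slope = {slope:.1f} ms")
```

A lag-adaptation effect would then show up as a horizontal shift of the fitted PSS between pre- and post-adaptation sessions.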
Affiliation(s)
- Masaki Tsujita
- Graduate School of Humanities and Social Sciences, Chiba University, Chiba, Japan
35
Colnaghi S, Ramat S, D'Angelo E, Cortese A, Beltrami G, Moglia A, Versino M. θ-burst stimulation of the cerebellum interferes with internal representations of sensory-motor information related to eye movements in humans. Cerebellum 2012;10:711-9. [PMID: 21544589] [DOI: 10.1007/s12311-011-0282-1]
Abstract
Continuous theta-burst stimulation (cTBS) applied over the cerebellum exerts long-lasting effects by modulating long-term synaptic plasticity, which is thought to be the basis of learning and behavioral adaptation. To investigate the impact of cerebellar cTBS on short-term sensory-motor memory, we recorded, in two groups of eight healthy subjects each, visually guided saccades (VGSs), memory-guided saccades (MGSs), and multiple memory-guided saccades (MMGSs), before and after cTBS (cTBS group) or simulated cTBS (control group). In the cTBS group, cTBS produced hypometria of contralateral centrifugal VGSs and worsened the accuracy of MMGSs bilaterally. In the control group, no significant differences were found between the two recording sessions. These results indicate that cTBS over the cerebellum causes eye movement effects that outlast the stimulus duration. The contralateral VGS hypometria suggests that we inhibited the fastigial nucleus on the stimulated side. In normal subjects, MMGSs have better final accuracy than MGSs. This improvement is due to the availability in MMGSs of the efference copy of the initial reflexive saccade directed toward the same peripheral target, which provides sensory-motor information that is memorized and then used to improve the accuracy of the subsequent volitional memory-guided saccade. Thus, we hypothesize that cTBS disrupted the cerebellum's capacity to form an internal representation of the memorized sensory-motor information to be used after a short interval for forward control of saccades.
Affiliation(s)
- Silvia Colnaghi
- Department of Public Health and Neuroscience, University of Pavia, Pavia, Italy.
36
Lee TE, Kim SH, Cho YA. Postoperative changes in spatial localization following exotropia surgery. Curr Eye Res 2012;38:210-4. [PMID: 22870922] [DOI: 10.3109/02713683.2012.713151]
Abstract
PURPOSE To measure changes in spatial localization following exotropia surgery using a computer touch-screen method of measurement. METHODS Enrolled in the study were 60 exotropia patients, all of whom had undergone corrective muscle surgeries under general anesthesia: 37 patients had undergone unilateral lateral rectus or bilateral lateral rectus muscle recession procedures (recession group), and 23 patients had undergone unilateral lateral and medial rectus muscle resection (R&R) or unilateral medial rectus resection only (resection group). We evaluated spatial localization by having patients point to targets on a computer touch-screen before surgery, and 1 day and 1 month after surgery. The pointing error, Δp, defined as the unsigned difference between the actual location of the target and the location pointed to, was recorded as the mean of five tests. We compared the extent of postoperative changes in Δp between the two groups. RESULTS The mean Δp before surgery did not differ statistically between the two groups (p = 0.93). One day after surgery, however, the postoperative change in Δp was significant in the resection group (2.0 ± 0.7°, p = 0.01) but not in the recession group (0.4 ± 0.5°, p = 0.86). CONCLUSIONS The ability for spatial localization is decreased in patients immediately following medial rectus resection, but is regained by 1 month following surgery.
Affiliation(s)
- Tae-Eun Lee
- Department of Ophthalmology, Korea University College of Medicine, Seoul, South Korea
37
Temporal recalibration in vocalization induced by adaptation of delayed auditory feedback. PLoS One 2011;6:e29414. [PMID: 22216275] [PMCID: PMC3245272] [DOI: 10.1371/journal.pone.0029414]
Abstract
BACKGROUND We ordinarily perceive our voice as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily disrupted by delayed auditory feedback (DAF). DAF causes normal speakers to have difficulty speaking fluently but helps people who stutter to improve speech fluency. However, the temporal mechanism underlying the integration of the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated this temporal tuning mechanism under DAF with an adaptation technique. METHODS AND FINDINGS Participants produced a single voice sound repeatedly with specific DAF delays (0, 66, 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between the motor sensation and the vocal sound given as feedback. We found that lag adaptation shifted simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that this temporal recalibration can be affected by the average delay time in the adaptation phase. CONCLUSIONS These findings suggest that vocalization is finely tuned by a temporal recalibration mechanism that acutely monitors the integration of temporal delays between motor sensation and vocal sound.
39
Das S, Ray D, Banerjee M. Does hallucination affect vigilance performance in schizophrenia? An exploratory study. Asian J Psychiatr 2011;4:196-202. [PMID: 23051117] [DOI: 10.1016/j.ajp.2011.05.015]
Abstract
The present study investigates the role of auditory verbal hallucination (AVH) in the attentional processes of individuals diagnosed with schizophrenia compared with healthy participants. The sample consisted of 26 participants diagnosed with schizophrenia, divided into "schizophrenia with hallucination" (N=12) and "schizophrenia without hallucination" (N=14) groups. Thirteen matched healthy participants were also included. A general health questionnaire was used to screen out psychiatric morbidity in healthy participants. The presence and/or absence of AVH was substantiated through administration of the Positive and Negative Syndrome Scale (PANSS); only individuals with higher composite scores on the positive scale were included. The Edinburgh Handedness Inventory was administered to all participants. Software designed to measure vigilance was used to assess attentional deficits in the three groups. The complexity of the vigilance task was varied across three parameters: (1) spatial position of the target stimulus and buffer, (2) frequency of the target stimulus and buffer, and (3) colour of the target stimulus and buffer. The performances of the three groups were compared statistically in terms of Hit, Miss and False Alarm scores. Results revealed that schizophrenia patients are deficient, compared with their healthy counterparts, in the ability to focus on a specific target while inhibiting non-relevant information across all conditions. Schizophrenia patients with AVH are also relatively more deficient than those without AVH. It can be concluded that perceptual abnormality in schizophrenia patients with hallucination has an additional negative impact on attentional processes.
Affiliation(s)
- Sudeshna Das
- Department of Health and Family Welfare, M.J.N. District Hospital, Cooch-Behar, India
40
Sridhar D, Bedell HE. Relative contributions of the two eyes to perceived egocentric visual direction in normal binocular vision. Vision Res 2011;51:1075-85. [PMID: 21371491] [PMCID: PMC3092072] [DOI: 10.1016/j.visres.2011.02.023]
Abstract
Perceived egocentric visual direction (EVD) is based on the sensed position of the eyes in the orbit and on oculocentric (eye-centered) visual direction (OVD). Previous reports indicate that in some subjects eye-position information from the two eyes contributes unequally to perceived EVD. Findings from other studies indicate that retinal information from the two eyes may not always contribute equally to perceived OVD. The goal of this study was to assess whether these two sources of information covary similarly within the same individuals. Open-loop pointing responses to an isolated target presented randomly at several horizontal locations were collected from 13 subjects during different magnitudes of asymmetric vergence to estimate the contribution of position information from each eye to perceived EVD. For the same subjects, the direction at which a horizontally or vertically disparate target with different interocular contrast or luminance ratios appeared aligned with a non-disparate target estimated the relative contribution of each eye's retinal information. The results show that the eye-position and retinal information vary similarly in most subjects, which is consistent with a modified version of Hering's law of visual direction.
Affiliation(s)
- Deepika Sridhar
- College of Optometry, University of Houston, 505 J Armistead Bldg, Houston, TX 77204-2020, USA
- Harold E. Bedell
- College of Optometry, University of Houston, 505 J Armistead Bldg, Houston, TX 77204-2020, USA
- Center for NeuroEngineering & Cognitive Science, University of Houston, Houston, TX 77204-4005, USA
41
Bedell HE, Tong J, Aydin M. The perception of motion smear during eye and head movements. Vision Res 2010;50:2692-701. [PMID: 20875444] [PMCID: PMC2991377] [DOI: 10.1016/j.visres.2010.09.025]
Abstract
Because the visual system integrates information across time, an image that moves on the retina would be expected to be perceived as smeared. In this article, we summarize the previous evidence that human observers perceive a smaller extent of smear when retinal image motion results from an eye or head movement, compared to when a physically moving target generates comparable image motion while the eyes and head are still. This evidence indicates that the reduction of perceived motion smear is asymmetrical, occurring only for targets that move against the direction of an eye or head movement. In addition, we present new data to show that no reduction of perceived motion smear occurs for targets that move in either direction during a visually induced perception of self-motion. We propose that low-level extra-retinal eye- and head-movement signals are responsible for the reduction of perceived motion smear, by decreasing the duration of the temporal impulse response. Although retinal as well as extra-retinal mechanisms can reduce the extent of perceived motion smear, available evidence suggests that improved visual functioning may occur only when an extra-retinal mechanism reduces the perception of smear.
Affiliation(s)
- Harold E Bedell
- College of Optometry, 505 J. Davis Armistead Bldg., University of Houston, Houston, TX 77204-2020, USA.
42
Lin IF, Gorea A. Location and identity memory of saccade targets. Vision Res 2010;51:323-32. [PMID: 21115027] [DOI: 10.1016/j.visres.2010.11.010]
Abstract
While the memory of objects' identity and of their spatiotopic location may sustain transsaccadic spatial constancy, the memory of their retinotopic location may hamper it. Is it then true that saccades perturb retinotopic but not spatiotopic memory? We address this issue by assessing localization performance for the last and the penultimate saccade target in a series of 2-6 saccades. Upon fixation, nine letter-pairs, eight black and one white, were displayed at 3° eccentricity around fixation within a 20° × 20° grey frame, and subjects were instructed to saccade to the white letter-pair; the cycle was then repeated. Identical conditions were run with the eyes maintaining fixation throughout the trial but with the grey frame moving so as to mimic its retinal displacement when the eyes moved. At the end of a trial, subjects reported the identity and/or the location of the target in either retinotopic (relative to the current fixation dot) or frame-based (relative to the grey frame) coordinates. Saccades degraded memory of the target's retinotopic location but not of its frame-based location or its identity. The results are compatible with the notion that spatiotopic representation takes over from retinotopic representation during eye movements, thereby contributing to the stability of the visual world as its projection jumps across the retina from saccade to saccade.
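The retinotopic vs. frame-based distinction used in this task is just a change of reference point; a minimal sketch of the coordinate bookkeeping (all positions are made-up illustrative values, in degrees of visual angle):

```python
import numpy as np

eye_pos = np.array([5.0, -2.0])       # current fixation on the screen
frame_pos = np.array([0.0, 0.0])      # center of the reference (grey) frame
target_screen = np.array([8.0, 1.0])  # target location on the screen

retinotopic = target_screen - eye_pos    # relative to current fixation (changes with each saccade)
frame_based = target_screen - frame_pos  # relative to the grey frame (stable across saccades)

# After a saccade, the retinotopic code must be remapped by the eye displacement;
# the frame-based code needs no update
new_eye_pos = np.array([-3.0, 0.0])
remapped_retinotopic = retinotopic + (eye_pos - new_eye_pos)

print(retinotopic, frame_based, remapped_retinotopic)
```

That the frame-based code survives eye movements without any remapping step is one way to think about why it fares better than the retinotopic code across saccades.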
Affiliation(s)
- I-Fan Lin
- Laboratoire de Psychologie de la Perception, Paris Descartes University and CNRS, 45 rue des Saints Pères, 75006 Paris, France
43
Abstract
Fundamental knowledge of motor cognition is an important component in a human factors repertoire, and this chapter serves as a guide to the history, theory, and application of motor cognition research. “From intention to input” captures the scope of this chapter in that cognitive theories of motor control, neural control of movement, and the effects of feedback on movement are all discussed. The chapter progresses from an overview and history of motor cognition theories to the neural basis for movement, then to an application of these theories via the study of specific actions. From there, rooted in the scientist-practitioner paradigm of human factors, the chapter covers applied considerations for designing control tasks and their associated inputs, taking into account individual differences in motor cognition and control and identifying critical issues in designing for input. General, applied guidelines are provided for use with current and future systems that have a motor cognition component.
44
Cui QN, Razavi B, O'Neill WE, Paige GD. Perception of auditory, visual, and egocentric spatial alignment adapts differently to changes in eye position. J Neurophysiol 2009;103:1020-35. [PMID: 19846626] [DOI: 10.1152/jn.00500.2009]
Abstract
Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0, ±20°) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior.
Affiliation(s)
- Qi N Cui
- Department of Neurobiology and Anatomy, University of Rochester Medical Center, Rochester, NY 14642-8603, USA
45
Wurtz RH. Neuronal mechanisms of visual stability. Vision Res 2008;48:2070-89. [PMID: 18513781] [PMCID: PMC2556215] [DOI: 10.1016/j.visres.2008.03.021]
Abstract
Human vision is stable and continuous in spite of the incessant interruptions produced by saccadic eye movements. These rapid eye movements serve vision by directing the high-resolution fovea rapidly from one part of the visual scene to another. They should detract from vision because they generate two major problems: displacement of the retinal image with each saccade and blurring of the image during the saccade. This review considers the substantial advances in understanding the neuronal mechanisms underlying this visual stability, derived primarily from neuronal recording and inactivation studies in the monkey, an excellent model for systems in the human brain. For the first problem, saccadic displacement, two neuronal candidates are salient. First are the neurons in frontal and parietal cortex with shifting receptive fields that provide anticipatory activity with each saccade and are driven by a corollary discharge; these could provide the mechanism for a retinotopic hypothesis of visual stability and possibly for a transsaccadic memory hypothesis. The second neuronal mechanism is provided by neurons whose visual response is modulated by eye position (gain field neurons) or largely independent of eye position (real position neurons); these neurons could provide the basis for a spatiotopic hypothesis. For the second problem, saccadic suppression, visual masking and corollary discharge are well-established mechanisms, and possible neuronal correlates have been identified for each.
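The gain-field idea mentioned here, visual responses multiplicatively modulated by eye position, is often illustrated with a toy population model in which a simple linear readout recovers a head-centered target location; the sketch below is such an illustration (all parameters invented), not a model from this review:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 200, 2000

# Random retinotopic tuning centers (deg) and eye-position gain slopes per neuron
prefs = rng.uniform(-40, 40, n_neurons)
slopes = rng.uniform(-0.02, 0.02, n_neurons)

def responses(x_head, eye):
    """Gain-field population response: Gaussian retinal tuning x linear eye-position gain."""
    x_ret = x_head[:, None] - eye[:, None]     # retinal location of the target per trial
    tuning = np.exp(-((x_ret - prefs) ** 2) / (2 * 10.0 ** 2))
    gain = 1.0 + slopes * eye[:, None]         # multiplicative eye-position gain
    return tuning * gain

x_head = rng.uniform(-20, 20, n_trials)        # head-centered target locations
eye = rng.uniform(-20, 20, n_trials)           # eye positions
R = responses(x_head, eye)

# A linear readout of the population recovers the head-centered location,
# even though no single neuron encodes it explicitly
X = np.column_stack([R, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, x_head, rcond=None)

x_test, eye_test = rng.uniform(-20, 20, 500), rng.uniform(-20, 20, 500)
pred = np.column_stack([responses(x_test, eye_test), np.ones(500)]) @ w
r2 = 1 - np.mean((pred - x_test) ** 2) / np.var(x_test)
print(f"head-centered decoding R^2 = {r2:.3f}")
```

The point of the demo is that no single unit carries a spatiotopic signal, yet the population as a whole does, which is why gain-field neurons are a candidate substrate for a spatiotopic hypothesis.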
Affiliation(s)
- Robert H Wurtz
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bldg. 49, RM 2A50, Bethesda, MD 20892-4435, USA.
46
Abstract
How can an action to a target be selected without yet knowing what it is? Pre-emptive perception (PEP) is a framework that orders neuronal mechanisms associated with voluntary actions before an action is started and until it is completed. It is assumed that PEP serves the purpose of perception, but a conscious, perceptual identification of the goal is not necessarily completed during the time period of PEP itself. The concept of PEP is that the brain pre-emptively optimizes an action plan to maximize eventual perception, even before being sure what the goal is. Experimental studies of voluntary saccadic eye movements are considered as prototypic activity within the framework of PEP. The core concept of pre-emption is that a particular saccade is selected while a large number of other possible actions are deselected. Pre-emptive computations include mechanisms associated with internal context and reward. Neurophysiological studies showing anatomically and functionally separate cortical and some subcortical neuronal groups involved in computing saccades are summarized. There is a potential relationship between PEP as a neurobiological framework and some philosophical concepts. Terms for processes between planning and action, such as intention, anticipation, and attention, are often used incongruently in everyday language and in epistemology. It is proposed here that a scrutiny of these terms can be rigorously approached through a temporal subdivision of PEP; conversely, clear definitions of these terms can lead to organized experimental designs in cognitive neurobiology. The temporal subdivision of PEP allows a critique of the Will in Schopenhauer's definition and distinguishes it from 'free will'.
Affiliation(s)
- Ivan Bodis-Wollner
- Downstate Medical Center, State University of New York, 450 Clarkson Avenue, Brooklyn, NY 11203, USA.
47
Abstract
Accurate perception of moving objects would be useful; accurate visually guided action is crucial. Visual motion across the scene influences perceived object location and the trajectory of reaching movements to objects. In this commentary, I propose that the visual system assigns the position of any object based on the predominant motion present in the scene, and that this is used to guide reaching movements to compensate for delays in visuomotor processing.
48
Forgacs PB, von Gizycki H, Selesnick I, Syed NA, Ebrahim K, Avitable M, Amassian V, Lytton W, Bodis-Wollner I. Perisaccadic parietal and occipital gamma power in light and in complete darkness. Perception 2008;37:419-32. [DOI: 10.1068/p5875]
Abstract
Our objective was to characterize perisaccadic gamma-range oscillations in the EEG during voluntary saccades in humans. We evaluated occipital perisaccadic gamma activity both in the presence of visual input and in its absence, when the observer was blindfolded. We quantified gamma power in the time periods before, during, and after horizontal saccades. The EEG was evaluated for individual saccades, and the wavelet-transformed EEG was averaged within each time window rather than averaging the raw EEG first. We found that, in both dark and light, parietal and occipital gamma power increased during the saccade and peaked prior to reaching the new fixation. We show that this is neither the result of muscle activity nor of visual input during saccades. Saccade direction affects the laterality of gamma power over posterior electrodes. Gamma power recorded over the posterior scalp increases during a saccade. The phasic modulation of gamma by saccades in darkness, when occipital activity is decoupled from visual input, provides electrophysiological evidence that voluntary saccades affect the ongoing EEG. We suggest that saccade-phasic gamma modulation may contribute to the short-term plasticity required to realign visual space to the intended fixation point of a saccade and provides a mechanism for neuronal assembly formation prior to achieving the intended saccadic goal. The wavelet-transformed perisaccadic EEG could provide an electrophysiological tool applicable in humans for fine analysis and potential separation of the stages of ‘planning’ and ‘action’.
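The order of operations described here, transform each trial first and average power afterwards, is what preserves induced (non-phase-locked) gamma. A minimal Python sketch on a simulated trial; the sampling rate, gamma band, and Morlet parameters are illustrative assumptions, not the study's:

```python
import numpy as np

fs = 500.0                      # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)   # one 1-s "trial"
rng = np.random.default_rng(0)

# Simulated single-trial EEG: noise plus a 60 Hz gamma burst around the "saccade" at t = 0.5 s
trial = rng.normal(0, 1, t.size)
burst = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2)) * np.sin(2 * np.pi * 60 * t)
trial += 3 * burst

def morlet_power(x, freq, fs, n_cycles=6):
    """Single-frequency Morlet wavelet power via complex convolution."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    wt = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalization
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

# Transform the single trial first, then take power -- averaging power across trials
# afterwards would preserve gamma whose phase varies from saccade to saccade
freqs = np.arange(30, 91, 5)                                   # gamma range (Hz)
power = np.array([morlet_power(trial, f, fs) for f in freqs])  # (n_freqs, n_samples)

peri = power[:, (t > 0.4) & (t < 0.6)].mean()
base = power[:, t < 0.2].mean()
ratio = peri / base
print(f"perisaccadic/baseline gamma power ratio: {ratio:.2f}")
```

Averaging the raw EEG first would largely cancel such non-phase-locked bursts, which is why the per-trial transform matters in this kind of analysis.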
Affiliation(s)
- Ivan Selesnick
- Department of Electrical & Computer Engineering, Polytechnic University, Brooklyn, NY 11201, USA
49
Abstract
Saccades and smooth pursuit eye movements are two different modes of oculomotor control. Saccades are primarily directed toward stationary targets, whereas smooth pursuit is elicited to track moving targets. In recent years, behavioural and neurophysiological data have demonstrated that both types of eye movements work in synergy for visual tracking. This suggests that saccades and pursuit are two outcomes of a single sensorimotor process that aims to orient the visual axis.
50
White RL, Snyder LH. Subthreshold microstimulation in frontal eye fields updates spatial memories. Exp Brain Res 2007; 181:477-92. [PMID: 17486326 DOI: 10.1007/s00221-007-0947-7] [Citation(s) in RCA: 30] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2006] [Accepted: 03/28/2007] [Indexed: 12/25/2022]
Abstract
The brain's sensitivity to self-generated movements is critical for behavior and relies on accurate internal representations of movements that have been made. In the present study, we stimulated neurons below saccade threshold in the frontal eye fields of monkeys performing an oculomotor delayed response task. Stimulation during, but not before, the memory period caused small but consistent displacements of memory-guided saccade endpoints. This displacement was in the opposite direction of the saccade evoked by stronger stimulation at the same site, suggesting that weak stimulation induced an internal saccade signal without evoking an actual movement. Consistent with this idea, the stimulation effect was nearly absent in a task where an animal was trained to ignore self-generated eye movements. These findings support a role for the frontal eye fields in accounting for self-generated movements, and indicate that corollary discharge signals can be manipulated independently of motor output.
Affiliation(s)
- Robert L White
- Department of Anatomy and Neurobiology, Box 8108, Washington University School of Medicine, 660 S. Euclid Ave, St. Louis, MO 63110, USA