1. Duyar A, Carrasco M. Eyes on the past: Gaze stability differs between temporal expectation and temporal attention. J Vis 2025; 25:11. PMID: 40238139. PMCID: PMC12011131. DOI: 10.1167/jov.25.4.11.
Abstract
Does the timing of a preceding visual event affect when people deploy attention in the future? Temporal expectation and temporal attention are two distinct processes that interact at the behavioral and neural levels, improving performance and gaze stability. The preceding foreperiod (the interval between the preparatory signal and stimulus onset in the previous trial) modulates expectation at the behavioral and oculomotor levels. Here, we investigated whether the preceding foreperiod also modulates the effects of temporal attention and whether such effects interact with expectation. We found that, regardless of whether the stimulus in the preceding trial occurred earlier than, later than, or at the expected moment, temporal attention consistently improved performance and accelerated the onset and offset of gaze stability by shifting microsaccade timing. However, attention inhibited overall microsaccade rates only when the preceding foreperiod was expected. Moreover, late preceding foreperiods weakened the effects of expectation on microsaccade rates, but this weakening was overridden by attention. Altogether, these findings reveal that the oculomotor system's flexibility does not translate into performance, and suggest that, although selection history can serve as one source of expectation in subsequent trials, it does not necessarily determine, strengthen, or guide attentional deployment.
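The microsaccade-rate and microsaccade-timing measures above presuppose a detection step that the abstract does not spell out. A common approach in this literature is the velocity-threshold algorithm of Engbert and Kliegl (2003), sketched here in Python; the sampling rate, threshold multiplier, and minimum duration are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_len=6):
    """Velocity-threshold microsaccade detection (after Engbert & Kliegl, 2003).

    x, y    : eye position in degrees, sampled at fs Hz.
    lam     : threshold multiplier (illustrative value, not from the paper).
    min_len : minimum event duration in samples.
    Returns a list of (start, end) sample indices.
    """
    # 5-sample smoothed velocity, in deg/s
    vx = np.convolve(x, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    vy = np.convolve(y, [1, 1, 0, -1, -1], "same") * (fs / 6.0)
    # Median-based (robust) estimate of the velocity SD, per axis
    sx = np.sqrt(np.median(vx**2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy) ** 2)
    # Elliptic test: supra-threshold when combined normalized speed > 1
    above = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # Group consecutive supra-threshold samples into candidate events
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(above) - start >= min_len:
        events.append((start, len(above)))
    return events
```

The elliptic test combines horizontal and vertical velocity so oblique microsaccades are not missed; raising `lam` makes detection more conservative.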
Affiliation(s)
- Aysun Duyar
- Department of Psychology, New York University, New York, NY, USA
- https://orcid.org/0000-0003-1039-8625
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- https://orcid.org/0000-0002-1002-9056
2. Duyar A, Carrasco M. Temporal attention and oculomotor effects dissociate distinct types of temporal expectation. bioRxiv 2025:2025.03.04.641562. PMID: 40093085. PMCID: PMC11908187. DOI: 10.1101/2025.03.04.641562.
Abstract
Temporal expectation, the ability to predict when events occur, relies on probabilistic information within the environment. Two types of temporal expectation interact with temporal attention (the ability to prioritize specific moments) at the performance level: temporal precision, based on the variability of an event's onset, and the hazard rate, based on the increasing probability of an event as its onset is delayed. Attentional benefits increase with precision but diminish with hazard rate. Both temporal expectation and temporal attention improve fixational stability; however, the distinct oculomotor effects of temporal precision and hazard rate, as well as their interactions with temporal attention, remain unknown. Investigating microsaccade dynamics, we found that hazard-based expectations were reflected in oculomotor responses, whereas precision-based expectations emerged only when temporal attention was deployed. We also found perception-eye movement dissociations for both types of temporal expectation, yet attentional benefits in performance coincided with microsaccade rate modulations. These findings reveal an interplay among distinct types of temporal expectation and temporal attention in enhancing and recalibrating fixational stability.
Affiliation(s)
- Aysun Duyar
- Department of Psychology, New York University, New York, NY, USA
- Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
3. Hu J, Badde S, Vetter P. Auditory guidance of eye movements toward threat-related images in the absence of visual awareness. Front Hum Neurosci 2024; 18:1441915. PMID: 39175660. PMCID: PMC11338778. DOI: 10.3389/fnhum.2024.1441915.
Abstract
The human brain is sensitive to threat-related information even when we are not aware of it. For example, fearful faces attract gaze in the absence of visual awareness. Moreover, information from different sensory modalities interacts in the absence of awareness; for example, the detection of suppressed visual stimuli is facilitated by simultaneously presented congruent sounds or tactile stimuli. Here, we combined these two lines of research and investigated whether threat-related sounds could facilitate visual processing of threat-related images suppressed from awareness such that they attract eye gaze. We suppressed threat-related images of cars and neutral images of human hands from visual awareness using continuous flash suppression and tracked observers' eye movements while presenting congruent or incongruent sounds (finger snapping and car engine sounds). Indeed, threat-related car sounds guided the eyes toward the suppressed car images: participants looked longer at the hidden car images than at any other part of the display. In contrast, neither congruent nor incongruent sounds had a significant effect on eye responses to the suppressed hand images. Overall, our results suggest that semantically congruent sounds modulate eye movements to images suppressed from awareness only in a danger-related context, highlighting the prioritisation of eye responses to threat-related stimuli in the absence of visual awareness.
Affiliation(s)
- Junchao Hu
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Stephanie Badde
- Department of Psychology, Tufts University, Medford, MA, United States
- Petra Vetter
- Department of Psychology, University of Fribourg, Fribourg, Switzerland
4. Carrasco M, Spering M. Perception-action Dissociations as a Window into Consciousness. J Cogn Neurosci 2024; 36:1557-1566. PMID: 38865201. DOI: 10.1162/jocn_a_02122.
Abstract
Understanding the neural correlates of unconscious perception stands as a primary goal of experimental research in cognitive psychology and neuroscience. In this Perspectives paper, we explain why experimental protocols probing qualitative dissociations between perception and action provide valuable insights into conscious and unconscious processing, along with their corresponding neural correlates. We present research that utilizes human eye movements as a sensitive indicator of unconscious visual processing. Given the increasing reliance on oculomotor and pupillary responses in consciousness research, these dissociations also provide a cautionary tale about inferring conscious perception solely based on no-report protocols.
5. Lisi M, Cavanagh P. Different extrapolation of moving object locations in perception, smooth pursuit, and saccades. J Vis 2024; 24:9. PMID: 38546586. PMCID: PMC10996402. DOI: 10.1167/jov.24.3.9.
Abstract
The ability to accurately perceive and track moving objects is crucial for many everyday activities. In this study, we use a "double-drift stimulus" to explore the processing of visual motion signals that underlies perception, pursuit, and saccade responses to a moving object. Participants were presented with peripheral moving apertures filled with noise that either drifted orthogonally to the aperture's direction or had no net motion. Participants were asked to saccade to and track these targets with their gaze as soon as they appeared and then to report their direction. In the trials with internal motion, the target disappeared at saccade onset so that the first 100 ms of the postsaccadic pursuit response was driven uniquely by peripheral information gathered before saccade onset. This provided independent measures of perceptual, pursuit, and saccadic responses to the double-drift stimulus on a trial-by-trial basis. Our analysis revealed systematic differences between saccadic responses, on the one hand, and perceptual and pursuit responses, on the other. These differences are unlikely to be caused by differences in the processing of motion signals, because both saccades and pursuit seem to rely on shared target position and velocity information. We conclude that our results instead reflect a difference in how the processing mechanisms underlying perception, pursuit, and saccades combine motion signals with target position. These findings advance our understanding of the mechanisms underlying dissociations in visual processing between perception and eye movements.
Affiliation(s)
- Matteo Lisi
- Department of Psychology, Royal Holloway, University of London, London, UK
- Patrick Cavanagh
- Department of Psychology, Glendon College, Toronto, Ontario, Canada
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
6. Sheliga BM, FitzGibbon EJ. Manipulating the Fourier spectra of stimuli comprising a two-frame kinematogram to study early visual motion-detecting mechanisms: Perception versus short latency ocular-following responses. J Vis 2023; 23:11. PMID: 37725387. PMCID: PMC10513114. DOI: 10.1167/jov.23.10.11.
Abstract
Two-frame kinematograms have been used extensively to study motion perception in human vision. Measurements of the direction-discrimination performance limit (Dmax) have been the primary subject of such studies, whereas surprisingly little research has asked how variability in the spatial frequency content of the individual frames affects motion processing. Here, we used two-frame one-dimensional vertical pink-noise kinematograms in which the images in both frames were bandpass filtered, with the central spatial frequency of the filter manipulated independently for each image. To avoid spatial aliasing, there was no actual leftward or rightward shift of the image: instead, the phases of all Fourier components of the second image were shifted by ±¼ wavelength with respect to those of the first. We recorded ocular-following responses (OFRs) and perceptual direction discrimination in human subjects. OFRs were in the direction of the Fourier components' shift and showed a smooth decline in amplitude, well fit by Gaussian functions, as the difference between the central spatial frequencies of the first and second images increased. In sharp contrast, perceptual direction discrimination was 100% correct when the difference between the central spatial frequencies of the two images was small, but deteriorated rapidly to chance as the difference increased further. The perceptual dependencies moved closer to those of the OFRs when subjects were allowed to grade the strength of perceived motion. Response asymmetries common to perceptual judgments and OFRs suggest that both rely on the same early visual processing mechanisms. The OFR data were quantitatively well described by a model combining two factors: (1) an excitatory drive determined by a power-law sum of the stimulus Fourier components' contributions, scaled by (2) a contrast normalization mechanism. Thus, in addition to traditional studies relying on perceptual reports, OFRs represent a valuable behavioral tool for studying early motion processing on a fine scale.
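The stimulus construction described above, shifting the phase of every Fourier component by a quarter wavelength rather than translating the image, is compact enough to sketch. The pink-noise generation below is an illustrative reconstruction, not the authors' code, and the sign convention for `direction` is arbitrary.

```python
import numpy as np

def quarter_cycle_shift(frame1, direction=+1):
    """Return a second frame whose Fourier components are phase-shifted
    by a quarter wavelength (90 degrees) relative to frame1.

    Because every component is rotated by a fixed phase angle rather than
    displaced by a fixed distance, there is no net translation of the
    image, which avoids spatial aliasing.
    """
    n = len(frame1)
    spectrum = np.fft.rfft(frame1)
    # e^{+/- i*pi/2} rotates each component's phase by 90 degrees;
    # the DC term (index 0) is left unchanged
    spectrum[1:] *= np.exp(1j * direction * np.pi / 2)
    return np.fft.irfft(spectrum, n)

# Example: a 1-D pink-noise frame (amplitude ~ 1/sqrt(f)), illustrative only
rng = np.random.default_rng(1)
n = 512
freqs = np.fft.rfftfreq(n)
amp = np.where(freqs > 0, 1.0 / np.sqrt(freqs + 1e-12), 0.0)
phases = rng.uniform(0, 2 * np.pi, len(freqs))
frame1 = np.fft.irfft(amp * np.exp(1j * phases), n)
frame2 = quarter_cycle_shift(frame1, direction=+1)
```

For a single sinusoid this operation equals a quarter-period spatial shift; for broadband noise each component moves a quarter of its own wavelength, so there is no common displacement of the image as a whole.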
Affiliation(s)
- Boris M Sheliga
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Edmond J FitzGibbon
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
7. Kwon S, Fahrenthold BK, Cavanaugh MR, Huxlin KR, Mitchell JF. Perceptual restoration fails to recover unconscious processing for smooth eye movements after occipital stroke. eLife 2022; 11:67573. PMID: 35730931. PMCID: PMC9255960. DOI: 10.7554/elife.67573.
Abstract
The visual pathways that guide actions do not necessarily mediate conscious perception. Patients with primary visual cortex (V1) damage lose conscious perception but often retain unconscious abilities (e.g. blindsight). Here, we asked if saccade accuracy and post-saccadic following responses (PFRs) that automatically track target motion upon saccade landing are retained when conscious perception is lost. We contrasted these behaviors in the blind and intact fields of 11 chronic V1-stroke patients, and in 8 visually intact controls. Saccade accuracy was relatively normal in all cases. Stroke patients also had normal PFR in their intact fields, but no PFR in their blind fields. Thus, V1 damage did not spare the unconscious visual processing necessary for automatic, post-saccadic smooth eye movements. Importantly, visual training that recovered motion perception in the blind field did not restore the PFR, suggesting a clear dissociation between pathways mediating perceptual restoration and automatic actions in the V1-damaged visual system.
Affiliation(s)
- Sunwoo Kwon
- Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, United States
- Matthew R Cavanaugh
- Center for Visual Science, University of Rochester, Rochester, United States
- Krystel R Huxlin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
- Jude F Mitchell
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, United States
8. Yoshimoto S, Hayasaka T. Common and independent processing of visual motion perception and oculomotor response. J Vis 2022; 22:6. PMID: 35293955. PMCID: PMC8944401. DOI: 10.1167/jov.22.4.6.
Abstract
Visual motion signals are used not only to drive motion perception but also to elicit oculomotor responses. A fundamental question is whether perceptual and oculomotor processing of motion signals shares a common mechanism. This study addressed this question using visual motion priming, in which the perceived direction of a directionally ambiguous stimulus is biased in the same (positive priming) or the opposite (negative priming) direction as that of a priming stimulus. The priming effect depends on the duration of the priming stimulus: positive and negative priming are assumed to be mediated by high- and low-level motion systems, respectively. Participants judged the perceived direction of a π-phase-shifted test grating presented after a smoothly drifting priming grating of varied durations, and their eye movements were measured while the test grating was presented. On a trial-by-trial basis, perception and eye movements were discrepant under positive priming and correlated under negative priming when an interstimulus interval was inserted between the priming and test stimuli, indicating that the eye movements were evoked by the test stimulus per se. These findings suggest that perceptual and oculomotor responses are induced by a common mechanism at low levels of motion processing but by independent mechanisms at high levels.
Affiliation(s)
- Sanae Yoshimoto
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
- Tomoyuki Hayasaka
- School of Integrated Arts and Sciences, Hiroshima University, Hiroshima, Japan
9. Macpherson T, Matsumoto M, Gomi H, Morimoto J, Uchibe E, Hikida T. Parallel and hierarchical neural mechanisms for adaptive and predictive behavioral control. Neural Netw 2021; 144:507-521. PMID: 34601363. DOI: 10.1016/j.neunet.2021.09.009.
Abstract
Our brain can be viewed as a network of largely hierarchically organized neural circuits that individually control specific functions but, acting in parallel, enable complex and simultaneous behaviors. Indeed, many of our daily actions require concurrent information processing in sensorimotor, associative, and limbic circuits that are dynamically and hierarchically modulated by sensory information and previous learning. This organization of information processing in biological organisms has served as a major inspiration for artificial intelligence and has helped to create in silico systems capable of matching or even outperforming humans in several specific tasks, including visual recognition and strategy-based games. However, the development of human-like robots that can move as quickly as humans and respond flexibly in various situations remains a major challenge, and indicates an area where further use of parallel and hierarchical architectures may hold promise. In this article we review several important neural and behavioral mechanisms organizing hierarchical and predictive processing for the acquisition and realization of flexible behavioral control. Then, inspired by the organizational features of brain circuits, we introduce a multi-timescale parallel and hierarchical learning framework for the realization of versatile and agile movement in humanoid robots.
Affiliation(s)
- Tom Macpherson
- Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
- Masayuki Matsumoto
- Division of Biomedical Science, Faculty of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Co., Kanagawa, Japan
- Jun Morimoto
- Department of Brain Robot Interface, ATR Computational Neuroscience Laboratories, Kyoto, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Eiji Uchibe
- Department of Brain Robot Interface, ATR Computational Neuroscience Laboratories, Kyoto, Japan
- Takatoshi Hikida
- Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
10. Park WJ, Schauder KB, Kwon OS, Bennetto L, Tadin D. Atypical visual motion prediction abilities in autism spectrum disorder. Clin Psychol Sci 2021; 9:944-960. PMID: 34721951. DOI: 10.1177/2167702621991803.
Abstract
A recent theory posits that prediction deficits may underlie the core symptoms of autism spectrum disorder (ASD). However, empirical evidence for this hypothesis is minimal. Using a visual extrapolation task, we tested motion prediction abilities in children and adolescents with and without ASD. We examined two factors known to be important for motion prediction: the central-tendency response bias and smooth pursuit eye movements. In ASD, response biases followed an atypical trajectory dominated by early responses. This differed from controls, whose response biases reflected a gradual accumulation of knowledge about stimulus statistics. Moreover, while better smooth pursuit of the moving object was linked to more accurate motion prediction in controls, in ASD better smooth pursuit was counterintuitively linked to a more pronounced early response bias. Together, these results demonstrate atypical visual prediction abilities in ASD and offer insights into possible mechanisms underlying the observed differences.
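The central-tendency response bias mentioned above is commonly formalized as Bayesian shrinkage toward the mean of the stimulus distribution: the report is a precision-weighted average of the current observation and a prior built up over trials. The sketch below uses that textbook Gaussian model, not the authors' fitted model, and all numbers in the usage note are hypothetical.

```python
def central_tendency_estimate(observation, prior_mean, sigma_obs, sigma_prior):
    """Posterior mean for a Gaussian prior x ~ N(prior_mean, sigma_prior^2)
    and a Gaussian likelihood observation ~ N(x, sigma_obs^2).

    The weight w < 1 shrinks the estimate toward the prior mean, producing
    the classic central-tendency (regression-to-the-mean) bias: the noisier
    the observation, the stronger the shrinkage.
    """
    w = sigma_prior**2 / (sigma_prior**2 + sigma_obs**2)
    return w * observation + (1 - w) * prior_mean
```

For example, with equally reliable observation and prior, a target value of 1000 against a prior mean of 600 is reported near 800; doubling the observation noise pulls the estimate further toward the prior mean (680 in that case).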
Affiliation(s)
- Woon Ju Park
- Department of Psychology, University of Washington, Seattle, WA 98195
- Kimberly B Schauder
- Center for Autism Spectrum Disorders, Children's National Hospital, Rockville, MD 20850
- Oh-Sang Kwon
- Department of Human Factors Engineering, Ulsan National Institute of Science and Technology, Ulsan, South Korea
- Loisa Bennetto
- Department of Psychology, University of Rochester, Rochester, NY 14627; Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY 14642
- Duje Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY 14642; Center for Visual Science, University of Rochester, Rochester, NY 14627; Department of Ophthalmology, University of Rochester Medical Center, Rochester, NY 14642
11. Cloherty SL, Yates JL, Graf D, DeAngelis GC, Mitchell JF. Motion Perception in the Common Marmoset. Cereb Cortex 2021; 30:2658-2672. PMID: 31828299. DOI: 10.1093/cercor/bhz267.
Abstract
Visual motion processing is a well-established model system for studying neural population codes in primates. The common marmoset, a small New World primate, offers unparalleled opportunities to probe these population codes in key motion processing areas, such as cortical areas MT and MST, because these areas are accessible for imaging and recording at the cortical surface. However, little is currently known about the perceptual abilities of the marmoset. Here, we introduce a paradigm for studying motion perception in the marmoset and compare their psychophysical performance with that of human observers. We trained two marmosets to perform a motion estimation task in which they provided an analog report of their perceived direction of motion with an eye movement to a ring that surrounded the motion stimulus. Marmosets and humans exhibited similar speed-accuracy trade-offs: errors were larger and reaction times were longer as the strength of the motion signal was reduced. Reverse correlation on the temporal fluctuations in motion direction revealed that both species exhibited short integration windows; however, marmosets had substantially less nondecision time than humans. Our results provide the first quantification of motion perception in the marmoset and demonstrate several advantages of using analog estimation tasks.
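The reverse-correlation analysis mentioned above can be approximated by relating trial-by-trial report errors to the frame-by-frame direction fluctuations; averaging reduces this to a per-frame covariance. The sketch below follows that simplified recipe; the array shapes and normalization are assumptions, not the authors' pipeline.

```python
import numpy as np

def temporal_kernel(fluctuations, report_error):
    """Estimate a temporal weighting kernel by reverse correlation.

    fluctuations : (n_trials, n_frames) per-frame direction noise relative
                   to the mean direction, in degrees.
    report_error : (n_trials,) reported minus mean direction, in degrees.
    Returns the per-frame covariance between fluctuation and report,
    normalized to unit sum; frames with large weight drove the response.
    """
    fl = fluctuations - fluctuations.mean(axis=0)   # center each frame
    err = report_error - report_error.mean()        # center the reports
    kernel = fl.T @ err / len(err)                  # covariance per frame
    return kernel / kernel.sum()
```

A kernel concentrated on the first few frames indicates a short integration window, the pattern the abstract reports for both species.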
Affiliation(s)
- Shaun L Cloherty
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA; Department of Physiology, Monash University, Melbourne, VIC 3800, Australia
- Jacob L Yates
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Dina Graf
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Gregory C DeAngelis
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
- Jude F Mitchell
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
12.
Abstract
Visual perceptual learning (VPL) is an improvement in visual function following training. Although the practical utility of VPL was once thought to be limited by its specificity to the precise stimuli used during training, more recent work has shown that such specificity can be overcome with appropriate training protocols. In contrast, relatively little is known about the extent to which VPL exhibits motor specificity. Previous studies have yielded mixed results. In this work, we have examined the effector specificity of VPL by training observers on a motion discrimination task that maintains the same visual stimulus (drifting grating) and task structure, but that requires different effectors to indicate the response (saccade vs. button press). We find that, in these conditions, VPL transfers fully between a manual and an oculomotor response. These results are consistent with the idea that VPL entails the learning of a decision rule that can generalize across effectors.
Affiliation(s)
- Asmara Awada
- Department of Neurology and Neurosurgery, McGill University, Montreal, Canada
- Shahab Bakhtiari
- Department of Computer Science, McGill University, Montreal, Canada
- Christopher C Pack
- Department of Neurology and Neurosurgery, McGill University, Montreal, Canada
13. Lakshminarasimhan KJ, Avila E, Neyhart E, DeAngelis GC, Pitkow X, Angelaki DE. Tracking the Mind's Eye: Primate Gaze Behavior during Virtual Visuomotor Navigation Reflects Belief Dynamics. Neuron 2020; 106:662-674.e5. PMID: 32171388. PMCID: PMC7323886. DOI: 10.1016/j.neuron.2020.02.023.
Abstract
To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects' belief dynamics from natural behavior. We tested whether eye movements could be used to infer subjects' beliefs about latent variables using a naturalistic navigation task. Humans and monkeys navigated to a remembered goal location in a virtual environment that provided optic flow but lacked explicit position cues. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviors and may possibly serve as a window into the subject's dynamically evolving internal beliefs.
Affiliation(s)
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York, NY, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
- Eric Avila
- Center for Neural Science, New York University, New York, NY, USA
- Erin Neyhart
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Xaq Pitkow
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, NY, USA; Tandon School of Engineering, New York University, New York, NY, USA
14. Spatial suppression promotes rapid figure-ground segmentation of moving objects. Nat Commun 2019; 10:2732. PMID: 31266956. PMCID: PMC6606582. DOI: 10.1038/s41467-019-10653-8.
Abstract
Segregation of objects from their backgrounds is a fundamental visual function, and one that is particularly effective when objects are in motion. Theoretically, suppressive center-surround mechanisms are well suited to accomplishing motion segregation. This longstanding hypothesis, however, has received limited empirical support. We report converging correlational and causal evidence that spatial suppression of background motion signals is critical for rapid segmentation of moving objects. Motion segregation ability is strongly predicted by both individual and stimulus-driven variations in spatial suppression strength. Moreover, aging-related superiority in perceiving background motion is associated with profound impairments in motion segregation. This segregation deficit is alleviated by perceptual learning, but only when motion segregation training also decreases sensitivity to background motion. We argue that perceptual insensitivity to large moving stimuli effectively implements background subtraction, which in turn enhances the visibility of moving objects and accounts for the observed link between spatial suppression and motion segregation.
15. Vetter P, Badde S, Phelps EA, Carrasco M. Emotional faces guide the eyes in the absence of awareness. eLife 2019; 8:43467. PMID: 30735123. PMCID: PMC6382349. DOI: 10.7554/elife.43467.
Abstract
The ability to respond quickly to a threat is a key skill for survival. Consciously perceived threat-related emotional information, such as an angry or fearful face, not only confers perceptual advantages but also guides rapid actions such as eye movements. Emotional information that is suppressed from awareness still confers perceptual and attentional benefits. However, it is unknown whether suppressed emotional information can directly guide actions, or whether emotional information has to enter awareness to do so. We suppressed emotional faces from awareness using continuous flash suppression and tracked eye gaze position. Under successful suppression, as indicated by objective and subjective measures, gaze moved towards fearful faces but away from angry faces. Our findings reveal that (1) threat-related emotional stimuli can guide eye movements in the absence of visual awareness, and (2) threat-related emotional face information guides distinct oculomotor actions depending on the type of threat conveyed by the emotional expression.
Affiliation(s)
- Petra Vetter
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Stephanie Badde
- Department of Psychology, Center for Neural Science, New York University, New York, United States
- Elizabeth A Phelps
- Department of Psychology, Center for Neural Science, New York University, New York, United States; Department of Psychology, Harvard University, Cambridge, United States
- Marisa Carrasco
- Department of Psychology, Center for Neural Science, New York University, New York, United States
16
Abstract
Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus, than when only one eye does. We further discovered that this binocular advantage is a function of the interocular correlation between the two monocular images: It is maximal when they are identical, and reduced when the two eyes are presented with different images. This is possible only if the neurons that underlie ocular following are sensitive to binocular disparity.
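The interocular-correlation manipulation described here amounts to correlating the two monocular images. A minimal sketch (function name and image dimensions are illustrative):

```python
import numpy as np

# Interocular correlation as the Pearson correlation between the two
# monocular images, flattened to vectors. Identical images give a
# correlation of 1; independent images give a value near 0.
def interocular_correlation(left_img, right_img):
    return np.corrcoef(left_img.ravel(), right_img.ravel())[0, 1]

rng = np.random.default_rng(0)
left = rng.standard_normal((64, 64))
r_identical = interocular_correlation(left, left)   # ~1.0
r_unrelated = interocular_correlation(left, rng.standard_normal((64, 64)))
```

On the abstract's account, the binocular advantage in ocular following should shrink as this correlation value moves from 1 toward 0.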
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, MD, USA
17
Suppression and Contrast Normalization in Motion Processing. J Neurosci 2017; 37:11051-11066. [PMID: 29018158 DOI: 10.1523/jneurosci.1572-17.2017] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Revised: 08/11/2017] [Accepted: 08/18/2017] [Indexed: 11/21/2022] Open
Abstract
Sensory neurons are activated by a range of stimuli to which they are said to be tuned. Usually, they are also suppressed by another set of stimuli that have little effect when presented in isolation. The interactions between preferred and suppressive stimuli are often quite complex and vary across neurons, even within a single area, making it difficult to infer their collective effect on behavioral responses mediated by activity across populations of neurons. Here, we investigated this issue by measuring, in human subjects (three males), the suppressive effect of static masks on the ocular following responses induced by moving stimuli. We found a wide range of effects, which depend in a nonlinear and nonseparable manner on the spatial frequency, contrast, and spatial location of both stimulus and mask. Under some conditions, the presence of the mask can be seen as scaling the contrast of the driving stimulus. Under other conditions, the effect is more complex, involving also a direct scaling of the behavioral response. All of this complexity at the behavioral level can be captured by a simple model in which stimulus and mask interact nonlinearly at two stages, one monocular and one binocular. The nature of the interactions is compatible with those observed at the level of single neurons in primates, usually broadly described as divisive normalization, without having to invoke any scaling mechanism.

SIGNIFICANCE STATEMENT: The response of sensory neurons to their preferred stimulus is often modulated by stimuli that are not effective when presented alone. Individual neurons can exhibit multiple modulatory effects, with considerable variability across neurons even in a single area. Such diversity has made it difficult to infer the impact of these modulatory mechanisms on behavioral responses. Here, we report the effects of a stationary mask on the reflexive eye movements induced by a moving stimulus. A model with two stages, each incorporating a divisive modulatory mechanism, reproduces our experimental results and suggests that qualitative variability of masking effects in cortical neurons might arise from differences in the extent to which such effects are inherited from earlier stages.
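The two-stage divisive-normalization scheme the abstract describes can be caricatured in a few lines. This is a toy sketch under assumed constants, not the paper's fitted model; function names and parameter values are hypothetical.

```python
# Toy two-stage divisive normalization: at each stage the driving signal
# is divided by a pool that includes the mask, so the mask attenuates
# the response divisively rather than subtractively.
def normalize(drive, pool, sigma):
    return drive / (sigma + drive + pool)

def two_stage_response(stim_contrast, mask_contrast,
                       sigma_mono=0.1, sigma_bino=0.1):
    # stage 1 (monocular): stimulus and mask normalize each other
    stim_mono = normalize(stim_contrast, mask_contrast, sigma_mono)
    mask_mono = normalize(mask_contrast, stim_contrast, sigma_mono)
    # stage 2 (binocular): a second divisive pool acts on stage-1 outputs
    return normalize(stim_mono, mask_mono, sigma_bino)

# a static mask attenuates the response to the moving stimulus
r_no_mask = two_stage_response(0.5, 0.0)
r_masked = two_stage_response(0.5, 0.5)
```

Cascading two such stages is what lets a single divisive mechanism produce both contrast-scaling and response-scaling mask effects, depending on where the mask enters the pool.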
18
Probing Early Motion Processing With Eye Movements: Differences of Vestibular Migraine, Migraine With and Without Aura in the Attack Free Interval. Headache 2017; 58:275-286. [DOI: 10.1111/head.13185] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/08/2017] [Indexed: 01/03/2023]
19
Schauder KB, Park WJ, Tadin D, Bennetto L. Larger Receptive Field Size as a Mechanism Underlying Atypical Motion Perception in Autism Spectrum Disorder. Clin Psychol Sci 2017; 5:827-842. [PMID: 28989818 DOI: 10.1177/2167702617707733] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Atypical visual motion perception has been widely observed in individuals with autism spectrum disorder (ASD). The pattern of results, however, has been inconsistent. Emerging mechanistic hypotheses seek to explain these variable patterns of atypical motion sensitivity, each uniquely predicting specific patterns of performance across varying stimulus conditions. Here, we investigated the integrity of two such fundamental mechanisms: response gain control and receptive field size. Twenty children and adolescents with ASD and 20 typically developing (TD) age- and IQ-matched controls performed a motion discrimination task. To adequately model group differences in both mechanisms of interest, we tested a range of 23 stimulus conditions varying in size and contrast. Results revealed a motion perception impairment in ASD that was specific to the smallest sized stimuli (1°), irrespective of stimulus contrast. Model analyses provided evidence for larger receptive field size in ASD as the mechanism that explains this size-specific reduction of motion sensitivity.
Affiliation(s)
- Kimberly B Schauder
- Department of Clinical and Social Sciences in Psychology, University of Rochester; Center for Visual Science, University of Rochester
- Woon Ju Park
- Department of Brain and Cognitive Sciences, University of Rochester; Center for Visual Science, University of Rochester
- Duje Tadin
- Department of Brain and Cognitive Sciences, University of Rochester; Center for Visual Science, University of Rochester; Department of Ophthalmology, University of Rochester School of Medicine
- Loisa Bennetto
- Department of Clinical and Social Sciences in Psychology, University of Rochester; Department of Brain and Cognitive Sciences, University of Rochester
20
Dieter KC, Brascamp J, Tadin D, Blake R. Does visual attention drive the dynamics of bistable perception? Atten Percept Psychophys 2016; 78:1861-73. [PMID: 27230785 PMCID: PMC5014653 DOI: 10.3758/s13414-016-1143-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct, depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences.
Affiliation(s)
- Kevin C Dieter
- Vanderbilt Vision Research Center and Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
- Jan Brascamp
- Department of Psychology, Michigan State University, East Lansing, MI 48823, USA
- Duje Tadin
- Department of Brain & Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Ophthalmology, University of Rochester School of Medicine, Rochester, NY 14642, USA
- Randolph Blake
- Vanderbilt Vision Research Center and Department of Psychology, Vanderbilt University, Nashville, TN 37240, USA
21
Quaia C, Optican LM, Cumming BG. A Motion-from-Form Mechanism Contributes to Extracting Pattern Motion from Plaids. J Neurosci 2016; 36:3903-18. [PMID: 27053199 PMCID: PMC4821905 DOI: 10.1523/jneurosci.3398-15.2016] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 02/22/2016] [Accepted: 02/24/2016] [Indexed: 11/21/2022] Open
Abstract
Since the discovery of neurons selective for pattern motion direction in primate middle temporal area MT (Albright, 1984; Movshon et al., 1985), the neural computation of this signal has been the subject of intense study. The bulk of this work has explored responses to plaids obtained by summing two drifting sinusoidal gratings. Unfortunately, with these stimuli, many different mechanisms are similarly effective at extracting pattern motion. We devised a new set of stimuli, obtained by summing two random line stimuli with different orientations. This allowed several novel manipulations, including generating plaids that do not contain rigid 2D motion. Importantly, these stimuli do not engage most of the previously proposed mechanisms. We then recorded the ocular following responses that such stimuli induce in human subjects. We found that pattern motion is computed even with stimuli that do not cohere perceptually, including those without rigid motion, and even when the two gratings are presented separately to the two eyes. Moderate temporal and/or spatial separation of the gratings impairs the computation. We show that, of the models proposed so far, only those based on the intersection-of-constraints rule, embedding a motion-from-form mechanism (in which orientation signals are used in the computation of motion direction signals), can account for our results. At least for the eye movements reported here, a motion-from-form mechanism is thus involved in one of the most basic functions of the visual motion system: extracting motion direction from complex scenes.

SIGNIFICANCE STATEMENT: Anatomical considerations led to the proposal that visual function is organized in separate processing streams: one (ventral) devoted to form and one (dorsal) devoted to motion. Several experimental results have challenged this view, arguing in favor of a more integrated view of visual processing. Here we add to this body of work, supporting a role for form information even in a function (extracting pattern motion direction from complex scenes) for which decisive evidence for the involvement of form signals has been lacking.
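The intersection-of-constraints (IOC) rule referenced above solves for the single 2D velocity consistent with both components' normal velocities, which reduces to a 2x2 linear system. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def ioc_pattern_velocity(theta1_deg, v1, theta2_deg, v2):
    # Each drifting component constrains the pattern velocity V via
    # V . n_i = v_i, where n_i is the unit normal (drift direction)
    # of component i and v_i its speed along that normal.
    t1, t2 = np.deg2rad(theta1_deg), np.deg2rad(theta2_deg)
    n = np.array([[np.cos(t1), np.sin(t1)],
                  [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(n, np.array([v1, v2]))

# two components drifting at 1 deg/s along normals at +45 and -45 degrees
V = ioc_pattern_velocity(45.0, 1.0, -45.0, 1.0)
# the unique consistent pattern motion is rightward at sqrt(2) deg/s
```

The motion-from-form variant the authors favor feeds orientation signals into the estimate of each n_i, but the intersection step itself is this solve.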
Affiliation(s)
- Christian Quaia
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Lance M Optican
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
- Bruce G Cumming
- Laboratory of Sensorimotor Research, National Eye Institute, National Institutes of Health, Department of Health and Human Services, Bethesda, Maryland 20892
22
Turkozer HB, Pamir Z, Boyaci H. Contrast Affects fMRI Activity in Middle Temporal Cortex Related to Center-Surround Interaction in Motion Perception. Front Psychol 2016; 7:454. [PMID: 27065922 PMCID: PMC4811923 DOI: 10.3389/fpsyg.2016.00454] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2015] [Accepted: 03/14/2016] [Indexed: 11/26/2022] Open
Abstract
As the size of a high contrast drifting Gabor patch increases, perceiving its direction of motion becomes harder. However, the same behavioral effect is not observed for a low contrast Gabor patch. Neuronal mechanisms underlying this size–contrast interaction are not well understood. Here using psychophysical methods and functional magnetic resonance imaging (fMRI), we investigated the neural correlates of this behavioral effect. In the behavioral experiments, motion direction discrimination thresholds were assessed for drifting Gabor patches with different sizes and contrasts. Thresholds increased significantly as the size of the stimulus increased for high contrast (65%) but did not change for low contrast (2%) stimuli. In the fMRI experiment, cortical activity was recorded while observers viewed drifting Gabor patches with different contrasts and sizes. We found that the activity in middle temporal (MT) area increased with size at low contrast, but did not change at high contrast. Taken together, our results show that MT activity reflects the size–contrast interaction in motion perception.
Affiliation(s)
- Halide B Turkozer
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Department of Psychiatry, Marmara University, Istanbul, Turkey
- Zahide Pamir
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara, Turkey
- Huseyin Boyaci
- National Magnetic Resonance Research Center, Bilkent University, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara, Turkey; Department of Psychology, Bilkent University, Ankara, Turkey; Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
23
Tadin D. Suppressive mechanisms in visual motion processing: From perception to intelligence. Vision Res 2015; 115:58-70. [PMID: 26299386 DOI: 10.1016/j.visres.2015.08.005] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2015] [Revised: 07/31/2015] [Accepted: 08/04/2015] [Indexed: 11/19/2022]
Abstract
Perception operates on an immense amount of incoming information that greatly exceeds the brain's processing capacity. Because of this fundamental limitation, the ability to suppress irrelevant information is a key determinant of perceptual efficiency. Here, I will review a series of studies investigating suppressive mechanisms in visual motion processing, namely perceptual suppression of large, background-like motions. These spatial suppression mechanisms are adaptive, operating only when sensory inputs are sufficiently robust to guarantee visibility. Converging correlational and causal evidence links these behavioral results with inhibitory center-surround mechanisms, namely those in cortical area MT. Spatial suppression is abnormally weak in several special populations, including the elderly and individuals with schizophrenia, a deficit that is evidenced by better-than-normal direction discriminations of large moving stimuli. Theoretical work shows that this abnormal weakening of spatial suppression should result in motion segregation deficits, but direct behavioral support of this hypothesis is lacking. Finally, I will argue that the ability to suppress information is a fundamental neural process that applies not only to perception but also to cognition in general. Supporting this argument, I will discuss recent research that shows individual differences in spatial suppression of motion signals strongly predict individual variations in IQ scores.
Affiliation(s)
- Duje Tadin
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA; Center for Visual Science, University of Rochester, Rochester, NY 14627, USA; Department of Ophthalmology, University of Rochester School of Medicine, Rochester, NY 14642, USA
24
Acting without seeing: eye movements reveal visual processing without awareness. Trends Neurosci 2015; 38:247-58. [PMID: 25765322 DOI: 10.1016/j.tins.2015.02.002] [Citation(s) in RCA: 82] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2014] [Revised: 02/03/2015] [Accepted: 02/09/2015] [Indexed: 11/23/2022]
Abstract
Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness.
25
Caniard F, Bülthoff HH, Thornton IM. Action can amplify motion-induced illusory displacement. Front Hum Neurosci 2015; 8:1058. [PMID: 25628558 PMCID: PMC4292580 DOI: 10.3389/fnhum.2014.01058] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2014] [Accepted: 12/18/2014] [Indexed: 11/15/2022] Open
Abstract
Local motion is known to produce strong illusory displacement in the perceived position of globally static objects. For example, if a dot-cloud or grating drifts to the left within a stationary aperture, the perceived position of the whole aperture will also be shifted to the left. Previously, we used a simple tracking task to demonstrate that active control over the global position of an object did not eliminate this form of illusion. Here, we used a new iPad task to directly compare the magnitude of illusory displacement under active and passive conditions. In the active condition, participants guided a drifting Gabor patch along a virtual slalom course by using the tilt control of an iPad. The task was to position the patch so that it entered each gate at the direct center, and we used the left/right deviations from that point as our dependent measure. In the passive condition, participants watched playback of standardized trajectories along the same course. We systematically varied deviation from midpoint at gate entry, and participants made 2AFC left/right judgments. We fitted cumulative normal functions to individual distributions and extracted the point of subjective equality (PSE) as our dependent measure. To our surprise, the magnitude of displacement was consistently larger under active than under passive conditions. Importantly, control conditions ruled out the possibility that such amplification results from lack of motor control or differences in global trajectories as performance estimates were equivalent in the two conditions in the absence of local motion. Our results suggest that the illusion penetrates multiple levels of the perception-action cycle, indicating that one important direction for the future of perceptual illusions may be to more fully explore their influence during active vision.
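The psychometric-fitting step described above (a cumulative normal fit whose midpoint gives the PSE) can be sketched with a probit-style transform. The data values below are made up for illustration, and the linearized fit is a simplification of full maximum-likelihood fitting:

```python
import numpy as np
from statistics import NormalDist

# Hypothetical 2AFC data: gate-entry offsets (deg) and the proportion
# of "entered right of center" judgments at each offset.
offsets = np.array([-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0])
p_right = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 0.99])

# Probit transform: map proportions to z-scores, then fit a line,
# since a cumulative normal is linear in z: z = (x - mu) / sigma.
z = np.array([NormalDist().inv_cdf(p) for p in p_right])
slope, intercept = np.polyfit(offsets, z, 1)
sigma = 1.0 / slope            # spread of the fitted cumulative normal
pse = -intercept / slope       # point of subjective equality (p = 0.5)
```

The PSE is the offset judged "centered" half the time; comparing PSEs between active and passive conditions quantifies the illusory displacement.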
Affiliation(s)
- Franck Caniard
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Heinrich H Bülthoff
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Ian M Thornton
- Department of Cognitive Science, University of Malta, Msida, Malta
26
Price NSC, Blum J. Motion perception correlates with volitional but not reflexive eye movements. Neuroscience 2014; 277:435-45. [PMID: 25073044 DOI: 10.1016/j.neuroscience.2014.07.028] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2014] [Accepted: 07/19/2014] [Indexed: 11/17/2022]
Abstract
Visually-driven actions and perception are traditionally ascribed to the dorsal and ventral visual streams of the cortical processing hierarchy. However, motion perception and the control of tracking eye movements both depend on sensory motion analysis by neurons in the dorsal stream, suggesting that the same sensory circuits may underlie both action and perception. Previous studies have suggested that multiple sensory modules may be responsible for the perception of low- and high-level motion, or the detection versus identification of motion direction. However, it remains unclear whether the sensory processing systems that contribute to direction perception and the control of eye movements have the same neuronal constraints. To address this, we examined inter-individual variability across 36 observers, using two tasks that simultaneously assessed the precision of eye movements and direction perception: in the smooth pursuit task, observers volitionally tracked a small moving target and reported its direction; in the ocular following task, observers reflexively tracked a large moving stimulus and reported its direction. We determined perceptual-oculomotor correlations across observers, defined as the correlation between each observer's mean perceptual precision and mean oculomotor precision. Across observers, we found that: (i) mean perceptual precision was correlated between the two tasks; (ii) mean oculomotor precision was correlated between the tasks, and (iii) oculomotor and perceptual precision were correlated for volitional smooth pursuit, but not reflexive ocular following. Collectively, these results demonstrate that sensory circuits with common neuronal constraints subserve motion perception and volitional, but not reflexive eye movements.
Affiliation(s)
- N S C Price
- Department of Physiology, Monash University, VIC 3800, Australia
- J Blum
- Department of Physiology, Monash University, VIC 3800, Australia
27
González EG, Lillakas L, Greenwald N, Gallie BL, Steinbach MJ. Unaffected smooth pursuit but impaired motion perception in monocularly enucleated observers. Vision Res 2014; 101:151-7. [PMID: 25007713 DOI: 10.1016/j.visres.2014.06.014] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2014] [Revised: 06/25/2014] [Accepted: 06/26/2014] [Indexed: 11/17/2022]
Abstract
The objective of this paper was to study the characteristics of closed-loop smooth pursuit eye movements of 15 unilaterally eye enucleated individuals and 18 age-matched controls and to compare them to their performance in two tests of motion perception: relative motion and motion coherence. The relative motion test used a brief (150 ms) small stimulus with a continuously present fixation target to preclude pursuit eye movements. The duration of the motion coherence trials was 1 s, which allowed a brief pursuit of the stimuli. Smooth pursuit data were obtained with a step-ramp procedure. Controls were tested both monocularly and binocularly. The data showed worse performance by the enucleated observers in the relative motion task but no statistically significant differences in motion coherence between the two groups. On the other hand, the smooth pursuit gain of the enucleated participants was as good as that of controls for whom we found no binocular advantage. The data show that enucleated observers do not exhibit deficits in the afferent or sensory pathways or in the efferent or motor pathways of the steady-state smooth pursuit system even though their visual processing of motion is impaired.
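Smooth pursuit gain, as used above, is mean eye velocity divided by target velocity during steady-state tracking (gain of 1 = perfect tracking). A minimal sketch with a hypothetical 1 kHz position trace:

```python
import numpy as np

# Illustrative gain computation: differentiate the eye-position trace
# and divide its mean velocity by the target velocity. The trace and
# sampling rate here are made up for the sketch.
def pursuit_gain(eye_pos_deg, target_vel_dps, dt_s):
    eye_vel = np.gradient(eye_pos_deg, dt_s)   # deg/s, sample by sample
    return float(np.mean(eye_vel)) / target_vel_dps

t = np.arange(0.0, 1.0, 0.001)        # 1 s of samples at 1 kHz
eye = 9.5 * t                         # eye moving at a steady 9.5 deg/s
gain = pursuit_gain(eye, 10.0, 0.001) # target at 10 deg/s -> gain 0.95
```

In practice the steady-state window would exclude the initial catch-up saccade and pursuit onset before averaging.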
Affiliation(s)
- Esther G González
- Vision Science Research Program, Toronto Western Hospital, Toronto M5T 2S8, Canada; Ophthalmology and Vision Sciences, University of Toronto, Toronto M5T 2S8, Canada; Centre for Vision Research, York University, Toronto M3J 1P3, Canada
- Linda Lillakas
- Vision Science Research Program, Toronto Western Hospital, Toronto M5T 2S8, Canada; Centre for Vision Research, York University, Toronto M3J 1P3, Canada
- Naomi Greenwald
- Vision Science Research Program, Toronto Western Hospital, Toronto M5T 2S8, Canada
- Brenda L Gallie
- Ophthalmology and Vision Sciences, University of Toronto, Toronto M5T 2S8, Canada; Cancer Informatics, Princess Margaret Hospital, Toronto M5T 2M9, Canada
- Martin J Steinbach
- Vision Science Research Program, Toronto Western Hospital, Toronto M5T 2S8, Canada; Ophthalmology and Vision Sciences, University of Toronto, Toronto M5T 2S8, Canada; Centre for Vision Research, York University, Toronto M3J 1P3, Canada