1. Attentional tracking takes place over perceived rather than veridical positions. Atten Percept Psychophys 2021;83:1455-1462. PMID: 33400220. DOI: 10.3758/s13414-020-02214-9.
Abstract
Illusions can induce striking differences between perception and retinal input. For instance, a static Gabor with a moving internal texture appears shifted in the direction of its internal motion, a shift that increases dramatically when the Gabor itself is also in motion. Here, we ask whether attention operates on the perceptual or the physical location of this stimulus. To do so, we designed an attentional tracking task in which participants (N = 15) had to keep track of a single target among three Gabors that rotated around a common center in the periphery. During tracking, the illusion was used to make the three Gabors appear shifted either away from or toward one another while maintaining the same physical separation. Because tracking performance depends in part on target-to-distractor spacing, if attention selects targets from perceived positions, performance should be better when the Gabors appear farther apart and worse when they appear closer together. We find that tracking performance is superior with greater perceived separation, implying that attentional tracking operates over perceived rather than physical positions.
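The stimulus class described above can be sketched numerically: a sinusoidal carrier drifts under a Gaussian envelope that never moves, so the patch has internal motion while its physical location stays fixed. This is a minimal illustration with assumed parameter values (patch size, spatial frequency, envelope width, drift rate), not the stimulus code used in the study.

```python
import numpy as np

def gabor_frame(size=64, sf=0.1, sigma=10.0, phase=0.0):
    """One frame of a Gabor: sinusoidal carrier under a static Gaussian envelope."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    carrier = np.cos(2 * np.pi * sf * x + phase)        # drifts as phase advances
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # never moves
    return carrier * envelope

# Advancing the carrier phase frame by frame yields internal motion
# while the envelope (the Gabor's physical position) stays put.
frames = [gabor_frame(phase=2 * np.pi * 0.05 * t) for t in range(60)]
```

Rendering `frames` in sequence would show the internal texture drifting rightward inside a stationary window, the ingredient that produces the perceived position shift.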
2. Janic A, Cavanagh P, Rivest J. Effect of bilingualism on visual tracking attention and resistance to distraction. Sci Rep 2020;10:14263. PMID: 32868794. PMCID: PMC7459295. DOI: 10.1038/s41598-020-71185-6.
Abstract
Speaking more than one language has been associated with enhanced cognitive capacities. Here we evaluated whether bilingual individuals have an advantage in visual tracking attention. Adult bilingual (n = 35) and monolingual (n = 35) participants were tested on the Multiple Object Tracking (MOT) task. In one condition, the MOT task was performed by itself, establishing each group's baseline performance; in the other, it was performed while participants counted backward out loud in their mother tongue. At baseline, the average speed tracking threshold of bilinguals was no better than that of monolinguals. Importantly, counting backward decreased the bilinguals' threshold by only 15%, whereas it decreased the monolinguals' threshold three times as much. This result suggests that bilingualism confers an advantage in visual tracking attention when dual tasking is required, extending the evidence that bilingualism affords cognitive benefits beyond verbal communication.
Affiliation(s)
- Ana Janic
- Department of Psychology, Glendon College, York University, 2275 Bayview Avenue, Toronto, ON, M4N 3M6, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Patrick Cavanagh
- Department of Psychology, Glendon College, York University, 2275 Bayview Avenue, Toronto, ON, M4N 3M6, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Josée Rivest
- Department of Psychology, Glendon College, York University, 2275 Bayview Avenue, Toronto, ON, M4N 3M6, Canada.
- Centre for Vision Research, York University, Toronto, ON, Canada.
3. Hilo-Merkovich R, Yuval-Greenberg S. The coordinate system of endogenous spatial attention during smooth pursuit. J Vis 2020;20:26. PMID: 32720972. PMCID: PMC7424112. DOI: 10.1167/jov.20.7.26.
Abstract
A central question in vision is whether spatial attention is represented in an eye-centered (retinotopic) or a world-centered (spatiotopic) reference frame. Most previous studies of this question focused on how coordinates are modulated across saccades. In the present study, we investigated the reference frame of attention across smooth pursuit eye movements using a goal-directed saccade task. In two experiments, participants were asked to pursue a moving target while attending to one or two grating stimuli. On each trial, one stimulus was constant in its retinal position and the other was constant in its spatial position. Upon detection of a slight change in stimulus orientation, participants were asked to stop pursuing and make a fast saccade toward the modified stimulus. In the focused attention condition, they attended a single predefined stimulus; in the divided attention condition, they attended both. In Experiment 1, the angle of the orientation change marking the target event was constant across participants and conditions. In Experiment 2, the angle was individually adapted to equate performance across participants and conditions. Findings of the two experiments were consistent and showed that the enhancement of mean visual sensitivity in the focused relative to the divided attention condition was similar in magnitude for retinotopic and spatiotopic targets. This indicates that during smooth pursuit, endogenous attention was proportionally divided between targets in retinotopic and spatiotopic frames of reference.
Affiliation(s)
- Shlomit Yuval-Greenberg
- School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
4. Decoding Trans-Saccadic Memory. J Neurosci 2017;38:1114-1123. PMID: 29263239. DOI: 10.1523/jneurosci.0854-17.2017.
Abstract
We examine whether peripheral information at a planned saccade target affects immediate postsaccadic processing at the fovea on saccade landing. Current neuroimaging research suggests that presaccadic stimulation has a late effect on postsaccadic processing, in contrast to the early effect seen in behavioral studies. Human participants (both male and female) were instructed to saccade toward a face or a house that, on different trials, remained the same, changed, or disappeared during the saccade. We used multivariate pattern analysis of electroencephalography data to decode face versus house processing directly after the saccade. The classifier was trained on separate trials without a saccade, in which a house or face was presented at the fovea. When the saccade target remained the same across the saccade, we could reliably decode the target 123 ms after saccade offset. In contrast, when the target was changed during the saccade, the new target was decoded at a later time point, 151 ms after saccade offset. The "same"-condition advantage suggests that congruent presaccadic information facilitates processing of the postsaccadic stimulus compared with incongruent information. Finally, the saccade target could be decoded above chance even when it had been removed during the saccade, albeit with a slower time course (162 ms) and poorer signal strength. These findings indicate that information about the (peripheral) presaccadic stimulus is transferred across the saccade so that it becomes quickly available and influences processing at its expected new retinal position (the fovea).

Significance statement: Here we provide neural evidence for early information transfer across saccades. Specifically, we examined the effect of presaccadic sensory information on the initial neuronal processing of a postsaccadic stimulus. Using electroencephalography and multivariate pattern analysis, we found (1) that the identity of the presaccadic stimulus modulated the postsaccadic latency of stimulus-relevant information, and (2) that a neural marker for a saccade target stimulus could be detected even when the stimulus had been removed during the saccade. These results demonstrate that information about the peripheral presaccadic stimulus was transferred across the saccade and influenced processing at a new retinal position (the fovea) directly after the saccade landed.
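Time-resolved decoding of this kind can be illustrated on synthetic data: train a classifier on channel patterns at each timepoint and track when accuracy rises above chance. The sketch below uses a simple leave-one-out nearest-centroid rule on fabricated trials (the published pipeline used a different classifier); the dimensions, the injected class patterns, and the `onset` latency are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG": trials x channels x timepoints, two classes (face vs. house)
# whose channel patterns differ only after a hypothetical onset timepoint.
n_trials, n_chan, n_time, onset = 80, 16, 50, 20
labels = rng.integers(0, 2, n_trials)
patterns = rng.normal(size=(2, n_chan))
data = rng.normal(size=(n_trials, n_chan, n_time))
data[:, :, onset:] += patterns[labels][:, :, None]

def decode_timecourse(data, labels):
    """Leave-one-out nearest-centroid decoding accuracy at each timepoint."""
    acc = np.zeros(data.shape[2])
    for t in range(data.shape[2]):
        X = data[:, :, t]
        correct = 0
        for i in range(len(labels)):
            mask = np.arange(len(labels)) != i   # hold out trial i
            c0 = X[mask & (labels == 0)].mean(axis=0)
            c1 = X[mask & (labels == 1)].mean(axis=0)
            pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
            correct += pred == labels[i]
        acc[t] = correct / len(labels)
    return acc

acc = decode_timecourse(data, labels)
# Accuracy hovers near chance before the onset and rises above it after,
# which is the logic behind reading decoding latency off the timecourse.
```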
5. Studying visual attention using the multiple object tracking paradigm: A tutorial review. Atten Percept Psychophys 2017;79:1255-1274. DOI: 10.3758/s13414-017-1338-1.
6. Mikellidou K, Turi M, Burr DC. Spatiotopic coding during dynamic head tilt. J Neurophysiol 2016;117:808-817. PMID: 27903636. DOI: 10.1152/jn.00508.2016.
Abstract
Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

New & Noteworthy: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
Affiliation(s)
- Kyriaki Mikellidou
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Marco Turi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Fondazione Stella Maris Mediterraneo, Chiaromonte, Potenza, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, Florence, Italy
- Neuroscience Institute, National Research Council (CNR), Pisa, Italy
7. Functional connectivity indicates differential roles for the intraparietal sulcus and the superior parietal lobule in multiple object tracking. Neuroimage 2015;123:129-137. PMID: 26299796. DOI: 10.1016/j.neuroimage.2015.08.029.
Abstract
Attentive tracking requires sustained object-based attention, rather than passive vigilance or rapid attentional shifts to brief events. Several theories of tracking posit a mechanism for indexing objects that allows attentional resources to be directed toward the moving targets. Imaging studies have shown that cortical areas belonging to the dorsal frontoparietal attention network increase their BOLD signal during multiple object tracking (MOT). Among these areas, some studies have assigned the intraparietal sulcus (IPS) a particular role in object indexing, but the neuroimaging evidence has been sparse. In the present study, we tested participants on a continuous version of the MOT task in order to investigate how cortical areas engage in functional networks during attentional tracking. Specifically, we analyzed the data using eigenvector centrality mapping (ECM), which estimates each voxel's connectedness with hub-like parts of the functional network. The results, obtained using permutation-based voxel-wise statistics, support the proposed role of the IPS in object indexing, as this region displayed increased centrality during tracking as well as increased functional connectivity with both prefrontal and visual perceptual cortices. In contrast, the opposite pattern was observed for the superior parietal lobule (SPL), with decreasing centrality as well as reduced functional connectivity with the visual and frontal cortices, in agreement with a hypothesized role for the SPL in attentional shifts. These findings provide novel evidence that the IPS and SPL serve different functional roles during MOT, while both remain highly engaged during tracking as measured by BOLD signal changes.
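At its core, eigenvector centrality assigns each node a score that grows with the scores of the nodes it is connected to, i.e., the dominant eigenvector of a non-negative connectivity matrix, which power iteration finds directly. The sketch below applies this to a toy similarity matrix built from random "BOLD" time series; the dimensions and the shift-to-non-negative rescaling are assumptions, not the study's actual ECM pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "connectivity": correlation matrix over regions/voxels from BOLD-like
# time series, rescaled to be non-negative as ECM implementations typically do.
ts = rng.normal(size=(30, 200))   # 30 regions x 200 timepoints
A = (np.corrcoef(ts) + 1) / 2     # entries in (0, 1]

def eigenvector_centrality(A, iters=200):
    """Power iteration: a node's centrality grows with its neighbors' centrality."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v

c = eigenvector_centrality(A)
# c is the dominant eigenvector of A; larger entries mark hub-like nodes.
```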
8. Spatial constancy of attention across eye movements is mediated by the presence of visual objects. Atten Percept Psychophys 2015;77:1159-1169. DOI: 10.3758/s13414-015-0861-1.
9. Szinte M, Carrasco M, Cavanagh P, Rolfs M. Attentional trade-offs maintain the tracking of moving objects across saccades. J Neurophysiol 2015;113:2220-2231. PMID: 25609111. DOI: 10.1152/jn.00966.2014.
Abstract
In many situations, such as playing sports or driving a car, we keep track of moving objects despite the frequent eye movements that drastically interrupt their retinal motion trajectories. Here we report evidence that transsaccadic tracking relies on trade-offs of attentional resources from a tracked object's motion path to its remapped location. While participants covertly tracked a moving object, we presented pulses of coherent motion at different locations to probe the allocation of spatial attention along the object's entire motion path. Changes in sensitivity to these pulses showed that during fixation, attention shifted smoothly in anticipation of the tracked object's displacement. However, just before a saccade, attentional resources were withdrawn from the object's current motion path and reflexively drawn to the retinal location the object would occupy after the saccade. This finding demonstrates the predictive choice the visual system makes to maintain the tracking of moving objects across saccades.
Affiliation(s)
- Martin Szinte
- Allgemeine und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Marisa Carrasco
- Department of Psychology, Center for Neural Science, New York University, New York, New York
- Patrick Cavanagh
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, Centre National de la Recherche Scientifique Unité Mixte de Recherche 8242, Paris, France
- Martin Rolfs
- Bernstein Center for Computational Neuroscience and Department of Psychology, Humboldt Universität zu Berlin, Berlin, Germany
10. Zimmermann E, Morrone MC, Burr DC. Buildup of spatial information over time and across eye-movements. Behav Brain Res 2014;275:281-287. PMID: 25224817. DOI: 10.1016/j.bbr.2014.09.013.
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Research from many laboratories, including our own, suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps requires both time (up to 500 ms) and attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and why this transformation takes time to complete. These results have implications for theories about visual space coordinates, and particularly for the current debate about the existence of spatiotopic representations.
Affiliation(s)
- Eckart Zimmermann
- Psychology Department, University of Florence, Florence, Italy
- Neuroscience Institute, National Research Council, Pisa, Italy
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123 Pisa, Italy
- Scientific Institute Stella Maris (IRCSS), viale del Tirreno 331, 56018 Calambrone, Pisa, Italy
- David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, 50135 Florence, Italy
- Institute of Neuroscience CNR, via Moruzzi 1, 56124 Pisa, Italy
11.
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation, or map, of the spatial layout of the external world. However, the construction of such a map poses major challenges to the visual system, given that the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Much research has been devoted to how this stability is achieved, with the debate often polarized between the utility of spatiotopic maps (which remain solid in external coordinates) and transiently updated retinotopic maps. Our research suggests that the visual system uses both strategies to maintain stability. fMRI, motion-adaptation, and saccade-adaptation studies demonstrate and characterize spatiotopic neural maps within the dorsal visual stream that remain solid in external rather than retinal coordinates. However, the construction of these maps takes time (up to 500 ms) and attentional resources. To solve the immediate problems created by individual saccades, we postulate the existence of a separate system that bridges each saccade with neural units that are 'transiently craniotopic'. These units prepare for the effects of saccades by shifting their receptive fields before the saccade starts, then relaxing back into their standard position during the saccade, compensating for its action. Psychophysical studies investigating the localization of stimuli flashed briefly around the time of saccades provide strong support for these neural mechanisms, and show quantitatively how they integrate information across saccades. This transient system cooperates with the spatiotopic mechanism to provide a useful map to guide interactions with our environment: one rapid and transitory, bringing into play the high-resolution visual areas; the other slow, long-lasting, and low-resolution, useful for interacting with the world.
Affiliation(s)
- David C Burr
- Department of Psychology, University of Florence, via San Salvi 12, 50135 Florence, Italy.
12. Talsma D, White BJ, Mathôt S, Munoz DP, Theeuwes J. A retinotopic attentional trace after saccadic eye movements: evidence from event-related potentials. J Cogn Neurosci 2013;25:1563-1577. PMID: 23530898. DOI: 10.1162/jocn_a_00390.
Abstract
Saccadic eye movements are a major source of disruption to visual stability, yet we experience little of this disruption. We can keep track of the same object across multiple saccades. It is generally assumed that visual stability is due to the process of remapping, in which retinotopically organized maps are updated to compensate for the retinal shifts caused by eye movements. Recent behavioral and ERP evidence suggests that visual attention is also remapped, but that it may still leave a residual retinotopic trace immediately after a saccade. The current study was designed to further examine electrophysiological evidence for such a retinotopic trace by recording ERPs elicited by stimuli that were presented immediately after a saccade (80 msec SOA). Participants were required to maintain attention at a specific location (and to memorize this location) while making a saccadic eye movement. Immediately after the saccade, a visual stimulus was briefly presented at either the attended location (the same spatiotopic location), a location that matched the attended location retinotopically (the same retinotopic location), or one of two control locations. ERP data revealed an enhanced P1 amplitude for the stimulus presented at the retinotopically matched location, but a significant attenuation for probes presented at the original attended location. These results are consistent with the hypothesis that visuospatial attention lingers in retinotopic coordinates immediately following gaze shifts.
Affiliation(s)
- Durk Talsma
- Department of Experimental Psychology, Faculty of Psychology and Educational Sciences, Ghent University, Henri Dunantlaan 2, 9000 Gent, Belgium.
13. Itthipuripat S, Garcia JO, Serences JT. Temporal dynamics of divided spatial attention. J Neurophysiol 2013;109:2364-2373. PMID: 23390315. DOI: 10.1152/jn.01051.2012.
Abstract
In naturalistic settings, observers often have to monitor multiple objects dispersed throughout the visual scene. However, the degree to which spatial attention can be divided across spatially noncontiguous objects has long been debated, particularly when those objects are in close proximity. Moreover, the temporal dynamics of divided attention are unclear: is the process of dividing spatial attention gradual and continuous, or does it onset in a discrete manner? To address these issues, we recorded steady-state visual evoked potentials (SSVEPs) as subjects covertly monitored two flickering targets while ignoring an intervening distractor that flickered at a different frequency. All three stimuli were clustered within either the lower left or the lower right quadrant, and our dependent measure was SSVEP power at the target and distractor frequencies measured over time. In two experiments, we observed a temporally discrete increase in power for target- vs. distractor-evoked SSVEPs extending from ∼350 to 150 ms prior to correct (but not incorrect) responses. The divergence in SSVEP power immediately prior to a correct response suggests that spatial attention can be divided across noncontiguous locations, even when the targets are closely spaced within a single quadrant. In addition, the division of spatial attention appears to be relatively discrete, as opposed to slow and continuous. Finally, the predictive relationship between SSVEP power and behavior demonstrates that these neurophysiological measures of divided attention are meaningfully related to cognitive function.
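The frequency-tagging logic above reduces to reading out spectral power at each stimulus's flicker frequency: because target and distractor flicker at different rates, their responses separate in the EEG spectrum. A minimal sketch on synthetic data follows; the sampling rate, epoch length, tagging frequencies, and amplitudes are assumed values, not those of the experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

fs, dur = 500.0, 4.0                 # sampling rate (Hz) and epoch length (s)
t = np.arange(0, dur, 1 / fs)
f_target, f_distractor = 12.0, 15.0  # tagging frequencies (illustrative)

# Synthetic signal: the attended target's flicker drives a larger response
# than the ignored distractor's, on top of broadband noise.
eeg = (1.0 * np.sin(2 * np.pi * f_target * t)
       + 0.4 * np.sin(2 * np.pi * f_distractor * t)
       + rng.normal(scale=0.5, size=t.size))

def ssvep_power(signal, fs, freq):
    """Power at one tagged frequency, read from the FFT of the epoch."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

p_t = ssvep_power(eeg, fs, f_target)
p_d = ssvep_power(eeg, fs, f_distractor)
# Comparing p_t and p_d over time windows is the basis of the
# target-vs-distractor SSVEP power contrast described above.
```

With a 4-s epoch the frequency resolution is 0.25 Hz, so both tagging frequencies land exactly on FFT bins; in a sliding-window version the same readout is repeated over short windows to obtain a timecourse.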
Affiliation(s)
- Sirawaj Itthipuripat
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92093, USA.
14. Spencer JP, Barich K, Goldberg J, Perone S. Behavioral dynamics and neural grounding of a dynamic field theory of multi-object tracking. J Integr Neurosci 2012;11:339-362. PMID: 22992027. PMCID: PMC4475345. DOI: 10.1142/s0219635212500227.
Abstract
The ability to dynamically track moving objects in the environment is crucial for efficient interaction with the local surrounds. Here, we examined this ability in the context of the multi-object tracking (MOT) task. Several theories have been proposed to explain how people track moving objects; however, only one of these previous theories is implemented in a real-time process model, and there has been no direct contact between theories of object tracking and the growing neural literature using ERPs and fMRI. Here, we present a neural process model of object tracking that builds from a Dynamic Field Theory of spatial cognition. Simulations reveal that our dynamic field model captures recent behavioral data examining the impact of speed and tracking duration on MOT performance. Moreover, we show that the same model with the same trajectories and parameters can shed light on recent ERP results probing how people distribute attentional resources to targets vs. distractors. We conclude by comparing this new theory of object tracking to other recent accounts, and discuss how the neural grounding of the theory might be effectively explored in future work.
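The core of a dynamic field account is a field of activation over space whose localized peak follows a moving input. The sketch below is a generic one-dimensional Amari-style field with local excitation, global inhibition, and a sigmoidal output; all parameter values are assumptions chosen for a stable toy demonstration, not the published model.

```python
import numpy as np

n = 100            # field sites (possible positions)
x = np.arange(n)

# Interaction kernel: local excitation minus global inhibition (Amari-style),
# using circular distances so the field has no edges.
d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
w = 0.5 * np.exp(-d**2 / (2 * 3.0**2)) - 0.3

def simulate(input_pos, steps=200, dt=0.1, h=-1.0):
    """Evolve field activation u toward resting level h plus input and interaction."""
    u = np.full(n, float(h))
    peaks = []
    for s in range(steps):
        inp = 3.0 * np.exp(-(x - input_pos(s))**2 / (2 * 3.0**2))  # moving target
        f = 1.0 / (1.0 + np.exp(-4.0 * u))                          # sigmoidal output
        u = u + dt * (-u + h + inp + w @ f)
        peaks.append(int(np.argmax(u)))
    return np.array(peaks)

# A target drifting rightward at 0.2 sites per step: the field's activation
# peak follows it, with a small lag set by the field's time constant.
peaks = simulate(lambda s: 20 + 0.2 * s)
```

In a full multi-object model, several such peaks coexist and compete, which is where the speed and tracking-duration effects discussed above arise.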
Affiliation(s)
- J P Spencer
- Department of Psychology, E11 Seashore Hall, University of Iowa, Iowa City, IA 52242, USA.
15. Casarotti M, Lisi M, Umiltà C, Zorzi M. Paying Attention through Eye Movements: A Computational Investigation of the Premotor Theory of Spatial Attention. J Cogn Neurosci 2012;24:1519-1531. DOI: 10.1162/jocn_a_00231.
Abstract
Growing evidence indicates that planning eye movements and orienting visuospatial attention share overlapping brain mechanisms. A tight link between endogenous attention and eye movements is maintained by the premotor theory, in contrast to other accounts that postulate the existence of specific attention mechanisms that modulate the activity of information processing systems. The strong assumption of equivalence between attention and eye movements, however, is challenged by demonstrations that human observers are able to keep attention on a specific location while moving the eyes elsewhere. Here we investigate whether a recurrent model of saccadic planning can account for attentional effects without requiring additional or specific mechanisms separate from the circuits that perform sensorimotor transformations for eye movements. The model builds on the basis function approach and includes a circuit that performs spatial remapping using an “internal forward model” of how visual inputs are modified as a result of saccadic movements. Simulations show that the latter circuit is crucial to account for dissociations between attention and eye movements that may be invoked to disprove the premotor theory. The model provides new insights into how spatial remapping may be implemented in parietal cortex and offers a computational framework for recent proposals that link visual stability with remapping of attention pointers.
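The effect of such a forward model can be caricatured in a few lines: an "attention pointer" on a retinotopic grid is shifted opposite to the saccade vector, so it lands on the retinal location the attended world position will occupy after the eye movement. This toy sketch stands in for the model's full basis-function circuit; the grid size and pointer location are arbitrary assumptions.

```python
import numpy as np

# A hypothetical "attention pointer" on a 32x32 retinotopic grid.
n = 32
y, x = np.mgrid[0:n, 0:n]
attn = np.exp(-((x - 8)**2 + (y - 16)**2) / (2 * 2.0**2))  # pointer at (x=8, y=16)

def remap(attn_map, saccade):
    """Forward model: a saccade of (dx, dy) shifts the retinal image by
    (-dx, -dy), so the pointer is shifted the same way to stay on the
    attended world location."""
    dx, dy = saccade
    return np.roll(np.roll(attn_map, -dy, axis=0), -dx, axis=1)

# A rightward saccade of 5 pixels: the attended world location now falls
# 5 pixels to the left on the retina, and the remapped pointer follows it.
remapped = remap(attn, (5, 0))
```

Keeping the pointer aligned with the world location across the eye movement is exactly the "remapping of attention pointers" role the abstract assigns to this circuit.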
16. Jahn G, Papenmeier F, Meyerhoff HS, Huff M. Spatial Reference in Multiple Object Tracking. Exp Psychol 2012;59:163-173. DOI: 10.1027/1618-3169/a000139.
Abstract
Spatial reference in multiple object tracking is available from configurations of dynamic objects and from static reference objects. In three experiments, we studied the use of spatial reference in tracking and in relocating targets after abrupt scene rotations. Observers tracked 1, 2, 3, 4, or 6 targets in 3D scenes in which white balls moved on a square floor plane. The floor plane was either visible, providing static spatial reference, or invisible. Without scene rotations, the configuration of dynamic objects provided sufficient spatial reference, and static spatial reference conferred no advantage. In contrast, with abrupt scene rotations of 20°, static spatial reference aided the relocation of targets. A wireframe floor plane lacking local visual detail was as effective as a checkerboard. Individually colored geometric forms serving as static reference objects provided no additional benefit either, even when targets were centered on these forms at the abrupt scene rotation. However, individualizing the dynamic objects themselves by color for a brief interval around the abrupt scene rotation did improve performance. We conclude that attentional tracking of moving targets proceeds within dynamic configurations but detached from the static local background.
Affiliation(s)
- Georg Jahn
- Department of Psychology, University of Greifswald, Germany
17. Meyerhoff HS, Huff M, Papenmeier F, Jahn G, Schwan S. Continuous visual cues trigger automatic spatial target updating in dynamic scenes. Cognition 2011;121:73-82. DOI: 10.1016/j.cognition.2011.06.001.