1
Hong I, Kim MS. Habit-like attentional bias is unlike goal-driven attentional bias against spatial updating. Cogn Res Princ Implic 2022; 7:50. [PMID: 35713814] [DOI: 10.1186/s41235-022-00404-7]
Abstract
Statistical knowledge of a target's location may benefit visual search, and rapidly registering changes in that regularity should increase adaptability in situations where fast and accurate search is required. The current study tested whether the source of statistical knowledge (explicitly given instruction or experience-driven learning) affects the speed with which, and the locations to which, spatial attention is guided. Participants performed a visual search task with a statistical regularity biasing target locations toward one quadrant ("old-rich" condition) in the training phase, followed by another quadrant ("new-rich" condition) in the switching phase. The "instruction" group was explicitly informed of the regularity, whereas the "no-instruction" group was not. It was expected that the instruction group would rely on goal-driven attention (using regularities with explicit top-down knowledge) and the no-instruction group on habit-like attention (learning regularities through repeated experience) in visual search. Compared with the no-instruction group, the instruction group readjusted spatial attention more rapidly after the regularity switched. The instruction group showed greater attentional bias toward the new-rich quadrant than the old-rich quadrant; the no-instruction group, however, showed a similar extent of attentional bias toward the two rich quadrants. The current study suggests that the source of statistical knowledge can affect attentional allocation, and that habit-like attention, a type of attentional control distinct from goal-driven attention, is relatively implicit and inflexible.
Affiliation(s)
- Injae Hong
- Department of Psychology, Yonsei University, Yonsei-ro 50 Seodaemun-gu, Seoul, 03722, Korea
- Min-Shik Kim
- Department of Psychology, Yonsei University, Yonsei-ro 50 Seodaemun-gu, Seoul, 03722, Korea
2
Is probabilistic cuing of visual search an inflexible attentional habit? A meta-analytic review. Psychon Bull Rev 2021; 29:521-529. [PMID: 34816390] [DOI: 10.3758/s13423-021-02025-5]
Abstract
In studies on probabilistic cuing of visual search, participants search for a target among several distractors and report some feature of the target. In a biased stage the target appears more frequently in one specific area of the search display. Eventually, participants become faster at finding the target in that rich region compared to the sparse region. In some experiments, this stage is followed by an unbiased stage, where the target is evenly located across all regions of the display. Despite this change in the spatial distribution of targets, search speed usually remains faster when the target is located in the previously rich region. The persistence of the bias even when it is no longer advantageous has been taken as evidence that this phenomenon is an attentional habit. The aim of this meta-analysis was to test whether the magnitude of probabilistic cuing decreases from the biased to the unbiased stage. A meta-analysis of 42 studies confirmed that probabilistic cuing during the unbiased stage was roughly half the size of cuing during the biased stage, and this decrease persisted even after correcting for publication bias. Thus, the evidence supporting the claim that probabilistic cuing is an attentional habit might not be as compelling as previously thought.
3
Zheng L, Dobroschke JG, Pollmann S. Egocentric and Allocentric Reference Frames Can Flexibly Support Contextual Cueing. Front Psychol 2021; 12:711890. [PMID: 34413816] [DOI: 10.3389/fpsyg.2021.711890]
Abstract
We investigated if contextual cueing can be guided by egocentric and allocentric reference frames. Combinations of search configurations and external frame orientations were learned during a training phase. In Experiment 1, either the frame orientation or the configuration was rotated, thereby disrupting either the allocentric or egocentric and allocentric predictions of the target location. Contextual cueing survived both of these manipulations, suggesting that it can overcome interference from both reference frames. In contrast, when changed orientations of the external frame became valid predictors of the target location in Experiment 2, we observed contextual cueing as long as one reference frame was predictive of the target location, but contextual cueing was eliminated when both reference frames were invalid. Thus, search guidance in repeated contexts can be supported by both egocentric and allocentric reference frames as long as they contain valid information about the search goal.
Affiliation(s)
- Lei Zheng
- Department of Experimental Psychology, Otto-von-Guericke-University, Magdeburg, Germany
- Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke-University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke-University, Magdeburg, Germany; Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing, China
4
Gotcha: Working memory prioritization from automatic attentional biases. Psychon Bull Rev 2021; 29:415-429. [PMID: 34131892] [DOI: 10.3758/s13423-021-01958-1]
Abstract
Attention is an important resource for prioritizing information in working memory (WM), and it can be deployed both strategically and automatically. Most research investigating the relationship between WM and attention has focused on strategic efforts to deploy attentional resources toward remembering relevant information. However, such voluntary attentional control represents a mere subset of the attentional processes that select information to be encoded and maintained in WM (Theeuwes, Journal of Cognition, 1[1]: 29, 1-15, 2018). Here, we discuss three ways in which information becomes prioritized automatically in WM: physical salience, statistical learning, and reward learning. This review integrates findings from perception and working memory studies to propose a more sophisticated understanding of the relationship between attention and working memory.
5
The effects of perceptual cues on visual statistical learning: Evidence from children and adults. Mem Cognit 2021; 49:1645-1664. [PMID: 33876401] [DOI: 10.3758/s13421-021-01179-w]
Abstract
In visual statistical learning, one can extract the statistical regularities of target locations in an incidental manner. The current study examined the impact of salient perceptual cues on one type of visual statistical learning: probability cueing effects. In a visual search task, the target appeared more often in one quadrant (i.e., rich) than in the other quadrants (i.e., sparse). Then, the screen was rotated by 90° and the targets appeared in the four quadrants with equal probabilities. In Experiment 1, without the addition of salient perceptual cues, adults showed significant probability cueing effects but did not show a persistent attentional bias in the testing phase. In Experiments 2, 3, and 4, salient perceptual cues were added to the rich or the sparse quadrants. Adults showed significant probability cueing effects but no persistent attentional bias. In Experiment 5, younger children, older children, and adults showed significant probability cueing effects. All three groups also showed an attentional gradient phenomenon: reaction times were slower when the targets were in the sparse quadrant diagonal to, rather than adjacent to, the rich quadrant. Furthermore, both groups of children showed a persistent egocentric attentional bias in the testing phase. These findings indicated that salient perceptual cues enhanced but did not reduce probability cueing effects, that children and adults shared similar basic attentional mechanisms in probability cueing, and that children and adults differed in the persistence of attentional bias.
6
Visual statistical learning in children and adults: evidence from probability cueing. Psychol Res 2020; 85:2911-2921. [PMID: 33170355] [DOI: 10.1007/s00426-020-01445-7]
Abstract
In visual statistical learning (VSL), one can extract and exhibit memory for the statistical regularities of target locations in an incidental manner. The current study examined the development of VSL using the probability cueing paradigm with salient perceptual cues. We also investigated the attention gradient phenomenon elicited in VSL. In a visual search task, the target first appeared more often in one quadrant (i.e., rich) than in the other quadrants (i.e., sparse). Then, the participants rotated the screen by 90° and the targets appeared in the four quadrants with equal probabilities. Each quadrant had a unique background color and was hence associated with salient perceptual cues. 1st-4th graders and adults participated. All participants showed probability cueing effects to a similar extent. We observed an attention gradient phenomenon, as all participants responded more slowly in the sparse quadrant distant from the rich quadrant than in those adjacent to it. In the testing phase, all age groups showed persistent attentional biases based on both egocentric and allocentric perspectives. These findings showed that probability cueing effects may develop early, that perceptual cues can bias attention guidance during VSL for both children and adults, and that VSL can elicit a space-based attention gradient phenomenon for children and adults.
7
Baxter R, Smith AD. Searching for individual determinants of probabilistic cueing in large-scale immersive virtual environments. Q J Exp Psychol (Hove) 2020; 75:328-347. [PMID: 33089735] [DOI: 10.1177/1747021820969148]
Abstract
Large-scale search behaviour is an everyday occurrence, yet its underlying mechanisms are not commonly examined within experimental psychology. Key to efficient search behaviour is the sensitivity to environmental cues that might guide exploration, such as a target appearing with greater regularity in one region than another. Spatial cueing by probability has been examined in visual search paradigms, but the few studies that have addressed its contribution to large-scale search and foraging present contrasting accounts of the conditions under which a cueing effect can be reliably observed. In the present study, participants physically searched a virtual arena by inspecting identical locations until they found the target. The target was always present, although its location was probabilistically defined so that it appeared in the cued hemispace on 80% of trials. In Experiment 1, when participants' starting positions were stable, a probabilistic cueing effect was observed, with a strong bias towards searching the cued side. In Experiment 2, the starting position changed across the experiment, such that the cued region was defined in allocentric co-ordinates only. In this case, a probabilistic cueing effect was not observed across the sample. Analysis of individual differences in Experiment 2 suggests, however, that some participants may have learned the contingency underpinning the target's location, although these differences were unrelated to other tests of visuospatial ability. These results suggest that the ability to learn the likelihood of an item's fixed location when starting from different perspectives is driven by individual differences in other cognitive or perceptual factors.
Affiliation(s)
- Rory Baxter
- School of Psychology, University of Plymouth, Plymouth, UK
8
Abstract
It is well known that spatial attention can be directed in a top-down way to task-relevant locations in space. In addition, through visual statistical learning (VSL), attention can be biased towards relevant (target) locations and away from irrelevant (distractor) locations. The present study investigates the interaction between explicit task-relevant, top-down attention and the lingering attentional biases due to VSL. We wanted to determine the contribution of each of these two processes to attentional selection. In the current study, participants performed a search task while keeping a location in spatial working memory. In Experiment 1, the target appeared more often in one location than in the other locations. In Experiment 2, a color singleton distractor was presented more often in one location than in all other locations. The results show that when the search target matched the location that was kept in working memory, participants were much faster at responding to the search target than when it did not match, signifying top-down attentional selection. Independent of this top-down effect, we found a clear effect of VSL, as responses were even faster when the target (Experiment 1) or the distractor (Experiment 2) was presented at a more likely location in the visual field. We conclude that attentional selection is driven both by implicit biases due to statistical learning and by explicit top-down processing, each process individually and independently modulating neural activity within the spatial priority map.
9
Abstract
Recent studies on the probability cueing effect have shown that a spatial bias emerges toward a location where a target frequently appears. In the present study, we explored whether such spatial bias can be flexibly shifted when the target-frequent location changes depending on the given context. In four consecutive experiments, participants performed a visual search task within two distinct contexts that predicted the visual quadrant that was more likely to contain a target. We found that spatial attention was equally biased toward two target-frequent quadrants, regardless of context (context-independent spatial bias), when the context information was not mandatory for accurate visual search. Conversely, when the context became critical for the visual search task, the spatial bias shifted significantly more to the target-frequent quadrant predicted by the given context (context-specific spatial bias). These results show that the task relevance of context determines whether probabilistic knowledge can be learned flexibly in a context-specific manner.
10
Shioiri S, Kobayashi M, Matsumiya K, Kuriki I. Spatial representations of the viewer's surroundings. Sci Rep 2018; 8:7171. [PMID: 29740127] [DOI: 10.1038/s41598-018-25433-5]
Abstract
A spatial representation of the viewer's surroundings, including regions outside the visual field, is crucial for moving around the three-dimensional world. To obtain such spatial representations, we predict that there is a learning process that integrates visual inputs from different viewpoints covering the full 360° around the viewer. We report here a learning effect for spatial layouts presented on six displays arranged to surround the viewer: visual search time shortened for surrounding layouts that were repeatedly used (a contextual cueing effect). The learning effect was found both in the time to reach the display containing the target and in the time to reach the target within that display, which indicates implicit learning of the spatial configurations of stimulus elements across displays. Furthermore, since the learning effect was found between layouts and targets presented on displays located as much as 120° apart, it must be based on a representation that covers visual information far outside the visual field.
Affiliation(s)
- Satoshi Shioiri
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Masayuki Kobayashi
- Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Kazumichi Matsumiya
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
- Ichiro Kuriki
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan; Graduate School of Information Sciences, Tohoku University, Sendai, Japan
11
Abstract
Recent research has expanded the list of factors that control spatial attention. Besides current goals and perceptual salience, statistical learning, reward, motivation and emotion also affect attention. But do these various factors influence spatial attention in the same manner, as suggested by the integrated framework of attention, or do they target different aspects of spatial attention? Here I present evidence that the control of attention may be implemented in two ways. Whereas current goals typically modulate where in space attention is prioritized, search habits affect how one moves attention in space. Using the location probability learning paradigm, I show that a search habit forms when people frequently find a visual search target in one region of space. Attentional cuing by probability learning differs from that by current goals. Probability cuing is implicit and persists long after the probability cue is no longer valid. Whereas explicit goal-driven attention codes space in an environment-centered reference frame, probability cuing is viewer-centered and is insensitive to secondary working memory load and aging. I propose a multi-level framework that separates the source of attentional control from its implementation. Similar to the integrated framework, the multi-level framework considers current goals, perceptual salience, and selection history as major sources of attentional control. However, these factors are implemented in two ways, controlling where spatial attention is allocated and how one shifts attention in space.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA.
12
Riggs CA, Godwin HJ, Mann CM, Smith SJ, Boardman M, Liversedge SP, Donnelly N. Rummage search by expert dyads, novice dyads and novice individuals for objects hidden in houses. Vis Cogn 2018. [DOI: 10.1080/13506285.2018.1445678]
Affiliation(s)
- Carl M. Mann
- School of Psychology, University of Southampton, Southampton, UK
- Sarah J. Smith
- Defence Science and Technology Laboratory, Salisbury, UK
- Nick Donnelly
- School of Psychology, University of Southampton, Southampton, UK
13
Schlagbauer B, Rausch M, Zehetleitner M, Müller HJ, Geyer T. Contextual cueing of visual search is associated with greater subjective experience of the search display configuration. Neurosci Conscious 2018; 2018:niy001. [PMID: 30042854] [DOI: 10.1093/nc/niy001]
Abstract
Visual search is facilitated when display configurations are repeated over time, showing that memory of spatio-configural context can cue the location of the target. The present study investigates whether memory of the search target in relation to the configuration of distractors alters subjective experience of the visual search target and/or the subjective experience of the display configuration. Observers performed a masked localization task for targets embedded in repeated vs. non-repeated (baseline) arrays of distractor items. After the localization response, observers reported their subjective experience of either the target or the display configuration. Bayesian analysis revealed that repeated displays resulted in a stronger visual experience of both targets and display configurations. However, subsequent analysis showed that repeated search displays increased the correlation between the experience of the display configuration and localization accuracy, but there was no such effect on experience of the target stimulus. We suggest that memory of visual context enhances the representation of the current visual search display. This representation improves visual search and at the same time increases observers' subjective experience of the display configuration.
Affiliation(s)
- Bernhard Schlagbauer
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Graduate School of Systemic Neurosciences, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany
- Manuel Rausch
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Graduate School of Systemic Neurosciences, Großhaderner Str. 2, 82152 Planegg-Martinsried, Germany
- Fakultät für Psychologie und Pädagogik, Fachgebiet Psychologie II, Katholische Universität Eichstätt-Ingolstadt, Ostenstraße 25, 85072 Eichstätt, Germany
- Michael Zehetleitner
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Fakultät für Psychologie und Pädagogik, Fachgebiet Psychologie II, Katholische Universität Eichstätt-Ingolstadt, Ostenstraße 25, 85072 Eichstätt, Germany
- Hermann J Müller
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, UK
- Thomas Geyer
- Department Psychologie, Ludwig-Maximilians-Universität München, Munich, Germany
14
Addleman DA, Tao J, Remington RW, Jiang YV. Explicit goal-driven attention, unlike implicitly learned attention, spreads to secondary tasks. J Exp Psychol Hum Percept Perform 2018; 44:356-366. [PMID: 28795835] [DOI: 10.1037/xhp0000457]
Abstract
To what degree does spatial attention for one task spread to all stimuli in the attended region, regardless of task relevance? Most models imply that spatial attention acts through a unitary priority map in a task-general manner. We show that implicit learning, unlike endogenous spatial cuing, can bias spatial attention within one task without biasing attention to a spatially overlapping secondary task. Participants completed a visual search task superimposed on a background containing scenes, which they were told to encode for a later memory task. Experiments 1 and 2 used explicit instructions to bias spatial attention to one region for visual search; Experiment 3 used location probability cuing to implicitly bias spatial attention. In location probability cuing, a target appeared in one region more than others despite participants not being told of this. In all experiments, search performance was better in the cued region than in uncued regions. However, scene memory was better in the cued region only following endogenous guidance, not after implicit biasing of attention. These data support a dual-system view of top-down attention that dissociates goal-driven and implicitly learned attention. Goal-driven attention is task general, amplifying processing of a cued region across tasks, whereas implicit statistical learning is task-specific.
Affiliation(s)
- Jinyi Tao
- Department of Psychology, University of Minnesota
15
Aging affects the balance between goal-guided and habitual spatial attention. Psychon Bull Rev 2016; 24:1135-1141. [DOI: 10.3758/s13423-016-1214-3]
16
Modulation of spatial attention by goals, statistical learning, and monetary reward. Atten Percept Psychophys 2016; 77:2189-2206. [PMID: 26105657] [DOI: 10.3758/s13414-015-0952-z]
Abstract
This study documented the relative strength of task goals, visual statistical learning, and monetary reward in guiding spatial attention. Using a difficult T-among-L search task, we cued spatial attention to one visual quadrant by (i) instructing people to prioritize it (goal-driven attention), (ii) placing the target frequently there (location probability learning), or (iii) associating that quadrant with greater monetary gain (reward-based attention). Results showed that successful goal-driven attention exerted the strongest influence on search RT. Incidental location probability learning yielded a smaller though still robust effect. Incidental reward learning produced negligible guidance for spatial attention. The 95 % confidence intervals of the three effects were largely nonoverlapping. To understand these results, we simulated the role of location repetition priming in probability cuing and reward learning. Repetition priming underestimated the strength of location probability cuing, suggesting that probability cuing involved long-term statistical learning of how to shift attention. Repetition priming provided a reasonable account for the negligible effect of reward on spatial attention. We propose a multiple-systems view of spatial attention that includes task goals, search habit, and priming as primary drivers of top-down attention.
17
18
Abstract
Statistical regularities in our environment enhance perception and modulate the allocation of spatial attention. Surprisingly little is known about how learning-induced changes in spatial attention transfer across tasks. In this study, we investigated whether a spatial attentional bias learned in one task transfers to another. Most of the experiments began with a training phase in which a search target was more likely to be located in one quadrant of the screen than in the other quadrants. An attentional bias toward the high-probability quadrant developed during training (probability cuing). In a subsequent, testing phase, the target's location distribution became random. In addition, the training and testing phases were based on different tasks. Probability cuing did not transfer between visual search and a foraging-like task. However, it did transfer between various types of visual search tasks that differed in stimuli and difficulty. These data suggest that different visual search tasks share a common and transferable learned attentional bias. However, this bias is not shared by high-level, decision-making tasks such as foraging.
19
Jiang YV, Won BY. Spatial scale, rather than nature of task or locomotion, modulates the spatial reference frame of attention. J Exp Psychol Hum Percept Perform 2015; 41:866-878. [PMID: 25867510] [DOI: 10.1037/xhp0000056]
Abstract
Visuospatial attention is strongly biased to locations that had frequently contained a search target before. However, the function of this bias depends on the reference frame in which attended locations are coded. Previous research has shown a striking difference between tasks administered on a computer monitor and those administered in a large environment, with the former inducing viewer-centered learning and the latter environment-centered learning. Why does environment-centered learning fail on a computer? Here, we tested 3 possibilities: differences in spatial scale, the nature of task, and locomotion may each influence the reference frame of attention. Participants searched for a target on a monitor placed flat on a stand. On each trial, they stood at a different location around the monitor. The target was frequently located in a fixed area of the monitor, but changes in participants' perspective rendered this area random relative to the participants. Under incidental learning conditions, participants failed to acquire environment-centered learning even when (a) the task and display resembled those of a large-scale task and (b) the search task required locomotion. The difficulty in inducing environment-centered learning on a computer underscores the egocentric nature of visual attention. It supports the idea that spatial scale modulates the reference frame of attention.
20
Won BY, Jiang YV. Spatial working memory interferes with explicit, but not probabilistic cuing of spatial attention. J Exp Psychol Learn Mem Cogn 2014; 41:787-806. [PMID: 25401460] [DOI: 10.1037/xlm0000040]
Abstract
Recent empirical and theoretical work has depicted a close relationship between visual attention and visual working memory. For example, rehearsal in spatial working memory depends on spatial attention, whereas adding a secondary spatial working memory task impairs attentional deployment in visual search. These findings have led to the proposal that working memory is attention directed toward internal representations. Here, we show that the close relationship between these 2 constructs is limited to some but not all forms of spatial attention. In 5 experiments, participants held color arrays, dot locations, or a sequence of dots in working memory. During the memory retention interval, they performed a T-among-L visual search task. Crucially, the probable target location was cued either implicitly through location probability learning or explicitly with a central arrow or verbal instruction. Our results showed that whereas imposing a visual working memory load diminished the effectiveness of explicit cuing, it did not interfere with probability cuing. We conclude that spatial working memory shares similar mechanisms with explicit, goal-driven attention but is dissociated from implicitly learned attention.
21
Jiang YV, Swallow KM. Changing viewer perspectives reveals constraints to implicit visual statistical learning. J Vis 2014; 14(12):3. [PMID: 25294640] [DOI: 10.1167/14.12.3] Open
Abstract
Statistical learning (learning environmental regularities to guide behavior) likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target appeared more often in some locations than in others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations.
Affiliation(s)
- Yuhong V Jiang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Khena M Swallow
- Department of Psychology, Cornell University, Ithaca, NY, USA
22
Viewpoint-dependent representation of contextual information in visual working memory. Atten Percept Psychophys 2014; 76:663-8. [PMID: 24470259] [DOI: 10.3758/s13414-014-0632-4]
Abstract
Objects are not represented individually in visual working memory (VWM) but in relation to the contextual information provided by other memorized objects. We studied whether the contextual information provided by the spatial configuration of all memorized objects is viewpoint dependent. In two experiments, participants detected changes in the location of one object, highlighted in the probe image, between the memory and probe displays. We manipulated the viewpoint change between memory and probe (Exp. 1: 0°, 30°, 60°; Exp. 2: 0°, 60°), as well as the spatial configuration visible in the probe image (Exp. 1: full configuration, partial configuration; Exp. 2: full configuration, no configuration). At a viewpoint change of 0°, location change detection was better with the full spatial configuration than with the partial configuration or with no spatial configuration, replicating previous findings on the nonindependent representation of individual objects in VWM. Most importantly, the benefit of the spatial configuration decreased with increasing viewpoint changes, suggesting a viewpoint-dependent representation of contextual information in VWM. We discuss these findings within the context of this special issue, in particular whether research performed within the slots-versus-resources debate and research on the effects of contextual information might address two different storage systems within VWM.
23
Jiang YV, Won BY, Swallow KM, Mussack DM. Spatial reference frame of attention in a large outdoor environment. J Exp Psychol Hum Percept Perform 2014; 40:1346-57. [PMID: 24842066] [DOI: 10.1037/a0036779]
Abstract
A central question about spatial attention is whether it is referenced relative to the external environment or to the viewer. This question has received great interest in recent psychological and neuroscience research, with many, but not all, studies finding evidence for a viewer-centered representation. However, these previous findings were confined to computer-based tasks with stationary viewers. Because natural search behaviors differ from computer-based tasks in viewer mobility and spatial scale, it is important to understand how spatial attention is coded in the natural environment. To this end, we created an outdoor visual search task in which participants searched a large (690 sq ft) concrete outdoor space to report which side of a coin on the ground faced up. They began each search in the middle of the space and were free to move around. Attentional cuing by statistical learning was examined by placing the coin in 1 quadrant of the search space on 50% of the trials. As in computer-based tasks, participants learned and used these regularities to guide search. However, cuing could be referenced to either the environment or the viewer. The spatial reference frame of attention thus shows greater flexibility in the natural environment than previously found in the lab.
24
Jiang YV, Won BY, Swallow KM. First saccadic eye movement reveals persistent attentional guidance by implicit learning. J Exp Psychol Hum Percept Perform 2014; 40:1161-73. [PMID: 24512610] [DOI: 10.1037/a0035961]
Abstract
Implicit learning about where a visual search target is likely to appear often speeds up search. However, whether implicit learning guides spatial attention or affects postsearch decisional processes remains controversial. Using eye tracking, this study provides compelling evidence that implicit learning guides attention. In a training phase, participants often found the target in a high-frequency, "rich" quadrant of the display. When subsequently tested in a phase during which the target was randomly located, participants were twice as likely to direct the first saccadic eye movement to the previously rich quadrant as to any of the sparse quadrants. The attentional bias persisted for nearly 200 trials after training and was unabated by explicit instructions to distribute attention evenly. We propose that implicit learning guides spatial attention, but in a qualitatively different manner than goal-driven attention does.
Affiliation(s)
- Bo-Yeong Won
- Department of Psychology, University of Minnesota