1
Becker SI, Hamblin-Frohman Z, Xia H, Qiu Z. Tuning to non-veridical features in attention and perceptual decision-making: An EEG study. Neuropsychologia 2023; 188:108634. PMID: 37391127. DOI: 10.1016/j.neuropsychologia.2023.108634.
Abstract
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye-tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than to a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
Affiliation(s)
- Hongfeng Xia
- School of Psychology, The University of Queensland, Australia
- Zeguo Qiu
- School of Psychology, The University of Queensland, Australia
2
Grössle IM, Schubö A, Tünnermann J. Testing a relational account of search templates in visual foraging. Sci Rep 2023; 13:12541. PMID: 37532742. PMCID: PMC10397186. DOI: 10.1038/s41598-023-38362-9.
Abstract
Search templates guide human visual attention toward relevant targets. Templates are often seen as encoding exact target features, but recent studies suggest that templates rather contain "relational properties" (e.g., they facilitate "redder" stimuli instead of specific hues of red). Such relational guidance seems helpful in naturalistic searches where illumination or perspective renders exact feature values unreliable. So far, relational guidance has only been demonstrated in rather artificial single-target search tasks with briefly flashed displays. Here, we investigate whether relational guidance also occurs when humans interact with the search environment for longer durations to collect multiple target elements. In a visual foraging task, participants searched for and collected multiple targets among distractors bearing different relationships to the target colour. Distractors whose colour differed from the environment in the same direction as the targets reduced foraging efficiency to the same extent as distractors whose colour matched the target colour. Distractors that differed by the same colour distance but in the opposite direction of the target colour did not reduce efficiency. These findings provide evidence that search templates encode relational target features in naturalistic search tasks and suggest that attention guidance based on relational features is a common mode in dynamic, real-world search environments.
Affiliation(s)
- Inga M Grössle
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032 Marburg, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032 Marburg, Germany
- Jan Tünnermann
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032 Marburg, Germany
3
Liu K, Zhao N, Huang T, He W, Xu L, Chi X, Yang X. Contributions of linguistic, quantitative, and spatial attention skills to young children's math versus reading: Same, different, or both? Infant and Child Development 2022. DOI: 10.1002/icd.2392.
Affiliation(s)
- Kaichun Liu
- Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Ningxin Zhao
- Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Tong Huang
- The Experimental School of Shenzhen Institute of Advanced Technology, Shenzhen, People's Republic of China
- Wei He
- School of Leisure Sports and Management, Guangzhou Sport University, Guangzhou, People's Republic of China
- Lan Xu
- School of Psychology, Shenzhen University, Shenzhen, People's Republic of China
- Xia Chi
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, People's Republic of China
- Xiujie Yang
- Faculty of Psychology, Beijing Normal University, Beijing, People's Republic of China
4
Xu ZJ, Lleras A, Buetti S. Predicting how surface texture and shape combine in the human visual system to direct attention. Sci Rep 2021; 11:6170. PMID: 33731840. PMCID: PMC7971056. DOI: 10.1038/s41598-021-85605-8.
Abstract
Objects differ from one another along a multitude of visual features. The more distinct an object is from other objects in its surroundings, the easier it is to find it. However, it is still unknown how this distinctiveness advantage emerges in human vision. Here, we studied how visual distinctiveness signals along two feature dimensions—shape and surface texture—combine to determine the overall distinctiveness of an object in the scene. Distinctiveness scores between a target object and distractors were measured separately for shape and texture using a search task. These scores were then used to predict search times when a target differed from distractors along both shape and texture. Model comparison showed that the overall object distinctiveness was best predicted when shape and texture combined using a Euclidean metric, confirming that the brain computes independent distinctiveness scores for shape and texture and combines them to direct attention.
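The Euclidean combination rule reported in this abstract can be sketched in a few lines. The scores below are illustrative values of our own, not data from the study.

```python
import math

def combined_distinctiveness(d_shape: float, d_texture: float) -> float:
    """Combine per-dimension target-distractor distinctiveness scores
    with a Euclidean metric, the rule favoured by the model comparison
    described above."""
    return math.hypot(d_shape, d_texture)

# Hypothetical unidimensional scores for one target-distractor pair:
print(combined_distinctiveness(3.0, 4.0))  # → 5.0
```

Because the metric is Euclidean rather than additive, a target that is moderately distinct on both dimensions is predicted to be less distinct overall than the simple sum of its scores would suggest.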
Affiliation(s)
- Zoe Jing Xu
- University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA
- Alejandro Lleras
- University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA
- Simona Buetti
- University of Illinois, 603 E. Daniel St., Champaign, IL, 61820, USA
5
Abstract
Feature Integration Theory (FIT) laid the groundwork for much of the work in visual cognition since its publication. One of the most important legacies of this theory has been the emphasis on feature-specific processing. Nowadays, visual features are thought of as a sort of currency of visual attention (e.g., features can be attended, processing of attended features is enhanced), and attended features are thought to guide attention towards likely targets in a scene. Here we propose an alternative theory, the Target Contrast Signal Theory, based on the idea that when we search for a specific target, it is not the target-specific features that guide our attention towards the target; rather, what determines behavior is the result of an active comparison between the target template in mind and every element present in the scene. This comparison occurs in parallel and is aimed at excluding from consideration items that peripheral vision can confidently reject as non-targets. The speed at which each item is evaluated is determined by the overall contrast between that item and the target template. We present computational simulations to demonstrate the workings of the theory as well as eye-movement data that support core predictions of the theory. The theory is discussed in the context of FIT and other important theories of visual search.
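The core idea, that rejection speed scales with template-to-item contrast, can be roughly illustrated as follows. The function name, the one-dimensional contrast measure, and the timing constants are our own simplifications, not the theory's actual computational implementation.

```python
def rejection_times(template, scene_items, base=1.0, gain=2.0):
    """For each scene item, compute a hypothetical evaluation time that
    shrinks as the contrast between the item and the target template
    grows: high-contrast non-targets are rejected fastest."""
    times = {}
    for name, feature in scene_items.items():
        contrast = abs(feature - template)  # 1-D stand-in for overall contrast
        # An item identical to the template (zero contrast) cannot be
        # confidently rejected by peripheral vision at all.
        times[name] = None if contrast == 0 else base + gain / contrast
    return times

scene = {"target": 0.8, "similar_lure": 0.6, "odd_distractor": 0.1}
print(rejection_times(template=0.8, scene_items=scene))
```

In this toy run the odd distractor (large contrast) is assigned a much shorter rejection time than the target-similar lure, mirroring the parallel-rejection logic described above.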
6
Liao MR, Britton MK, Anderson BA. Selection history is relative. Vision Res 2020; 175:23-31. PMID: 32663647. PMCID: PMC7484361. DOI: 10.1016/j.visres.2020.06.004.
Abstract
Visual attention can be tuned to specific features to aid in visual search. The way in which these search strategies are established and maintained is flexible, reflecting goal-directed attentional control, but can exert a persistent effect on selection that remains even when these strategies are no longer advantageous, reflecting an attentional bias driven by selection history. Apart from feature-specific search, recent studies have shown that attention can be tuned to target-nontarget relationships. Here we tested whether a relational search strategy continues to bias attention in a subsequent task, where the relationally better color and former target color both serve as distractors (Experiment 1) or as potential targets (Experiment 2). We demonstrate that a relational bias can persist in a subsequent task in which color serves as a task-irrelevant feature, both impairing and facilitating visual search performance. Our findings extend our understanding of the relational account of attentional control and the nature of selection history effects on attention.
Affiliation(s)
- Ming-Ray Liao
- Texas A&M University, Department of Psychological and Brain Sciences, 4235 TAMU, College Station, TX 77843-4235, United States
- Mark K Britton
- Texas A&M University, Department of Psychological and Brain Sciences, 4235 TAMU, College Station, TX 77843-4235, United States
- Brian A Anderson
- Texas A&M University, Department of Psychological and Brain Sciences, 4235 TAMU, College Station, TX 77843-4235, United States
7
The attentional blink: A relational account of attentional engagement. Psychon Bull Rev 2020; 28:219-227. PMID: 32989720. DOI: 10.3758/s13423-020-01813-9.
Abstract
Visual attention allows selecting relevant information from cluttered visual scenes and is largely determined by our ability to tune or bias visual attention to goal-relevant objects. Originally, it was believed that this top-down bias operates on the specific feature values of objects (e.g., tuning attention to orange). However, subsequent studies showed that attention is tuned, in a context-dependent manner, to the relative feature of a sought-after object (e.g., the reddest or yellowest item), which drives covert attention and eye movements in visual search. However, the evidence for the corresponding relational account is still limited to the orienting of spatial attention. The present study tested whether the relational account can be extended to explain attentional engagement and specifically, the attentional blink (AB) in a rapid serial visual presentation (RSVP) task. In two blocked conditions, observers had to identify an orange target letter that could be either redder or yellower than the other letters in the stream. In line with previous work, a target-matching (orange) distractor presented prior to the target produced a robust AB. Extending prior work, we found an equally large AB in response to relatively matching distractors that matched only the relative color of the target (i.e., red or yellow, depending on whether the target was redder or yellower). Unrelated distractors mostly failed to produce a significant AB. These results closely match previous findings assessing spatial attention and show that the relational account can be extended to attentional engagement and selection of continuously attended objects in time.
8
Conjunction search: Can we simultaneously bias attention to features and relations? Atten Percept Psychophys 2020; 82:246-268. DOI: 10.3758/s13414-019-01807-3.
Abstract
Attention allows selection of sought-after objects by tuning attention in a top-down manner to task-relevant features. Among other possible search modes, attention can be tuned to the exact feature values of a target (e.g., red, large), or to the relative target feature (e.g., reddest, largest item), in which case selection is context dependent. The present study tested whether we can tune attention simultaneously to a specific feature value (e.g., specific size) and a relative target feature (e.g., relative color) of a conjunction target, using a variant of the spatial cueing paradigm. Tuning to the specific feature of the target was encouraged by randomly presenting the conjunction target in a varying context of nontarget items, and feature-specific versus relational tuning was assessed by briefly presenting conjunction cues that either matched or mismatched the relative versus physical features of the target. The results showed that attention could be biased to the specific size and the relative color of the conjunction target or vice versa. These results suggest the existence of local and relatively low-level attentional control mechanisms that operate independently of each other in separate feature dimensions (color, size) to choose the best search strategy in line with current top-down goals.
9
York A, Becker SI. Top-down modulation of gaze capture: Feature similarity, optimal tuning, or tuning to relative features? J Vis 2020; 20:6. PMID: 32282888. PMCID: PMC7405730. DOI: 10.1167/jov.20.4.6.
Abstract
It is well-known that we can tune attention to specific features (e.g., colors). Originally, it was believed that attention would always be tuned to the exact feature value of the sought-after target (e.g., orange). However, subsequent studies showed that selection is often geared towards target-dissimilar items, which was variably attributed to (1) tuning attention to the relative target feature that distinguishes the target from other items in the surround (e.g., reddest item; relational tuning), (2) tuning attention to a shifted target feature that allows more optimal target selection (e.g., reddish orange; optimal tuning), or (3) broad attentional tuning and selection of the most salient item that is still similar to the target (combined similarity/saliency). The present study used a color search task and assessed gaze capture by differently coloured distractors to distinguish between the three accounts. The results of the first experiment showed that a very target-dissimilar distractor that matched the relative color of the target but was outside the area of optimal tuning still captured very strongly. As shown by a control condition and a control experiment, bottom-up saliency modulated capture only weakly, ruling out a combined similarity-saliency account. With this, the results support the relational account that attention is tuned to the relative target feature (e.g., reddest), not an optimal feature value or the target feature.
Affiliation(s)
- Ashley York
- The University of Queensland, Brisbane, Australia
10
Buetti S, Xu J, Lleras A. Predicting how color and shape combine in the human visual system to direct attention. Sci Rep 2019; 9:20258. PMID: 31889066. PMCID: PMC6937264. DOI: 10.1038/s41598-019-56238-9.
Abstract
Objects in a scene can be distinct from one another along a multitude of visual attributes, such as color and shape, and the more distinct an object is from its surroundings, the easier it is to find it. However, exactly how this distinctiveness advantage arises in vision is not well understood. Here we studied whether and how visual distinctiveness along different visual attributes (color and shape, assessed in four experiments) combines to determine an object's overall distinctiveness in a scene. Unidimensional distinctiveness scores were used to predict performance in six separate experiments where a target object differed from distractor objects along both color and shape. Results showed that a simple mathematical law determines overall distinctiveness: it is the sum of the distinctiveness scores along each visual attribute. Thus, the brain must compute distinctiveness scores independently for each visual attribute before summing them into the overall score that directs human attention.
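The summation law this abstract describes is simple enough to state as code. The unidimensional scores used here are made-up values for illustration, not measurements from the experiments.

```python
def overall_distinctiveness(d_color: float, d_shape: float) -> float:
    """Overall target distinctiveness as the simple sum of independent
    per-attribute distinctiveness scores, per the law described above."""
    return d_color + d_shape

# Hypothetical unidimensional scores for a target that differs from the
# distractors in both color and shape:
print(overall_distinctiveness(1.5, 2.0))  # → 3.5
```

Note that the later texture-and-shape study in this list (entry 4) found a Euclidean rather than additive combination for its feature pair, so the combination rule may depend on which attributes are involved.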
Affiliation(s)
- Jing Xu
- University of Illinois, Champaign, United States
11
Becker SI, Martin A, Hamblin-Frohman Z. Target templates in singleton search vs. feature-based search modes. Visual Cognition 2019. DOI: 10.1080/13506285.2019.1676352.
Affiliation(s)
- Aimee Martin
- School of Psychology, The University of Queensland, Brisbane, Australia
12
Geng JJ, Witkowski P. Template-to-distractor distinctiveness regulates visual search efficiency. Curr Opin Psychol 2019; 29:119-125. PMID: 30743200. PMCID: PMC6625942. DOI: 10.1016/j.copsyc.2019.01.003.
Abstract
All models of attention include the concept of an attentional template (or a target or search template). The template is conceptualized as target information held in memory that is used for prioritizing sensory processing and determining if an object matches the target. It is frequently assumed that the template contains a veridical copy of the target. However, we review recent evidence showing that the template encodes a version of the target that is adapted to the current context (e.g. distractors, task, etc.); information held within the template may include only a subset of target features, may incorporate real-world knowledge or pre-existing perceptual biases, or may even be a distorted version of the veridical target. We argue that the template contents are customized in order to maximize the ability to prioritize information that distinguishes targets from distractors. We refer to this as template-to-distractor distinctiveness and hypothesize that it contributes to visual search efficiency by exaggerating target-to-distractor dissimilarity.
Affiliation(s)
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States
- Phillip Witkowski
- Center for Mind and Brain, University of California Davis, Davis, CA, 95616, United States; Department of Psychology, University of California Davis, Davis, CA, 95616, United States
13
Abstract
The human visual system can actively prioritize task-relevant features to search for a target. Recent studies have reported cases in which the system may suppress irrelevant features by using a template for rejection. However, in those studies, the templates used for rejection were limited to the color domain, and they have yielded mixed results. Our literature review identified three differences among studies that may be responsible for such mixed results: differences in the spatial segmentation of items (i.e., segregated or intermixed across the display), differences in how features are defined and reported (i.e., combined or separate), and differences in cue lead times (short or long). Participants searched for a target-line segment in a shape and identified its orientation from among non-target line-shaped compound shapes that were preceded by one of three cue displays. Positive cues indicated that the target segment would appear in a shape, and negative cues that it would not appear in a shape. Neutral cues indicated that a particular shape would not appear in the current search display. The results demonstrated that reaction times were faster under the negative-cue condition than the neutral-cue condition, reflecting the effect of a shape-based template for rejection (Experiment 1). Experiment 2 replicated the absence of the effect in the shape domain. Experiment 3 indicated that the template-for-rejection effect occurred only when the cue lead time was relatively long, suggesting that time is required (approximately 2,400 ms or longer) for the visual system to form rejection templates. Experiment 4 excluded the possibility that a confound in the target-defining/reporting feature was involved. These results indicated that apparent inconsistencies in research on the template-for-rejection effect can be explained in terms of the time required for templates to be configured.
14
Yu X, Geng JJ. The attentional template is shifted and asymmetrically sharpened by distractor context. J Exp Psychol Hum Percept Perform 2019; 45:336-353. PMID: 30742475. DOI: 10.1037/xhp0000609.
Abstract
Theories of attention hypothesize the existence of an "attentional template" that contains target features in working or long-term memory. It is often assumed that the template contents are veridical, but recent studies have found that this is not true when the distractor set is linearly separable from the target (e.g., all distractors are "yellower" than an orange-colored target). In such cases, the target representation in memory shifts away from distractor features (Navalpakkam & Itti, 2007) and develops a sharper boundary with distractors (Geng, DiQuattro, & Helm, 2017). These changes in the target template are presumed to increase the target-to-distractor psychological distinctiveness and lead to better attentional selection, but it remains unclear what characteristics of the distractor context produce shifting versus sharpening. Here, we tested the hypothesis that the template representation shifts whenever the distractor set (i.e., all of the distractors) is linearly separable from the target, but asymmetrical sharpening occurs only when linearly separable distractors are highly target-similar. Our results were consistent with this hypothesis, suggesting that template shifting and asymmetrical sharpening are two mechanisms that increase the representational distinctiveness of targets from expected distractors and improve visual search performance.
15
Kruijne W, Meeter M. You prime what you code: The fAIM model of priming of pop-out. PLoS One 2017; 12:e0187556. PMID: 29166386. PMCID: PMC5699828. DOI: 10.1371/journal.pone.0187556.
Abstract
Our visual brain makes use of recent experience to interact with the visual world, and efficiently select relevant information. This is exemplified by speeded search when target and distractor features repeat across trials versus when they switch, a phenomenon referred to as intertrial priming. Here, we present fAIM, a computational model that demonstrates how priming can be explained by a simple feature-weighting mechanism integrated into an established model of bottom-up vision. In fAIM, such modulations in feature gains are widespread and not just restricted to one or a few features. Consequently, priming effects result from the overall tuning of visual features to the task at hand. Such tuning allows the model to reproduce priming for different types of stimuli, including for typical stimulus dimensions such as 'color' and for less obvious dimensions such as 'spikiness' of shapes. Moreover, the model explains some puzzling findings from the literature: it shows how priming can be found for target-distractor stimulus relations rather than for their absolute stimulus values per se, without an explicit representation of relations. Similarly, it simulates effects that have been taken to reflect a modulation of priming by an observer's goals, without any representation of goals in the model. We conclude that priming is best considered as a consequence of a general adaptation of the brain to visual input, and not as a peculiarity of visual search.
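A toy version of the feature-weighting idea can convey the gist: gains on feature channels are nudged up for recent target features and down for recent distractor features, so repeated targets accrue priority. This is our own simplification for illustration, not fAIM's actual update equations.

```python
def update_gains(gains, target_feature, distractor_feature, rate=0.2):
    """Nudge per-feature gains toward the recent target feature and away
    from the recent distractor feature (toy stand-in for a
    feature-weighting mechanism; not the fAIM model's actual rule)."""
    gains = dict(gains)  # leave the caller's dict untouched
    gains[target_feature] = gains.get(target_feature, 1.0) + rate
    gains[distractor_feature] = max(0.0, gains.get(distractor_feature, 1.0) - rate)
    return gains

gains = {"red": 1.0, "green": 1.0}
for _ in range(3):  # three repeat trials: red target among green distractors
    gains = update_gains(gains, "red", "green")
print(gains)  # "red" channel weighted up, "green" channel weighted down
```

After a switch trial (e.g., a green target among red distractors), the same rule would start pulling the gains back, producing the familiar repetition benefit and switch cost without any explicit memory of goals or relations.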
Affiliation(s)
- Wouter Kruijne
- Department of Experimental and Applied Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
- Martijn Meeter
- Department of Experimental and Applied Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
16
Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search. Psychon Bull Rev 2017; 24:1611-1619. DOI: 10.3758/s13423-017-1246-3.
17
Intertrial priming due to distractor repetition is eliminated in homogeneous contexts. Atten Percept Psychophys 2016; 78:1935-47. DOI: 10.3758/s13414-016-1115-6.
18
Abstract
Visual attention is strongly affected by the past: both by recent experience and by long-term regularities in the environment that are encoded in and retrieved from memory. In visual search, intertrial repetition of targets causes speeded response times (short-term priming). Similarly, targets that are presented more often than others may facilitate search, even long after the frequency bias is no longer present (long-term priming). In this study, we investigate whether such short-term priming and long-term priming depend on dissociable mechanisms. By recording eye movements while participants searched for one of two conjunction targets, we explored at what stages of visual search different forms of priming manifest. We found both short- and long-term priming effects. Long-term priming persisted long after the bias was removed, and was found even in participants who were unaware of a color bias. Short- and long-term priming affected the same stage of the task; both biased eye movements towards targets with the primed color, already starting with the first eye movement. Neither form of priming affected the response phase of a trial, but response repetition did. The results strongly suggest that both long- and short-term memory can implicitly modulate feedforward visual processing.
19
Schönhammer JG, Grubert A, Kerzel D, Becker SI. Attentional guidance by relative features: Behavioral and electrophysiological evidence. Psychophysiology 2016; 53:1074-83. PMID: 26990008. DOI: 10.1111/psyp.12645.
Abstract
Our ability to select task-relevant information from cluttered visual environments is widely believed to be due to our ability to tune attention to the particular elementary feature values of a sought-after target (e.g., red, orange, yellow). By contrast, recent findings showed that attention is often tuned to feature relationships, that is, features that the target has relative to irrelevant features in the context (e.g., redder, yellower). However, the evidence for such a relational account is so far exclusively based on behavioral measures that do not allow a safe inference about early perceptual processes. The present study provides a critical test of the relational account, by measuring an electrophysiological marker in the EEG of participants (N2pc) in response to briefly presented distractors (cues) that could either match the physical features of the target or its relative features. In a first experiment, the target color and nontarget color were kept constant across trials. In line with a relational account, we found that only cues with the same relative color as the target were attended, regardless of whether the cues had the same physical color as the target. In a second experiment, we demonstrate that attention is biased to the exact target feature value when the target is embedded in a randomly varying context. Taken together, these results provide the first electrophysiological evidence that attention can modulate early perceptual processes differently, in a context-dependent versus a context-independent manner, resulting in marked differences in the range of colors that can attract attention.
Affiliation(s)
- Josef G Schönhammer
- Faculté de Psychologie et des Sciences de l'Éducation, Université de Genève, Geneva, Switzerland
- Anna Grubert
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Dirk Kerzel
- Faculté de Psychologie et des Sciences de l'Éducation, Université de Genève, Geneva, Switzerland
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia
20
Abstract
Memory affects visual search, as is particularly evident from findings that when target features are repeated from one trial to the next, selection is faster. Two views have emerged on the nature of the memory representations and mechanisms that cause these intertrial priming effects: independent feature weighting versus episodic retrieval of previous trials. Previous research has attempted to disentangle these views focusing on short-term effects. Here, we illustrate that the episodic retrieval models make the unique prediction of long-term priming: biasing one target type will result in priming of this target type for a much longer time, well after the bias has disappeared. We demonstrate that such long-term priming is indeed found for the visual feature of color, but only in conjunction search and not in singleton search. Two follow-up experiments showed that it was the kind of search (conjunction versus singleton), and not the difficulty, that determined whether long-term priming occurred. Long-term priming persisted unaltered for at least 200 trials, and could not be explained as the result of explicit strategy. We propose that episodic memory may affect search more consistently than previously thought, and that the mechanisms for intertrial priming may be qualitatively different for singleton and conjunction search.
21
Becker SI, Lewis AJ. Oculomotor capture by irrelevant onsets with and without color contrast. Ann N Y Acad Sci 2015; 1339:60-71. [PMID: 25708201] [DOI: 10.1111/nyas.12685]
Abstract
It is widely known that irrelevant onsets (i.e., items appearing in previously empty locations) can automatically capture attention and attract our gaze. Some studies have shown that onset capture is stronger when the onset distractor matches the target feature, indicating that onset capture can be modulated by feature-based (top-down) tuning to the target. However, it is less clear whether and to what extent the perceptual saliency of the distractor can further modulate this effect. This study examined the effects of target similarity, competition between target and distractor, and bottom-up color contrast on the ability of an onset distractor to capture the gaze, by varying the color (contrast) and stimulus-onset asynchrony of the onset distractor. The results clearly show that competition and feature-based attention modulate capture by the irrelevant onset to a large extent, whereas bottom-up color contrasts do not modulate onset capture. These results indicate the need to revise current accounts of gaze control.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, Australia; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
22
Target features and target-distractor relation are both primed in visual search. Atten Percept Psychophys 2014; 76:682-94. [PMID: 24415176] [DOI: 10.3758/s13414-013-0611-1]
Abstract
Intertrial priming in visual search is the finding that repeating target and distractor features from one trial to the next speeds up search, relative to when these features change. Recently, Becker (2008) reported evidence that it is not so much the repetition of absolute feature values that causes priming, but repetition of the relation between target and distractors. For example, in search for a unique size, the size of the search elements may change from trial to trial, but this does not hurt performance as long as the target remains consistently larger (or smaller) than the distractors. Becker (2008) concluded that such findings are difficult to reconcile with existing theory. Here, we replicate the findings in the dimensions of size, color, and luminance and show that these effects are not due to the magnitude of feature changes or to search strategies, as may be induced by blocking versus mixing different types of intertrial changes experienced by observers. However, we show that repeating a feature from one trial to the next does convey a benefit above and beyond repeating the target-distractor relation. We argue that both effects can be readily accounted for within current models of visual search. Priming of relations results when one assumes the existence of cardinal feature channels, as do most models of visual search. Additional priming of specific values results when one assumes broadly distributed, overlapping feature channels.
23
Becker SI, Valuch C, Ansorge U. Color priming in pop-out search depends on the relative color of the target. Front Psychol 2014; 5:289. [PMID: 24782795] [PMCID: PMC3986547] [DOI: 10.3389/fpsyg.2014.00289]
Abstract
In visual search for pop-out targets, search times are shorter when the target and non-target colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features, and away from the non-target features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the non-target color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the non-targets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views.
Affiliation(s)
- Stefanie I Becker
- School of Psychology, The University of Queensland, Brisbane, QLD, Australia; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
- Christian Valuch
- Cognitive Research Platform, University of Vienna, Vienna, Austria
- Ulrich Ansorge
- Faculty of Psychology, University of Vienna, Vienna, Austria