1. Fu Y, Gao H, Jing J, Qi M. Task-irrelevant features can be ignored in feature-based encoding. Biol Psychol 2025;198:109049. PMID: 40379010. DOI: 10.1016/j.biopsycho.2025.109049.
Abstract
The present study explored whether individuals can selectively remember task-relevant features of given items while ignoring their task-irrelevant features. Participants first memorized the task-relevant feature of one (low load), two (medium load), or four (high load) items while ignoring the task-irrelevant features. In a subsequent search task, participants responded to a target while distractors carrying either task-irrelevant or task-relevant features were presented; in neutral trials, no feature matched the studied items. In Experiment 1, item color was the task-relevant feature and shape was task-irrelevant; in Experiment 2, shape was task-relevant and color was task-irrelevant. Event-related potentials evoked by the visual search task were also examined. In both experiments, (1) response times did not differ between task-irrelevant and neutral trials at any load, suggesting that task-irrelevant distractors did not slow target search, and (2) the target-elicited N2pc was of similar magnitude in neutral and task-irrelevant trials at any load, indicating that task-irrelevant distractors received no attention and did not affect target processing. These results indicate that task-irrelevant features were suppressed or disregarded entirely.
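As a rough illustration of the N2pc measure compared across trial types above, the following minimal Python sketch computes a contralateral-minus-ipsilateral difference wave on simulated data; the 200-300 ms window, the simulated voltages, and all other values are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal N2pc sketch: difference between posterior-electrode activity
# contralateral vs. ipsilateral to the search target, averaged over an
# assumed 200-300 ms window. All data are simulated for illustration.
rng = np.random.default_rng(0)
sfreq = 500                               # sampling rate in Hz (assumed)
times = np.arange(-0.2, 0.6, 1 / sfreq)   # epoch from -200 to 600 ms
n_trials = 200

# Simulated single-trial voltages (microvolts) for one condition.
contra = rng.normal(0.0, 2.0, (n_trials, times.size))
ipsi = rng.normal(0.0, 2.0, (n_trials, times.size))

window = (times >= 0.20) & (times <= 0.30)
n2pc_wave = contra.mean(axis=0) - ipsi.mean(axis=0)   # contralateral minus ipsilateral
n2pc_amplitude = n2pc_wave[window].mean()             # mean amplitude in the window
print(f"N2pc mean amplitude (200-300 ms): {n2pc_amplitude:.2f} microvolts")
```

In the study's logic, comparable N2pc amplitudes for neutral and task-irrelevant trials computed this way would indicate that the task-irrelevant distractor did not draw attention away from the target.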
Affiliation(s)
- Yao Fu, School of Psychology, Liaoning Normal University, Dalian 116029, China
- Heming Gao, School of Psychology, Liaoning Normal University, Dalian 116029, China
- Jingyan Jing, School of Psychology, Liaoning Normal University, Dalian 116029, China
- Mingming Qi, School of Psychology, Liaoning Normal University, Dalian 116029, China
2. Moriya J. Long-term memory for distractors: Effects of involuntary attention from working memory. Mem Cognit 2024;52:401-416. PMID: 37768481. DOI: 10.3758/s13421-023-01469-5.
Abstract
In a visual search task, attention to task-irrelevant distractors impedes search performance. But is it maladaptive for future performance? Here, I showed that distractors attended during a visual search task were remembered better in long-term memory (LTM) in a subsequent surprise recognition task than non-attended distractors. In four experiments, participants performed a visual search task using real-world objects, each presented in a single color. They encoded a color in working memory (WM) during the task; because each object had a different color, participants directed their attention to the WM-matching colored distractor. In the surprise recognition task, participants then indicated whether an object had been shown in the earlier visual search task, regardless of its color. Attended distractors were remembered better in LTM than non-attended distractors (Experiments 1 and 2), and the more participants attended to distractors, the better they explicitly remembered them. Participants did not explicitly remember the color of the attended distractors (Experiment 3) but did remember integrated object-color information (Experiment 4): when a distractor's color at recognition mismatched its color in the visual search task, LTM performance decreased relative to color-matching distractors. These results suggest that attention to distractors impairs search for the target but helps in remembering the distractors in LTM. When task-irrelevant distractors later become task-relevant, having attended to them becomes beneficial.
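A minimal sketch of how such a recognition comparison could be scored is shown below; the corrected-recognition measure (hit rate minus false-alarm rate) and the counts are illustrative assumptions, not the analysis actually reported in the paper.

```python
# Sketch of scoring a surprise recognition task: corrected recognition
# (hit rate minus false-alarm rate) for attended vs. non-attended distractors.
# The scoring rule and counts are illustrative assumptions.

def corrected_recognition(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - fa_rate

# Hypothetical old/new judgement counts for previously attended
# (WM-matching colored) and non-attended distractor objects.
attended = corrected_recognition(hits=42, misses=18, false_alarms=12, correct_rejections=48)
non_attended = corrected_recognition(hits=30, misses=30, false_alarms=12, correct_rejections=48)
print(f"corrected recognition, attended: {attended:.2f}, non-attended: {non_attended:.2f}")
```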
Affiliation(s)
- Jun Moriya, Faculty of Sociology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka, Japan
3. Cabbai G, Brown CRH, Dance C, Simner J, Forster S. Mental imagery and visual attentional templates: A dissociation. Cortex 2023;169:259-278. PMID: 37967476. DOI: 10.1016/j.cortex.2023.09.014.
Abstract
There is a growing interest in the relationship between mental images and attentional templates as both are considered pictorial representations that involve similar neural mechanisms. Here, we investigated the role of mental imagery in the automatic implementation of attentional templates and their effect on involuntary attention. We developed a novel version of the contingent capture paradigm designed to encourage the generation of a new template on each trial and measure contingent spatial capture by a template-matching visual feature (color). Participants were required to search at four different locations for a specific object indicated at the start of each trial. Immediately prior to the search display, color cues were presented surrounding the potential target locations, one of which matched the target color (e.g., red for strawberry). Across three experiments, our task induced a robust contingent capture effect, reflected by faster responses when the target appeared in the location previously occupied by the target-matching cue. Contrary to our predictions, this effect remained consistent regardless of self-reported individual differences in visual mental imagery (Experiment 1, N = 216) or trial-by-trial variation of voluntary imagery vividness (Experiment 2, N = 121). Moreover, contingent capture was observed even among aphantasic participants, who report no imagery (Experiment 3, N = 91). The magnitude of the effect was not reduced in aphantasics compared to a control sample of non-aphantasics, although the two groups reported substantial differences in their search strategy and exhibited differences in overall speed and accuracy. Our results hence establish a dissociation between the generation and implementation of attentional templates for a visual feature (color) and subjectively experienced imagery.
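The contingent capture effect described above is, in essence, an RT difference between trials where the target appears at the cued (target-matching) location and trials where it appears elsewhere; the short Python sketch below illustrates that computation on simulated data, with all trial counts and RT values being assumptions for illustration.

```python
import numpy as np

# Sketch of the contingent spatial capture measure: mean RT when the target
# appears at the location of the target-matching color cue (valid) vs. at a
# different location (invalid). Trial structure and values are assumptions.
rng = np.random.default_rng(2)

rt_valid = rng.normal(620, 80, 150)     # simulated correct-trial RTs in ms, cue at target location
rt_invalid = rng.normal(660, 80, 450)   # cue at one of the other three locations

capture_effect = rt_invalid.mean() - rt_valid.mean()
print(f"contingent capture effect: {capture_effect:.1f} ms")
```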
Affiliation(s)
- Giulia Cabbai, School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Carla Dance, School of Psychology, University of Sussex, Brighton, United Kingdom
- Julia Simner, School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
- Sophie Forster, School of Psychology, University of Sussex, Brighton, United Kingdom; Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
4. The prioritisation of motivationally salient stimuli in hemi-spatial neglect may be underpinned by goal-relevance: a meta-analytic review. Cortex 2022;150:85-107. DOI: 10.1016/j.cortex.2022.03.001.
5. Memory and Proactive Interference for spatially distributed items. Mem Cognit 2022;50:782-816. PMID: 35119628; PMCID: PMC9018653. DOI: 10.3758/s13421-021-01239-1.
Abstract
Our ability to briefly retain information is often limited. Proactive Interference (PI) might contribute to these limitations (e.g., when items in recognition tests are difficult to reject after having appeared recently). In visual Working Memory (WM), spatial information might protect WM against PI, especially if encoding items together with their spatial locations makes item-location combinations less confusable than simple items without a spatial component. Here, I ask (1) if PI is observed for spatially distributed items, (2) if it arises among simple items or among item-location combinations, and (3) if spatial information affects PI at all. I show that, contrary to views that spatial information protects against PI, PI is reliably observed for spatially distributed items except when it is weak. PI mostly reflects items that appear recently or frequently as memory items, while occurrences as test items play a smaller role, presumably because their temporal context is easier to encode. Through mathematical modeling, I then show that interference occurs among simple items rather than item-location combinations. Finally, to understand the effects of spatial information, I separate the effects of (a) the presence and (b) the predictiveness of spatial information on memory and its susceptibility to PI. Memory is impaired when items are spatially distributed, but, depending on the analysis, unaffected by the predictiveness of spatial information. In contrast, the susceptibility to PI is unaffected by either manipulation. Visual memory is thus impaired by PI for spatially distributed items due to interference from recent memory items (rather than test items or item-location combinations).
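One common way to quantify PI of the kind discussed above is to compare rejection errors for lure probes that recently appeared as memory items against errors for novel lures; the sketch below illustrates such an index on simulated responses. Both the measure and the numbers are assumptions for illustration, not the mathematical modeling reported in the paper.

```python
import numpy as np

# Sketch of a proactive interference index: rate of false "old" responses to
# lures that recently served as memory items vs. novel lures.
# Probabilities and trial counts are illustrative assumptions.
rng = np.random.default_rng(3)

recent_lure_errors = rng.binomial(1, 0.22, 300)   # false "old" responses to recent lures
novel_lure_errors = rng.binomial(1, 0.10, 300)    # false "old" responses to novel lures

pi_index = recent_lure_errors.mean() - novel_lure_errors.mean()
print(f"proactive interference (error-rate difference): {pi_index:.3f}")
```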
6. Allocation of resources in working memory: Theoretical and empirical implications for visual search. Psychon Bull Rev 2021;28:1093-1111. PMID: 33733298; PMCID: PMC8367923. DOI: 10.3758/s13423-021-01881-5.
Abstract
Recently, working memory (WM) has been conceptualized as a limited resource, distributed flexibly and strategically between an unlimited number of representations. In addition to improving the precision of representations in WM, the allocation of resources may also shape how these representations act as attentional templates to guide visual search. Here, we reviewed recent evidence in favor of this assumption and proposed three main principles that govern the relationship between WM resources and template-guided visual search. First, the allocation of resources to an attentional template has an effect on visual search, as it may improve the guidance of visual attention, facilitate target recognition, and/or protect the attentional template against interference. Second, the allocation of the largest amount of resources to a representation in WM is not sufficient to give this representation the status of attentional template and thus, the ability to guide visual search. Third, the representation obtaining the status of attentional template, whether at encoding or during maintenance, receives an amount of WM resources proportional to its relevance for visual search. Thus defined, the resource hypothesis of visual search constitutes a parsimonious and powerful framework, which provides new perspectives on previous debates and complements existing models of template-guided visual search.
7. Sasin E, Fougnie D. Memory-driven capture occurs for individual features of an object. Sci Rep 2020;10:19499. PMID: 33177574; PMCID: PMC7658969. DOI: 10.1038/s41598-020-76431-5.
Abstract
Items held in working memory (WM) capture attention (memory-driven capture). People can selectively prioritize specific object features in WM. Here, we examined whether feature-specific prioritization within WM modulates memory-driven capture. In Experiment 1, after remembering the color and orientation of a triangle, participants were instructed, via retro-cue, whether the color, the orientation, or both features were relevant. To measure capture, we asked participants to execute a subsequent search task, and we compared performance in displays that did and did not contain the memory-matching feature. Color attracted attention only when it was relevant; no capture by orientation was found. In Experiment 2, we presented the retro-cue at one of the four locations of the search display to direct attention to specific objects. We found capture by color, and this capture was larger when color was indicated as relevant. Crucially, orientation also attracted attention, but only when it was relevant. These findings provide evidence for a reciprocal interaction between internal prioritization and external attention at the feature level. Specifically, internal feature-specific prioritization modulates memory-driven capture, but this capture also depends on the salience of the features.
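Memory-driven capture in this kind of design is typically indexed as a search-RT cost when a memory-matching feature appears in the display relative to when it is absent; the sketch below illustrates that comparison on simulated data, with the condition labels and values being assumptions for illustration rather than the paper's actual conditions.

```python
import numpy as np

# Sketch of a memory-driven capture cost: search RT with a distractor that
# matches the memorized feature vs. RT when no memory-matching feature is
# present. Condition labels and values are illustrative assumptions.
rng = np.random.default_rng(4)

conditions = {
    "memory-matching color, color cued relevant": rng.normal(700, 90, 120),
    "memory-matching color, color cued irrelevant": rng.normal(655, 90, 120),
    "no memory-matching feature": rng.normal(650, 90, 120),
}

baseline = conditions["no memory-matching feature"].mean()
for label, rts in conditions.items():
    print(f"{label}: capture cost = {rts.mean() - baseline:+.1f} ms")
```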
Affiliation(s)
- Edyta Sasin, Department of Psychology, New York University of Abu Dhabi, Abu Dhabi, United Arab Emirates
- Daryl Fougnie, Department of Psychology, New York University of Abu Dhabi, Abu Dhabi, United Arab Emirates
8. Oculomotor capture by search-irrelevant features in visual working memory: on the crucial role of target-distractor similarity. Atten Percept Psychophys 2020;82:2379-2392. PMID: 32166644; PMCID: PMC7343749. DOI: 10.3758/s13414-020-02007-0.
Abstract
When searching for varying targets in the environment, a target template has to be maintained in visual working memory (VWM). Recently, we showed that search-irrelevant features of a VWM template bias attention in an object-based manner, so that objects sharing such features with a VWM template capture the eyes involuntarily. Here, we investigated whether target-distractor similarity modulates capture strength. Participants saccaded to a target accompanied by a distractor. A single feature (e.g., shape), indicated by a cue, defined the target in each trial, and the cue also varied in one irrelevant feature (e.g., color). The distractor matched the cue's irrelevant feature in half of the trials. Nine experiments showed that target-distractor similarity consistently influenced the degree of oculomotor capture. High target-distractor dissimilarity in the search-relevant feature reduced capture by the irrelevant feature (Experiments 1, 3, 6, 7). However, capture was also reduced by high target-distractor similarity in the search-irrelevant feature (Experiments 1, 4, 5, 8). Strong oculomotor capture was observed when target-distractor similarity was reasonably low in the relevant feature and high in the irrelevant feature, irrespective of whether color or shape was relevant (Experiments 2 and 5). These findings argue for involuntary, object-based, top-down control by VWM templates, although its manifestation in oculomotor capture depends crucially on target-distractor similarity in the relevant and irrelevant feature dimensions of the search object.
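Oculomotor capture in such saccade tasks is commonly scored as the proportion of trials on which the first saccade goes to the distractor rather than the target; the sketch below illustrates that scoring on simulated trials, with the proportions and trial counts being assumptions for illustration rather than values from the paper.

```python
import numpy as np

# Sketch of an oculomotor capture measure: proportion of trials on which the
# first saccade lands on the distractor rather than the target, compared
# between distractors that do vs. do not match the cue's irrelevant feature.
# The scoring and numbers are illustrative assumptions.
rng = np.random.default_rng(5)

first_saccade_to_distractor_match = rng.binomial(1, 0.28, 200)      # distractor matches irrelevant cue feature
first_saccade_to_distractor_nonmatch = rng.binomial(1, 0.15, 200)   # distractor does not match

capture_match = first_saccade_to_distractor_match.mean()
capture_nonmatch = first_saccade_to_distractor_nonmatch.mean()
print(f"capture (irrelevant-feature match): {capture_match:.2f}")
print(f"capture (no match): {capture_nonmatch:.2f}")
```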