1
Wang S, Lin Y, Ding X. Unmasking social attention: The key distinction between social and non-social attention emerges in disengagement, not engagement. Cognition 2024; 249:105834. PMID: 38797054. DOI: 10.1016/j.cognition.2024.105834.
Abstract
The debate over whether social and non-social attention share the same mechanism has been contentious. Whereas prior studies focused predominantly on engagement, we examined the potential disparity between social and non-social attention from the perspectives of both engagement and disengagement. We developed a two-stage attention-shifting paradigm to capture both attention engagement and disengagement. Combining results from five eye-tracking experiments, we found that the disengagement of social attention markedly outpaces that of non-social attention, whereas no significant discrepancy emerges in engagement. We showed that the faster disengagement of social attention stems from its social nature by eliminating alternative explanations, including broader fixation distribution width, reduced directional salience in the peripheral visual field, decreased cue-object categorical consistency, reduced perceived validity, and faster processing time. Our study suggests that the distinction between social and non-social attention is rooted in attention disengagement, not engagement.
Affiliation(s)
- Shengyuan Wang
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, China
- Yanhua Lin
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, China
- Xiaowei Ding
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, China.
2
Forstinger M, Ansorge U. Top-down suppression of negative features applies flexibly contingent on visual search goals. Atten Percept Psychophys 2024; 86:1120-1147. PMID: 38627277. PMCID: PMC11093874. DOI: 10.3758/s13414-024-02882-x.
Abstract
Visually searching for a frequently changing target is assumed to be guided by flexible working memory representations of specific features necessary to discriminate targets from distractors. Here, we tested if these representations allow selective suppression or always facilitate perception based on search goals. Participants searched for a target (i.e., a horizontal bar) defined by one of two different negative features (e.g., not red vs. not blue; Experiment 1) or a positive (e.g., blue) versus a negative feature (Experiments 2 and 3). A prompt informed participants about the target identity, and search tasks alternated or repeated randomly. We used different peripheral singleton cues presented at the same (valid condition) or a different (invalid condition) position as the target to examine if negative features were suppressed depending on current instructions. In all experiments, cues with negative features elicited slower search times in valid than invalid trials, indicating suppression. Additionally, suppression of negative color cues tended to be selective when participants searched for the target by different negative features but generalized to negative and non-matching cue colors when switching between positive and negative search criteria was required. Nevertheless, when the same color - red - was used in positive and negative search tasks, red cues captured attention or were suppressed depending on whether red was positive or negative (Experiment 3). Our results suggest that working memory representations flexibly trigger suppression or attentional capture contingent on a task-relevant feature's functional meaning during visual search, but top-down suppression operates at different levels of specificity depending on current task demands.
Affiliation(s)
- Marlene Forstinger
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria.
- Ulrich Ansorge
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Liebiggasse 5, 1010, Vienna, Austria
- Cognitive Science Hub, University of Vienna, Vienna, Austria
- Research Platform Mediatised Lifeworlds, University of Vienna, Vienna, Austria
3
Zhang T, Irons JL, Hansen HA, Leber AB. Joint contributions of preview and task instructions on visual search strategy selection. Atten Percept Psychophys 2024; 86:1163-1175. PMID: 38658517. PMCID: PMC11093844. DOI: 10.3758/s13414-024-02870-1.
Abstract
People tend to employ suboptimal attention control strategies during visual search. Here we question why people are suboptimal, specifically investigating how knowledge of the optimal strategies and the time available to apply such strategies affect strategy use. We used the Adaptive Choice Visual Search (ACVS), a task designed to assess attentional control optimality. We used explicit strategy instructions to manipulate explicit strategy knowledge, and we used display previews to manipulate time to apply the strategies. In the first two experiments, the strategy instructions increased optimality. However, the preview manipulation did not significantly boost optimality for participants who did not receive strategy instruction. Finally, in Experiments 3A and 3B, we jointly manipulated preview and instruction with a larger sample size. Preview and instruction both produced significant main effects; furthermore, they interacted significantly, such that the beneficial effect of instructions emerged with greater preview time. Taken together, these results have important implications for understanding the strategic use of attentional control. Individuals with explicit knowledge of the optimal strategy are more likely to exploit relevant information in their visual environment, but only to the extent that they have the time to do so.
Affiliation(s)
- Tianyu Zhang
- Department of Psychology, The Ohio State University, 225 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA.
- Heather A Hansen
- Department of Psychology, The Ohio State University, 225 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
- Andrew B Leber
- Department of Psychology, The Ohio State University, 225 Psychology Building, 1835 Neil Avenue, Columbus, OH, 43210, USA
4
Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024; 28:416-427. PMID: 38280837. DOI: 10.1016/j.tics.2024.01.002.
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
5
Jahn CI, Markov NT, Morea B, Daw ND, Ebitz RB, Buschman TJ. Learning attentional templates for value-based decision-making. Cell 2024; 187:1476-1489.e21. PMID: 38401541. DOI: 10.1016/j.cell.2024.01.041.
Abstract
Attention filters sensory inputs to enhance task-relevant information. It is guided by an "attentional template" that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
Affiliation(s)
- Caroline I Jahn
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA.
- Nikola T Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Britney Morea
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Nathaniel D Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- R Becket Ebitz
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Neurosciences, Université de Montréal, Montréal, QC H3C 3J7, Canada
- Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA.
6
Hughes AE, Nowakowska A, Clarke ADF. Bayesian multi-level modelling for predicting single and double feature visual search. Cortex 2024; 171:178-193. PMID: 38007862. DOI: 10.1016/j.cortex.2023.10.014.
Abstract
Performance in visual search tasks is frequently summarised by "search slopes" - the additional cost in reaction time for each additional distractor. While search tasks with shallow search slopes are termed efficient (pop-out, parallel, feature), there is no clear dichotomy between efficient and inefficient (serial, conjunction) search. Indeed, a range of search slopes is observed in empirical data. Target Contrast Signal (TCS) Theory is a rare example of a quantitative model that attempts to predict search slopes for efficient visual search. One study using the TCS framework showed that the search slope in a double-feature search (where the target differs in both colour and shape from the distractors) can be estimated from the slopes of the associated single-feature searches. This estimation is done using a contrast combination model, and a collinear contrast integration model was shown to outperform other options. In our work, we extend TCS to a Bayesian multi-level framework. We investigate modelling using normal and shifted-lognormal distributions, and show that the latter allows a better fit to previously published data. We ran a new, fully within-subjects experiment to attempt to replicate the key original findings, and show that overall TCS does a good job of predicting the data. However, we did not replicate the finding that the collinear combination model outperforms the other contrast combination models, instead finding that it may be difficult to conclusively distinguish between them.
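The shifted-lognormal distribution mentioned in this abstract treats a reaction time as a fixed shift (e.g., non-decision time) plus a lognormally distributed component. A minimal maximum-likelihood sketch is shown below; this is not the authors' Bayesian multi-level implementation, and the simulated data and parameter values are illustrative assumptions only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate reaction times (seconds) from a shifted lognormal:
# a constant shift plus a lognormally distributed component.
shift = 0.25
rts = shift + rng.lognormal(mean=-1.0, sigma=0.4, size=500)

# Fit the three-parameter (shifted) lognormal by maximum likelihood.
# In scipy's parameterisation: shape = sigma, loc = shift, scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(rts)
print(f"sigma = {shape:.2f}, shift = {loc:.3f} s, median RT = {loc + scale:.3f} s")
```

A full treatment in the spirit of the paper would place this likelihood inside a hierarchical model with per-participant parameters, which is beyond this sketch.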
Affiliation(s)
- Anna E Hughes
- Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK.
- Anna Nowakowska
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, UK; School of Psychology and Vision Sciences, University of Leicester, Leicester, LE1 7RH, UK
7
Mu Y, Schubö A, Tünnermann J. Adapting attentional control settings in a shape-changing environment. Atten Percept Psychophys 2024; 86:404-421. PMID: 38169028. PMCID: PMC10805924. DOI: 10.3758/s13414-023-02818-x.
Abstract
In rich visual environments, humans have to adjust their attentional control settings in various ways, depending on the task. Especially if the environment changes dynamically, it remains unclear how observers adapt to these changes. In two experiments (online and lab-based versions of the same task), we investigated how observers adapt their target choices while searching for color singletons among shape distractor contexts that changed over trials. The two equally colored targets had shapes that differed from each other and matched a varying number of distractors. Participants were free to select either target. The results show that participants adjusted target choices to the shape ratio of distractors: even though the task could be finished by focusing on color only, participants showed a tendency to choose targets matching with fewer distractors in shape. The time course of this adaptation showed that the regularities in the changing environment were taken into account. A Bayesian modeling approach was used to provide a fine-grained picture of how observers adapted their behavior to the changing shape ratio with three parameters: the strength of adaptation, its delay relative to the objective distractor shape ratio, and a general bias toward specific shapes. Overall, our findings highlight that systematic changes in shape, even when it is not a target-defining feature, influence how searchers adjust their attentional control settings. Furthermore, our comparison between lab-based and online assessments with this paradigm suggests that shape is a good choice as a feature dimension in adaptive choice online experiments.
Affiliation(s)
- Yunyun Mu
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany.
- Anna Schubö
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
- Jan Tünnermann
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
8
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. PMID: 37897882. DOI: 10.1016/j.cognition.2023.105648.
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA.
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA.
9
Thayer DD, Sprague TC. Feature-Specific Salience Maps in Human Cortex. J Neurosci 2023; 43:8785-8800. PMID: 37907257. PMCID: PMC10727177. DOI: 10.1523/jneurosci.1104-23.2023.
Abstract
Priority map theory is a leading framework for understanding how various aspects of stimulus displays and task demands guide visual attention. Per this theory, the visual system computes a priority map: a representation of visual space indexing the relative importance, or priority, of locations in the environment. Priority is computed from both salience, defined by image-computable properties, and relevance, defined by an individual's current goals, and is used to direct attention to the highest-priority locations for further processing. Computational theories suggest that priority maps identify salient locations based on individual feature dimensions (e.g., color, motion), which are integrated into an aggregate priority map. While widely accepted, a core assumption of this framework - the existence of independent feature dimension maps in visual cortex - remains untested. Here, we tested the hypothesis that retinotopic regions selective for specific feature dimensions (color or motion) in human cortex act as neural feature dimension maps, indexing salient locations based on their preferred feature. We used fMRI activation patterns to reconstruct spatial maps while male and female human participants viewed stimuli with salient regions defined by relative color or motion direction. Activation in reconstructed spatial maps was localized to the salient stimulus position in the display. Moreover, the stimulus representation was strongest in the ROI selective for the salience-defining feature. Together, these results suggest that feature-selective extrastriate visual regions highlight salient locations based on local feature contrast within their preferred feature dimensions, supporting their role as neural feature dimension maps.
Significance Statement
Identifying salient information is important for navigating the world. For example, it is critical to detect a quickly approaching car when crossing the street. Leading models of computer vision and visual search rely on compartmentalized salience computations based on individual features; however, there has been no direct empirical demonstration identifying the neural regions responsible for performing these dissociable operations. Here, we provide evidence of a critical double dissociation: neural activation patterns from color-selective regions prioritize the location of color-defined salience while minimally representing motion-defined salience, whereas motion-selective regions show the complementary result. These findings reveal that specialized cortical regions act as neural "feature dimension maps" that index salient locations based on specific features to guide attention.
Affiliation(s)
- Daniel D Thayer
- Department of Psychological and Brain Sciences, University of California-Santa Barbara, Santa Barbara, California 93106
- Thomas C Sprague
- Department of Psychological and Brain Sciences, University of California-Santa Barbara, Santa Barbara, California 93106
10
Becker SI, Hamblin-Frohman Z, Xia H, Qiu Z. Tuning to non-veridical features in attention and perceptual decision-making: An EEG study. Neuropsychologia 2023; 188:108634. PMID: 37391127. DOI: 10.1016/j.neuropsychologia.2023.108634.
Abstract
When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.
Affiliation(s)
- Hongfeng Xia
- School of Psychology, The University of Queensland, Australia
- Zeguo Qiu
- School of Psychology, The University of Queensland, Australia
11
Grössle IM, Schubö A, Tünnermann J. Testing a relational account of search templates in visual foraging. Sci Rep 2023; 13:12541. PMID: 37532742. PMCID: PMC10397186. DOI: 10.1038/s41598-023-38362-9.
Abstract
Search templates guide human visual attention toward relevant targets. Templates are often seen as encoding exact target features, but recent studies suggest that they instead contain "relational properties" (e.g., they facilitate "redder" stimuli rather than specific hues of red). Such relational guidance seems helpful in naturalistic searches, where illumination or perspective renders exact feature values unreliable. So far, relational guidance has only been demonstrated in rather artificial single-target search tasks with briefly flashed displays. Here, we investigate whether relational guidance also occurs when humans interact with the search environment for longer durations to collect multiple target elements. In a visual foraging task, participants searched for and collected multiple targets among distractors with different relationships to the target colour. Distractors whose colour differed from the environment in the same direction as the targets reduced foraging efficiency to the same extent as distractors whose colour matched the target colour. Distractors that differed by the same colour distance but in the opposite direction from the target colour did not reduce efficiency. These findings provide evidence that search templates encode relational target features in naturalistic search tasks and suggest that attention guidance based on relational features is a common mode in dynamic, real-world search environments.
Affiliation(s)
- Inga M Grössle
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
- Jan Tünnermann
- Cognitive Neuroscience of Perception and Action, Department of Psychology, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany.