1
Shang C, Sun M, Zhang Q. The trigger mechanism of the target detection task influencing recognition memory at Stimulus Onset Asynchrony of 0.5 s: evidence from the remember-know paradigm. Memory 2025:1-12. PMID: 40396476. DOI: 10.1080/09658211.2025.2504594.
Abstract
Individuals show better memory performance for target-paired items than for distractor-paired items during sequential target detection and memory encoding tasks, a phenomenon called target-paired memory enhancement (TPME). The TPME has been considered to be triggered by the response when the detection stimulus precedes the memory item by 0.5 s without temporal overlap. However, this hypothesis has not been empirically verified. To test it, we instructed participants to detect the target colour before memorizing words, varying the response requirements for the target colour across tasks. Participants responded only to the target colour in the Go-target-0.5 s task (SOA = 0.5 s) and the Go-target-1 s task (SOA = 1 s), to distractor colours in the No-Go-target task, and to all colours with different keys in the response-choice task. The results of the remember-know recognition test showed that TPME was consistent across all tasks for R responses but occurred only in the Go-target-0.5 s task for corrected K responses. These results suggest that target detection and response can each independently contribute to TPME when the detection stimulus and the memory item are presented successively without temporal overlap. Target detection enhanced recollection and familiarity, while the response enhanced familiarity. The effect on recollection was lasting, while the effect on familiarity was transient.
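For readers unfamiliar with the remember-know arithmetic, "corrected K" conventionally refers to the independence remember-know correction, which rescales "know" rates by the proportion of trials not already claimed by "remember" responses. A minimal sketch under that assumption; the rates below are invented for illustration and are not the study's data:

```python
def corrected_k(k_rate, r_rate):
    """Independence remember-know correction: K / (1 - R), i.e. the
    'know' rate rescaled to the trials left available after
    'remember' responses."""
    return k_rate / (1.0 - r_rate)

# Hypothetical response proportions for target-paired vs distractor-paired items.
target_paired = corrected_k(k_rate=0.30, r_rate=0.40)      # 0.30 / 0.60
distractor_paired = corrected_k(k_rate=0.24, r_rate=0.40)  # 0.24 / 0.60

print(round(target_paired, 2), round(distractor_paired, 2))
```

A larger corrected K for target-paired items would indicate the familiarity advantage the abstract describes.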
Affiliation(s)
- Chenyang Shang
- Learning and Cognition Key Laboratory of Beijing, School of Psychology, Capital Normal University, Beijing, People's Republic of China
- Meng Sun
- The School of Mental Health and Psychological Sciences, Anhui Medical University, Hefei, People's Republic of China
- Qin Zhang
- Learning and Cognition Key Laboratory of Beijing, School of Psychology, Capital Normal University, Beijing, People's Republic of China
2
Krzyś KJ, Man LLY, Wammes JD, Castelhano MS. Foreground bias: Semantic consistency effects modulated when searching across depth. Psychon Bull Rev 2024; 31:2776-2790. PMID: 38806789. DOI: 10.3758/s13423-024-02515-2.
Abstract
When processing visual scenes, we tend to prioritize information in the foreground, often at the expense of background information. This foreground bias is supported by data showing more fixations to the foreground and faster, more accurate detection of targets embedded in it. However, semantic consistency is also known to be associated with more efficient search. Here, we examined whether semantic context interacts with foreground prioritization, either amplifying or mitigating the effect of target semantic consistency. For each scene, targets were placed in the foreground or background and were either semantically consistent or inconsistent with the context of the immediately surrounding depth region. Results indicated faster response times (RTs) for foreground and for semantically consistent targets, replicating established effects. More importantly, the magnitude of the semantic consistency effect was significantly smaller in the foreground than in the background region. To examine the robustness of this effect, in Experiment 2 we strengthened the reliability of semantics by increasing the proportion of targets consistent with the scene region to 80%. The overall pattern of results replicated the asymmetric effect of semantic consistency across depth observed in Experiment 1. This suggests that foreground bias modulates the effects of semantics, such that performance is less affected by semantic consistency in near space.
Affiliation(s)
- Karolina J Krzyś
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
- Louisa L Y Man
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
- Jeffrey D Wammes
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
- Monica S Castelhano
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3N6, Canada
3
Leticevscaia O, Brandman T, Peelen MV. Scene context and attention independently facilitate MEG decoding of object category. Vision Res 2024; 224:108484. PMID: 39260230. DOI: 10.1016/j.visres.2024.108484.
Abstract
Many of the objects we encounter in our everyday environments would be hard to recognize without any expectations about these objects. For example, a distant silhouette may be perceived as a car because we expect objects of that size, positioned on a road, to be cars. Reflecting the influence of such expectations on visual processing, neuroimaging studies have shown that when objects are poorly visible, expectations derived from scene context facilitate the representations of these objects in visual cortex from around 300 ms after scene onset. The current magnetoencephalography (MEG) study tested whether this facilitation occurs independently of attention and task relevance. Participants viewed degraded objects alone or within scene context while they either attended the scenes (attended condition) or the fixation cross (unattended condition), also temporally directing attention away from the scenes. Results showed that at 300 ms after stimulus onset, multivariate classifiers trained to distinguish clearly visible animate vs inanimate objects generalized to distinguish degraded objects in scenes better than degraded objects alone, despite the added clutter of the scene background. Attention also modulated object representations at this latency, with better category decoding in the attended than the unattended condition. The modulatory effects of context and attention were independent of each other. Finally, data from the current study and a previous study were combined (N = 51) to provide a more detailed temporal characterization of contextual facilitation. These results extend previous work by showing that facilitatory scene-object interactions are independent of the specific task performed on the visual input.
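The cross-condition generalization logic described above (training a classifier on clearly visible objects, then testing whether it transfers to degraded objects) can be sketched with a minimal nearest-centroid decoder on synthetic data. The array shapes, signal model, and noise levels below are illustrative assumptions, not the study's actual MEG pipeline:

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X, test_y):
    """Train a nearest-centroid classifier and return test accuracy."""
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in test_X]
    return float(np.mean(np.array(preds) == test_y))

rng = np.random.default_rng(0)
n_trials, n_sensors = 100, 50
signal = rng.normal(size=n_sensors)  # shared animate-vs-inanimate pattern

# Clear condition: strong signal; degraded condition: same pattern, weaker.
y = np.repeat([0, 1], n_trials // 2)
sign = np.where(y == 0, 1.0, -1.0)
clear_X = np.outer(sign, signal) + rng.normal(size=(n_trials, n_sensors))
degraded_X = 0.4 * np.outer(sign, signal) + rng.normal(size=(n_trials, n_sensors))

# Train on clear trials, test generalization to degraded trials.
acc = nearest_centroid_decode(clear_X, y, degraded_X, y)
print(round(acc, 2))
```

Above-chance accuracy here reflects the same logic as the paper's cross-decoding: a category pattern learned from clear stimuli still separates degraded ones.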
Affiliation(s)
- Olga Leticevscaia
- University of Reading, Centre for Integrative Neuroscience and Neurodynamics, United Kingdom
- Talia Brandman
- Department of Brain Sciences, Weizmann Institute of Science, Rehovot 76100, Israel
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
4
Delhaye E, D'Innocenzo G, Raposo A, Coco MI. The upside of cumulative conceptual interference on exemplar-level mnemonic discrimination. Mem Cognit 2024; 52:1567-1578. PMID: 38709388. PMCID: PMC11522113. DOI: 10.3758/s13421-024-01563-2.
Abstract
Although long-term visual memory (LTVM) has a remarkable capacity, the fidelity of its episodic representations can be influenced by at least two intertwined interference mechanisms during the encoding of objects belonging to the same category: the capacity to hold similar episodic traces (e.g., different birds) and the conceptual similarity of the encoded traces (e.g., a sparrow shares more features with a robin than with a penguin). The precision of episodic traces can be tested by having participants discriminate lures (unseen objects) from targets (seen objects) representing different exemplars of the same concept (e.g., two visually similar penguins), which generates interference at retrieval that can be resolved if efficient pattern separation occurred during encoding. The present study examines the impact of within-category encoding interference on the fidelity of mnemonic object representations by manipulating an index of cumulative conceptual interference that represents the concurrent impact of capacity and similarity. The precision of mnemonic discrimination was further assessed by measuring the impact of visual similarity between targets and lures in a recognition task. Our results show a significant decrement in the correct identification of targets with increasing interference. Correct rejections of lures were also negatively impacted by cumulative interference, as well as by visual similarity with the target. Most interestingly, though, mnemonic discrimination for targets presented with a visually similar lure was more difficult when objects were encoded under lower, not higher, interference. These findings counter a simply additive account of interference on the fidelity of object representations, providing a finer-grained, multi-factorial understanding of interference in LTVM.
Affiliation(s)
- Emma Delhaye
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium
- Ana Raposo
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Moreno I Coco
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- IRCSS Santa Lucia, Roma, Italy
5
Britt N, Sun HJ. Spatial attention in three-dimensional space: A meta-analysis for the near advantage in target detection and localization. Neurosci Biobehav Rev 2024; 165:105869. PMID: 39214342. DOI: 10.1016/j.neubiorev.2024.105869.
Abstract
Studies have explored how human spatial attention is allocated in three-dimensional (3D) space. Target distance from the viewer can modulate performance in target detection and localization tasks: reaction times are shorter when targets appear nearer to the observer than at farther distances (i.e., a near advantage). The time is ripe to analyze this literature quantitatively. The current meta-analysis examined 29 studies (n = 1,260 participants) of target detection and localization across 3D space. Moderator analyses included: detection vs localization tasks, spatial cueing vs uncued tasks, control of retinal size across depth, central vs peripheral targets, real-space vs stereoscopic vs monocular depth environments, and inclusion of in-trial motion. The analyses revealed a near advantage for spatial attention that was affected by the moderating variables of controlling for retinal size across depth, the use of spatial cueing tasks, and the inclusion of in-trial motion. Overall, these results provide an up-to-date quantification of the effect of depth and insight into methodological differences in evaluating spatial attention.
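The aggregation step behind such a meta-analysis can be illustrated with fixed-effect inverse-variance pooling of per-study effect sizes. The effect sizes and variances below are made up for illustration; they are not the effects reported in the paper, which would also require random-effects and moderator models:

```python
import math

def inverse_variance_pool(effects, variances):
    """Fixed-effect meta-analytic pooling: weight each study's effect
    size by the inverse of its sampling variance; return the pooled
    estimate and its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical standardized near-advantage effects from five studies.
effects = [0.42, 0.15, 0.60, 0.28, 0.35]
variances = [0.02, 0.05, 0.04, 0.01, 0.03]

pooled, ci = inverse_variance_pool(effects, variances)
print(round(pooled, 3), [round(x, 3) for x in ci])
```

Precise studies (small variance) dominate the pooled estimate, which is why the hypothetical fourth study pulls the result toward 0.28.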
Affiliation(s)
- Noah Britt
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Hong-Jin Sun
- Department of Psychology, Neuroscience, and Behaviour, McMaster University, Hamilton, Ontario, Canada
6
Morales-Torres R, Wing EA, Deng L, Davis SW, Cabeza R. Visual Recognition Memory of Scenes Is Driven by Categorical, Not Sensory, Visual Representations. J Neurosci 2024; 44:e1479232024. PMID: 38569925. PMCID: PMC11112637. DOI: 10.1523/jneurosci.1479-23.2024.
Abstract
When we perceive a scene, our brain processes various types of visual information simultaneously, ranging from sensory features, such as line orientations and colors, to categorical features, such as objects and their arrangements. Whereas the role of sensory and categorical visual representations in predicting subsequent memory has been studied using isolated objects, their impact on memory for complex scenes remains largely unknown. To address this gap, we conducted an fMRI study in which female and male participants encoded pictures of familiar scenes (e.g., an airport picture) and later recalled them, while rating the vividness of their visual recall. Outside the scanner, participants had to distinguish each seen scene from three similar lures (e.g., three airport pictures). We modeled the sensory and categorical visual features of multiple scenes using early and late layers of a deep convolutional neural network, respectively. Then, we applied representational similarity analysis to determine which brain regions represented stimuli in accordance with the sensory and categorical models. We found that categorical, but not sensory, representations predicted subsequent memory. Consistent with this result, only for the categorical model did each scene's average recognition performance correlate positively with the average visual dissimilarity between that scene and its lures. These results strongly suggest that even in memory tests that ostensibly rely solely on visual cues (such as forced-choice visual recognition with similar distractors), memory decisions for scenes may be primarily influenced by categorical rather than sensory representations.
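The representational similarity analysis used here can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from each feature space, then correlate the upper triangles of the two RDMs. The random feature matrices below merely stand in for CNN-layer activations and brain patterns; they are illustrative, not the study's stimuli:

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the feature vectors of every pair of stimuli."""
    return 1.0 - np.corrcoef(features)

def rsa_spearman(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(rank(rdm_a[iu]), rank(rdm_b[iu]))[0, 1])

rng = np.random.default_rng(1)
stims = rng.normal(size=(12, 40))  # 12 stimuli x 40 latent features

model_rdm = rdm(stims)                                        # "model" RDM
brain_rdm = rdm(stims + 0.3 * rng.normal(size=stims.shape))   # noisy "neural" RDM

r = rsa_spearman(model_rdm, brain_rdm)
print(round(r, 2))
```

A brain region whose RDM correlates more with the late-layer (categorical) model than the early-layer (sensory) model would, on the paper's logic, carry the representations that predict memory.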
Affiliation(s)
- Erik A Wing
- Rotman Research Institute, Baycrest Health Sciences, Toronto, Ontario M6A 2E1, Canada
- Lifu Deng
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Simon W Davis
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
- Department of Neurology, Duke University School of Medicine, Durham, North Carolina 27708
- Roberto Cabeza
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina 27708
7
Westebbe L, Liang Y, Blaser E. The Accuracy and Precision of Memory for Natural Scenes: A Walk in the Park. Open Mind (Camb) 2024; 8:131-147. PMID: 38435706. PMCID: PMC10898787. DOI: 10.1162/opmi_a_00122.
Abstract
It is challenging to quantify the accuracy and precision of scene memory because it is unclear what 'space' scenes occupy (how can we quantify error when misremembering a natural scene?). To address this, we exploited the ecologically valid, metric space in which scenes occur and are represented: routes. In a delayed estimation task, participants briefly saw a target scene drawn from a video of an outdoor 'route loop', then used a continuous report wheel of the route to pinpoint the scene. Accuracy was high and unbiased, indicating there was no net boundary extension/contraction. Interestingly, precision was higher for routes that were more self-similar (as characterized by the half-life, in meters, of a route's Multiscale Structural Similarity index), consistent with previous work finding a 'similarity advantage' where memory precision is regulated according to task demands. Overall, scenes were remembered to within a few meters of their actual location.
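The "half-life" characterization of route self-similarity can be illustrated by fitting an exponential decay to similarity-vs-distance values and reading off the distance at which similarity halves. The decay values below are synthetic, not the paper's MS-SSIM measurements:

```python
import math

def fit_half_life(distances, similarities):
    """Fit similarity ~ exp(-k * distance) by least squares on the log
    scale (regression through the origin) and return the half-life:
    the distance at which similarity drops to 0.5."""
    num = sum(d * math.log(s) for d, s in zip(distances, similarities))
    den = sum(d * d for d in distances)
    k = -num / den
    return math.log(2) / k

# Synthetic similarity values decaying with distance along a route (meters).
distances = [1, 2, 4, 8, 16]
true_half_life = 5.0
similarities = [0.5 ** (d / true_half_life) for d in distances]

print(round(fit_half_life(distances, similarities), 2))  # → 5.0
```

On the paper's account, routes with a longer half-life (more self-similar over distance) supported more precise scene placement.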
Affiliation(s)
- Leo Westebbe
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Yibiao Liang
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Erik Blaser
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
8
Serino G, Mareschal D, Scerif G, Kirkham N. Playing hide and seek: Contextual regularity learning develops between 3 and 5 years of age. J Exp Child Psychol 2024; 238:105795. PMID: 37862788. DOI: 10.1016/j.jecp.2023.105795.
Abstract
The ability to acquire contextual regularities is fundamental in everyday life because it helps us to navigate the environment, directing our attention to where relevant events are more likely to occur. Sensitivity to spatial regularities has been widely reported from infancy. Nevertheless, it is currently unclear when children can use this rapidly acquired contextual knowledge to guide their behavior. Evidence of this ability is mixed in school-aged children and, to date, it has never been explored in younger children and toddlers. The current study investigated the development of contextual regularity learning in children aged 3 to 5 years. To this end, we designed a new contextual learning paradigm in which young children were presented with recurring configurations of bushes and were asked to guess behind which bush a cartoon monkey was hiding. In a series of two experiments, we manipulated the relevance of color and visuospatial cues for the underlying task goal and tested how this affected young children's behavior. Our results bridge the gap between the infant and adult literatures, showing that sensitivity to spatial configurations persists from infancy to childhood, but it is only around the fifth year of life that children naturally start to integrate multiple cues to guide their behavior.
Affiliation(s)
- Giulia Serino
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
- Denis Mareschal
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
- Natasha Kirkham
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
9
Mikhailova A, Lightfoot S, Santos-Victor J, Coco MI. Differential effects of intrinsic properties of natural scenes and interference mechanisms on recognition processes in long-term visual memory. Cogn Process 2024; 25:173-187. PMID: 37831320. DOI: 10.1007/s10339-023-01164-y.
Abstract
Humans display remarkable long-term visual memory (LTVM) processes. Even though images may be intrinsically memorable, the fidelity of their visual representations, and consequently the likelihood of successfully retrieving them, hinges on their similarity when concurrently held in LTVM. It is still unclear whether the intrinsic features of images (perceptual and semantic) are mediated by mechanisms of interference generated at encoding or during retrieval, and how these factors impinge on recognition processes. In the current study, 32 participants studied a stream of 120 natural scenes from 8 semantic categories, which varied in frequency (4, 8, 16 or 32 exemplars per category) to generate different levels of category interference, in preparation for a recognition test. They were then asked to indicate which of two images, presented side by side (i.e., two-alternative forced choice), they remembered. The two images belonged to the same semantic category but varied in their perceptual similarity (similar or dissimilar). Participants also expressed their confidence (sure/not sure) in their recognition response, enabling us to tap into their metacognitive efficacy (meta-d'). Additionally, we extracted the activation of perceptual and semantic features in images (i.e., their informational richness) through deep neural network modelling and examined their impact on recognition processes. Corroborating previous literature, we found that category interference and perceptual similarity negatively impact recognition processes, as well as response times and metacognitive efficacy. Moreover, semantically rich images were less likely to be remembered, an effect that outweighed a positive memorability boost coming from perceptual information. Critically, we did not observe any significant interaction between intrinsic features of images and interference generated either at encoding or during retrieval. All in all, our study calls for a more integrative understanding of the representational dynamics during encoding and recognition that enable us to form, maintain and access visual information.
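The signal-detection quantities underlying such analyses can be illustrated with type-1 sensitivity (d') computed from hit and false-alarm rates, using only the Python standard library. The rates below are invented for illustration; estimating meta-d' proper requires model fitting beyond this sketch:

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Type-1 sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical recognition performance under low vs high category interference.
low_interference = dprime(hit_rate=0.85, fa_rate=0.20)
high_interference = dprime(hit_rate=0.70, fa_rate=0.30)

print(round(low_interference, 2), round(high_interference, 2))
```

Meta-d' extends this logic by asking what d' an ideal observer would need to produce the observed confidence-rating data, so the ratio meta-d'/d' indexes metacognitive efficiency.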
Affiliation(s)
- Anastasiia Mikhailova
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- José Santos-Victor
- Institute for Systems and Robotics, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Moreno I Coco
- Sapienza, University of Rome, Rome, Italy
- I.R.C.C.S. Santa Lucia, Fondazione Santa Lucia, Roma, Italy
10
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. PMID: 37897882. DOI: 10.1016/j.cognition.2023.105648.
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA
11
Peelen MV, Berlot E, de Lange FP. Predictive processing of scenes and objects. Nat Rev Psychol 2024; 3:13-26. PMID: 38989004. PMCID: PMC7616164. DOI: 10.1038/s44159-023-00254-0.
Abstract
Real-world visual input consists of rich scenes that are meaningfully composed of multiple objects which interact in complex, but predictable, ways. Despite this complexity, we recognize scenes, and objects within these scenes, from a brief glance at an image. In this review, we synthesize recent behavioral and neural findings that elucidate the mechanisms underlying this impressive ability. First, we review evidence that visual object and scene processing is partly implemented in parallel, allowing for a rapid initial gist of both objects and scenes concurrently. Next, we discuss recent evidence for bidirectional interactions between object and scene processing, with scene information modulating the visual processing of objects, and object information modulating the visual processing of scenes. Finally, we review evidence that objects also combine with each other to form object constellations, modulating the processing of individual objects within the object pathway. Altogether, these findings can be understood by conceptualizing object and scene perception as the outcome of a joint probabilistic inference, in which "best guesses" about objects act as priors for scene perception and vice versa, in order to concurrently optimize visual inference of objects and scenes.
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Eva Berlot
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
12
Doidy F, Desaunay P, Rebillard C, Clochon P, Lambrechts A, Wantzen P, Guénolé F, Baleyte JM, Eustache F, Bowler DM, Lebreton K, Guillery-Girard B. How scene encoding affects memory discrimination: Analysing eye movements data using data driven methods. Visual Cognition 2023. DOI: 10.1080/13506285.2023.2188335.
Affiliation(s)
- F. Doidy
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- P. Desaunay
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- Service de Psychiatrie de l’enfant et de l’adolescent, CHU de Caen, Caen, France
- C. Rebillard
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- P. Clochon
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- A. Lambrechts
- Autism Research Group, Department of Psychology, City, University of London, London, UK
- P. Wantzen
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- F. Guénolé
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- Service de Psychiatrie de l’enfant et de l’adolescent, CHU de Caen, Caen, France
- J. M. Baleyte
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- Service de Psychiatrie de l’enfant et de l’adolescent, Centre Hospitalier Interuniversitaire de Créteil, Créteil, France
- F. Eustache
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- D. M. Bowler
- Autism Research Group, Department of Psychology, City, University of London, London, UK
- K. Lebreton
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
- B. Guillery-Girard
- Normandie Université, UNICAEN, PSL Université Paris, EPHE, INSERM, U1077, CHU de Caen, GIP Cyceron, Neuropsychologie et Imagerie de la Mémoire Humaine, Caen, France
13
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. PMID: 36841692. DOI: 10.1016/j.tics.2023.01.007.
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
14
Xie X, Cai J, Fang H, Tang X, Yamanaka T. Effects of colored lights on an individual's affective impressions in the observation process. Front Psychol 2022; 13:938636. DOI: 10.3389/fpsyg.2022.938636.
Abstract
The lighting environment has an important influence on a person's psychological and physical state. In some settings, well-designed lighting can regulate people's emotions and improve their comfort in a space, and specific lighting can create a particular atmosphere to suit a space's requirements. However, in the study of individuals' affective impressions, it remains uncertain how colored lights affect moods and impressions of visual objects. This research improves the understanding of the emotional impact of colored light in space. To better understand the lighting environment during observation, the project studied the effects of four colors of light (green, blue, red, and yellow) on participants' moods and impressions. Participants watched two sets of visual images under the four lighting conditions and reported their emotions and evaluations via the Multiple Mood States Scale, the Two-Dimensional Mood Scale, and a Semantic Differential Scale. The results show that light color has a significant effect on mood; red light in particular evoked changes in feelings of calm, irritation, relaxation, nervousness, stability, and pleasure. Light color was also related to participants' impressions, which offers further research value for the design of colored lighting environments. The study therefore discusses the feasibility of colored light as a display method, with potential applications in constructing different spatial atmospheres.
15
Thorat S, Quek GL, Peelen MV. Statistical learning of distractor co-occurrences facilitates visual search. J Vis 2022; 22:2. [PMID: 36053133 PMCID: PMC9440606 DOI: 10.1167/jov.22.10.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual search is facilitated by knowledge of the relationship between the target and the distractors, including both where the target is likely to be among the distractors and how it differs from the distractors. Whether the statistical structure among distractors themselves, unrelated to target properties, facilitates search is less well understood. Here, we assessed the benefit of distractor structure using novel shapes whose relationship to each other was learned implicitly during visual search. Participants searched for target items in arrays of shapes that comprised either four pairs of co-occurring distractor shapes (structured scenes) or eight distractor shapes randomly partitioned into four pairs on each trial (unstructured scenes). Across five online experiments (N = 1,140), we found that after a period of search training, participants were more efficient when searching for targets in structured than unstructured scenes. This structure benefit emerged independently of whether the position of the shapes within each pair was fixed or variable and despite participants having no explicit knowledge of the structured pairs they had seen. These results show that implicitly learned co-occurrence statistics between distractor shapes increase search efficiency. Increased efficiency in the rejection of regularly co-occurring distractors may contribute to the efficiency of visual search in natural scenes, where such regularities are abundant.
Affiliation(s)
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Genevieve L Quek
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
16
Nuthmann A, Canas-Bajo T. Visual search in naturalistic scenes from foveal to peripheral vision: A comparison between dynamic and static displays. J Vis 2022; 22:10. [PMID: 35044436 PMCID: PMC8802022 DOI: 10.1167/jov.22.1.10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Accepted: 12/03/2021] [Indexed: 11/24/2022] Open
Abstract
How important foveal, parafoveal, and peripheral vision are depends on the task. For object search and letter search in static images of real-world scenes, peripheral vision is crucial for efficient search guidance, whereas foveal vision is relatively unimportant. Extending this research, we used gaze-contingent Blindspots and Spotlights to investigate visual search in complex dynamic and static naturalistic scenes. In Experiment 1, we used dynamic scenes only, whereas in Experiments 2 and 3, we directly compared dynamic and static scenes. Each scene contained a static, contextually irrelevant target (i.e., a gray annulus). Scene motion was not predictive of target location. For dynamic scenes, the search-time results from all three experiments converge on the novel finding that neither foveal nor central vision was necessary to attain normal search proficiency. Since motion is known to attract attention and gaze, we explored whether guidance to the target was equally efficient in dynamic as compared to static scenes. We found that the very first saccade was guided by motion in the scene. This was not the case for subsequent saccades made during the scanning epoch, representing the actual search process. Thus, effects of task-irrelevant motion were fast-acting and short-lived. Furthermore, when motion was potentially present (Spotlights) or absent (Blindspots) in foveal or central vision only, we observed differences in verification times for dynamic and static scenes (Experiment 2). When using scenes with greater visual complexity and more motion (Experiment 3), however, the differences between dynamic and static scenes were much reduced.
Affiliation(s)
- Antje Nuthmann
- Institute of Psychology, Kiel University, Kiel, Germany
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- http://orcid.org/0000-0003-3338-3434
- Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
17
Holm SK, Häikiö T, Olli K, Kaakinen JK. Eye Movements during Dynamic Scene Viewing are Affected by Visual Attention Skills and Events of the Scene: Evidence from First-Person Shooter Gameplay Videos. J Eye Mov Res 2021; 14. [PMID: 34745442 PMCID: PMC8566014 DOI: 10.16910/jemr.14.2.3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N = 38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in the visual attention tasks were associated with the eye movement patterns observed during viewing of the gameplay video. The differences were evident in four eye movement measures: number of fixations, fixation durations, saccade amplitudes, and fixation distances from the center of the screen. The individual differences emerged during specific events of the video as well as across the video as a whole. The results highlight that an unedited, fast-paced, and cluttered dynamic scene can bring out individual differences in dynamic scene viewing.
18
Abstract
During natural vision, our brains are constantly exposed to complex, but regularly structured environments. Real-world scenes are defined by typical part-whole relationships, where the meaning of the whole scene emerges from configurations of localized information present in individual parts of the scene. Such typical part-whole relationships suggest that information from individual scene parts is not processed independently, but that there are mutual influences between the parts and the whole during scene analysis. Here, we review recent research that used a straightforward, but effective approach to study such mutual influences: By dissecting scenes into multiple arbitrary pieces, these studies provide new insights into how the processing of whole scenes is shaped by their constituent parts and, conversely, how the processing of individual parts is determined by their role within the whole scene. We highlight three facets of this research: First, we discuss studies demonstrating that the spatial configuration of multiple scene parts has a profound impact on the neural processing of the whole scene. Second, we review work showing that cortical responses to individual scene parts are shaped by the context in which these parts typically appear within the environment. Third, we discuss studies demonstrating that missing scene parts are interpolated from the surrounding scene context. Bridging these findings, we argue that efficient scene processing relies on an active use of the scene's part-whole structure, where the visual brain matches scene inputs with internal models of what the world should look like.
Affiliation(s)
- Daniel Kaiser
- Justus-Liebig-Universität Gießen, Germany
- Philipps-Universität Marburg, Germany
- University of York, United Kingdom
- Radoslaw M Cichy
- Freie Universität Berlin, Germany
- Humboldt-Universität zu Berlin, Germany
- Bernstein Centre for Computational Neuroscience Berlin, Germany
19
Abstract
Cognitive processes, from basic sensory analysis to language understanding, are typically contextualized. While the importance of considering context for understanding cognition has long been recognized in psychology and philosophy, it has not yet had much impact on cognitive neuroscience research, where cognition is often studied in decontextualized paradigms. Here, we present examples of recent studies showing that context changes the neural basis of diverse cognitive processes, including perception, attention, memory, and language. Within the domains of perception and language, we review neuroimaging results showing that context interacts with stimulus processing, changes activity in classical perception and language regions, and recruits additional brain regions that contribute crucially to naturalistic perception and language. We discuss how contextualized cognitive neuroscience will allow for discovering new principles of the mind and brain.
Affiliation(s)
- Roel M Willems
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands