1
Ramey MM, Henderson JM, Yonelinas AP. Episodic memory and semantic knowledge interact to guide eye movements during visual search in scenes: Distinct effects of conscious and unconscious memory. Psychon Bull Rev 2025. PMID: 40399748. DOI: 10.3758/s13423-025-02686-6.
Abstract
Episodic memory and semantic knowledge can each exert strong influences on visual attention when we search through real-world scenes. However, there is debate surrounding how they interact when both are present; specifically, results conflict as to whether memory consistently improves visual search when semantic knowledge is available to guide search. These conflicting results could be driven by distinct effects of different types of episodic memory, but this possibility has not been examined. To test this, we tracked participants' eyes while they searched for objects in semantically congruent and incongruent locations within scenes during a study and test phase. In the test phase, which contained studied and new scenes, participants gave confidence-based recognition memory judgments that indexed different types of episodic memory (i.e., recollection, familiarity, unconscious memory) for the background scenes, and then searched for the target. We found that semantic knowledge consistently influenced both early and late eye movements, but the influence of memory depended on the type of memory involved. Recollection improved the accuracy with which the first saccade headed toward the target in both congruent and incongruent scenes. In contrast, unconscious memory gradually improved scanpath efficiency over the course of search, but only when semantic knowledge was relatively ineffective (i.e., in incongruent scenes). Together, these findings indicate that episodic memory and semantic knowledge are rationally integrated to optimize attentional guidance, such that the most precise or effective forms of information available, which depend on the type of episodic memory involved, are prioritized.
Affiliation(s)
- Michelle M Ramey
- Department of Psychological Science, University of Arkansas, Fayetteville, AR, USA
- John M Henderson
- Department of Psychology, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
2
Fischer M, Moscovitch M, Alain C. Memory-guided perception is shaped by dynamic two-stage theta- and alpha-mediated retrieval. Ann N Y Acad Sci 2025; 1544:159-171. PMID: 39901582. PMCID: PMC11829322. DOI: 10.1111/nyas.15287.
Abstract
How does memory influence auditory perception, and what are the underlying mechanisms that drive these interactions? Most empirical studies on the neural correlates of memory-guided perception have used static visual tasks, resulting in a bias in the literature that contrasts with recent research highlighting the dynamic nature of memory retrieval. Here, we used electroencephalography to track the retrieval of auditory associative memories in a cue-target paradigm. Participants (N = 64) listened to real-world soundscapes that were either predictive or nonpredictive of an upcoming target tone. Three key results emerged. First, targets were detected faster when embedded in predictive than in nonpredictive soundscapes (a memory-guided perceptual benefit). Second, changes in theta and alpha power differentiated predictive from nonpredictive soundscape contexts at two distinct temporal intervals from soundscape onset (an early peak at 950 ms for theta and alpha, and a late peak at 1650 ms for alpha only). Third, early theta activity in the left anterior temporal lobe was correlated with memory-guided perceptual benefits. Together, these findings underscore the role of distinct neural processes at different time points during associative retrieval. By emphasizing temporal sensitivity and by isolating cue-related activity, we reveal a two-stage retrieval mechanism that advances our understanding of how memory influences auditory perception.
Affiliation(s)
- Manda Fischer
- The Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Morris Moscovitch
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Claude Alain
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
3
Arató J, Rothkopf CA, Fiser J. Eye movements reflect active statistical learning. J Vis 2024; 24:17. PMID: 38819805. PMCID: PMC11146064. DOI: 10.1167/jov.24.5.17.
Abstract
What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look, one that continuously modulates human information-gathering behavior during both implicit and explicit learning, there is limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with gaze-contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. This suggests that eye movements are potential indicators of active learning, a process in which long-term knowledge, the current visual stimuli, and an inherent tendency to reduce uncertainty about the visual environment jointly determine where we look.
Affiliation(s)
- József Arató
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Constantin A Rothkopf
- Center for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
- József Fiser
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
4
Andrade MÂ, Cipriano M, Raposo A. ObScene database: Semantic congruency norms for 898 pairs of object-scene pictures. Behav Res Methods 2024; 56:3058-3071. PMID: 37488464. PMCID: PMC11133025. DOI: 10.3758/s13428-023-02181-7.
Abstract
Research on the interaction between object and scene processing has a long history in the fields of perception and visual memory. Most databases have established norms for pictures in which the object is embedded in the scene. In this study, we provide a diverse and controlled stimulus set comprising real-world pictures of 375 objects (e.g., suitcase), 245 scenes (e.g., airport), and 898 object-scene pairs (e.g., suitcase-airport), with object and scene presented separately. Our goal was twofold: first, to create a database of object and scene pictures normed on the same variables, so that comparable measures are available for both types of pictures; second, to acquire normative data for the semantic relationships between objects and scenes presented separately, which offers more flexibility in the use of the pictures and allows disentangling the processing of the object from that of its context (the scene). Across three experiments, participants evaluated each object or scene picture on name agreement, familiarity, and visual complexity, and rated object-scene pairs on semantic congruency. A total of 125 septuplets of one scene and six objects (three congruent, three incongruent), and 120 triplets of one object and two scenes (in congruent and incongruent pairings), were built. In future studies, these objects and scenes can be used separately or combined, while controlling for their key features. Additionally, as object-scene pairs received semantic congruency ratings along the entire scale, researchers may select among a wide range of congruency values. ObScene is a comprehensive and ecologically valid database, useful for psychology and neuroscience studies of visual object and scene processing.
Affiliation(s)
- Miguel Ângelo Andrade
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Margarida Cipriano
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana Raposo
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
5
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. PMID: 37897882. DOI: 10.1016/j.cognition.2023.105648.
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA
- Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA
6
Mahr JB, Schacter DL. A language of episodic thought? Behav Brain Sci 2023; 46:e283. PMID: 37766653. DOI: 10.1017/s0140525x2300198x.
Abstract
We propose that episodic thought (i.e., episodic memory and imagination) is a domain where the language-of-thought hypothesis (LoTH) could be fruitfully applied. On the one hand, LoTH could explain the structure of what is encoded into and retrieved from long-term memory. On the other, LoTH can help make sense of how episodic contents come to play such a large variety of different cognitive roles after they have been retrieved.
Affiliation(s)
- Johannes B Mahr
- Department of Psychology, Harvard University, Cambridge, MA, USA
7
Kallmayer A, Võ MLH, Draschkow D. Viewpoint dependence and scene context effects generalize to depth rotated three-dimensional objects. J Vis 2023; 23:9. PMID: 37707802. PMCID: PMC10506680. DOI: 10.1167/jov.23.10.9.
Abstract
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "noncanonical" viewpoints (e.g., a cup from below) is typically impeded compared to recognition of objects seen from canonical viewpoints (e.g., the string side of a guitar), this effect is reduced by meaningful scene context information. In the present study, we investigated whether these findings, established using photographic images, generalize to strongly noncanonical orientations of three-dimensional (3D) models of objects. Using 3D models allowed us to probe a broad range of viewpoints and to empirically establish viewpoints with very strong noncanonical and canonical orientations. In Experiment 1, we presented 3D models of objects from six viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in color (1a) and in grayscale (1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the viewpoint effects in Experiments 1a and 1b, we empirically determined the most canonical and noncanonical viewpoints from our set to use in Experiment 2. In Experiment 2, participants again performed a sequential matching task; however, the objects were now paired with scene backgrounds that were either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency: object recognition was less affected by viewpoint when consistent scene information was provided than when inconsistent information was provided. Our results show that scene context supports object recognition even for extremely noncanonical orientations of depth-rotated 3D objects. This supports the important role object-scene processing plays in object constancy, especially under conditions of high uncertainty.
Affiliation(s)
- Aylin Kallmayer
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
8
Beitner J, Helbing J, Draschkow D, David EJ, Võ MLH. Flipping the world upside down: Using eye tracking in virtual reality to study visual search in inverted scenes. J Eye Mov Res 2023; 15(3):5. PMID: 37215533. PMCID: PMC10195094. DOI: 10.16910/jemr.15.3.5.
Abstract
Image inversion is a powerful tool for investigating cognitive mechanisms of visual perception. However, studies have mainly used inversion in paradigms presented on two-dimensional computer screens. It remains open whether the disruptive effects of inversion also hold in more naturalistic scenarios. In our study, we used scene inversion in virtual reality, in combination with eye tracking, to investigate the mechanisms of repeated visual search through three-dimensional immersive indoor scenes. Scene inversion affected all gaze and head measures except fixation durations and saccade amplitudes. Surprisingly, our behavioral results did not entirely follow our hypotheses: while search efficiency dropped significantly in inverted scenes, participants did not utilize more memory as measured by search time slopes. This indicates that, despite the disruption, participants did not try to compensate for the increased difficulty by using more memory. Our study highlights the importance of investigating classical experimental paradigms in more naturalistic scenarios to advance research on daily human behavior.
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Germany
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Germany
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, UK
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, UK
- Erwan J David
- Department of Psychology, Goethe University Frankfurt, Germany
- Melissa L-H Võ
- Department of Psychology, Goethe University Frankfurt, Germany
9
Abstract
Research has recently shown that efficient selection relies on the implicit extraction of environmental regularities, known as statistical learning. Although this has been demonstrated for scenes, similar learning arguably also occurs for objects. To test this, we developed a paradigm that allowed us to track attentional priority at specific object locations irrespective of the object's orientation in three experiments with young adults (all Ns = 80). Experiments 1a and 1b established within-object statistical learning by demonstrating increased attentional priority at relevant object parts (e.g., hammerhead). Experiment 2 extended this finding by demonstrating that learned priority generalized to viewpoints in which learning never took place. Together, these findings demonstrate that as a function of statistical learning, the visual system not only is able to tune attention relative to specific locations in space but also can develop preferential biases for specific parts of an object independently of the viewpoint of that object.
Affiliation(s)
- Dirk van Moorselaar
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam
- Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands
- Jan Theeuwes
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam
- Institute of Brain and Behaviour Amsterdam (iBBA), The Netherlands
- William James Center for Research, ISPA-Instituto Universitario
10
Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023; 76:632-648. PMID: 35510885. PMCID: PMC11132926. DOI: 10.1177/17470218221101334.
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes, and salience maps that represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, across both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely than slower initial saccades to be directed to higher-salience regions, and initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrate that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
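A note on the ceiling-normalized figures above (our notation, not the authors'): if \(R^2_{\text{ceiling}}\) denotes the variance in fixation density that is explainable in principle given inter-observer noise, the reported proportions correspond to

\[ R^2_{\text{meaning}} / R^2_{\text{ceiling}} \approx 0.58 \text{ and } 0.63 \]

in the two studies, respectively; that is, meaning maps captured well over half of the explainable variance in attention.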
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Praveena Singh
- Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Gwendolyn Rehrig
- Department of Psychology, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
11
Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. PMID: 36635491. PMCID: PMC9837148. DOI: 10.1038/s41598-023-27896-7.
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array that is presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head-turns and eye-movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye-movements. In each task, we found that participants' search performance was impacted by increases in set size-the number of items in the visual display. Critically, we observed that participants' efficiency in classic search tasks-the degree to which set size slowed performance-indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
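For readers outside the visual-search literature: the "efficiency" referred to here is conventionally the slope of the response-time-by-set-size function. As an illustrative sketch in standard notation (not taken from the paper itself):

\[ RT = RT_0 + s \cdot N \]

where \(RT_0\) is the base response time, \(N\) is the set size, and the slope \(s\) (in ms per item) indexes how strongly each additional item slows search; the finding above is that \(s\) estimated in the classic task predicted \(s\) in the naturalistic task.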
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
12
Long-term memory representations for audio-visual scenes. Mem Cognit 2023; 51:349-370. PMID: 36100821. PMCID: PMC9950240. DOI: 10.3758/s13421-022-01355-6.
Abstract
In this study, we investigated the nature of long-term memory representations for naturalistic audio-visual scenes. Whereas previous research has shown that audio-visual scenes are recognized more accurately than their unimodal counterparts, it remains unclear whether this benefit stems from audio-visually integrated long-term memory representations or from a summation of independent retrieval cues. We tested two predictions of audio-visually integrated memory representations. First, we used a modeling approach to test whether recognition performance for audio-visual scenes is more accurate than would be expected from independent retrieval cues. This analysis showed that audio-visual integration is not necessary to explain the benefit of audio-visual scenes relative to purely auditory or purely visual scenes. Second, we report a series of experiments investigating the occurrence of study-test congruency effects for unimodal and audio-visual scenes. Most importantly, visually encoded information was immune to additional auditory information presented during testing, whereas auditorily encoded information was susceptible to additional visual information presented during testing. This renders a true integration of visual and auditory information in long-term memory representations unlikely. In sum, our results instead provide evidence for visual dominance in long-term memory: whereas associative auditory information is capable of enhancing memory performance, the long-term memory representations appear to be primarily visual.
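The independent-retrieval-cues benchmark mentioned above is commonly formalized as probability summation; as a minimal sketch in our own notation (the paper's exact model may differ):

\[ P_{AV} = P_A + P_V - P_A P_V \]

where \(P_A\) and \(P_V\) are the recognition probabilities given the auditory or visual cue alone. Audio-visual recognition reliably exceeding \(P_{AV}\) would point to integrated representations; performance at or below this level, as reported here, is consistent with independent cues.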
13
Mason LA, Thomas AK, Taylor HA. On the proposed role of metacognition in environment learning: recommendations for research. Cogn Res Princ Implic 2022; 7:104. PMID: 36575318. PMCID: PMC9794647. DOI: 10.1186/s41235-022-00454-x.
Abstract
Metacognition plays a role in environment learning (EL). When navigating, we monitor environment information to judge our likelihood to remember our way, and we engage in control by using tools to prevent getting lost. Yet, the relationship between metacognition and EL is understudied. In this paper, we examine the possibility of leveraging metacognition to support EL. However, traditional metacognitive theories and methodologies were not developed with EL in mind. Here, we use traditional metacognitive theories and approaches as a foundation for a new examination of metacognition in EL. We highlight three critical considerations about EL. Namely: (1) EL is a complex process that unfolds sequentially and is thereby enriched with multiple different types of cues, (2) EL is inherently driven by a series of ecologically relevant motivations and constraints, and (3) monitoring and control interact to support EL. In doing so, we describe how task demands and learning motivations inherent to EL should shape how metacognition is explored. With these considerations, we provide three methodological recommendations for investigating metacognition during EL. Specifically, researchers should: (1) instantiate EL goals to impact learning, metacognition, and retrieval processes, (2) prompt learners to make frequent metacognitive judgments and consider metacognitive accuracy as a primary performance metric, and (3) incorporate insights from both transfer appropriate processing and monitoring hypotheses when designing EL assessments. In summary, to effectively investigate how metacognition impacts EL, both ecological and methodological considerations need to be weighed.
Affiliation(s)
- Lauren A Mason
- Department of Psychology, Tufts University, Medford, MA, USA
- Ayanna K Thomas
- Department of Psychology, Tufts University, Medford, MA, USA
- Holly A Taylor
- Department of Psychology, Tufts University, Medford, MA, USA
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
14
Turini J, Võ MLH. Hierarchical organization of objects in scenes is reflected in mental representations of objects. Sci Rep 2022; 12:20068. PMID: 36418411. PMCID: PMC9684142. DOI: 10.1038/s41598-022-24505-x.
Abstract
The arrangement of objects in scenes follows certain rules ("Scene Grammar"), which we exploit to perceive and interact efficiently with our environment. We have proposed that Scene Grammar is hierarchically organized: scenes are divided into clusters of objects ("phrases", e.g., the sink phrase); within every phrase, one object ("anchor", e.g., the sink) holds strong predictions about the identity and position of other objects ("local objects", e.g., a toothbrush). To investigate whether this hierarchy is reflected in the mental representations of objects, we collected pairwise similarity judgments for everyday object pictures and for the corresponding words. Similarity judgments were stronger not only for object pairs appearing in the same scene, but also for object pairs appearing within the same phrase of the same scene, as opposed to appearing in different phrases of the same scene. In addition, object pairs with the same status in the scenes (i.e., both anchors or both local objects) were judged as more similar than pairs of different status. Comparing effects between pictures and words, we found a similar, significant impact of scene hierarchy on the organization of the mental representations of objects, independent of stimulus modality. We conclude that the hierarchical structure of the visual environment is incorporated into abstract, domain-general mental representations of the world.
Affiliation(s)
- Jacopo Turini
- Scene Grammar Lab, Department of Psychology and Sports Sciences, Goethe University, Frankfurt am Main, Germany
- Scene Grammar Lab, Institut für Psychologie, PEG, Room 5.G105, Theodor-W.-Adorno Platz 6, 60323 Frankfurt am Main, Germany
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Psychology and Sports Sciences, Goethe University, Frankfurt am Main, Germany
15
Lukashova-Sanz O, Agarwala R, Wahl S. Context matters during pick-and-place in VR: Impact on search and transport phases. Front Psychol 2022; 13:881269. PMID: 36160516. PMCID: PMC9493493. DOI: 10.3389/fpsyg.2022.881269.
Abstract
When considering external assistive systems for people with motor impairments, gaze has been shown to be a powerful tool, as it anticipates motor actions and is promising for understanding an individual's intentions even before the action. Up until now, the vast majority of studies investigating coordinated eye and hand movements in grasping tasks have focused on the manipulation of single objects without placing them in a meaningful scene. Very little is known about the impact of scene context on how we manipulate objects in an interactive task. In the present study, we investigated how scene context affects human object manipulation in a pick-and-place task in a realistic scenario implemented in VR. During the experiment, participants were instructed to find the target object in a room, pick it up, and transport it to a predefined final location. We then examined the impact of scene context on different stages of the task using head and hand movements as well as eye tracking. As the main result, scene context had a significant effect on the search and transport phases, but not on the reach phase of the task. The present work provides insights for the development of potential intention-predicting support systems, revealing the dynamics of pick-and-place behavior once the task is realized in a realistic, context-rich scenario.
Affiliation(s)
- Olga Lukashova-Sanz
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Rajat Agarwala
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Siegfried Wahl
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International GmbH, Aalen, Germany
16
Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922. DOI: 10.1177/09567976221091838.
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
17
Anderson EM, Seemiller ES, Smith LB. Scene saliencies in egocentric vision and their creation by parents and infants. Cognition 2022; 229:105256. PMID: 35988453. DOI: 10.1016/j.cognition.2022.105256.
Abstract
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared with objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, both predicted faster search times and distinguished placed from unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
Affiliation(s)
- Linda B Smith
- Psychological and Brain Sciences, Indiana University, USA
18
Ramey MM, Henderson JM, Yonelinas AP. Episodic memory processes modulate how schema knowledge is used in spatial memory decisions. Cognition 2022; 225:105111. PMID: 35487103. PMCID: PMC11179179. DOI: 10.1016/j.cognition.2022.105111.
Abstract
Schema knowledge can dramatically affect how we encode and retrieve memories. Current models propose that schema information is combined with episodic memory at retrieval to influence memory decisions, but it is not known how the strength or type of episodic memory (i.e., unconscious memory versus familiarity versus recollection) influences the extent to which schema information is incorporated into memory decisions. To address this question, we had participants search for target objects in semantically expected (i.e., congruent) locations or in unusual (i.e., incongruent) locations within scenes. In a subsequent test, participants indicated where in each scene the target had been located previously, and then provided confidence-based recognition memory judgments that indexed recollection, familiarity strength, and unconscious memory for the scenes. In both an initial online study (n = 133) and a replication (n = 59), target location recall was more accurate for targets that had been located in schema-congruent rather than incongruent locations; importantly, this effect was strongest for new scenes, decreased with unconscious memory, decreased further with familiarity strength, and was eliminated entirely for recollected scenes. Moreover, when participants recollected an incongruent scene but did not correctly remember the target location, they were still biased away from congruent regions, suggesting that detrimental schema bias was suppressed in the presence of recollection even when precise target location information was not remembered. The results indicate that episodic memory modulates how schemas are used: schema knowledge contributes to spatial memory judgments primarily when episodic memory fails to provide precise information, and recollection can override schema bias completely.
Affiliation(s)
- Michelle M Ramey
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- John M Henderson
- Department of Psychology, University of California, Davis, CA, USA
- Center for Mind and Brain, University of California, Davis, CA, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, CA, USA
- Center for Neuroscience, University of California, Davis, CA, USA
19
Wynn JS, Van Genugten RDI, Sheldon S, Schacter DL. Schema-related eye movements support episodic simulation. Conscious Cogn 2022; 100:103302. PMID: 35240421. PMCID: PMC9007866. DOI: 10.1016/j.concog.2022.103302.
Abstract
Recent work indicates that eye movements support the retrieval of episodic memories by reactivating the spatiotemporal context in which they were encoded. Although similar mechanisms have been thought to support simulation of future episodes, there is currently no evidence favoring this proposal. In the present study, we investigated the role of eye movements in episodic simulation by comparing the gaze patterns of individual participants imagining future scene and event scenarios to across-participant gaze templates for those same scenarios, reflecting their shared features (i.e., schemas). Our results provide novel evidence that eye movements during episodic simulation in the face of distracting visual noise are (1) schema-specific and (2) predictive of simulation success. Together, these findings suggest that eye movements support episodic simulation via reinstatement of scene and event schemas, and more broadly, that interactions between the memory and oculomotor effector systems may underlie critical cognitive processes including constructive episodic simulation.
Affiliation(s)
- Jordana S Wynn
- Department of Psychology, Harvard University, Cambridge, USA
- Signy Sheldon
- Department of Psychology, McGill University, Montreal, Canada
20
Stewart EEM, Ludwig CJH, Schütz AC. Humans represent the precision and utility of information acquired across fixations. Sci Rep 2022; 12:2411. PMID: 35165336. PMCID: PMC8844410. DOI: 10.1038/s41598-022-06357-7.
Abstract
Our environment contains an abundance of objects that humans interact with daily, gathering visual information using sequences of eye movements to choose which object is best suited for a particular task. This process is not trivial and requires a complex strategy in which task affordance defines the search strategy, and the estimated precision of the visual information gathered from each object may be used to track perceptual confidence for object selection. This study addresses the fundamental problem of how such visual information is metacognitively represented and used for subsequent behaviour, and reveals a complex interplay between task affordance, visual information gathering, and metacognitive decision making. People fixate higher-utility objects and, most importantly, retain metaknowledge about how much information they have gathered about these objects, which is used to guide perceptual report choices. These findings suggest that such metacognitive knowledge is important in situations where decisions are based on information acquired in a temporal sequence.
Affiliation(s)
- Emma E M Stewart
- Department of Experimental Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, Marburg, Germany
21
Peacock CE, Cronin DA, Hayes TR, Henderson JM. Meaning and expected surfaces combine to guide attention during visual search in scenes. J Vis 2021; 21:1. PMID: 34609475. PMCID: PMC8496418. DOI: 10.1167/jov.21.11.1.
Abstract
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
- Deborah A Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Department of Psychology, University of California, Davis, Davis, CA, USA
22
Li W, Guan J, Shi W. Increasing the load on executive working memory reduces the search performance in the natural scenes: Evidence from eye movements. Curr Psychol 2021. DOI: 10.1007/s12144-021-02270-w.
23
David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. PMID: 34251433. PMCID: PMC8287039. DOI: 10.1167/jov.21.7.3.
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and on central vision to verify them. This division of labor between the visual fields has been established largely through on-screen experiments. We conducted a gaze-contingent experiment in virtual reality in order to test how the perceived roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention with a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
Affiliation(s)
- Erwan Joël David
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner
- Department of Psychology, Goethe-Universität, Frankfurt, Germany
24
Rehrig GL, Cheng M, McMahan BC, Shome R. Why are the batteries in the microwave?: Use of semantic information under uncertainty in a search task. Cogn Res Princ Implic 2021; 6:32. PMID: 33855644. PMCID: PMC8046897. DOI: 10.1186/s41235-021-00294-1.
Abstract
A major problem in human cognition is to understand how newly acquired information and long-standing beliefs about the environment combine to make decisions and plan behaviors. Over-dependence on long-standing beliefs may be a significant source of suboptimal decision-making in unusual circumstances. While the contribution of long-standing beliefs about the environment to search in real-world scenes is well-studied, less is known about how new evidence informs search decisions, and it is unclear whether the two sources of information are used together optimally to guide search. The present study expanded on the literature on semantic guidance in visual search by modeling a Bayesian ideal observer's use of long-standing semantic beliefs and recent experience in an active search task. The ability to adjust expectations to the task environment was simulated using the Bayesian ideal observer, and subjects' performance was compared to ideal observers that depended on prior knowledge and recent experience to varying degrees. Target locations were either congruent with scene semantics, incongruent with what would be expected from scene semantics, or random. Half of the subjects were able to learn to search for the target in incongruent locations over repeated experimental sessions when it was optimal to do so. These results suggest that searchers can learn to prioritize recent experience over knowledge of scenes in a near-optimal fashion when it is beneficial to do so, as long as the evidence from recent experience was learnable.
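As a gloss on the ideal-observer framing (our own illustrative notation, not the authors' implementation), the searcher combines long-standing semantic beliefs with recent experience via Bayes' rule:

\[ P(\ell \mid d) \propto P(d \mid \ell)\,P(\ell) \]

where \(P(\ell)\) is the prior over target locations given scene semantics, \(d\) is the evidence accumulated over recent search episodes, and the posterior \(P(\ell \mid d)\) guides where to look next. Over-weighting \(P(\ell)\) corresponds to the over-dependence on long-standing beliefs described above.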
Affiliation(s)
- Gwendolyn L Rehrig
- Department of Psychology, University of California, Davis, CA, 95616, USA
- Michelle Cheng
- School of Social Sciences, Nanyang Technological University, Singapore, 639798, Singapore
- Brian C McMahan
- Department of Computer Science, Rutgers University-New Brunswick, New Brunswick, USA
- Rahul Shome
- Department of Computer Science, Rice University, Houston, USA
25
Võ MLH. The meaning and structure of scenes. Vision Res 2021; 181:10-20. PMID: 33429218. DOI: 10.1016/j.visres.2020.11.003.
Abstract
We live in a rich, three-dimensional world with complex arrangements of meaningful objects. For decades, however, theories of visual attention and perception have been based on findings generated from lines and color patches. While these theories have been indispensable for our field, the time has come to move on from this rather impoverished view of the world and (at least try to) get closer to the real thing. After all, our visual environment consists of objects that we not only look at, but constantly interact with. Incorporating the meaning and structure of scenes, i.e., their "grammar", allows us to easily understand objects and scenes we have never encountered before. Studying this grammar provides us with the fascinating opportunity to gain new insights into the complex workings of attention, perception, and cognition. In this review, I discuss how the meaning and the complex, yet predictive, structure of real-world scenes influence attention allocation, search, and object identification.
Affiliation(s)
- Melissa Le-Hoa Võ
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany. https://www.scenegrammarlab.com/
26
Pollmann S, Rosenblum L, Linnhoff S, Porracin E, Geringswald F, Herbik A, Renner K, Hoffmann MB. Preserved Contextual Cueing in Realistic Scenes in Patients with Age-Related Macular Degeneration. Brain Sci 2020; 10:941. PMID: 33297319. PMCID: PMC7762266. DOI: 10.3390/brainsci10120941.
Abstract
Foveal vision loss has been shown to reduce the efficiency of visual search guidance by contextual cueing from incidentally learned contexts. However, previous studies used artificial search paradigms (finding a T among L shapes) that prevent memorization of a target in a semantically meaningful scene. Here, we investigated contextual cueing in real-life scenes that allow explicit memory of target locations in semantically rich scenes. In contrast to the contextual cueing deficits observed in artificial displays, contextual cueing in patients with age-related macular degeneration (AMD) did not differ from that of age-matched normal-sighted controls. We discuss this in the context of visuospatial working-memory demands, for which eye movement control in the presence of central vision loss and memory-guided search may compete. Memory-guided search in semantically rich scenes may depend less on visuospatial working memory than search in abstract displays, potentially explaining intact contextual cueing in the former but not the latter. In a practical sense, our findings may indicate that patients with AMD are less impaired than previous laboratory experiments would suggest. This shows the usefulness of realistic stimuli in experimental clinical research.
Affiliation(s)
- Stefan Pollmann
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Beijing Key Laboratory of Learning and Cognition and School of Psychology, Capital Normal University, Beijing 100048, China
- Lisa Rosenblum
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Stefanie Linnhoff
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Eleonora Porracin
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Franziska Geringswald
- Department of Experimental Psychology, Otto-von-Guericke-University, Postfach 4120, 39016 Magdeburg, Germany
- Laboratoire de Neurosciences Cognitives UMR 7291, Aix-Marseille Université & CNRS, 13331 Marseille, France
- Anne Herbik
- Department of Ophthalmology, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Katja Renner
- Eye Clinic Am Johannisplatz, 04103 Leipzig, Germany
- Michael B. Hoffmann
- Center for Behavioral Brain Sciences, Otto-von-Guericke-University, 39016 Magdeburg, Germany
- Department of Ophthalmology, Otto-von-Guericke-University, 39016 Magdeburg, Germany
27
|
Episodic and semantic memory processes in the boundary extension effect: An investigation using the remember/know paradigm. Acta Psychol (Amst) 2020; 211:103190. [PMID: 33130488] [DOI: 10.1016/j.actpsy.2020.103190] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2]
Abstract
BACKGROUND Boundary extension (BE) is a phenomenon in which participants report from memory that they have experienced more of a scene than was initially presented. The goal of the current study was to investigate whether BE is fully based on episodic memory or also involves semantic schema knowledge. METHODS The study incorporated the remember/know paradigm into a BE task. Scenes were first learned incidentally, with participants later indicating whether they remembered or knew that they had seen the scene before. Next, they rated three views of the original picture - zoomed in, zoomed out, or unchanged - on similarity in closeness in order to measure BE. RESULTS The results showed a systematic BE pattern, but no difference in the amount of BE for episodic ('remember') and semantic ('know') memory. Additionally, the remember/know paradigm used in this study showed good sensitivity for both remember and know responses. DISCUSSION The results suggest that BE might not critically depend on the contextual information provided by episodic memory, but rather depends on schematic knowledge shared by episodic and semantic memory. Schematic knowledge might be involved in BE by providing an expectation of what likely lies beyond the boundaries of the scene, based on semantic guidance. PsycINFO CLASSIFICATION 2343 (learning & memory).
28
Ramey MM, Yonelinas AP, Henderson JM. Why do we retrace our visual steps? Semantic and episodic memory in gaze reinstatement. Learn Mem 2020; 27:275-283. [PMID: 32540917] [PMCID: PMC7301753] [DOI: 10.1101/lm.051227.119] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
When we look at repeated scenes, we tend to visit similar regions each time - a phenomenon known as resampling. Resampling has long been attributed to episodic memory, but the relationship between resampling and episodic memory has recently been found to be less consistent than assumed. A possibility that has yet to be fully considered is that factors unrelated to episodic memory may generate resampling: for example, semantic memory and visual salience, which are consistently present each time an image is viewed and are independent of specific prior viewing instances. We addressed this possibility by tracking participants' eyes during scene viewing to examine how semantic memory, indexed by the semantic informativeness of scene regions (i.e., meaning), is involved in resampling. We found that viewing more meaningful regions predicted resampling, as did episodic familiarity strength. Furthermore, we found that meaning interacted with familiarity strength to predict resampling. Specifically, the effect of meaning on resampling was attenuated in the presence of strong episodic memory, and vice versa. These results suggest that episodic and semantic memory are each involved in resampling behavior and are in competition rather than synergistically increasing resampling. More generally, this suggests that episodic and semantic memory may compete to guide attention.
Affiliation(s)
- Michelle M Ramey
- Department of Psychology, University of California, Davis, California 95616, USA
- Center for Neuroscience, University of California, Davis, California 95618, USA
- Center for Mind and Brain, University of California, Davis, California 95618, USA
- Andrew P Yonelinas
- Department of Psychology, University of California, Davis, California 95616, USA
- Center for Neuroscience, University of California, Davis, California 95618, USA
- John M Henderson
- Department of Psychology, University of California, Davis, California 95616, USA
- Center for Mind and Brain, University of California, Davis, California 95618, USA

29
Öhlschläger S, Võ MLH. Development of scene knowledge: Evidence from explicit and implicit scene knowledge measures. J Exp Child Psychol 2020; 194:104782. [DOI: 10.1016/j.jecp.2019.104782] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0]
30
Yoo SA, Rosenbaum RS, Tsotsos JK, Fallah M, Hoffman KL. Long-term memory and hippocampal function support predictive gaze control during goal-directed search. J Vis 2020; 20:10. [PMID: 32455429] [PMCID: PMC7409592] [DOI: 10.1167/jov.20.5.10] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
Abstract
Eye movements during visual search change with prior experience of the search stimuli. Previous studies measured these gaze effects shortly after initial viewing, typically during free viewing; it remains open whether the effects are preserved across long delays and for goal-directed search, and which memory system guides gaze. In Experiment 1, we analyzed the eye movements of healthy adults viewing novel and repeated scenes while searching for a scene-embedded target. The task was performed across different time points to examine the repetition effects in long-term memory, and memory types were grouped based on explicit recall of targets. In Experiment 2, an amnesic person with bilateral extended hippocampal damage and an age-matched control group performed the same task with shorter intervals to determine whether or not the repetition effects depend on hippocampal function. When healthy adults explicitly remembered repeated target-scene pairs, search times and fixation durations were shorter, and gaze was directed closer to the target region, than when they forgot targets. These effects were seen even after a one-month delay from initial viewing, suggesting the effects are associated with long-term, explicit memory. Saccadic amplitude was not strongly modulated by scene repetition or explicit recall of targets. The amnesic person showed neither explicit recall nor implicit repetition effects, whereas his control group showed patterns similar to those seen in Experiment 1. The results reveal several aspects of gaze control that are influenced by long-term memory. The dependence of gaze effects on medial temporal lobe integrity supports a role for this region in predictive gaze control.
31
Shen Z, Zhang L, Xiao X, Li R, Liang R. Icon Familiarity Affects the Performance of Complex Cognitive Tasks. Iperception 2020; 11:2041669520910167. [PMID: 32180935] [PMCID: PMC7059235] [DOI: 10.1177/2041669520910167] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8]
Abstract
The purpose of this study was to investigate whether and how users' familiarity with symbols affects the performance of complex cognitive tasks that place considerable demands on working memory resources. We combined a modified math task paradigm with our previous icon familiarity training paradigm. Participants were required to complete a mathematical task involving icons to test their ability to perform complex cognitive tasks. The complexity of the task was manipulated using three independent variables: icon familiarity (high-frequency vs. low-frequency), whether or not the equation required substitution (substitution vs. no-substitution), and the number of steps required for solution (one step vs. two steps). The results showed that participants performed better on the equation-solving task when it used icons they had been more extensively trained on. Importantly, icon familiarity interacted with the complexity of the task, and the familiarity effect on performance (accuracy and response time) became greater as complexity increased. These findings provide evidence that familiarity affects not only the ease of information retrieval but also the ease of subsequent processing activities associated with this information, which extends our understanding of how familiarity affects working memory. Moreover, our findings have practical implications for improving interaction efficiency. Before operators formally use a digital system, they should learn, as thoroughly as possible, the precise meanings of complex or unfamiliar symbols in the relevant context.
Affiliation(s)
- Xing Xiao
- School of Design, Jiangnan University, Wuxi, China
- Rui Li
- School of Design, Jiangnan University, Wuxi, China
- Ruoyu Liang
- School of Design, Jiangnan University, Wuxi, China

32
Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. [PMID: 31617589] [PMCID: PMC7154681] [DOI: 10.1111/nyas.14256] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4]
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from humans and nonhuman animals, using behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye-movement-based applications for the detection of neurodegeneration and the delivery of therapeutic interventions for mental health disorders in which the hippocampus is implicated and memory dysfunctions are at the forefront.
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan

33
Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760] [DOI: 10.1016/j.cognition.2019.104147] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly, as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items that were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item than when they encoded it incidentally. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany

34
Cronin DA, Hall EH, Goold JE, Hayes TR, Henderson JM. Eye Movements in Real-World Scene Photographs: General Characteristics and Effects of Viewing Task. Front Psychol 2020; 10:2915. [PMID: 32010016] [PMCID: PMC6971407] [DOI: 10.3389/fpsyg.2019.02915] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0]
Abstract
The present study examines eye movement behavior in real-world scenes with a large (N = 100) sample. We report baseline measures of eye movement behavior in our sample, including mean fixation duration, saccade amplitude, and initial saccade latency. We also characterize how eye movement behaviors change over the course of a 12 s trial. These baseline measures will be of use to future work studying eye movement behavior in scenes across a variety of literatures. We also examine effects of viewing task on when and where the eyes move in real-world scenes: participants engaged in a memorization task and an aesthetic judgment task while viewing 100 scenes. While we find no difference between the two tasks at the mean level, temporal- and distribution-level analyses reveal significant task-driven differences in eye movement behavior.
Affiliation(s)
- Deborah A. Cronin
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Elizabeth H. Hall
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Jessica E. Goold
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Taylor R. Hayes
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- John M. Henderson
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Department of Psychology, University of California, Davis, Davis, CA, United States

35
Henderson JM. Meaning and attention in scenes. Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
36
Ramzaoui H, Faure S, Spotorno S. Alzheimer's Disease, Visual Search, and Instrumental Activities of Daily Living: A Review and a New Perspective on Attention and Eye Movements. J Alzheimers Dis 2019; 66:901-925. [PMID: 30400086] [DOI: 10.3233/JAD-180043] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0]
Abstract
Many instrumental activities of daily living (IADLs), like cooking and managing finances and medications, involve efficiently finding one or several objects within complex environments in a timely manner. They may thus be disrupted by visual search deficits. These deficits, present in Alzheimer's disease (AD) from its early stages, arise from impairments in multiple attentional and memory mechanisms. A growing body of research on visual search in AD has examined several factors underlying search impairments in simple arrays. Little is known about how AD patients search in real-world scenes and in real settings, and about how such impairments affect patients' functional autonomy. Here, we review studies on visuospatial attention and visual search in AD. We then consider why analysis of patients' oculomotor behavior is promising for improving understanding of the specific search deficits in AD, and of their role in impairing IADL performance. We also highlight why paradigms developed in research on real-world scenes and real settings in healthy individuals are valuable for investigating visual search in AD. Finally, we indicate future research directions that may offer new insights to improve visual search abilities and autonomy in AD patients.
Affiliation(s)
- Hanane Ramzaoui
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sylvane Faure
- Laboratoire d'Anthropologie et de Psychologie Cliniques, Cognitives et Sociales, Université Côte d'Azur, France
- Sara Spotorno
- School of Psychology, University of Aberdeen, UK
- Institute of Neuroscience and Psychology, University of Glasgow, UK

37
Võ MLH, Boettcher SEP, Draschkow D. Reading scenes: how scene grammar guides attention and aids perception in real-world environments. Curr Opin Psychol 2019; 29:205-210. [DOI: 10.1016/j.copsyc.2019.03.009] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3]
38
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. [PMID: 30472539] [PMCID: PMC6513732] [DOI: 10.1016/j.copsyc.2018.11.005] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7]
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Affiliation(s)
- Jeremy M Wolfe (corresponding author)
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, and Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170
- Igor S Utochkin
- National Research University Higher School of Economics, Armyansky per. 4, 101000 Moscow, Russian Federation

39
Parsing rooms: the role of the PPA and RSC in perceiving object relations and spatial layout. Brain Struct Funct 2019; 224:2505-2524. [PMID: 31317256] [PMCID: PMC6698272] [DOI: 10.1007/s00429-019-01901-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8]
Abstract
The perception of a scene involves grasping the global space of the scene, usually called the spatial layout, as well as the objects in the scene and the relations between them. The main brain areas involved in scene perception, the parahippocampal place area (PPA) and retrosplenial cortex (RSC), are thought to mostly support the processing of spatial layout. Here we manipulated the objects and their relations, either by arranging objects within rooms in a common way or by scattering them randomly. The rooms were then varied in spatial layout by keeping or removing the walls of the room, a typical layout manipulation. We then combined a visual search paradigm, where participants actively searched for an object within the room, with multivariate pattern analysis (MVPA). Both left and right PPA were sensitive to the layout properties, but the right PPA was also sensitive to the object relations, even when the information about objects and their relations was used in the cross-categorization procedure on novel stimuli. The left and right RSC were sensitive to both spatial layout and object relations, but could only use the information about object relations for cross-categorization to novel stimuli. These effects were restricted to the PPA and RSC, as other control brain areas did not display the same pattern of results. Our results underline the importance of employing paradigms that require participants to explicitly engage domain-specific processes, and indicate that objects and their relations are processed in the scene areas to a larger extent than previously assumed.
40
Williams CC, Castelhano MS. The Changing Landscape: High-Level Influences on Eye Movement Guidance in Scenes. Vision (Basel) 2019; 3:E33. [PMID: 31735834] [PMCID: PMC6802790] [DOI: 10.3390/vision3030033] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8]
Abstract
The use of eye movements to explore scene processing has exploded over the last decade. Eye movements provide distinct advantages when examining scene processing because they are both fast and spatially measurable. By using eye movements, researchers have investigated many questions about scene processing. Our review focuses on research performed in the last decade examining: (1) attention and eye movements; (2) where you look; (3) influence of task; (4) memory and scene representations; and (5) dynamic scenes and eye movements. Although these topics are typically addressed as separate issues, we argue that the distinctions are now holding back research progress. Instead, it is time to examine how these seemingly separate influences intersect and interact, in order to more completely understand what eye movements can tell us about scene processing.
Affiliation(s)
- Carrick C. Williams
- Department of Psychology, California State University San Marcos, San Marcos, CA 92069, USA

41
Boettcher SEP, Draschkow D, Dienhart E, Võ MLH. Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. J Vis 2018; 18:11. [DOI: 10.1167/18.13.11] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3]
Affiliation(s)
- Dejan Draschkow
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Eric Dienhart
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L.-H. Võ
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany

42
Henderson JM, Hayes TR. Meaning guides attention in real-world scene images: Evidence from eye movements and meaning maps. J Vis 2018; 18:10. [PMID: 30029216] [PMCID: PMC6012218] [DOI: 10.1167/18.6.10] [Citation(s) in RCA: 60] [Impact Index Per Article: 8.6]
Abstract
We compared the influence of meaning and of salience on attentional guidance in scene images. Meaning was captured by "meaning maps" representing the spatial distribution of semantic information in scenes. Meaning maps were coded in a format that could be directly compared to maps of image salience generated from image features. We investigated the degree to which meaning versus image salience predicted human viewers' spatiotemporal distribution of attention over scenes. Extending previous work, here the distribution of attention was operationalized as duration-weighted fixation density. The results showed that both meaning and image salience predicted the duration-weighted distribution of attention, but that when the correlation between meaning and salience was statistically controlled, meaning accounted for unique variance in attention whereas salience did not. This pattern was observed in early as well as late fixations, fixations including and excluding the centers of the scenes, and fixations following short as well as long saccades. The results strongly suggest that meaning guides attention in real-world scenes. We discuss the results from the perspective of a cognitive-relevance theory of attentional guidance.
Affiliation(s)
- John M Henderson
- Center for Mind and Brain, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, CA, USA

43
Abstract
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
Affiliation(s)
- Chia-Ling Li
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA

44
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. [PMID: 29427122] [DOI: 10.3758/s13414-018-1486-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0]
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks, observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account of the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
45
Hannula DE. Attention and long-term memory: Bidirectional interactions and their effects on behavior. Psychology of Learning and Motivation 2018. [DOI: 10.1016/bs.plm.2018.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0]
46
Abstract
The present study investigated the temporal dynamics of object-scene congruity during a categorization task for objects embedded in scenes. Participants (n = 28) categorized objects in scenes as natural or man-made while event-related brain potentials (ERPs) were recorded. The object-scene associations were either congruous (e.g., a tent in a field) or incongruous (e.g., a fridge in a desert). The results confirmed that contextual congruity affects item processing in the 300–500 ms time window, with a larger N300/N400 complex in the incongruous than in the congruous condition. However, unlike previous work, which found an effect of congruity starting at ~250 ms poststimulus over fronto-central regions, the earliest sign of a reliable context congruity effect arose at ~170 ms over left centro-parietal regions in the present study. The present results are in line with those of previous studies showing that object and context are processed in parallel, with continuous interactions from 150 to 500 ms, possibly through feed-forward co-activation of populations of neurons selective to the processing of the object and its context. The present finding provides novel evidence suggesting that online context violations might affect earlier visual processes and the routines of matching between activated scene-congruent schemas and the upcoming information about the item to be processed.
Affiliation(s)
- Fabrice Guillaume
- Laboratoire de Psychologie Cognitive (CNRS UMR 7290), Aix-Marseille Université, Marseille, France
- Sophie Tinard
- Laboratoire de Psychologie Cognitive (CNRS UMR 7290), Aix-Marseille Université, Marseille, France
- Sophia Baier
- Laboratoire d'Anthropologie et de Psychologie Cognitive et Sociale (EA 7278), Université de Nice Sophia Antipolis, Nice, France
- Stéphane Dufau
- Laboratoire de Psychologie Cognitive (CNRS UMR 7290), Aix-Marseille Université, Marseille, France

47
Draschkow D, Võ MLH. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Sci Rep 2017; 7:16471. [PMID: 29184115] [PMCID: PMC5705766] [DOI: 10.1038/s41598-017-16739-x] [Citation(s) in RCA: 59] [Impact Index Per Article: 7.4]
Abstract
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Affiliation(s)
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany

48
Abstract
Humans are remarkably capable of finding desired objects in the world, despite the scale and complexity of naturalistic environments. Broadly, this ability is supported by an interplay between exploratory search and guidance from episodic memory for previously observed target locations. Here we examined how the environment itself may influence this interplay. In particular, we examined how partitions in the environment - like buildings, rooms, and furniture - can impact memory during repeated search. We report that the presence of partitions in a display, independent of item configuration, reliably improves episodic memory for item locations. Repeated search through partitioned displays was faster overall and was characterized by more rapid ballistic orienting in later repetitions. Explicit recall was also both faster and more accurate when displays were partitioned. Finally, we found that search paths were more regular and systematic when displays were partitioned. Given the ubiquity of partitions in real-world environments, these results provide important insights into the mechanisms of naturalistic search and its relation to memory.
49
Meaning in learning: Contextual cueing relies on objects' visual features and not on objects' meaning. Mem Cognit 2017; 46:58-67. [PMID: 28770539] [DOI: 10.3758/s13421-017-0745-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3]
Abstract
People easily learn regularities embedded in the environment and utilize them to facilitate visual search. Using images of real-world objects, recent work has shown that this learning, termed contextual cueing (CC), occurs even in complex, heterogeneous environments, but only when the same distractors are repeated at the same locations. Yet it is not clear what exactly is being learned under these conditions: the visual features of the objects or their meaning. In this study, Experiment 1 demonstrated that meaning is not necessary for this type of learning, as a similar pattern of results was found even when the objects' meaning was largely removed. Experiments 2 and 3 showed that after learning meaningful objects, CC was not diminished by a manipulation that distorted the objects' meaning but preserved most of their visual properties. By contrast, CC was eliminated when the learned objects were replaced with different category exemplars that preserved the objects' meaning but altered their visual properties. Together, these data strongly suggest that the acquired context that facilitates real-world object search relies primarily on the visual properties and the spatial locations of the objects, but not on their meaning.
50
Li CL, Aivar MP, Kit DM, Tong MH, Hayhoe MM. Memory and visual search in naturalistic 2D and 3D environments. J Vis 2017; 16:9. [PMID: 27299769] [PMCID: PMC4913723] [DOI: 10.1167/16.8.9] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4]
Abstract
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting the allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.