1. Nikolov TY, Allen RJ, Havelka J, Darling S, van de Vegte B, Morey CC. Navigating the mind's eye: Understanding gaze shifts in visuospatial bootstrapping. Q J Exp Psychol (Hove) 2025; 78:391-404. [PMID: 39225162] [DOI: 10.1177/17470218241282426]
Abstract
Visuospatial bootstrapping refers to the well-replicated phenomenon in which serial recall in a purely verbal task is boosted by presenting digits within the familiar spatial layout of a typical telephone keypad. The phenomenon indicates that additional support comes from long-term knowledge of a fixed spatial pattern, and prior experimentation supports the idea that access to this benefit depends on the availability of the visuospatial motor system. We investigated this by tracking participants' eye movements during encoding and retention of verbal lists, to learn whether gaze patterns support verbal memory differently when verbal information is presented in the familiar visual layout. Participants' gaze was recorded during attempts to recall lists of seven digits presented in three formats: in the centre of the screen, in a typical telephone keypad layout, or in a spatially identical layout with randomised number placement. Performance was better with the typical than with the novel layout. Our data show that eye movements differ when encoding and retaining verbal information presented in a familiar layout compared with the same verbal information presented in a novel layout, suggesting recruitment of different spatial rehearsal strategies. However, no clear link between gaze pattern and recall accuracy was observed, which suggests that gazes play, at best, a limited role in retention.
2. Isasi-Isasmendi A, Andrews C, Flecken M, Laka I, Daum MM, Meyer M, Bickel B, Sauppe S. The Agent Preference in Visual Event Apprehension. Open Mind (Camb) 2023; 7:240-282. [PMID: 37416075] [PMCID: PMC10320828] [DOI: 10.1162/opmi_a_00083]
Abstract
A central aspect of human experience and communication is understanding events in terms of agent ("doer") and patient ("undergoer" of the action) roles. These event roles are rooted in general cognition and prominently encoded in language, with agents appearing more salient and preferred over patients. An unresolved question is whether this preference for agents already operates during apprehension, that is, the earliest stage of event processing, and if so, whether the effect persists across different animacy configurations and task demands. Here we contrast event apprehension in two tasks and two languages that encode agents differently: Basque, a language that explicitly case-marks agents ('ergative'), and Spanish, which does not mark agents. In two brief-exposure experiments, native Basque and Spanish speakers saw pictures for only 300 ms and subsequently described them or answered probe questions about them. We compared eye fixations and behavioral correlates of event role extraction with Bayesian regression. Agents received more attention and were recognized better across languages and tasks. At the same time, language and task demands affected the attention to agents. Our findings show that a general preference for agents exists in event apprehension, but that it can be modulated by task and language demands.
Affiliation(s)
- Arrate Isasi-Isasmendi: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Caroline Andrews: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Monique Flecken: Department of Linguistics, Amsterdam Centre for Language and Communication, University of Amsterdam, Amsterdam, The Netherlands
- Itziar Laka: Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Leioa, Spain
- Moritz M. Daum: Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland; Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Martin Meyer: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland; Cognitive Psychology Unit, University of Klagenfurt, Klagenfurt, Austria
- Balthasar Bickel: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Sebastian Sauppe: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
3. Kinjo H, Fooken J, Spering M. Do eye movements enhance visual memory retrieval? Vision Res 2020; 176:80-90. [PMID: 32827879] [DOI: 10.1016/j.visres.2020.07.013]
Abstract
When remembering an object at a given location, participants tend to return their gaze to that location even after the object has disappeared, a phenomenon known as Looking-at-Nothing (LAN). However, it is unclear whether LAN is associated with better memory performance. Previous studies reporting beneficial effects of LAN have often not systematically manipulated or assessed eye movements. We asked 20 participants to remember the location and identity of eight objects arranged in a circle, shown for 5 s. Participants were prompted to judge whether a location statement (e.g., "Star Right") was correct or incorrect, or referred to a previously unseen object. During memory retrieval, participants either fixated in the screen center or were free to move their eyes. Results reveal no difference in memory accuracy or response time between free viewing and fixation, whereas a LAN effect was found for saccades during free viewing but not for microsaccades during fixation. Memory performance was better in those free-viewing trials in which participants made a saccade to the critical location, and it scaled with saccade accuracy. These results indicate that saccade kinematics might be related to both memory performance and memory retrieval processes, but the strength of this link may differ between individuals and task demands.
Affiliation(s)
- Hikari Kinjo: Faculty of Psychology, Meiji Gakuin University, Tokyo, Japan; Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Jolande Fooken: Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada
- Miriam Spering: Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada; Center for Brain Health, University of British Columbia, Vancouver, BC, Canada
4. Jones MW, Kuipers JR, Nugent S, Miley A, Oppenheim G. Episodic traces and statistical regularities: Paired associate learning in typical and dyslexic readers. Cognition 2018; 177:214-225. [DOI: 10.1016/j.cognition.2018.04.010]
5. Johansson R, Oren F, Holmqvist K. Gaze patterns reveal how situation models and text representations contribute to episodic text memory. Cognition 2018; 175:53-68. [PMID: 29471198] [DOI: 10.1016/j.cognition.2018.02.016]
Abstract
When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of the described events versus a memory representation of the text itself? The present study was designed to address this question by recording the eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations in such gaze patterns overlap to a large degree with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions in which the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of these spatial representations is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate the representational level at which texts are remembered, and the findings provide novel insight into the underlying dynamics at play.
Affiliation(s)
- Franziska Oren: Department of Psychology, University of Copenhagen, Denmark
- Kenneth Holmqvist: UPSET, North-West University Vaal, South Africa; Faculty of Arts, Masaryk University, Brno, Czech Republic