1. El Haj M. When you look at your past: Eye movement during autobiographical retrieval. Conscious Cogn 2024; 118:103652. [PMID: 38301389] [DOI: 10.1016/j.concog.2024.103652]
Abstract
Until recently, little was known about whether or how autobiographical memory (i.e., memory for personal information) activates eye movement. This issue is now being addressed by several studies demonstrating not only how autobiographical memory activates eye movement, but also how eye movement influences the characteristics of autobiographical retrieval. This paper summarizes this research and presents a hypothesis according to which fixations and saccades during autobiographical retrieval mirror the construction of the visual image of the retrieved event. On this hypothesis, eye movements during autobiographical retrieval reflect the visual system's attempts to generate and manipulate mental representations of the retrieved event. The hypothesis offers a theoretical framework for a burgeoning area of research that provides a rigorous behavioral evaluation of the phenomenological experience of memory.
2. Martarelli CS, Chiquet S, Ertl M. Keeping track of reality: embedding visual memory in natural behaviour. Memory 2023; 31:1295-1305. [PMID: 37727126] [DOI: 10.1080/09658211.2023.2260148]
Abstract
Since immersive virtual reality (IVR) emerged as a research method in the 1980s, the focus has been on the similarities between IVR and actual reality. In this vein, it has been suggested that IVR methodology might fill the gap between laboratory studies and real life. IVR allows for high internal validity (i.e., a high degree of experimental control and experimental replicability), as well as high external validity by letting participants engage with the environment in an almost natural manner. Despite internal validity being crucial to experimental designs, external validity also matters in terms of the generalizability of results. In this paper, we first highlight and summarise the similarities and differences between IVR, desktop situations (both non-immersive VR and computer experiments), and reality. In the second step, we propose that IVR is a promising tool for visual memory research in terms of investigating the representation of visual information embedded in natural behaviour. We encourage researchers to carry out experiments on both two-dimensional computer screens and in immersive virtual environments to investigate visual memory and validate and replicate the findings. IVR is valuable because of its potential to improve theoretical understanding and increase the psychological relevance of the findings.
Affiliation(s)
- Sandra Chiquet, Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Matthias Ertl, Department of Psychology, University of Bern, Bern, Switzerland
3. Rahavi A, Malaspina M, Albonico A, Barton JJS. "Looking at nothing": An implicit ocular motor index of face recognition in developmental prosopagnosia. Cogn Neuropsychol 2023; 40:59-70. [PMID: 37612792] [DOI: 10.1080/02643294.2023.2250510]
Abstract
Subjects often look towards the previous location of a task-relevant stimulus even when that stimulus is no longer visible. In this study, we asked whether this effect is preserved or reduced in subjects with developmental prosopagnosia. Participants learned faces presented in video clips and then saw a brief montage of four faces, which was replaced by a screen with empty boxes, at which time they indicated whether the learned face had been present in the montage. Control subjects were more likely to look at the blank location where the learned face had appeared, on both hit and miss trials, though the effect was larger on hit trials. Prosopagnosic subjects showed a reduced effect, which was nevertheless still larger on hit than on miss trials. We conclude that explicit accuracy and the implicit looking-at-nothing effect are parallel effects reflecting the strength of the neural activity underlying face recognition.
Affiliation(s)
- Aida Rahavi, Human Vision and Eye Movement Laboratory, Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Manuela Malaspina, Human Vision and Eye Movement Laboratory, Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Andrea Albonico, Human Vision and Eye Movement Laboratory, Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada
- Jason J S Barton, Departments of Medicine (Neurology), University of British Columbia, Vancouver, Canada
4. Chiquet S, Martarelli CS, Mast FW. Imagery-related eye movements in 3D space depend on individual differences in visual object imagery. Sci Rep 2022; 12:14136. [PMID: 35986076] [PMCID: PMC9391428] [DOI: 10.1038/s41598-022-18080-4]
Abstract
During recall of visual information people tend to move their eyes even though there is nothing to see. Previous studies indicated that such eye movements are related to the spatial location of previously seen items on 2D screens, but they also showed that eye movement behavior varies significantly across individuals. The reason for these differences remains unclear. In the present study we used immersive virtual reality to investigate how individual tendencies to process and represent visual information contribute to eye fixation patterns in visual imagery of previously inspected objects in three-dimensional (3D) space. We show that participants also look back to relevant locations when they are free to move in 3D space. Furthermore, we found that looking back to relevant locations depends on individual differences in visual object imagery abilities. We suggest that object visualizers rely less on spatial information because they tend to process and represent the visual information in terms of color and shape rather than in terms of spatial layout. This finding indicates that eye movements during imagery are subject to individual strategies, and the immersive setting in 3D space made individual differences more likely to unfold.
5. Reinstating location improves mnemonic access but not fidelity of visual mental representations. Cortex 2022; 156:39-53. [DOI: 10.1016/j.cortex.2022.08.003]
6. Malaspina M, Albonico A, Rahavi A, Barton JJ. An ocular motor index of rapid face recognition: the 'looking-at-nothing' effect. Brain Res 2022; 1783:147839. [DOI: 10.1016/j.brainres.2022.147839]
7. A consensus-based elastic matching algorithm for mapping recall fixations onto encoding fixations in the looking-at-nothing paradigm. Behav Res Methods 2021; 53:2049-2068. [PMID: 33754324] [PMCID: PMC8516795] [DOI: 10.3758/s13428-020-01513-1]
Abstract
We present an algorithmic method for aligning recall fixations with encoding fixations, to be used in looking-at-nothing paradigms that either record recall eye movements during silence or aim to speed up the analysis of recall data recorded during speech. The method uses a novel consensus-based elastic matching algorithm to estimate which encoding fixations correspond to later recall fixations. It is not a scanpath comparison method: fixation sequence order is ignored and only position configurations are used. The algorithm has three internal parameters and is reasonably stable over a wide range of parameter values. We then evaluate its performance by investigating whether the recalled objects it identifies correspond to independent assessments of which objects in the image are marked as subjectively important. Our results show that the mapped recall fixations align well with important regions of the images. This result is exemplified in four groups of use cases: the roles of low-level visual features, faces, signs and text, and people of different sizes in recall of encoded scenes. The plots from these examples corroborate the finding that the algorithm aligns recall fixations with the most likely important regions of the images. The examples also illustrate how the algorithm can differentiate between image objects that were fixated during silent recall and objects that were not visually attended at recall, even though they were fixated during encoding.
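The position-only mapping idea in this abstract can be illustrated with a toy sketch. The snippet below is a deliberately simplified nearest-neighbour assignment, not the consensus-based elastic matching algorithm itself (which additionally involves elastic deformation and consensus over parameter settings); function names and coordinates are illustrative:

```python
import math

def map_recall_to_encoding(encoding, recall):
    """Assign each recall fixation to its nearest encoding fixation.

    Position-only matching: fixation order is ignored, as in the
    looking-at-nothing mapping described above. Fixations are (x, y)
    tuples in the same coordinate frame. This is a simplified
    stand-in for the paper's algorithm, for illustration only.
    """
    mapping = []
    for rx, ry in recall:
        # Index of the encoding fixation closest to this recall fixation
        best = min(range(len(encoding)),
                   key=lambda i: math.hypot(encoding[i][0] - rx,
                                            encoding[i][1] - ry))
        mapping.append(best)
    return mapping

# Toy example: two encoding fixations, three recall fixations
enc = [(100, 100), (400, 300)]
rec = [(110, 95), (390, 310), (105, 120)]
print(map_recall_to_encoding(enc, rec))  # → [0, 1, 0]
```

Each recall fixation is thus labeled with the encoded region it most plausibly revisits, which is the basic currency of the mapping the abstract evaluates.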
8. Martarelli CS, Mast FW. Pictorial low-level features in mental images: evidence from eye fixations. Psychol Res 2021; 86:350-363. [PMID: 33751199] [DOI: 10.1007/s00426-021-01497-3]
Abstract
It is known that eye movements during object imagery reflect areas visited during encoding. But do eye movements also reflect pictorial low-level features of imagined stimuli? In this paper, three experiments are reported in which we investigate whether low-level properties of mental images elicit specific eye movements. Based on the conceptualization of mental images as depictive representations, we expected low-level visual features to influence eye fixations during mental imagery, in the absence of any visual input. In a first experiment, twenty-five participants performed a visual imagery task with high vs. low spatial frequency and high vs. low contrast gratings. We found that both during visual perception and during mental imagery, first fixations were more often allocated to the low spatial frequency, high contrast grating, showing that eye fixations were influenced not only by the physical properties of visual stimuli but also by their imagined counterparts. In a second experiment, twenty-two participants imagined high contrast and low contrast stimuli that they had not encoded before. Again, participants allocated more fixations to the high contrast mental images than to the low contrast mental images. In a third experiment, we ruled out task difficulty as a confounding variable. Our results reveal that low-level visual features are represented in the mind's eye and thus contribute to characterizing mental images in terms of how much perceptual information is re-instantiated during mental imagery.
Affiliation(s)
- Fred W Mast, Department of Psychology, University of Bern, Bern, Switzerland
9. Gurtner LM, Hartmann M, Mast FW. Eye movements during visual imagery and perception show spatial correspondence but have unique temporal signatures. Cognition 2021; 210:104597. [PMID: 33508576] [DOI: 10.1016/j.cognition.2021.104597]
Abstract
Eye fixation patterns during mental imagery are similar to those during perception of the same picture, suggesting that oculomotor mechanisms play a role in mental imagery (i.e., the "looking at nothing" effect). Previous research has focused on the spatial similarities between eye movements during perception and mental imagery. The primary aim of this study was to assess whether this spatial similarity translates to the temporal domain. We used recurrence quantification analysis (RQA) to assess the temporal structure of eye fixations in visual perception and mental imagery, and we compared the temporal as well as the spatial characteristics of mental imagery with those of perception by means of Bayesian hierarchical regression models. We further investigated how person- and picture-specific characteristics contribute to eye movement behavior in mental imagery. Working memory capacity and mental imagery abilities were assessed either to predict gaze dynamics in visual imagery or to moderate a possible correspondence between spatial or temporal gaze dynamics in perception and mental imagery. We confirmed the spatial similarity of fixations between visual perception and imagery, and we provide the first evidence for its moderation by working memory capacity. Interestingly, the temporal gaze dynamics in mental imagery were unrelated to those in perception, and their variance between participants was not explained by variance in visuo-spatial working memory capacity or in the vividness of mental images. The semantic content of the imagined pictures was the only meaningful predictor of temporal gaze dynamics. The spatial correspondence reflects the shared spatial structure of mental images and perceived pictures, while the unique temporal gaze behavior could be driven by generation, maintenance, and protection processes specific to visual imagery. These unique temporal gaze dynamics offer a window onto the genuine process of mental imagery, independent of its similarity to perception.
Affiliation(s)
- Lilla M Gurtner, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
- Matthias Hartmann, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland; Faculty of Psychology, UniDistance Suisse, Überlandstrasse 12, 3900 Brig, Switzerland
- Fred W Mast, Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
10. Umar H, Mast FW, Cacchione T, Martarelli CS. The prioritization of visuo-spatial associations during mental imagery. Cogn Process 2021; 22:227-237. [PMID: 33404898] [DOI: 10.1007/s10339-020-01010-5]
Abstract
While previous research has shown that during mental imagery participants look back to areas visited during encoding, it is unclear what happens when the information presented during encoding is incongruent. To investigate this question, we presented 30 participants with incongruent audio-visual associations (e.g., the image of a car paired with the sound of a cat) and later asked them to create a congruent mental representation based on the auditory cue (e.g., to create a mental representation of a cat while hearing the sound of a cat). The results revealed that participants spent more time in the areas where they had previously seen the object, and that incongruent audio-visual information during encoding did not appear to interfere with the generation and maintenance of mental images. This finding suggests that eye movements can be flexibly employed during mental imagery depending on the demands of the task.
Affiliation(s)
- Hafidah Umar, Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kelantan, Malaysia; Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian, Kelantan, Malaysia; Department of Psychology, University of Bern, Bern, Switzerland
- Fred W Mast, Department of Psychology, University of Bern, Bern, Switzerland
- Trix Cacchione, Department of Developmental Psychology, School of Education, University of Applied Sciences and Arts Northwestern Switzerland, Windisch, Switzerland
11. Beitner J, Helbing J, Draschkow D, Võ MLH. Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality. Brain Sci 2021; 11:44. [PMID: 33406655] [PMCID: PMC7823740] [DOI: 10.3390/brainsci11010044]
Abstract
Repeated search studies are a hallmark of the investigation of the interplay between memory and attention. Because results are usually averaged, the substantial decrease in response times occurring between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches, yet the nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors produces this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the last condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than in search initiation or decision time, and that it went beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated; this takes a toll on search time at first, but once the priors are activated they can be used to guide subsequent searches.
Affiliation(s)
- Julia Beitner, Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Jason Helbing, Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Dejan Draschkow, Brain and Cognition Laboratory, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Melissa L.-H. Võ, Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
12. Kinjo H, Fooken J, Spering M. Do eye movements enhance visual memory retrieval? Vision Res 2020; 176:80-90. [PMID: 32827879] [DOI: 10.1016/j.visres.2020.07.013]
Abstract
When remembering an object at a given location, participants tend to return their gaze to that location even after the object has disappeared, a phenomenon known as looking-at-nothing (LAN). However, it is unclear whether LAN is associated with better memory performance. Previous studies reporting beneficial effects of LAN have often not systematically manipulated or assessed eye movements. We asked 20 participants to remember the location and identity of eight objects arranged in a circle, shown for 5 s. Participants were prompted to judge whether a location statement (e.g., "Star Right") was correct or incorrect, or referred to a previously unseen object. During memory retrieval, participants either fixated in the screen center or were free to move their eyes. Results revealed no difference in memory accuracy or response time between free viewing and fixation; a LAN effect was found for saccades during free viewing, but not for microsaccades during fixation. Memory performance was better in those free-viewing trials in which participants made a saccade to the critical location, and it scaled with saccade accuracy. These results indicate that saccade kinematics might be related to both memory performance and memory retrieval processes, but that the strength of this link may differ between individuals and task demands.
Affiliation(s)
- Hikari Kinjo, Faculty of Psychology, Meiji Gakuin University, Tokyo, Japan; Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada
- Jolande Fooken, Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada
- Miriam Spering, Dept Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, BC, Canada; Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, BC, Canada; Center for Brain Health, University of British Columbia, Vancouver, BC, Canada
13. Bone MB, St-Laurent M, Dang C, McQuiggan DA, Ryan JD, Buchsbaum BR. Eye Movement Reinstatement and Neural Reactivation During Mental Imagery. Cereb Cortex 2020; 29:1075-1089. [PMID: 29415220] [DOI: 10.1093/cercor/bhy014]
Abstract
Half a century ago, Donald Hebb posited that mental imagery is a constructive process that emulates perception. Specifically, Hebb claimed that visual imagery results from the reactivation of neural activity associated with viewing images. He also argued that neural reactivation and imagery benefit from the re-enactment of eye movement patterns that first occurred at viewing (fixation reinstatement). To investigate these claims, we applied multivariate pattern analyses to functional MRI (fMRI) and eye tracking data collected while healthy human participants repeatedly viewed and visualized complex images. We observed that the specificity of neural reactivation correlated positively with vivid imagery and with memory for stimulus image details. Moreover, neural reactivation correlated positively with fixation reinstatement, meaning that image-specific eye movements accompanied image-specific patterns of brain activity during visualization. These findings support the conception of mental imagery as a simulation of perception, and provide evidence consistent with the supportive role of eye movement in neural reactivation.
Affiliation(s)
- Michael B Bone, Rotman Research Institute at Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Christa Dang, Rotman Research Institute at Baycrest, Toronto, Ontario, Canada
- Jennifer D Ryan, Rotman Research Institute at Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Bradley R Buchsbaum, Rotman Research Institute at Baycrest, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
14. Deitcher Y, Sachar Y, Vakil E. Effect of eye movement reactivation on visual memory among individuals with moderate-to-severe traumatic brain injury (TBI). J Clin Exp Neuropsychol 2019; 42:208-221. [DOI: 10.1080/13803395.2019.1704223]
Affiliation(s)
- Yishai Deitcher, Psychology Department, Bar-Ilan University, Ramat-Gan, Israel
- Yaron Sachar, Brain Injury Rehabilitation, Loewenstein Hospital, Raanana, Israel
- Eli Vakil, Psychology Department, Bar-Ilan University, Ramat-Gan, Israel
15. Bochynska A, Vulchanova M, Vulchanov V, Landau B. Spatial language difficulties reflect the structure of intact spatial representation: Evidence from high-functioning autism. Cogn Psychol 2019; 116:101249. [PMID: 31743869] [DOI: 10.1016/j.cogpsych.2019.101249]
Abstract
Previous studies have shown that the basic properties of the visual representation of space are reflected in spatial language. This close relationship between linguistic and non-linguistic spatial systems has been observed both in typical development and in some developmental disorders. Here we provide novel evidence for structural parallels, along with a degree of autonomy, between these two systems among individuals with Autism Spectrum Disorder, a developmental disorder with uneven cognitive and linguistic profiles. In four experiments, we investigated language and memory for locations organized around an axis-based reference system. Crucially, we also recorded participants' eye movements during the tasks in order to provide new insights into the online processes underlying spatial thinking. Twenty-three intellectually high-functioning individuals with autism (HFA) and twenty-three typically developing controls (TD), all native speakers of Norwegian matched on chronological age and cognitive abilities, participated in the studies. The results revealed a well-preserved axial reference system in HFA and a weakness in the representation of direction within the axis, which was especially evident in spatial language. Performance on the non-linguistic tasks did not differ between HFA and control participants, and we observed clear structural parallels between spatial language and spatial representation in both groups. However, there were some subtle differences in the use of spatial language in HFA compared to TD, suggesting that despite the structural parallels, some aspects of spatial language in HFA deviate from the typical pattern. These findings provide novel insights into the prominence of axial reference systems in non-linguistic spatial representations and spatial language, as well as into the possibility that the two systems are, to some degree, autonomous.
Affiliation(s)
- Agata Bochynska, Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway; Department of Psychology, New York University, New York, NY, USA
- Mila Vulchanova, Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway
- Valentin Vulchanov, Department of Language and Literature, Norwegian University of Science and Technology, NTNU Trondheim, Norway
- Barbara Landau, Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
16. Wynn JS, Shen K, Ryan JD. Eye Movements Actively Reinstate Spatiotemporal Mnemonic Content. Vision (Basel) 2019; 3:E21. [PMID: 31735822] [PMCID: PMC6802778] [DOI: 10.3390/vision3020021]
Abstract
Eye movements support memory encoding by binding distinct elements of the visual world into coherent representations. However, the role of eye movements in memory retrieval is less clear. We propose that eye movements play a functional role in retrieval by reinstating the encoding context. By overtly shifting attention in a manner that broadly recapitulates the spatial locations and temporal order of encoded content, eye movements facilitate access to, and reactivation of, associated details. Such mnemonic gaze reinstatement may be obligatorily recruited when task demands exceed cognitive resources, as is often observed in older adults. We review research linking gaze reinstatement to retrieval, describe the neural integration between the oculomotor and memory systems, and discuss implications for models of oculomotor control, memory, and aging.
Affiliation(s)
- Jordana S. Wynn, Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, 100 St George St., Toronto, ON M5S 3G3, Canada
- Kelly Shen, Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada
- Jennifer D. Ryan, Rotman Research Institute, Baycrest, 3560 Bathurst St., Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, 100 St George St., Toronto, ON M5S 3G3, Canada; Department of Psychiatry, University of Toronto, 250 College St., Toronto, ON M5T 1R8, Canada
17. van Ede F, Chekroud SR, Nobre AC. Human gaze tracks attentional focusing in memorized visual space. Nat Hum Behav 2019; 3:462-470. [PMID: 31089296] [PMCID: PMC6546593] [DOI: 10.1038/s41562-019-0549-y]
Abstract
Brain areas that control gaze are also recruited for covert shifts of spatial attention [1-9]. In the external space of perception, there is a natural ecological link between the control of gaze and spatial attention, as information sampled at covertly attended locations can inform where to look next [2,10,11]. Attention can also be directed internally to representations held within the spatial layout of visual working memory [12-16]. In such cases, the incentive for using attention to direct gaze disappears, as there are no external targets to scan. Here we investigate whether the oculomotor system of the brain also participates in attention focusing within the internal space of memory. Paradoxically, we reveal this participation through gaze behaviour itself. We demonstrate that selecting an item from visual working memory biases gaze in the direction of the memorized location of that item, despite there being nothing to look at and location memory never being explicitly probed. This retrospective 'gaze bias' occurs only when an item is not already in the internal focus of attention, and it predicts the performance benefit associated with the focusing of internal attention. We conclude that the oculomotor system also participates in focusing attention within memorized space, leaving traces all the way to the eyes.
Collapse
Affiliation(s)
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
| | - Sammi R Chekroud
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Anna C Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
| |
Collapse
|
18
|
Gurtner LM, Bischof WF, Mast FW. Recurrence quantification analysis of eye movements during mental imagery. J Vis 2019; 19:17. [PMID: 30699229 DOI: 10.1167/19.1.17] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Several studies have demonstrated similarities between eye fixations during mental imagery and visual perception, but, to our knowledge, the temporal characteristics of eye movements during imagery have not yet been considered in detail. To fill this gap, the same data are analyzed with conventional spatial techniques such as analysis of areas of interest (AOI), ScanMatch, and MultiMatch, and with recurrence quantification analysis (RQA), a new way of analyzing gaze data by tracking re-fixations and their temporal dynamics. Participants viewed and afterwards imagined three different kinds of pictures (art, faces, and landscapes) while their eye movements were recorded. While fixation locations during imagery were related to those during perception, participants returned more often to areas they had previously looked at during imagery, and their scan paths were more clustered and more repetitive than during visual perception. Furthermore, refixations of the same area occurred sooner after the initial fixation during mental imagery. The results highlight not only content-driven spatial similarities between imagery and perception but also shed light on the processes of mental imagery maintenance and on interindividual differences in these processes.
Collapse
Affiliation(s)
- Lilla M Gurtner
- Department of Psychology, University of Bern, Bern, Switzerland
| | - Walter F Bischof
- Department of Psychology, University of British Columbia, Vancouver BC, Canada
| | - Fred W Mast
- Department of Psychology, University of Bern, Bern, Switzerland
| |
Collapse
|
19
|
Less imageable words lead to more looks to blank locations during memory retrieval. Psychol Res 2018; 84:667-684. [PMID: 30173279 PMCID: PMC7109172 DOI: 10.1007/s00426-018-1084-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2017] [Accepted: 08/21/2018] [Indexed: 11/07/2022]
Abstract
People revisit spatial locations of visually encoded information when they are asked to retrieve that information, even when the visual image is no longer present. Such “looking at nothing” during retrieval is likely modulated by memory load (i.e., mental effort to maintain and reconstruct information) and the strength of mental representations. We investigated whether words that are more difficult to remember also lead to more looks to relevant, blank locations. Participants were presented with four nouns on a two-by-two grid. A number of lexico-semantic variables were controlled to form high-difficulty and low-difficulty noun sets. Results reveal more frequent looks to blank locations during retrieval of high-difficulty nouns compared to low-difficulty ones. Mixed-effects modelling demonstrates that imagery-related semantic factors (imageability and concreteness) predict looking at nothing during retrieval. Results provide the first direct evidence that looking at nothing is modulated by word difficulty and, in particular, by word imageability. Overall, the research provides substantial support for the integrated memory account for linguistic stimuli and for looking at nothing as a form of mental imagery.
Collapse
|
20
|
Johansson R, Oren F, Holmqvist K. Gaze patterns reveal how situation models and text representations contribute to episodic text memory. Cognition 2018; 175:53-68. [PMID: 29471198 DOI: 10.1016/j.cognition.2018.02.016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2017] [Revised: 02/12/2018] [Accepted: 02/13/2018] [Indexed: 10/18/2022]
Abstract
When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate on which representational level texts are remembered and the findings provide novel insight into the underlying dynamics in play.
Collapse
Affiliation(s)
| | - Franziska Oren
- Department of Psychology, University of Copenhagen, Denmark.
| | - Kenneth Holmqvist
- UPSET, North-West University Vaal, South Africa; Faculty of Arts, Masaryk University, Brno, Czech Republic.
| |
Collapse
|
21
|
Foster JJ, Bsales EM, Jaffe RJ, Awh E. Alpha-Band Activity Reveals Spontaneous Representations of Spatial Position in Visual Working Memory. Curr Biol 2017; 27:3216-3223.e6. [PMID: 29033335 DOI: 10.1016/j.cub.2017.09.031] [Citation(s) in RCA: 98] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2017] [Revised: 08/10/2017] [Accepted: 09/14/2017] [Indexed: 10/18/2022]
Abstract
An emerging view suggests that spatial position is an integral component of working memory (WM), such that non-spatial features are bound to locations regardless of whether space is relevant [1, 2]. For instance, past work has shown that stimulus position is spontaneously remembered when non-spatial features are stored. Item recognition is enhanced when memoranda appear at the same location where they were encoded [3-5], and accessing non-spatial information elicits shifts of spatial attention to the original position of the stimulus [6, 7]. However, these findings do not establish that a persistent, active representation of stimulus position is maintained in WM, because similar effects have also been documented following storage in long-term memory [8, 9]. Here we show that the spatial position of the memorandum is actively coded by persistent neural activity during a non-spatial WM task. We used a spatial encoding model in conjunction with electroencephalogram (EEG) measurements of oscillatory alpha-band (8-12 Hz) activity to track active representations of spatial position. The position of the stimulus varied from trial to trial but was wholly irrelevant to the tasks. We nevertheless observed active neural representations of the original stimulus position that persisted throughout the retention interval. Further experiments established that these spatial representations depend on the volitional storage of non-spatial features rather than being a lingering effect of sensory energy or initial encoding demands. These findings provide strong evidence that online spatial representations are spontaneously maintained in WM, regardless of task relevance, during the storage of non-spatial features.
Collapse
Affiliation(s)
- Joshua J Foster
- Department of Psychology and Institute for Mind and Biology, University of Chicago, Chicago, IL 60637.
| | - Emma M Bsales
- Department of Psychology and Institute for Mind and Biology, University of Chicago, Chicago, IL 60637
| | - Russell J Jaffe
- Department of Psychology and Institute for Mind and Biology, University of Chicago, Chicago, IL 60637
| | - Edward Awh
- Department of Psychology and Institute for Mind and Biology, University of Chicago, Chicago, IL 60637.
| |
Collapse
|
22
|
Covert shifts of attention can account for the functional role of “eye movements to nothing”. Mem Cognit 2017; 46:230-243. [DOI: 10.3758/s13421-017-0760-x] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
|
23
|
Wantz AL, Lobmaier JS, Mast FW, Senn W. Spatial But Not Oculomotor Information Biases Perceptual Memory: Evidence From Face Perception and Cognitive Modeling. Cogn Sci 2016; 41:1533-1554. [PMID: 27859647 DOI: 10.1111/cogs.12437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2015] [Revised: 02/02/2016] [Accepted: 06/30/2016] [Indexed: 12/01/2022]
Abstract
Recent research has put forward the hypothesis that eye movements are integrated in memory representations and are reactivated when these are later recalled. However, "looking back to nothing" during recall might be a consequence of spatial memory retrieval. Here, we aimed to distinguish between the effects of spatial and oculomotor information on perceptual memory. Participants' task was to judge whether a morph looked more like the first or the second of two previously presented faces. Crucially, faces and morphs were presented in such a way that the morph reactivated oculomotor and/or spatial information associated with one of the previously encoded faces. Perceptual face memory was largely influenced by these manipulations. A simple computational model that expresses these biases as a linear combination of recency, saccade, and location provided an excellent fit (4.3% error). Surprisingly, saccades did not play a role. The results suggest that spatial and temporal, rather than oculomotor, information biases perceptual face memory.
Collapse
Affiliation(s)
- Andrea L Wantz
- Department of Psychology, University of Bern; Center for Cognition, Learning and Memory, University of Bern
| | - Janek S Lobmaier
- Department of Psychology, University of Bern; Center for Cognition, Learning and Memory, University of Bern
| | - Fred W Mast
- Department of Psychology, University of Bern; Center for Cognition, Learning and Memory, University of Bern
| | - Walter Senn
- Center for Cognition, Learning and Memory, University of Bern; Department of Physiology, University of Bern
| |
Collapse
|
24
|
Time in the eye of the beholder: Gaze position reveals spatial-temporal associations during encoding and memory retrieval of future and past. Mem Cognit 2016; 45:40-48. [DOI: 10.3758/s13421-016-0639-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
25
|
Martarelli CS, Chiquet S, Laeng B, Mast FW. Using space to represent categories: insights from gaze position. Psychol Res 2016; 81:721-729. [PMID: 27306547 DOI: 10.1007/s00426-016-0781-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2015] [Accepted: 06/04/2016] [Indexed: 11/30/2022]
Abstract
We investigated the boundaries among imagery, memory, and perception by measuring gaze during retrieved versus imagined visual information. Eye fixations during recall were bound to the location at which a specific stimulus was encoded. However, eye position information generalized to novel objects of the same category that had not been seen before. For example, encoding an image of a dog in a specific location enhanced the likelihood of looking at the same location during subsequent mental imagery of other mammals. The results suggest that eye movements can also be launched by abstract representations of categories and not exclusively by a single episode or a specific visual exemplar.
Collapse
Affiliation(s)
- Corinna S Martarelli
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland; Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland.
| | - Sandra Chiquet
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland; Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
| | - Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
| | - Fred W Mast
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland; Center for Cognition, Learning and Memory, University of Bern, Bern, Switzerland
| |
Collapse
|
26
|
Tracking down the path of memory: eye scanpaths facilitate retrieval of visuospatial information. Cogn Process 2016; 16 Suppl 1:159-63. [PMID: 26259650 PMCID: PMC4553155 DOI: 10.1007/s10339-015-0690-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Recent research points to a crucial role for eye fixations on the same spatial locations where an item appeared during learning in the successful retrieval of stored information (e.g., Laeng et al. in Cognition 131:263-283, 2014. doi:10.1016/j.cognition.2014.01.003). However, evidence about whether the specific temporal sequence (i.e., scanpath) of these eye fixations is also relevant for the accuracy of memory remains unclear. In the current study, eye fixations were recorded while participants looked at a checkerboard-like pattern. In a recognition session (48 h later), animations were shown in which each square that formed the pattern was presented one by one, either according to the same, idiosyncratic, temporal sequence in which the squares were originally viewed by each participant or in a shuffled sequence; in both conditions, the squares were always in their correct positions. Afterward, participants judged whether they had seen the same pattern before or not. Showing the elements serially according to the original scanpath’s sequence yielded significantly better recognition performance than the shuffled condition. In a forced fixation condition, where gaze was maintained on the center of the screen, the memory-accuracy advantage for same versus shuffled scanpaths disappeared. In conclusion, gaze scanpaths (i.e., the order of fixations, not simply their positions) are functional to visual memory, and physically reenacting the original, embodied perception can facilitate retrieval.
Collapse
|
27
|
|
28
|
Exploring the numerical mind by eye-tracking: a special issue. Psychol Res 2016; 80:325-33. [PMID: 26927470 DOI: 10.1007/s00426-016-0759-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2016] [Accepted: 02/11/2016] [Indexed: 12/16/2022]
|
29
|
Wantz AL, Martarelli CS, Mast FW. When looking back to nothing goes back to nothing. Cogn Process 2015; 17:105-14. [DOI: 10.1007/s10339-015-0741-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2015] [Accepted: 10/16/2015] [Indexed: 11/28/2022]
|
30
|
Scholz A, von Helversen B, Rieskamp J. Eye movements reveal memory processes during similarity- and rule-based decision making. Cognition 2015; 136:228-46. [DOI: 10.1016/j.cognition.2014.11.019] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2014] [Revised: 11/13/2014] [Accepted: 11/17/2014] [Indexed: 11/25/2022]
|
31
|
Scholz A, Mehlhorn K, Krems JF. Listen up, eye movements play a role in verbal memory retrieval. Psychol Res 2014; 80:149-58. [DOI: 10.1007/s00426-014-0639-4] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2014] [Accepted: 12/06/2014] [Indexed: 11/24/2022]
|
32
|
Pearson DG, Ball K, Smith DT. Oculomotor preparation as a rehearsal mechanism in spatial working memory. Cognition 2014; 132:416-28. [DOI: 10.1016/j.cognition.2014.05.006] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2013] [Revised: 05/08/2014] [Accepted: 05/10/2014] [Indexed: 12/01/2022]
|
33
|
Eye movements disrupt spatial but not visual mental imagery. Cogn Process 2014; 15:543-9. [DOI: 10.1007/s10339-014-0617-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2014] [Accepted: 04/17/2014] [Indexed: 10/25/2022]
|
34
|
Laeng B, Bloem IM, D’Ascenzo S, Tommasi L. Scrutinizing visual images: The role of gaze in mental imagery and memory. Cognition 2014; 131:263-83. [DOI: 10.1016/j.cognition.2014.01.003] [Citation(s) in RCA: 70] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2012] [Revised: 01/13/2014] [Accepted: 01/16/2014] [Indexed: 10/25/2022]
|
35
|
D'Ascenzo S, Tommasi L, Laeng B. Imagining sex and adapting to it: different aftereffects after perceiving versus imagining faces. Vision Res 2014; 96:45-52. [PMID: 24440811 DOI: 10.1016/j.visres.2014.01.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2013] [Revised: 01/03/2014] [Accepted: 01/07/2014] [Indexed: 10/25/2022]
Abstract
A prolonged exposure (i.e., perceptual adaptation) to a male or a female face can produce changes (i.e., aftereffects) in the subsequent gender attribution of a neutral or average face, so that it appears respectively more female or more male. Studies using imagery adaptation and its aftereffects have yielded conflicting results. In the present study we used an adaptation paradigm with both imagined and perceived faces as adaptors, and assessed the aftereffects in judged masculinity/femininity when viewing an androgynous test face. We monitored eye movements and pupillary responses as a way to confirm whether participants did actively engage in visual imagery. The results indicated that both perceptual and imagery adaptation produce aftereffects, but that they run in opposite directions: a contrast effect with perception (e.g., after visual exposure to a female face, the androgynous appears as more male) and an assimilation effect with imagery (e.g., after imaginative exposure to a female face, the androgynous face appears as more female). The pupillary responses revealed dilations consistent with increased cognitive effort during the imagery phase, suggesting that the assimilation aftereffect occurred in the presence of an active and effortful mental imagery process, as also witnessed by the pattern of eye movements recorded during the imagery adaptation phase.
Collapse
|
36
|
Ball K, Pearson DG, Smith DT. Oculomotor involvement in spatial working memory is task-specific. Cognition 2013; 129:439-46. [DOI: 10.1016/j.cognition.2013.08.006] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2013] [Revised: 06/17/2013] [Accepted: 08/05/2013] [Indexed: 10/26/2022]
|
37
|
Abstract
Research on episodic memory has established that spontaneous eye movements occur to spaces associated with retrieved information even if those spaces are blank at the time of retrieval. Although it has been claimed that such looks to "nothing" can function as facilitatory retrieval cues, there is currently no conclusive evidence for such an effect. In the present study, we addressed this fundamental issue using four direct eye manipulations in the retrieval phase of an episodic memory task: (a) free viewing on a blank screen, (b) maintaining central fixation, (c) looking inside a square congruent with the location of the to-be-recalled objects, and (d) looking inside a square incongruent with the location of the to-be-recalled objects. Our results provide novel evidence of an active and facilitatory role of gaze position during memory retrieval and demonstrate that memory for the spatial relationship between objects is more readily affected than memory for intrinsic object features.
Collapse
|
38
|
Iterative fragmentation of cognitive maps in a visual imagery task. PLoS One 2013; 8:e68560. [PMID: 23874672 PMCID: PMC3714244 DOI: 10.1371/journal.pone.0068560] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2013] [Accepted: 05/30/2013] [Indexed: 11/19/2022] Open
Abstract
It remains unclear whether spontaneous eye movements during visual imagery reflect the mental generation of a visual image (i.e., the arrangement of the component parts of a mental representation). To address this question, we recorded eye movements in an imagery task and in a phonological fluency (non-imagery) task, both consisting of naming French towns from long-term memory. Only in the visual imagery condition did spontaneous eye positions reflect the geographic positions of the towns evoked by the subjects. This demonstrates that eye positions closely reflect the mapping of mental images. Advanced analysis of gaze positions using the bi-dimensional regression model confirmed the spatial correlation between gaze and town locations in every single individual in the visual imagery task, and in none of the individuals when no imagery accompanied memory retrieval. In addition, the evolution of the bi-dimensional regression’s coefficient of determination revealed, in each individual, a process of generating several iterative series of a limited number of towns mapped with the same spatial distortion, despite different individual orders of town evocation and different individual mappings. Such consistency across subjects revealed by gaze (the mind’s eye) gives empirical support to theories postulating that visual imagery, like visual sampling, is an iterative, fragmented process.
Collapse
|