1
Falon SL, Jobson L, Liddell BJ. Does culture moderate the encoding and recognition of negative cues? Evidence from an eye-tracking study. PLoS One 2024; 19:e0295301. [PMID: 38630733] [PMCID: PMC11023573] [DOI: 10.1371/journal.pone.0295301]
Abstract
Cross-cultural research has elucidated many important differences between people from Western European and East Asian cultural backgrounds regarding how each group encodes and consolidates the contents of complex visual stimuli. While Western European groups typically demonstrate a perceptual bias towards centralised information, East Asian groups favour a perceptual bias towards background information. However, this research has largely focused on the perception of neutral cues and thus questions remain regarding cultural group differences in both the perception and recognition of negative, emotionally significant cues. The present study therefore compared Western European (n = 42) and East Asian (n = 40) participants on a free-viewing task and a subsequent memory task utilising negative and neutral social cues. Attentional deployment to the centralised versus background components of negative and neutral social cues was indexed via eye-tracking, and memory was assessed with a cued-recognition task two days later. While both groups demonstrated an attentional bias towards the centralised components of the neutral cues, only the Western European group demonstrated this bias in the case of the negative cues. There were no significant differences observed between the Western European and East Asian groups in terms of memory accuracy, although the Western European group was unexpectedly less sensitive to the centralised components of the negative cues. These findings suggest that culture modulates low-level attentional deployment to negative information, but not higher-level recognition after a temporal interval. This paper is, to our knowledge, the first to concurrently consider the effect of culture on both attentional outcomes and memory for negative and neutral cues.
Affiliation(s)
- Laura Jobson
- School of Psychological Sciences, Monash University, Clayton, Australia
2
Popov T, Staudigl T. Cortico-ocular Coupling in the Service of Episodic Memory Formation. Prog Neurobiol 2023; 227:102476. [PMID: 37268034] [DOI: 10.1016/j.pneurobio.2023.102476]
Abstract
Encoding of visual information is a necessary requirement for most types of episodic memories. In search of a neural signature of memory formation, amplitude modulation of neural activity has repeatedly been shown to correlate with, and has been suggested to be functionally involved in, successful memory encoding. We here report a complementary view on why and how brain activity relates to memory, indicating a functional role of cortico-ocular interactions in episodic memory formation. Recording simultaneous magnetoencephalography and eye tracking in 35 human participants, we demonstrate that gaze variability and amplitude modulations of alpha/beta oscillations (10-20 Hz) in visual cortex covary and predict subsequent memory performance between and within participants. Amplitude variation during the pre-stimulus baseline was associated with gaze direction variability, echoing the co-variation observed during scene encoding. We conclude that encoding of visual information engages unison coupling between oculomotor and visual areas in the service of memory formation.
Affiliation(s)
- Tzvetan Popov
- Methods of Plasticity Research, Department of Psychology, University of Zurich, Zurich, Switzerland; Department of Psychology, University of Konstanz, Konstanz, Germany
- Tobias Staudigl
- Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
3
A proposed attention-based model for spatial memory formation and retrieval. Cogn Process 2022; 24:199-212. [PMID: 36576704] [DOI: 10.1007/s10339-022-01121-1]
Abstract
Animals use sensory information and memory to build internal representations of space. It has been shown that such representations extend beyond the geometry of an environment and also encode rich sensory experiences usually referred to as context. In mammals, contextual inputs from sensory cortices appear to be converging on the hippocampus as a key area for spatial representations and memory. How metric and external sensory inputs (e.g., visual context) are combined into a coherent and stable place representation is not fully understood. Here, I review the evidence of attentional effects along the ventral visual pathway and in the medial temporal lobe and propose an attention-based model for the integration of visual context in spatial representations. I further suggest that attention-based retrieval of spatial memories supports a feedback mechanism that allows consolidation of old memories and new sensory experiences related to the same place, thereby contributing to the stability of spatial representations. The resulting model has the potential to generate new hypotheses to explain complex responses of spatial cells such as place cells in the hippocampus.
4
Johansson R, Nyström M, Dewhurst R, Johansson M. Eye-movement replay supports episodic remembering. Proc Biol Sci 2022; 289:20220964. [PMID: 35703049] [PMCID: PMC9198773] [DOI: 10.1098/rspb.2022.0964]
Abstract
When we bring to mind something we have seen before, our eyes spontaneously unfold in a sequential pattern strikingly similar to that made during the original encounter, even in the absence of supporting visual input. Eye movements may then serve a purpose opposite to that of acquiring new visual information: they may act as self-generated cues, pointing to stored memories. Over 50 years ago, Donald Hebb, the forefather of cognitive neuroscience, posited that such a sequential replay of eye movements supports our ability to mentally recreate visuospatial relations during episodic remembering. However, direct evidence for this influential claim is lacking. Here we isolate the sequential properties of spontaneous eye movements during encoding and retrieval in a pure recall memory task and capture their encoding-retrieval overlap. Critically, we show that the fidelity with which a series of consecutive eye movements from initial encoding is sequentially retained during subsequent retrieval predicts the quality of the recalled memory. Our findings provide direct evidence that such scanpaths are replayed to assemble and reconstruct spatio-temporal relations as we remember and further suggest that distinct scanpath properties contribute differentially depending on the nature of the goal-relevant memory.
5
Ryan JD, Wynn JS, Shen K, Liu ZX. Aging changes the interactions between the oculomotor and memory systems. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2022; 29:418-442. [PMID: 34856890] [DOI: 10.1080/13825585.2021.2007841]
Abstract
The use of multi-modal approaches, particularly in conjunction with multivariate analytic techniques, can enrich models of cognition, brain function, and how they change with age. Recently, multivariate approaches have been applied to the study of eye movements in a manner akin to that of neural activity (i.e., pattern similarity). Here, we review the literature regarding multi-modal and/or multivariate approaches, with specific reference to the use of eye-tracking to characterize age-related changes in memory. By applying multi-modal and multivariate approaches to the study of aging, research has shown that aging is characterized by moment-to-moment alterations in the amount and pattern of visual exploration and, by extension, alterations in the activity and function of the hippocampus and broader medial temporal lobe (MTL). These methodological advances suggest that age-related declines in the integrity of the memory system have consequences for oculomotor behavior in the moment, in a reciprocal fashion. Age-related changes in hippocampal and MTL structure and function may lead to an increase in, and change in the patterns of, visual exploration in an effort to upregulate the encoding of information. However, such visual exploration patterns may be non-optimal and actually reduce the amount and/or type of incoming information that is bound into a lasting memory representation. This research indicates that age-related cognitive impairments are considerably broader in scope than previously realized.
Affiliation(s)
- Jennifer D Ryan
- Rotman Research Institute at Baycrest Health Sciences, Toronto, ON, Canada
- Departments of Psychology and Psychiatry, University of Toronto, Toronto, ON, Canada
- Jordana S Wynn
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Kelly Shen
- Rotman Research Institute at Baycrest Health Sciences, Toronto, ON, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, MI, USA
6
Wynn JS, Liu ZX, Ryan JD. Neural Correlates of Subsequent Memory-Related Gaze Reinstatement. J Cogn Neurosci 2021; 34:1547-1562. [PMID: 34272959] [DOI: 10.1162/jocn_a_01761]
Abstract
Mounting evidence linking gaze reinstatement (the recapitulation of encoding-related gaze patterns during retrieval) to behavioral measures of memory suggests that eye movements play an important role in mnemonic processing. Yet, the nature of the gaze scanpath, including its informational content and neural correlates, has remained in question. In this study, we examined eye movement and neural data from a recognition memory task to further elucidate the behavioral and neural bases of functional gaze reinstatement. Consistent with previous work, gaze reinstatement during retrieval of freely viewed scene images was greater than chance and predictive of recognition memory performance. Gaze reinstatement was also associated with viewing of informationally salient image regions at encoding, suggesting that scanpaths may encode and contain high-level scene content. At the brain level, gaze reinstatement was predicted by encoding-related activity in the occipital pole and basal ganglia (BG), neural regions associated with visual processing and oculomotor control. Finally, cross-voxel brain pattern similarity analysis revealed overlapping subsequent memory and subsequent gaze reinstatement modulation effects in the parahippocampal place area and hippocampus, in addition to the occipital pole and BG. Together, these findings suggest that encoding-related activity in brain regions associated with scene processing, oculomotor control, and memory supports the formation, and subsequent recapitulation, of functional scanpaths. More broadly, these findings lend support to scanpath theory's assertion that eye movements both encode, and are themselves embedded in, mnemonic representations.
Affiliation(s)
- Jennifer D Ryan
- Rotman Research Institute at Baycrest Health Sciences; University of Toronto
7
Suslow T, Günther V, Hensch T, Kersting A, Bodenschatz CM. Alexithymia Is Associated With Deficits in Visual Search for Emotional Faces in Clinical Depression. Front Psychiatry 2021; 12:668019. [PMID: 34267686] [PMCID: PMC8275928] [DOI: 10.3389/fpsyt.2021.668019]
Abstract
Background: The concept of alexithymia is characterized by difficulties identifying and describing one's emotions. Alexithymic individuals are impaired in the recognition of others' emotional facial expressions. Alexithymia is quite common in patients suffering from major depressive disorder. The face-in-the-crowd task is a visual search paradigm that assesses the processing of multiple facial emotions. In the present eye-tracking study, the relationship between alexithymia and visual processing of facial emotions was examined in clinical depression.
Materials and Methods: Gaze behavior and manual response times of 20 alexithymic and 19 non-alexithymic depressed patients were compared in a face-in-the-crowd task. Alexithymia was measured with the 20-item Toronto Alexithymia Scale. Angry, happy, and neutral facial expressions of different individuals were shown as target and distractor stimuli. Our analyses of gaze behavior focused on latency to the target face, number of distractor faces fixated before fixating the target, number of target fixations, and number of distractor faces fixated after fixating the target.
Results: Alexithymic patients exhibited generally slower decision latencies than non-alexithymic patients in the face-in-the-crowd task. The patient groups did not differ in latency to target, number of target fixations, or number of distractors fixated prior to target fixation. However, after having looked at the target, alexithymic patients fixated more distractors than non-alexithymic patients, regardless of expression condition.
Discussion: According to our results, alexithymia is accompanied by impairments in the visual processing of multiple facial emotions in clinical depression. Alexithymia appears to be associated with delayed manual reaction times and prolonged scanning after the first target fixation in depression, but it might have no impact on the early search phase. The observed deficits could indicate difficulties in target identification and/or decision-making when processing multiple emotional facial expressions. The impairments of alexithymic depressed patients in processing emotions in crowds of faces do not appear to be limited to a specific affective valence. In group situations, alexithymic depressed patients might be slower than non-alexithymic depressed patients at processing interindividual differences in emotional expressions. This could represent a disadvantage in understanding non-verbal communication in groups.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Vivien Günther
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Tilman Hensch
- Department of Psychiatry and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Department of Psychology, IU International University of Applied Science, Erfurt, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Charlott Maria Bodenschatz
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany