1. Delhaye E, D'Innocenzo G, Raposo A, Coco MI. The upside of cumulative conceptual interference on exemplar-level mnemonic discrimination. Mem Cognit 2024. [PMID: 38709388 DOI: 10.3758/s13421-024-01563-2]
Abstract
Although long-term visual memory (LTVM) has a remarkable capacity, the fidelity of its episodic representations can be influenced by at least two intertwined interference mechanisms during the encoding of objects belonging to the same category: the capacity to hold similar episodic traces (e.g., different birds) and the conceptual similarity of the encoded traces (e.g., a sparrow shares more features with a robin than with a penguin). The precision of episodic traces can be tested by having participants discriminate lures (unseen objects) from targets (seen objects) representing different exemplars of the same concept (e.g., two visually similar penguins), which generates interference at retrieval that can be resolved if efficient pattern separation occurred during encoding. The present study examines the impact of within-category encoding interference on the fidelity of mnemonic object representations by manipulating an index of cumulative conceptual interference that captures the concurrent impact of capacity and similarity. The precision of mnemonic discrimination was further assessed by measuring the impact of visual similarity between targets and lures in a recognition task. Our results show a significant decrement in the correct identification of targets with increasing interference. Correct rejections of lures were also negatively impacted by cumulative interference, as well as by visual similarity with the target. Most interestingly, though, mnemonic discrimination for targets presented with a visually similar lure was more difficult when objects had been encoded under lower, not higher, interference. These findings counter a simply additive account of the impact of interference on the fidelity of object representations, providing a finer-grained, multi-factorial understanding of interference in LTVM.
Affiliation(s)
- Emma Delhaye
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium
- Ana Raposo
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Moreno I Coco
- Department of Psychology, Sapienza University of Rome, Rome, Italy.
- IRCSS Santa Lucia, Roma, Italy.
2. Andrade MÂ, Cipriano M, Raposo A. ObScene database: Semantic congruency norms for 898 pairs of object-scene pictures. Behav Res Methods 2024; 56:3058-3071. [PMID: 37488464 PMCID: PMC11133025 DOI: 10.3758/s13428-023-02181-7]
Abstract
Research on the interaction between object and scene processing has a long history in the fields of perception and visual memory. Most databases have established norms for pictures where the object is embedded in the scene. In this study, we provide a diverse and controlled stimulus set comprising real-world pictures of 375 objects (e.g., suitcase), 245 scenes (e.g., airport), and 898 object-scene pairs (e.g., suitcase-airport), with object and scene presented separately. Our goal was twofold: first, to create a database of object and scene pictures normed for the same variables, yielding comparable measures for both types of pictures; second, to acquire normative data for the semantic relationships between objects and scenes presented separately, which offers more flexibility in the use of the pictures and allows disentangling the processing of the object and its context (the scene). Across three experiments, participants evaluated each object or scene picture on name agreement, familiarity, and visual complexity, and rated object-scene pairs on semantic congruency. A total of 125 septuplets of one scene and six objects (three congruent, three incongruent), and 120 triplets of one object and two scenes (in congruent and incongruent pairings) were built. In future studies, these objects and scenes can be used separately or combined, while controlling for their key features. Additionally, as object-scene pairs received semantic congruency ratings along the entire scale, researchers may select among a wide range of congruency values. ObScene is a comprehensive and ecologically valid database, useful for psychology and neuroscience studies of visual object and scene processing.
Affiliation(s)
- Miguel Ângelo Andrade
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal.
- Margarida Cipriano
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Ana Raposo
- Research Center for Psychological Science, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
3. Barnett B, Andersen LM, Fleming SM, Dijkstra N. Identifying content-invariant neural signatures of perceptual vividness. PNAS Nexus 2024; 3:pgae061. [PMID: 38415219 PMCID: PMC10898512 DOI: 10.1093/pnasnexus/pgae061]
Abstract
Some conscious experiences are more vivid than others. Although perceptual vividness is a key component of human consciousness, how variation in this magnitude property is registered by the human brain is unknown. A striking feature of neural codes for magnitude in other psychological domains, such as number or reward, is that the magnitude property is represented independently of its sensory features. To test whether perceptual vividness also covaries with neural codes that are invariant to sensory content, we reanalyzed existing magnetoencephalography and functional MRI data from two distinct studies which quantified perceptual vividness via subjective ratings of awareness and visibility. Using representational similarity and decoding analyses, we find evidence for content-invariant neural signatures of perceptual vividness distributed across visual, parietal, and frontal cortices. Our findings indicate that the neural correlates of subjective vividness may share similar properties to magnitude codes in other cognitive domains.
Affiliation(s)
- Benjy Barnett
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Lau M Andersen
- Aarhus Institute of Advanced Studies, 8000 Aarhus C, Denmark
- Center of Functionally Integrative Neuroscience, 8000 Aarhus C, Denmark
- Department for Linguistics, Cognitive Science and Semiotics, Aarhus University, 8000 Aarhus C, Denmark
- Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London WC1B 5EH, UK
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
4. Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2023; 7:165. [PMID: 37274451 PMCID: PMC10238820 DOI: 10.12688/wellcomeopenres.17856.2]
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact that real-world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment, a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired-samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence will facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher-level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition and on the different processing stages of object recognition.
Affiliation(s)
- Alexandra Krugliak
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
5. Enge A, Süß F, Abdel Rahman R. Instant Effects of Semantic Information on Visual Perception. J Neurosci 2023; 43:4896-4906. [PMID: 37286353 PMCID: PMC10312055 DOI: 10.1523/jneurosci.2038-22.2023]
Abstract
Does our perception of an object change once we discover what function it serves? We showed human participants (n = 48, 31 females and 17 males) pictures of unfamiliar objects either together with keywords matching their function, leading to semantically informed perception, or together with nonmatching keywords, resulting in uninformed perception. We measured event-related potentials to investigate at which stages in the visual processing hierarchy these two types of object perception differed from one another. We found that semantically informed compared with uninformed perception was associated with larger amplitudes in the N170 component (150-200 ms), reduced amplitudes in the N400 component (400-700 ms), and a late decrease in alpha/beta band power. When the same objects were presented once more without any information, the N400 and event-related power effects persisted, and we also observed enlarged amplitudes in the P1 component (100-150 ms) in response to objects for which semantically informed perception had taken place. Consistent with previous work, this suggests that obtaining semantic information about previously unfamiliar objects alters aspects of their lower-level visual perception (P1 component), higher-level visual perception (N170 component), and semantic processing (N400 component, event-related power). Our study is the first to show that such effects occur instantly after semantic information has been provided for the first time, without requiring extensive learning. SIGNIFICANCE STATEMENT: There has been a long-standing debate about whether or not higher-level cognitive capacities, such as semantic knowledge, can influence lower-level perceptual processing in a top-down fashion. Here we could show, for the first time, that information about the function of previously unfamiliar objects immediately influences cortical processing within less than 200 ms. Of note, this influence does not require training or experience with the objects and related semantic information. Therefore, our study is the first to show effects of cognition on perception while ruling out the possibility that prior knowledge merely acts by preactivating or altering stored visual representations. Instead, this knowledge seems to alter perception online, thus providing a compelling case against the impenetrability of perception by cognition.
Affiliation(s)
- Alexander Enge
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Research Group Learning in Early Childhood, 04103 Leipzig, Germany
- Franziska Süß
- Fachhochschule des Mittelstands, 96050, Bamberg, Germany
- Rasha Abdel Rahman
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Cluster of Excellence "Science of Intelligence," 10587, Berlin, Germany
6. The effects of search-irrelevant working memory content on visual search. Atten Percept Psychophys 2023; 85:293-300. [PMID: 36596986 DOI: 10.3758/s13414-022-02634-9]
Abstract
Previous experiments investigating visual search have shown that distractors that are semantically related to a search target can capture attention and slow the search process. In two experiments, we examine if distractors exactly matching, or semantically related to, search-irrelevant information held in working memory (WM) can also influence visual search while ruling out potential effects of color similarity. Participants first viewed and memorized an image of an everyday object, then they determined if a target item was present or absent in a two-object search array. On exact-match trials, the memorized object appeared as a distractor; on semantic-match trials, an object semantically related to the memorized object appeared as a distractor. Both exact-match and semantic-match distractors slowed search when the target was present in the search array. Our findings extend previous findings by demonstrating WM-driven attentional guidance by complex objects rather than simple features. The results also suggest that visual search can be influenced by distractors sharing only semantic features with a search-irrelevant, but active, WM representation.
7. Pham T, Archibald LMD. The role of working memory loads on immediate and long-term sentence recall. Memory 2023; 31:61-76. [PMID: 36107807 DOI: 10.1080/09658211.2022.2122999]
Abstract
It is well established that both phonological and semantic knowledge influence verbal working memory. However, the focus has primarily been on understanding phonological effects, despite evidence of semantic influences. Articulatory suppression is a well-established task for preventing phonological processing, whereas methods to prevent semantic processing have rarely been used, highlighting the need for a semantic interference task. We therefore conceptualised two novel tasks: an animacy categorisation task and a semantic relatedness judgement task. This study explored the impact of phonological (articulatory suppression) and semantic loads (animacy categorisation and semantic relatedness judgement) on immediate and delayed sentence recall. Additionally, sentence concreteness (concrete vs. abstract sentences) indexed semantic knowledge in verbal working memory. Across two studies, immediate recall revealed that articulatory suppression (preventing phonological processing) increased the size of the concreteness effect, while the novel semantic tasks (preventing semantic processing) reduced it, suggesting that our semantic tasks were indeed imposing a semantic load. Further, relative long-term performance showed that more new words were remembered under articulatory suppression, whereas recall was disproportionately impaired in the semantic relatedness task. Our experimental paradigm offers phonological and semantic suppression tasks that can be used in parallel to investigate the interactions between working memory and language.
Affiliation(s)
- Theresa Pham
- School of Communication Sciences and Disorders, University of Western Ontario, London, Canada
- Lisa M D Archibald
- School of Communication Sciences and Disorders, University of Western Ontario, London, Canada
8. Multisensory synchrony of contextual boundaries affects temporal order memory, but not encoding or recognition. Psychol Res 2023; 87:583-597. [PMID: 35482089 PMCID: PMC9047581 DOI: 10.1007/s00426-022-01682-y]
Abstract
We memorize our daily life experiences, which are often multisensory in nature, by segmenting them into distinct event models, in accordance with perceived contextual or situational changes. However, very little is known about how multisensory boundaries affect segmentation, as most studies have focused on unisensory (visual or auditory) segmentation. In three experiments, we investigated the effect of multisensory boundaries on segmentation in memory and perception. In Experiment 1, participants encoded lists of pictures while audio and visual contexts changed synchronously or asynchronously. After each list, we tested recognition and temporal associative memory for pictures that were encoded in the same audio-visual context or that crossed a synchronous or an asynchronous multisensory change. We found no effect of multisensory synchrony on recognition memory: synchronous and asynchronous changes similarly impaired recognition for pictures encoded at those changes, compared to pictures encoded further away from those changes. Multisensory synchrony did affect temporal associative memory, which was worse for pictures encoded at synchronous than at asynchronous changes. Follow-up experiments showed that this effect was not due to the higher dimensionality of multisensory over unisensory contexts (Experiment 2), nor to the temporal unpredictability of contextual changes inherent to Experiment 1 (Experiment 3). We argue that participants formed situational expectations through multisensory synchronicity, such that synchronous multisensory changes deviated more strongly from those expectations than asynchronous changes. We discuss our findings in light of supportive and conflicting findings on uni- and multisensory segmentation.
9. Aveni K, Ahmed J, Borovsky A, McRae K, Jenkins ME, Sprengel K, Fraser JA, Orange JB, Knowles T, Roberts AC. Predictive language comprehension in Parkinson's disease. PLoS One 2023; 18:e0262504. [PMID: 36753529 PMCID: PMC9907838 DOI: 10.1371/journal.pone.0262504]
Abstract
Verb and action knowledge deficits are reported in persons with Parkinson's disease (PD), even in the absence of dementia or mild cognitive impairment. However, the impact of these deficits on combinatorial semantic processing is less well understood. Following previous verb and action knowledge findings, we tested the hypothesis that PD impairs the ability to integrate event-based thematic fit information during online sentence processing. Specifically, we anticipated that persons with PD with age-typical cognitive abilities would perform more poorly than healthy controls during a visual world paradigm task requiring participants to predict a target object constrained by the thematic fit of the agent-verb combination. Twenty-four PD and 24 healthy age-matched participants completed comprehensive neuropsychological assessments. We recorded participants' eye movements as they heard predictive sentences (The fisherman rocks the boat) alongside target, agent-related, verb-related, and unrelated images. We tested effects of group (PD/control) on gaze using growth curve models. There were no significant differences between PD and control participants, suggesting that PD participants successfully and rapidly use combinatory thematic fit information to predict upcoming language. Baseline sentences with no predictive information (e.g., Look at the drum) confirmed that groups showed equivalent sentence processing and eye movement patterns. Additionally, we conducted an exploratory analysis contrasting PD and controls' performance on low-motion-content versus high-motion-content verbs. This analysis revealed fewer predictive fixations in high-motion sentences only for healthy older adults. PD participants may adapt to their disease by relying on spared, non-action-simulation-based language processing mechanisms, although this conclusion is speculative, as the analysis of high- vs. low-motion items was highly limited by the study design. These findings provide novel evidence that individuals with PD match healthy adults in their ability to use verb meaning to predict upcoming nouns, despite previous findings of verb semantic impairment in PD across a variety of tasks.
Affiliation(s)
- Katharine Aveni
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- Juweiriya Ahmed
- Department of Psychology, Western University, London, ON, Canada
- Arielle Borovsky
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States of America
- Ken McRae
- Department of Psychology, Western University, London, ON, Canada
- Mary E. Jenkins
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Katherine Sprengel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- J. Alexander Fraser
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Ophthalmology, Western University, St. Joseph’s Health Care, London, ON, Canada
- Joseph B. Orange
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
- Canadian Centre for Activity and Aging, Western University, London, ON, Canada
- Thea Knowles
- Department of Psychology, Western University, London, ON, Canada
- Angela C. Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States of America
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
10. Almeida-Antunes N, Vasconcelos M, Crego A, Rodrigues R, Sampaio A, López-Caneda E. Forgetting Alcohol: A Double-Blind, Randomized Controlled Trial Investigating Memory Inhibition Training in Young Binge Drinkers. Front Neurosci 2022; 16:914213. [PMID: 35844233 PMCID: PMC9278062 DOI: 10.3389/fnins.2022.914213]
Abstract
Background: Binge drinking (BD) has been associated with altered inhibitory control and augmented alcohol-cue reactivity. Memory inhibition (MI), the ability to voluntarily suppress unwanted thoughts/memories, may lead to forgetting of memories in several psychiatric conditions. However, despite its potential clinical implications, no study to date has explored MI abilities in populations with substance misuse, such as binge drinkers (BDs). Method: This study—registered in the NIH Clinical Trials Database (ClinicalTrials.gov identifier: NCT05237414)—aims first to examine the behavioral and electroencephalographic (EEG) correlates of MI among college BDs. For this purpose, 45 BDs and 45 age-matched non/low-drinkers (50% female) will be assessed with EEG while performing the Think/No-Think Alcohol task, a paradigm that evaluates alcohol-related MI. Additionally, this work aims to evaluate an alcohol-specific MI intervention protocol using cognitive training (CT) and transcranial direct current stimulation (tDCS), while its effects on behavioral and EEG outcomes are assessed. BDs will be randomly assigned to one MI training group: combined [CT and verum tDCS applied over the right dorsolateral prefrontal cortex (DLPFC)], cognitive (CT and sham tDCS), or control (sham CT and sham tDCS). Training will occur in three sessions over three consecutive days. MI will be re-assessed in BDs through a post-training EEG assessment. Alcohol use and craving will be measured at the first EEG assessment, and both 10 days and 3 months post-training. In addition, behavioral and EEG data will be collected during the performance of an alcohol cue reactivity (ACR) task, which evaluates attentional bias toward alcoholic stimuli, before and after the MI training sessions. Discussion: This study protocol will provide the first behavioral and neurofunctional MI assessment in BDs. Along with poor MI abilities, BDs are expected to show alterations in event-related potentials and functional connectivity patterns associated with MI. Results should also demonstrate the effectiveness of the protocol, with BDs exhibiting an improved capacity to suppress alcohol-related memories after both combined and cognitive training, along with a reduction in alcohol use and craving in the short/medium term. Collectively, these findings might have major implications for the understanding and treatment of alcohol misuse. Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT05237414.
Affiliation(s)
- Natália Almeida-Antunes
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
- Margarida Vasconcelos
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
- Alberto Crego
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
- Rui Rodrigues
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
- Adriana Sampaio
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
- Eduardo López-Caneda
- Psychological Neuroscience Laboratory, Psychology Research Center, University of Minho, Braga, Portugal
11. Cue overlap supports preretrieval selection in episodic memory: ERP evidence. Cogn Affect Behav Neurosci 2022; 22:492-508. [PMID: 34966982 PMCID: PMC9090896 DOI: 10.3758/s13415-021-00971-0]
Abstract
People often want to recall events of a particular kind, but this selective remembering is not always possible. We contrasted two candidate mechanisms: the overlap between retrieval cues and stored memory traces, and the ease of recollection. In two preregistered experiments (Ns = 28), we used event-related potentials (ERPs) to quantify selection occurring before retrieval and the goal states — retrieval orientations — thought to achieve this selection. Participants viewed object pictures or heard object names, and one of these sources was designated as targets in each memory test. We manipulated cue overlap by probing memory with visual names (Experiment 1) or line drawings (Experiment 2). Results revealed that regardless of which source was targeted, the left parietal ERP effect indexing recollection was selective when test cues overlapped more with the targeted than non-targeted information, despite consistently better memory for pictures. ERPs for unstudied items also were more positive-going when cue overlap was high, suggesting that engagement of retrieval orientations reflected availability of external cues matching the targeted source. The data support the view that selection can act before recollection if there is sufficient overlap between retrieval cues and targeted versus competing memory traces.
12. Nicholls VI, Alsbury-Nealy B, Krugliak A, Clarke A. Context effects on object recognition in real-world environments: A study protocol. Wellcome Open Res 2022. [DOI: 10.12688/wellcomeopenres.17856.1]
Abstract
Background: The environments that we live in impact on our ability to recognise objects, with recognition being facilitated when objects appear in expected locations (congruent) compared to unexpected locations (incongruent). However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment. Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency. Conclusions: Based on previous literature, we hypothesise that scene-object congruence would facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher level visual and semantic information will be represented earlier for congruent scenes than incongruent scenes. By collecting mEEG data while participants are exploring a real-world environment, we will be able to determine the impact of a natural context on object recognition, and the different processing stages of object recognition.
13
Abstract
We tend to mentally segment a series of events according to perceptual contextual changes, such that items from a shared context are more strongly associated in memory than items from different contexts. Timing context is also known to provide a scaffold for structuring experiences in memory, but its role in event segmentation has not been investigated. We adapted a previous paradigm, which had been used to investigate event segmentation with visual contexts, to study the effects of changes in timing context on event segmentation in associative memory. In two experiments, we presented lists of 36 items in which the interstimulus intervals (ISIs) changed after every series of six items, ranging between 0.5 and 4 s in 0.5 s steps. After each list, participants judged which of two test items was shown first (temporal order judgment) for items drawn either from the same context (within an ISI) or from consecutive contexts (across ISIs). Furthermore, participants judged from memory whether the ISI associated with an item lasted longer than a standard interval (2.25 s) that had not previously been shown (temporal source memory). Experiment 2 further included a time-item encoding task. Results revealed an effect of timing-context changes on temporal order judgments, with faster responses (Experiment 1) or higher accuracy (Experiment 2) when items were drawn from the same context rather than from across contexts. Furthermore, in both experiments, participants were well able to provide temporal source memory judgments based on recalled durations. Finally, and replicated across experiments, the subjective duration bias, as estimated from psychometric curves fitted to the recalled durations, correlated negatively with within-context temporal order judgments. These findings show that changes in timing context support event segmentation in associative memory.
14
Dijkstra N, van Gaal S, Geerligs L, Bosch SE, van Gerven MAJ. No Evidence for Neural Overlap between Unconsciously Processed and Imagined Stimuli. eNeuro 2021; 8:ENEURO.0228-21.2021. [PMID: 34593516] [PMCID: PMC8577044] [DOI: 10.1523/eneuro.0228-21.2021]
Abstract
Visual representations can be generated via feedforward or feedback processes. The extent to which these processes result in overlapping representations remains unclear. Previous work has shown that imagined stimuli elicit representations similar to those of perceived stimuli throughout the visual cortex. However, while representations during imagery are indeed caused only by feedback processing, neural processing during perception is an interplay of both feedforward and feedback processing. This means that any representational overlap could be due to overlap in feedback processes. In the current study, we aimed to investigate this issue by characterizing the overlap between feedforward- and feedback-initiated category representations during imagery, conscious perception, and unconscious processing, using fMRI in humans of either sex. While all three conditions elicited stimulus representations in left lateral occipital cortex (LOC), significant similarities were observed only between imagery and conscious perception in this area. Furthermore, connectivity analyses revealed stronger connectivity between frontal areas and left LOC during conscious perception and imagery than during unconscious processing. Together, these findings can be explained by the idea that long-range feedback modifies visual representations, thereby reducing the representational overlap between purely feedforward- and purely feedback-initiated stimulus representations as measured by fMRI. Neural representations influenced by feedback, whether stimulus driven (perception) or purely internally driven (imagery), are, however, relatively similar.
Affiliation(s)
- Nadine Dijkstra: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Simon van Gaal: Department of Psychology, Brain & Cognition, University of Amsterdam, 1000 GG, Amsterdam, The Netherlands
- Linda Geerligs: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Sander E Bosch: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Marcel A J van Gerven: Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
15
Šoškić A, Jovanović V, Styles SJ, Kappenman ES, Ković V. How to do Better N400 Studies: Reproducibility, Consistency and Adherence to Research Standards in the Existing Literature. Neuropsychol Rev 2021; 32:577-600. [PMID: 34374003] [PMCID: PMC9381463] [DOI: 10.1007/s11065-021-09513-4]
Abstract
Given the complexity of the ERP recording and processing pipeline, the resulting variety of methodological options, and the potential for these decisions to influence study outcomes, it is important to understand how ERP studies are conducted in practice and to what extent researchers are transparent about their data collection and analysis procedures. This review gives an overview of methodology reporting in a sample of 132 ERP papers published between January 1980 and June 2018 in journals indexed in two large databases, Web of Science and PubMed. Because ERP methodology partly depends on the study design, we focused on a well-established component (the N400) in the most commonly assessed population (healthy neurotypical adults) and in one of its most common modalities (visual images). The review provides insights into 73 properties of study design, data pre-processing, measurement, statistics, visualization of results, and references to supplemental information across studies within the same subfield. For each of the examined methodological decisions, we assessed the degree of consistency, the clarity of reporting, and deviations from best-practice guidelines. Overall, the results show that each study had a unique approach to ERP data recording, processing, and analysis, and that at least some details were missing from every paper. We highlight the most common reporting omissions and deviations from established recommendations, as well as the areas in which there was least consistency. Additionally, we provide guidance for the a priori selection of the N400 measurement window and electrode locations based on the results of previous studies.
Affiliation(s)
- Anđela Šoškić: Teacher Education Faculty, University of Belgrade, Belgrade, Serbia; Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
- Vojislav Jovanović: Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
- Suzy J Styles: Division of Psychology, School of Social Sciences, Nanyang Technological University, Singapore, Singapore; Centre for Research and Development On Learning (CRADLE), Nanyang Technological University, Singapore, Singapore; Singapore Institute for Clinical Sciences (SICS), A*Star Research Entities, Singapore, Singapore
- Emily S Kappenman: Department of Psychology, San Diego State University, San Diego, CA, USA
- Vanja Ković: Laboratory for Neurocognition and Applied Cognition, Department of Psychology, Faculty of Philosophy, University of Belgrade, Belgrade, Serbia
16
Familiarity for entities as a sensitive marker of antero-lateral entorhinal atrophy in amnestic mild cognitive impairment. Cortex 2020; 128:61-72. [DOI: 10.1016/j.cortex.2020.02.022]
17
Too close to call: Spatial distance between options influences choice difficulty. J Exp Soc Psychol 2020. [DOI: 10.1016/j.jesp.2019.103939]
18
Smith CM, Federmeier KD. Neural Signatures of Learning Novel Object-Scene Associations. J Cogn Neurosci 2020; 32:783-803. [PMID: 31933437] [DOI: 10.1162/jocn_a_01530]
Abstract
Objects are perceived within rich visual contexts, and statistical associations may be exploited to facilitate their rapid recognition. Recent work using natural scene-object associations suggests that scenes can prime the visual form of associated objects, but it remains unknown whether this relies on an extended learning process. We asked participants to learn categorically structured associations between novel objects and scenes in a paired-associate memory task while ERPs were recorded. In the test phase, scenes were presented first (2500 msec), followed by objects that matched or mismatched the scene; the degree of contextual mismatch was manipulated along visual and categorical dimensions. Matching objects elicited a reduced N300 response, suggesting visuostructural priming based on recently formed associations. The amplitude of an extended positivity (onset ∼200 msec) was sensitive to the visual distance between the presented object and the contextually associated target object, most likely indexing visual template matching. The results suggest that recent associative memories may be rapidly recruited to facilitate object recognition in a top-down fashion, with clinical implications for populations with impairments in hippocampal-dependent memory and executive function.
19
Semantic and perceptual priming activate partially overlapping brain networks as revealed by direct cortical recordings in humans. Neuroimage 2019; 203:116204. [DOI: 10.1016/j.neuroimage.2019.116204]
20
Hussain Ismail AM, Solomon JA, Hansard M, Mareschal I. A perceptual bias for man-made objects in humans. Proc Biol Sci 2019; 286:20191492. [PMID: 31690239] [PMCID: PMC6842849] [DOI: 10.1098/rspb.2019.1492]
Abstract
Ambiguous images are widely recognized as a valuable tool for probing human perception. Perceptual biases that arise when people make judgements about ambiguous images reveal their expectations about the environment. While perceptual biases in early visual processing have been well established, their existence in higher-level vision has been explored only for faces, which may be processed differently from other objects. Here we developed a new, highly versatile method of creating ambiguous hybrid images comprising two component objects belonging to distinct categories. We used these hybrids to measure perceptual biases in object classification and found that images of man-made (manufactured) objects dominated those of naturally occurring (non-man-made) ones in hybrids. This dominance generalized to a broad range of object categories, persisted when the horizontal and vertical elements that dominate man-made objects were removed, and increased with the real-world size of the manufactured object. Our findings show for the first time that people are perceptually biased towards seeing man-made objects, and suggest that extended exposure to manufactured environments has changed the way that our urban-living participants see the world.
Affiliation(s)
- Ahamed Miflah Hussain Ismail: School of Psychology, University of Nottingham Malaysia, Semenyih 43500, Malaysia; School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Joshua A. Solomon: Centre for Applied Vision Research, City, University of London, London EC1V 0HB, UK
- Miles Hansard: School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Isabelle Mareschal: School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
21
Li B, Gao C, Wang J. Electrophysiological correlates of masked repetition and conceptual priming for visual objects. Brain Behav 2019; 9:e01415. [PMID: 31557425] [PMCID: PMC6790342] [DOI: 10.1002/brb3.1415]
Abstract
BACKGROUND: Previous studies have investigated the time course of visual object processing using event-related potentials (ERPs) and the masked repetition priming paradigm. However, it is unclear how the ERP correlates of masked repetition priming differ from those of masked conceptual priming of visual objects.
METHOD: The present study used semantically related picture pairs of visual objects to compare the ERPs associated with masked repetition and conceptual priming of visual objects.
RESULTS: Masked repetition priming was associated with N/P190 and N400 effects, whereas masked conceptual priming was associated with an N400 effect only. Moreover, the topography of the repetition N/P190 effect differed from that of the repetition and conceptual N400 effects, whereas the topography of the repetition N400 effect was similar to that of the conceptual N400 effect.
CONCLUSIONS: These results indicate that masked repetition and conceptual priming are associated with spatiotemporally distinct ERP effects, and that the N400 to visual objects is sensitive to automatic semantic spreading.
Affiliation(s)
- Bingbing Li: School of Education Science, Jiangsu Normal University, Xuzhou, China
- Chuanji Gao: Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, SC, USA
- Juan Wang: School of Education Science, Jiangsu Normal University, Xuzhou, China
22
Nuthmann A, de Groot F, Huettig F, Olivers CNL. Extrafoveal attentional capture by object semantics. PLoS One 2019; 14:e0217051. [PMID: 31120948] [PMCID: PMC6532879] [DOI: 10.1371/journal.pone.0217051]
Abstract
There is ongoing debate on whether object meaning can be processed outside foveal vision, making semantics available for attentional guidance. Much of the debate has centred on whether objects that do not fit within an overall scene draw attention, in complex displays that are often difficult to control. Here, we revisited the question by reanalysing data from three experiments that used displays consisting of standalone objects from a carefully controlled stimulus set. Observers searched for a target object, as per auditory instruction. On the critical trials, the displays contained no target but objects that were semantically related to the target, visually related, or unrelated. Analyses using (generalized) linear mixed-effects models showed that, although visually related objects attracted most attention, semantically related objects were also fixated earlier in time than unrelated objects. Moreover, semantic matches affected the very first saccade in the display. The amplitudes of saccades that first entered semantically related objects were larger than 5° on average, confirming that object semantics is available outside foveal vision. Finally, there was no semantic capture of attention for the same objects when observers did not actively look for the target, confirming that it was not stimulus-driven. We discuss the implications for existing models of visual cognition.
Affiliation(s)
- Antje Nuthmann: Psychology Department, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, United Kingdom; Institute of Psychology, University of Kiel, Kiel, Germany
- Floor de Groot: Department of Experimental and Applied Psychology & Institute for Brain and Behaviour, Vrije Universiteit, Amsterdam, The Netherlands
- Falk Huettig: Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Christian N. L. Olivers: Department of Experimental and Applied Psychology & Institute for Brain and Behaviour, Vrije Universiteit, Amsterdam, The Netherlands
23
Blechert J, Lender A, Polk S, Busch NA, Ohla K. Food-Pics_Extended-An Image Database for Experimental Research on Eating and Appetite: Additional Images, Normative Ratings and an Updated Review. Front Psychol 2019; 10:307. [PMID: 30899232] [PMCID: PMC6416180] [DOI: 10.3389/fpsyg.2019.00307]
Abstract
Our current environment is characterized by the omnipresence of food cues. The taste and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example by eliciting food craving and anticipatory cephalic-phase responses. To facilitate research into this so-called cue reactivity, several groups have compiled standardized food image sets. Yet selecting the best subset of images for a specific research question can be difficult, as images and image sets vary along several dimensions. In the present report, we review the strengths and weaknesses of popular food image sets to guide researchers during stimulus selection. Furthermore, we present a recent extension of our previously published database food-pics, which comprises an additional 328 food images from different countries to increase cross-cultural applicability. This food-pics_extended stimulus database thus encompasses and replaces food-pics. Normative data from a predominantly German-speaking sample are again presented, along with updated calculations of image characteristics.
Affiliation(s)
- Jens Blechert: Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Anja Lender: Department of Psychology, University of Salzburg, Salzburg, Austria; Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Sarah Polk: Department of Psychology and Education, Free University of Berlin, Berlin, Germany
- Niko A Busch: Institute of Psychology, University of Münster, Münster, Germany
- Kathrin Ohla: Research Center Jülich, Institute of Neuroscience and Medicine (INM-3), Cognitive Neuroscience, Jülich, Germany
24
Not all perceptual difficulties lower memory predictions: Testing the perceptual fluency hypothesis with rotated and inverted object images. Mem Cognit 2019; 47:906-922. [PMID: 30790210] [DOI: 10.3758/s13421-019-00907-7]
Abstract
Studies typically show that perceptual difficulties at the time of encoding lower memory predictions. One potential exception is the inverted-word manipulation, in which participants produce equivalent memory predictions for upright and inverted words despite higher free-recall performance for the inverted words (Sungkhasettee, Friedman, & Castel in Psychonomic Bulletin & Review, 18, 973-978, 2011). In the present set of experiments, we aimed to disentangle the contributions of online perceptual difficulties and a priori beliefs through two disfluency manipulations conceptually similar to the inverted-word manipulation: inversion and canonicity. The inversion manipulation involved presenting upright and inverted object images, whereas the canonicity manipulation involved presenting objects from frequent (canonical) or infrequent (noncanonical) viewing perspectives. Memory predictions were made either on an item-by-item basis or aggregately. In all studies, perceptual identification latencies were slower for inverted and noncanonical items than for upright and canonical items, respectively. In the experiments with item-by-item memory predictions, predictions did not differ significantly across encoding conditions. In contrast, in the experiments using aggregate memory predictions, fluent items produced higher memory predictions than disfluent items. These results show that, in certain cases, participants may not take online objective perceptual difficulties into account. Moreover, item-by-item and aggregate memory predictions produce different patterns, providing evidence of a dissociation between the two types of predictions. The results are discussed in light of theories that rely on objective perceptual fluency differences across encoding conditions versus theories that rely on participants' a priori beliefs about fluency.
25
López-Caneda E, Crego A, Campos AD, González-Villar A, Sampaio A. The Think/No-Think Alcohol Task: A New Paradigm for Assessing Memory Suppression in Alcohol-Related Contexts. Alcohol Clin Exp Res 2018; 43:36-47. [DOI: 10.1111/acer.13916]
Affiliation(s)
- Eduardo López-Caneda: Psychological Neuroscience Lab, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Alberto Crego: Psychological Neuroscience Lab, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Ana D. Campos: Human Cognition Lab, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Alberto González-Villar: Department of Clinical Psychology and Psychobiology, University of Santiago de Compostela, Galicia, Spain
- Adriana Sampaio: Psychological Neuroscience Lab, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
26
Draschkow D, Heikel E, Võ MLH, Fiebach CJ, Sassenhagen J. No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing. Neuropsychologia 2018; 120:9-17. [DOI: 10.1016/j.neuropsychologia.2018.09.016]
27
Delhaye E, Bahri MA, Salmon E, Bastin C. Impaired perceptual integration and memory for unitized representations are associated with perirhinal cortex atrophy in Alzheimer's disease. Neurobiol Aging 2018; 73:135-144. [PMID: 30342274] [DOI: 10.1016/j.neurobiolaging.2018.09.021]
Abstract
Unitization, the capacity to encode associations as one integrated entity, can enhance associative memory in populations with an associative memory deficit by promoting familiarity-based associative recognition. Patients with Alzheimer's disease (AD) are typically impaired in associative memory compared with healthy controls but do not benefit from unitization strategies. Using fragmented pictures of objects, this study aimed to assess which of the cognitive processes that compose unitization is actually affected in AD: the retrieval of unitized representations itself, or an earlier stage of processing, such as the integration process at a perceptual or conceptual level of representation. We also aimed to relate patients' object unitization capacity to the integrity of their perirhinal cortex (PrC), as the PrC is thought to underlie unitization and is also one of the first regions affected in AD. We evaluated perceptual integration capacity, and subsequent memory for the items that had supposedly been unitized, in 23 patients with mild AD and 20 controls. We systematically manipulated the level of perceptual integration during encoding by presenting object pictures that were either left intact, separated into two fragments, or separated into four fragments. Subjects were instructed to unitize the fragments into a single representation, and success of integration was assessed by a question requiring identification of the object. Participants also underwent a structural magnetic resonance imaging examination, from which measures of PrC and posterior cingulate cortex volume and thickness, and of hippocampal volume, were extracted. The results showed that patients' perceptual integration performance decreased with increasing fragmentation and that their memory for unitized representations was impaired whatever the perceptual integration demands at encoding. Both perceptual integration and memory for unitized representations were related to the integrity of the PrC, and memory for unitized representations was also related to the volume of the hippocampus. We argue that, globally, this supports representational theories of memory, which hold that the role of the PrC is neither purely perceptual nor purely mnemonic, but rather that it underlies complex object representation.
Affiliation(s)
- Emma Delhaye: GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium; PsyNCog, Faculty of Psychology, University of Liège, Liège, Belgium
- Eric Salmon: GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium; PsyNCog, Faculty of Psychology, University of Liège, Liège, Belgium; Memory Clinic, CHU Liège, University of Liège, Liège, Belgium
- Christine Bastin: GIGA-CRC In-Vivo Imaging, University of Liège, Liège, Belgium; PsyNCog, Faculty of Psychology, University of Liège, Liège, Belgium
28
Conte S, Brenna V, Ricciardelli P, Turati C. The nature and emotional valence of a prime influences the processing of emotional faces in adults and children. Int J Behav Dev 2018. [DOI: 10.1177/0165025418761815]
Abstract
A large body of research has investigated both the emotional processing of facial stimuli in adults and the development of children's recognition of emotional expressions. Yet it is still not clear whether children's ability to recognize an emotional face can be modulated by prior exposure to a different face, and whether an emotional expression exerts an effect on the processing of subsequently encountered facial emotional expressions. In three experiments using an affective priming task with adults and 7- and 5-year-old children, we tested the recognition of happy and angry target faces preceded by neutral faces or objects (Experiment 1) or by happy or angry faces (Experiments 2A and 2B). Results showed a standard prime effect for neutral faces (Experiment 1) in all participants, and for happy faces in children (Experiment 2A) and adults (Experiment 2B). In contrast, angry faces elicited negative priming effects in all participants (Experiment 2A). Overall, our findings show that both prior exposure to a face per se and the emotional valence of the prime face affect the subsequent processing of facial emotional information. Implications for emotional processing are discussed.
29
Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018; 33:1092-1106. [PMID: 30662923] [PMCID: PMC6335044] [DOI: 10.1080/23273798.2018.1446543]
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
Affiliation(s)
- Gabriela Meade: Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Brittany Lee: Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J. Holcomb: Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey: School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
30
Draschkow D, Võ MLH. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Sci Rep 2017; 7:16471. [PMID: 29184115] [PMCID: PMC5705766] [DOI: 10.1038/s41598-017-16739-x]
Abstract
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Affiliation(s)
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany.
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
31
Delhaye E, Bastin C, Moulin CJ, Besson G, Barbeau EJ. Bridging novelty and familiarity-based recognition memory: A matter of timing. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1362090] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 10/18/2022]
Affiliation(s)
- Emma Delhaye
- Brain and Cognition Research Center, University of Toulouse, Toulouse, France
- GICA-Cyclotron Research Center, Université de Liège, Liège, Belgium
- Christine Bastin
- GICA-Cyclotron Research Center, Université de Liège, Liège, Belgium
- Christopher J.A. Moulin
- Laboratory of Psychology & NeuroCognition (CNRS UMR 5105), University of Grenoble Alpes, Grenoble, France
- Gabriel Besson
- GICA-Cyclotron Research Center, Université de Liège, Liège, Belgium
- Emmanuel J. Barbeau
- Brain and Cognition Research Center, University of Toulouse, Toulouse, France
32
Samar VJ, Berger L. Does a Flatter General Gradient of Visual Attention Explain Peripheral Advantages and Central Deficits in Deaf Adults? Front Psychol 2017; 8:713. [PMID: 28559861 PMCID: PMC5433326 DOI: 10.3389/fpsyg.2017.00713] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Received: 11/20/2016] [Accepted: 04/21/2017] [Indexed: 11/13/2022]
Abstract
Individuals deaf from early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants but not CI participants showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group's central identification deficit remained after statistically controlling peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling central identification accuracy. 
These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals.
Affiliation(s)
- Vincent J Samar
- NTID Department of Liberal Studies, Rochester Institute of Technology, Rochester, NY, USA
- Lauren Berger
- PhD Program in Educational Neuroscience, Gallaudet University, Washington, DC, USA
33
Schweitzer R, Trapp S, Bar M. Associated Information Increases Subjective Perception of Duration. Perception 2017; 46:1000-1007. [PMID: 28084904 DOI: 10.1177/0301006616689579] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 11/16/2022]
Abstract
Our sense of time is prone to various biases. For instance, one factor that can dilate an event's perceived duration is the violation of predictions, as when a series of repeated stimuli is interrupted by an unpredictable oddball. On the other hand, when the probability of a repetition itself is manipulated, predictable conditions can also increase estimated duration. This suggests that manipulations of expectations have different or even opposing effects on time perception. In previous studies, expectations were generated because stimuli were repeated or because the likelihood of a sequence or a repetition was varied. In the natural environment, however, expectations are often built via associative processes; for example, the context of a kitchen promotes the expectation of plates, appliances, and other associated objects. Here, we manipulated such association-based expectations by using oddballs that were either contextually associated or nonassociated with the standard items. We find that duration was more strongly overestimated for contextually associated oddballs. We reason that top-down attention is biased toward associated information and thereby dilates subjective duration for associated oddballs. Based on this finding, we propose an interplay between top-down attention and predictive processing in the perception of time.
Affiliation(s)
- Richard Schweitzer
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Sabrina Trapp
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel; Psychology Department, Ludwig-Maximilians-University, Munich, Germany
- Moshe Bar
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan, Israel
34
Kinateder M, Warren WH. Social Influence on Evacuation Behavior in Real and Virtual Environments. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00043] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Indexed: 11/13/2022]
35
Ball F, Bernasconi F, Busch NA. Semantic Relations between Visual Objects Can Be Unconsciously Processed but Not Reported under Change Blindness. J Cogn Neurosci 2015; 27:2253-68. [DOI: 10.1162/jocn_a_00860] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Indexed: 11/04/2022]
Abstract
Change blindness—the failure to detect changes in visual scenes—has often been interpreted as a result of impoverished visual information encoding or as a failure to compare the prechange and postchange scene. In the present electroencephalography study, we investigated whether semantic features of prechange and postchange information are processed unconsciously, even when observers are unaware that a change has occurred. We presented scenes composed of natural objects in which one object changed from one presentation to the next. Object changes were either semantically related (e.g., rail car changed to rail) or unrelated (e.g., rail car changed to sausage). Observers were first asked to detect whether any change had occurred and then to judge the semantic relation of the two objects involved in the change. We found a semantic mismatch ERP effect, that is, a more negative-going ERP for semantically unrelated compared to related changes, originating from a cortical network including the left middle temporal gyrus and occipital cortex and resembling the N400 effect, albeit at longer latencies. Importantly, this semantic mismatch effect persisted even when observers were unaware of the change and the semantic relationship of prechange and postchange object. This finding implies that change blindness does not preclude the encoding of the prechange and postchange objects' identities and possibly even the comparison of their semantic content. Thus, change blindness cannot be interpreted as resulting from impoverished or volatile visual representations or as a failure to process the prechange and postchange object. Instead, change detection appears to be limited at a later, postperceptual stage.
Collapse
Affiliation(s)
- Felix Ball
- Charité University Medicine, Berlin, Germany
- Humboldt-University, Berlin, Germany
- Fosco Bernasconi
- Charité University Medicine, Berlin, Germany
- Humboldt-University, Berlin, Germany
- Niko A. Busch
- Charité University Medicine, Berlin, Germany
- Humboldt-University, Berlin, Germany
36
de Groot F, Koelewijn T, Huettig F, Olivers CN. A stimulus set of words and pictures matched for visual and semantic similarity. Journal of Cognitive Psychology 2015. [DOI: 10.1080/20445911.2015.1101119] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Indexed: 10/22/2022]
37
Buffat S, Chastres V, Bichot A, Rider D, Benmussa F, Lorenceau J. OB3D, a new set of 3D objects available for research: a web-based study. Front Psychol 2014; 5:1062. [PMID: 25339920 PMCID: PMC4186308 DOI: 10.3389/fpsyg.2014.01062] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 04/20/2014] [Accepted: 09/04/2014] [Indexed: 11/13/2022]
Abstract
Studying object recognition is central to fundamental and clinical research on cognitive functions, but it suffers from the limitations of the available stimulus sets, which cannot always be modified and adapted to meet the specific goals of each study. Here we present OB3D, a new set of 3D scans of real objects available online as ASCII files. These files are lists of dots, each defined by a triplet of spatial coordinates and its normal, which allow simple and highly versatile transformations and adaptations. We performed a web-based experiment to evaluate the minimal number of dots required for the naming and categorization of these objects, thus providing a reference threshold. We further analyze several other variables derived from this data set, such as the correlations with object complexity. This new stimulus set, which was found to activate the Lateral Occipital Complex (LOC) in another study, may be of interest for studies of cognitive functions in healthy participants and in patients with cognitive impairments, including visual perception, language, and memory.
Affiliation(s)
- Stéphane Buffat
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France; Cognition and Action Group, Cognac G, Service de Santé des Armées, Centre National de la Recherche Scientifique, Université Paris Descartes, Unités Mixtes de Recherche-MD 4 - 8257, Paris, France
- Véronique Chastres
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France
- Alain Bichot
- Département Action et Cognition en Situation Opérationnelle, Institut de Recherche Biomédicale des Armées, Brétigny, France
- Delphine Rider
- Centre National de la Recherche Scientifique, Unités Mixtes de Service Relais d'Information sur les Sciences de la Cognition 3332, Paris, France
- Frédéric Benmussa
- Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, Unités Mixtes de Recherche-8248, Centre National de la Recherche Scientifique, École Normale Supérieure, Paris, France
- Jean Lorenceau
- Centre National de la Recherche Scientifique, Unités Mixtes de Service Relais d'Information sur les Sciences de la Cognition 3332, Paris, France; Laboratoire des Systèmes Perceptifs, Département d'études Cognitives, Unités Mixtes de Recherche-8248, Centre National de la Recherche Scientifique, École Normale Supérieure, Paris, France
38
Blechert J, Meule A, Busch NA, Ohla K. Food-pics: an image database for experimental research on eating and appetite. Front Psychol 2014; 5:617. [PMID: 25009514 PMCID: PMC4067906 DOI: 10.3389/fpsyg.2014.00617] [Citation(s) in RCA: 338] [Impact Index Per Article: 33.8] [Received: 03/19/2014] [Accepted: 05/31/2014] [Indexed: 01/17/2023]
Abstract
Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The relevance of visual food cues to human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies in this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability, and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under a Creative Commons license, with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.
Affiliation(s)
- Jens Blechert
- Division of Clinical Psychology, Psychotherapy and Health Psychology, University of Salzburg, Salzburg, Austria
- Adrian Meule
- Institute of Psychology, University of Würzburg, Würzburg, Germany; Hospital for Child and Adolescent Psychiatry, LWL University Hospital of the Ruhr University Bochum, Hamm, Germany
- Niko A Busch
- Institute of Medical Psychology, Charité-Universitätsmedizin Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Kathrin Ohla
- Section Psychophysiology, Department of Molecular Genetics, German Institute of Human Nutrition Potsdam-Rehbrücke, Nuthetal, Germany
39
A Tutorial on Data-Driven Methods for Statistically Assessing ERP Topographies. Brain Topogr 2013; 27:72-83. [DOI: 10.1007/s10548-013-0310-1] [Citation(s) in RCA: 75] [Impact Index Per Article: 6.8] [Received: 04/02/2012] [Accepted: 08/14/2013] [Indexed: 10/26/2022]