1. Singhal I, Srivastava N. Dynamics of mental imagery. Conscious Cogn 2025; 131:103865. [PMID: 40222266] [DOI: 10.1016/j.concog.2025.103865]
Abstract
The phenomenology of mental imagery can reveal the structure of underlying mental representations, yet progress has been limited by its private nature. Using a phenomenology-recreation task, we elucidate the dynamics of mental imagery: specifically, the temporal grain, the speed of object manipulation, the smoothness with which contents unfold, and the temporal extent over which imagined contents remain stable. To gauge these properties, we asked a large cohort of participants (N = 827) to recreate these aspects of their imagination in six tasks. Results showed that temporal features of imagination unfold at distinct timescales, although a factor analysis showed that variance across these tasks could be accounted for by two factors: temporal ability and stability of mental imagery. Additionally, we contrast these regularities with those documented for visual perception, showing that imagined contents are sluggish but more stable than percepts. However, imagination and perception share a common constraint: both maintain identically sized temporal windows of conscious experience.
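As a rough illustration of the two-factor solution described above, the sketch below (simulated scores, invented loadings; not the authors' analysis code) fits an exploratory factor analysis to six task scores with scikit-learn.

```python
# A rough illustration, not the authors' analysis code: fit a two-factor model to
# six simulated imagery-task scores. The loadings and noise level are invented.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants = 827                    # cohort size reported in the abstract
latent = rng.normal(size=(n_participants, 2))               # "temporal ability", "stability"
loadings = np.array([[0.8, 0.1], [0.7, 0.2], [0.9, 0.0],    # three temporally loaded tasks
                     [0.1, 0.8], [0.2, 0.7], [0.0, 0.9]])   # three stability-loaded tasks
scores = latent @ loadings.T + rng.normal(scale=0.5, size=(n_participants, 6))

X = StandardScaler().fit_transform(scores)
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
print(np.round(fa.components_, 2))      # rows = factors, columns = the six tasks
```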
Affiliation(s)
- Ishan Singhal
- Department of Cognitive Science, Indian Institute of Technology, Kanpur, India; Centre for Developing Intelligent Systems, Indian Institute of Technology, Kanpur, India.
- Nisheeth Srivastava
- Department of Cognitive Science, Indian Institute of Technology, Kanpur, India; Centre for Developing Intelligent Systems, Indian Institute of Technology, Kanpur, India
2. Linde‐Domingo J, Kerrén C. Evolving Engrams Demand Changes in Effective Cues. Hippocampus 2025; 35:e70015. [PMID: 40331490] [PMCID: PMC12056888] [DOI: 10.1002/hipo.70015]
Abstract
A longstanding principle in episodic memory research, known as the encoding specificity hypothesis, holds that an effective retrieval cue should closely match the original encoding conditions. This principle assumes that a successful retrieval cue remains static over time. Despite the broad acceptance of this idea, it conflicts with one of the most well-established findings in memory research: the dynamic and ever-changing nature of episodic memories. In this article, we propose that the most effective retrieval cue should engage with the current state of the memory, which may have shifted significantly since encoding. By redefining the criteria for successful recall, we challenge a core principle of the field and open new avenues for exploring memory accessibility, offering fresh insights into both theoretical and applied domains.
Affiliation(s)
- Juan Linde‐Domingo
- Department of Experimental Psychology, University of Granada, Granada, Spain
- Mind, Brain and Behavior Research Center, University of Granada, Granada, Spain
- Casper Kerrén
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
3. Mantegna F, Olivetti E, Schwedhelm P, Baldauf D. Covariance-based decoding reveals a category-specific functional connectivity network for imagined visual objects. Neuroimage 2025; 311:121171. [PMID: 40139516] [DOI: 10.1016/j.neuroimage.2025.121171]
Abstract
The coordination of different brain regions is required for the visual imagery of complex objects (e.g., faces and places). Short-range connectivity within sensory areas is necessary to construct the mental image, while long-range connectivity between control and sensory areas is necessary to re-instantiate and maintain it. Although dynamic changes in functional connectivity are expected during visual imagery, it is unclear whether a category-specific network exists in which the strength and the spatial destination of the connections vary depending on the imagery target. In this magnetoencephalography study, we used a minimally constrained experimental paradigm wherein imagery categories were prompted using visual word cues only, and we decoded face versus place imagery based on the underlying functional connectivity patterns, estimated from the spatial covariance across brain regions. A subnetwork analysis further disentangled the contribution of different connections. The results show that face and place imagery can be decoded from both short-range and long-range connections. Overall, these findings indicate that imagined object categories can be distinguished based on functional connectivity patterns observed in a category-specific network. Notably, the functional connectivity estimates rely on purely endogenous brain signals, suggesting that an external reference is not necessary to elicit such category-specific network dynamics.
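The covariance-based decoding idea can be sketched as follows. This is an illustrative toy example on simulated region time courses, not the study's MEG pipeline; the category labels and coupling effect are placeholders.

```python
# An illustrative toy example of covariance-based decoding on simulated region
# time courses; it is not the study's MEG pipeline, and the labels are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_trials, n_regions, n_times = 200, 20, 300
labels = rng.integers(0, 2, n_trials)              # 0 = "face" imagery, 1 = "place" imagery

def simulate_trial(label):
    data = rng.normal(size=(n_regions, n_times))
    if label == 1:                                 # inject extra coupling between two regions
        data[1] += 0.8 * data[0]
    return data

trials = np.stack([simulate_trial(lab) for lab in labels])

# Feature vector: upper triangle of each trial's spatial covariance matrix
iu = np.triu_indices(n_regions, k=1)
features = np.stack([np.cov(tr)[iu] for tr in trials])

acc = cross_val_score(SVC(kernel="linear"), features, labels, cv=5)
print(f"cross-validated decoding accuracy: {acc.mean():.2f}")
```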
Affiliation(s)
- Francesco Mantegna
- Department of Psychology, New York University, New York, NY 10003, USA; Department of Engineering Science, Oxford University, Oxford, Oxfordshire, United Kingdom; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy.
- Emanuele Olivetti
- NeuroInformatics Laboratory (NILab), Bruno Kessler Foundation (FBK), Mattarello, TN 38100, Italy; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Philipp Schwedhelm
- Functional Imaging Laboratory, German Primate Center - Leibniz Institute for Primate Research, Goettingen, 37077, Germany; CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
- Daniel Baldauf
- CIMeC - Center for Mind and Brain Sciences, Mattarello, TN 38100, Italy
4. Liu M, Xiao G, Xiong G. Neurocognitive mechanisms of social scenario imagery generation in individuals with social anxiety. Behav Brain Res 2025; 484:115488. [PMID: 39986613] [DOI: 10.1016/j.bbr.2025.115488]
Abstract
Cognitive behavioral theory emphasizes the significant role of mental imagery in the onset and development of Social Anxiety Disorder (SAD). However, the neural mechanisms underlying the generation of social scenario imagery in individuals with social anxiety remain unclear. In this study, 28 individuals with social anxiety and 31 healthy controls performed a retrospective cue imagery generation task while their neural responses were examined. Behavioral results showed that, compared to negative social scenarios, the vividness of positive social scenario imagery was significantly lower in the social anxiety group, whereas the control group showed no significant difference between the two conditions. Event-related potential (ERP) results revealed that, for the social anxiety group, N170 and late positive potential (LPP) amplitudes were significantly larger in the neutral condition than in the negative condition, whereas the control group exhibited no significant difference between these conditions. Furthermore, the social anxiety group showed significantly larger LPP amplitudes than the control group in both the positive and neutral conditions. These findings provide the first neurophysiological evidence that individuals with social anxiety exhibit processing biases when generating imagery of positive and neutral social scenarios, suggesting heightened neural engagement in these conditions.
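For readers unfamiliar with the ERP measures reported here, the sketch below (simulated single-channel epochs, assumed latency windows; not the study's analysis) shows how mean amplitudes in N170- and LPP-like windows are typically extracted.

```python
# Illustrative only: mean-amplitude measurement in N170- and LPP-like windows from
# simulated single-channel epochs; the latency windows and waveform are assumptions.
import numpy as np

sfreq = 500.0
times = np.arange(-0.2, 1.0, 1 / sfreq)            # epoch from -200 ms to 1000 ms
n_trials = 60
rng = np.random.default_rng(2)

# Simulated ERP: a negative deflection near 170 ms plus a slow late positivity
erp = (-2.0 * np.exp(-((times - 0.17) ** 2) / (2 * 0.02 ** 2))
       + 1.5 * (times > 0.4) * np.exp(-(times - 0.4) / 0.5))
epochs = erp + rng.normal(scale=1.0, size=(n_trials, times.size))

def mean_amplitude(epochs, times, tmin, tmax):
    """Average voltage within a latency window, per trial."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, mask].mean(axis=1)

n170 = mean_amplitude(epochs, times, 0.15, 0.19)   # N170 window (assumed)
lpp = mean_amplitude(epochs, times, 0.40, 0.80)    # LPP window (assumed)
print(f"N170 mean: {n170.mean():.2f} µV, LPP mean: {lpp.mean():.2f} µV")
```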
Affiliation(s)
- Mingfan Liu
- School of Psychology, Jiangxi Normal University, Nanchang, China; Center of Mental Health Education and Research, Jiangxi Normal University, Nanchang, China
- Guanlai Xiao
- School of Psychology, Jiangxi Normal University, Nanchang, China
- Genling Xiong
- School of Psychology, Jiangxi Normal University, Nanchang, China; Dongguan Shatian Experimental Middle School, Dongguan, China
5. Takahashi K, Pontes Quero S, Fiorilli J, Benedetti D, Yuste R, Friston KJ, Tononi G, Pennartz CM, Olcese U, TWCF: INTREPID Consortium. Testing the role of spontaneous activity in visuospatial perception with patterned optogenetics. PLoS One 2025; 20:e0318863. [PMID: 40014595] [PMCID: PMC11867336] [DOI: 10.1371/journal.pone.0318863]
Abstract
A major debate in the field of consciousness pertains to whether neuronal activity or rather the causal structure of neural circuits underlies the generation of conscious experience. The former position is held by theoretical accounts of consciousness based on the predictive processing framework (such as neurorepresentationalism and active inference), while the latter is posited by the integrated information theory. This protocol describes an experiment, part of a larger adversarial collaboration, designed to address this question through a combination of behavioral tests in mice, functional imaging, patterned optogenetics, and electrophysiology. The experiment will directly test whether optogenetic inactivation of a portion of the visual cortex that does not respond to behaviorally relevant stimuli affects the perception of the spatial distribution of these stimuli, even when the inactivated neurons display no or very low spiking activity, so low that it does not induce a significant effect on other cortical areas. The results of the experiment will be compared against theoretical predictions and will provide a major contribution toward understanding the neuronal substrate of consciousness.
Affiliation(s)
- Kengo Takahashi
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Samuel Pontes Quero
- Department of Biological Sciences, NeuroTechnology Center, Columbia University, New York City, New York, United States of America
- Julien Fiorilli
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Davide Benedetti
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Rafael Yuste
- Department of Biological Sciences, NeuroTechnology Center, Columbia University, New York City, New York, United States of America
- Karl J. Friston
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, United Kingdom
- Giulio Tononi
- Department of Psychiatry, University of Wisconsin-Madison, Wisconsin, United States of America
- Cyriel M.A. Pennartz
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Umberto Olcese
- Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
6. Spagna A, Heidenry Z, Miselevich M, Lambert C, Eisenstadt B, Tremblay L, Liu Z, Liu J, Bartolomeo P. Competing models of visual mental imagery: Reverse hierarchy or heterarchy? Phys Life Rev 2024; 51:96-100. [PMID: 39342796] [DOI: 10.1016/j.plrev.2024.09.011]
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, USA, 10027.
- Zoe Heidenry
- Department of Psychology, Columbia University in the City of New York, NY, USA, 10027
- Chloe Lambert
- Department of Psychology, Columbia University in the City of New York, NY, USA, 10027
- Laura Tremblay
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, California, USA; Department of Neurology, VA Northern California Health Care System, Martinez, California, USA
- Zixin Liu
- Department of Human Development, Teachers College, Columbia University, NY, USA, 10027
- Jianghao Liu
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, 75013 Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France
- Paolo Bartolomeo
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, 75013 Paris, France
7. Pan H, Song W, Li L, Qin X. The design and implementation of multi-character classification scheme based on EEG signals of visual imagery. Cogn Neurodyn 2024; 18:2299-2309. [PMID: 39678727] [PMCID: PMC11639744] [DOI: 10.1007/s11571-024-10087-z]
Abstract
In visual-imagery-based brain-computer interfaces (VI-BCI), two problems, the limited variety of imagination tasks and the insufficient description of feature information, seriously hinder the development and application of VI-BCI technology for restoring communication. In this paper, we design and optimize a multi-character classification scheme based on electroencephalogram (EEG) signals of visual imagery (VI), which is used to classify 29 characters comprising 26 lowercase English letters and three punctuation marks. First, a new paradigm that presents characters in random order and includes a preparation stage is designed to acquire EEG signals and construct a multi-character dataset, eliminating interference between VI tasks. Second, tensor data are obtained via the Morlet wavelet transform, and a feature extraction algorithm based on uncorrelated multilinear principal component analysis (UMPCA) is used to extract high-quality features from this tensor. Finally, three classifiers, namely support vector machine, K-nearest neighbor, and extreme learning machine, are employed for multi-character classification, and their results are compared. The experimental results demonstrate that the proposed scheme effectively extracts character features with minimal redundancy, weak correlation, and strong representational capability, and achieves an average classification accuracy of 97.59% across the 29 characters, surpassing existing work in both accuracy and the number of classes. The present study thus introduces a new paradigm for acquiring EEG signals of VI and combines the Morlet wavelet transform with the UMPCA algorithm to extract character features, enabling multi-character classification with various classifiers. This research paves a novel pathway for establishing direct brain-to-world communication.
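A hedged sketch of this kind of pipeline is given below. It uses simulated epochs, MNE-Python's Morlet transform, and an ordinary PCA as a stand-in for the UMPCA step (UMPCA itself is not implemented here), so it illustrates the workflow rather than reproducing the published scheme.

```python
# A hedged workflow sketch on simulated epochs: Morlet power via MNE-Python,
# ordinary PCA as a stand-in for UMPCA, and an SVM. Not the published scheme.
import numpy as np
from mne.time_frequency import tfr_array_morlet
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
sfreq, n_channels, n_times = 250.0, 8, 250
n_classes, trials_per_class = 4, 30                # 4 toy "characters" (the paper used 29)
labels = np.repeat(np.arange(n_classes), trials_per_class)
t = np.arange(n_times) / sfreq

# Toy epochs: each class carries a slightly different oscillation on one channel
epochs = rng.normal(size=(labels.size, n_channels, n_times))
for i, lab in enumerate(labels):
    epochs[i, lab] += np.sin(2 * np.pi * (8 + 2 * lab) * t)

freqs = np.arange(4.0, 30.0, 2.0)
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")
X = power.reshape(len(labels), -1)                 # flatten the channel x freq x time tensor

clf = make_pipeline(StandardScaler(), PCA(n_components=40), SVC(kernel="rbf"))
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(2))
```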
Affiliation(s)
- Hongguang Pan
- College of Electrical and Control Engineering, Xi’an University of Science and Technology, Xi’an, 710054 Shaanxi China
- Xi’an Key Laboratory of Electrical Equipment Condition Monitoring and Power Supply Security, Xi’an, 710054 Shaanxi China
- Wei Song
- College of Electrical and Control Engineering, Xi’an University of Science and Technology, Xi’an, 710054 Shaanxi China
- Li Li
- College of Electrical and Control Engineering, Xi’an University of Science and Technology, Xi’an, 710054 Shaanxi China
- Xuebin Qin
- College of Electrical and Control Engineering, Xi’an University of Science and Technology, Xi’an, 710054 Shaanxi China
8. Larner AJ, Leff AP, Nachev PC. Phantasia, aphantasia, and hyperphantasia: Empirical data and conceptual considerations. Neurosci Biobehav Rev 2024; 164:105819. [PMID: 39032843] [DOI: 10.1016/j.neubiorev.2024.105819]
Abstract
Within the past decade, the term "phantasia" has been increasingly used to describe the human capacity, faculty, or power of visual mental imagery, with extremes of imagery vividness characterised as "aphantasia" and "hyperphantasia". A substantial volume of empirical research addressing these constructs has now been published, including attempts to find inductive correlates of behaviourally defined aphantasia, for example using research questionnaires and functional magnetic resonance imaging. Mental imagery has long been noted as a source of conceptual confusions but no specific conceptual analysis of the new formulation of phantasia, aphantasia, and hyperphantasia has been undertaken hitherto. We offer some conceptual considerations on phantasia, noting the ongoing confusion of perceptual with mental images, and the ubiquitous use of unvalidated subjective assessment instruments such as the Vividness of Visual Imagery Questionnaire (VVIQ) in diagnosis and assessment, development of which was predicated on these conceptual confusions. We offer some suggestions for a conceptual framework for future empirical studies in this field, circumventing these conceptual confusions.
Affiliation(s)
- A J Larner
- Department of Brain Repair & Rehabilitation, Institute of Neurology, University College London, London, United Kingdom.
- A P Leff
- Department of Brain Repair & Rehabilitation, Institute of Neurology, University College London, London, United Kingdom
- P C Nachev
- Department of Brain Repair & Rehabilitation, Institute of Neurology, University College London, London, United Kingdom
9. Ranjan S, Odegaard B. Heterarchy or hierarchy? Insights from a new model of visual imagination. Phys Life Rev 2024; 49:74-76. [PMID: 38564906] [DOI: 10.1016/j.plrev.2024.03.006]
10. Stecher R, Kaiser D. Representations of imaginary scenes and their properties in cortical alpha activity. Sci Rep 2024; 14:12796. [PMID: 38834699] [DOI: 10.1038/s41598-024-63320-4]
Abstract
Imagining natural scenes enables us to engage with a myriad of simulated environments. How do our brains generate such complex mental images? Recent research suggests that cortical alpha activity carries information about individual objects during visual imagery. However, it remains unclear whether more complex imagined contents, such as natural scenes, are similarly represented in alpha activity. Here, we answer this question by decoding the contents of imagined scenes from rhythmic cortical activity patterns. In an EEG experiment, participants imagined natural scenes based on detailed written descriptions, which conveyed four complementary scene properties: openness, naturalness, clutter level, and brightness. By conducting classification analyses on EEG power patterns across neural frequencies, we were able to decode both individual imagined scenes and their properties from the alpha band, showing that the contents of complex visual images are also represented in alpha rhythms. A cross-classification analysis between alpha power patterns during the imagery task and during a perception task, in which participants were presented with images of the described scenes, showed that scene representations in the alpha band are partly shared between imagery and late stages of perception. This suggests that alpha activity mediates the top-down re-activation of scene-related visual contents during imagery.
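The decoding and cross-classification logic can be illustrated with a minimal sketch on simulated EEG; the band limits, classifier, and "imagery versus perception" contrast below are assumptions, not the study's code.

```python
# A minimal sketch on simulated EEG: decode scene identity from alpha power, then
# cross-classify from imagery-like to perception-like trials. Assumed parameters.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
sfreq, n_channels, n_times = 250.0, 32, 500
n_scenes, trials_per = 4, 40
t = np.arange(n_times) / sfreq

def make_trials(gain):
    X, y = [], []
    for scene in range(n_scenes):
        for _ in range(trials_per):
            trial = rng.normal(size=(n_channels, n_times))
            trial[scene] += gain * np.sin(2 * np.pi * 10 * t)   # scene-specific alpha pattern
            X.append(trial)
            y.append(scene)
    return np.array(X), np.array(y)

def alpha_power(trials):
    b, a = butter(4, [8 / (sfreq / 2), 12 / (sfreq / 2)], btype="band")
    filt = filtfilt(b, a, trials, axis=-1)
    return np.log(np.abs(hilbert(filt, axis=-1)) ** 2).mean(axis=-1)  # trials x channels

X_imag, y_imag = make_trials(gain=1.0)             # imagery-like trials
X_perc, y_perc = make_trials(gain=2.0)             # perception-like trials (stronger signal)

clf = LogisticRegression(max_iter=1000)
print("within-imagery:", cross_val_score(clf, alpha_power(X_imag), y_imag, cv=5).mean().round(2))
clf.fit(alpha_power(X_imag), y_imag)               # train on imagery, test on perception
print("imagery -> perception:", round(clf.score(alpha_power(X_perc), y_perc), 2))
```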
Affiliation(s)
- Rico Stecher
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, 35392, Gießen, Germany.
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus Liebig University Gießen, 35392, Gießen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg and Justus Liebig University Gießen, 35032, Marburg, Germany
11. Spagna A, Heidenry Z, Miselevich M, Lambert C, Eisenstadt BE, Tremblay L, Liu Z, Liu J, Bartolomeo P. Visual mental imagery: Evidence for a heterarchical neural architecture. Phys Life Rev 2024; 48:113-131. [PMID: 38217888] [DOI: 10.1016/j.plrev.2023.12.012]
Abstract
Theories of Visual Mental Imagery (VMI) emphasize the processes of retrieval, modification, and recombination of sensory information from long-term memory. Yet, only a few studies have focused on the behavioral mechanisms and neural correlates supporting VMI of stimuli from different semantic domains. Therefore, we currently have a limited understanding of how the brain generates and maintains mental representations of colors, faces, and shapes, to name a few. This uncertainty leaves unclear the organizational structure of the neural circuits supporting VMI, including the role of the early visual cortex. We aimed to fill this gap by reviewing the scientific literature of five semantic domains: visuospatial, face, color, shape, and letter imagery. Linking theory to evidence from over 60 different experimental designs, this review highlights three main points. First, there is no consistent activity in the early visual cortex across all VMI domains, contrary to the prediction of the dominant model. Second, there is consistent activity of the frontoparietal networks and the left hemisphere's fusiform gyrus during voluntary VMI, irrespective of the semantic domain investigated. We propose that these structures are part of a domain-general VMI sub-network. Third, domain-specific information engages specific regions of the ventral and dorsal cortical visual pathways. These regions partly overlap with those found in visual perception studies (e.g., the fusiform face area for face imagery; the lingual gyrus for color imagery). Altogether, the reviewed evidence suggests the existence of domain-general and domain-specific mechanisms of VMI selectively engaged by stimulus-specific properties (e.g., colors or faces). These mechanisms would be supported by an organizational structure mixing vertical and horizontal connections (a heterarchy) between sub-networks for specific stimulus domains. Such a heterarchical organization of VMI makes different predictions from current models of VMI as reversed perception. Our conclusions set the stage for future research, which should aim to characterize the spatiotemporal dynamics and interactions among the key regions of this architecture that give rise to visual mental images.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA.
- Zoe Heidenry
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Chloe Lambert
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA
- Laura Tremblay
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California; Department of Neurology, VA Northern California Health Care System, Martinez, California
- Zixin Liu
- Department of Human Development, Teachers College, Columbia University, NY, 10027, USA
- Jianghao Liu
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, 75013 Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France
- Paolo Bartolomeo
- Sorbonne Université, Inserm, CNRS, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, 75013 Paris, France
12. Shi D, Yu Q. Distinct neural signatures underlying information maintenance and manipulation in working memory. Cereb Cortex 2024; 34:bhae063. [PMID: 38436467] [DOI: 10.1093/cercor/bhae063]
Abstract
Previous working memory research has demonstrated robust stimulus representations during memory maintenance in both voltage and alpha-band activity in electroencephalography. However, the exact functions of these two neural signatures have remained controversial. Here we systematically investigated their respective contributions to memory manipulation. Human participants either maintained a previously seen spatial location, or manipulated the location following a mental rotation cue over a delay. Using multivariate decoding, we observed robust location representations in low-frequency voltage and alpha-band oscillatory activity with distinct spatiotemporal dynamics: location representations were most evident in posterior channels in alpha-band activity, but were most prominent in the more anterior, central channels in voltage signals. Moreover, the temporal emergence of the manipulated representation in central voltage preceded that in posterior alpha-band activity, suggesting that voltage might carry stimulus-specific source signals originating internally from anterior cortex, whereas alpha-band activity might reflect feedback signals in posterior cortex received from higher-order cortex. Lastly, while location representations in both signals were coded in a low-dimensional neural subspace, the location representation in central voltage was higher-dimensional and underwent a representational transformation that exclusively predicted memory behavior. Together, these results highlight the crucial role of central voltage in working memory, and support functional distinctions between voltage and alpha-band activity.
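A schematic of the voltage-versus-alpha comparison, on simulated epochs with invented effects (not the published analysis), might look like the following sketch.

```python
# A schematic comparison on simulated epochs with invented effects: decode a
# remembered location from low-frequency voltage versus alpha-band power features.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
sfreq, n_channels, n_times = 250.0, 32, 250
n_locations, trials_per = 4, 40
labels = np.repeat(np.arange(n_locations), trials_per)
t = np.arange(n_times) / sfreq

epochs = rng.normal(size=(labels.size, n_channels, n_times))
for i, loc in enumerate(labels):
    epochs[i, loc] += 1.0                               # slow voltage shift carries location
    epochs[i, loc + 4] += np.sin(2 * np.pi * 10 * t)    # alpha power also carries location

def voltage_features(x, cutoff=6.0):
    b, a = butter(4, cutoff / (sfreq / 2), btype="low")
    return filtfilt(b, a, x, axis=-1).mean(axis=-1)     # trials x channels

def alpha_features(x):
    b, a = butter(4, [8 / (sfreq / 2), 12 / (sfreq / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)).mean(axis=-1)

clf = LogisticRegression(max_iter=1000)
for name, feats in [("voltage", voltage_features(epochs)), ("alpha power", alpha_features(epochs))]:
    print(name, cross_val_score(clf, feats, labels, cv=5).mean().round(2))
```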
Affiliation(s)
- Dongping Shi
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qing Yu
- Institute of Neuroscience, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
13. Proverbio AM. The temporal dynamics of visual imagery and BCI: Comment on "Visual mental imagery: Evidence for a heterarchical neural architecture" by Spagna et al. Phys Life Rev 2024; 48:174-175. [PMID: 38301424] [DOI: 10.1016/j.plrev.2024.01.006]
Affiliation(s)
- Alice Mado Proverbio
- Cognitive Electrophysiology Lab, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo, 1, Milan 20162, Italy.
14. Weber S, Christophel T, Görgen K, Soch J, Haynes J. Working memory signals in early visual cortex are present in weak and strong imagers. Hum Brain Mapp 2024; 45:e26590. [PMID: 38401134] [PMCID: PMC10893972] [DOI: 10.1002/hbm.26590]
Abstract
It has been suggested that visual images are memorized across brief periods of time by vividly imagining them as if they were still there. In line with this, the contents of both working memory and visual imagery are known to be encoded already in early visual cortex. If these signals in early visual areas were indeed to reflect a combined imagery and memory code, one would predict them to be weaker for individuals with reduced visual imagery vividness. Here, we systematically investigated this question in two groups of participants. Strong and weak imagers were asked to remember images across brief delay periods. We were able to reliably reconstruct the memorized stimuli from early visual cortex during the delay. Importantly, in contrast to the prediction, the quality of reconstruction was equally accurate for both strong and weak imagers. The decodable information also closely reflected behavioral precision in both groups, suggesting it could contribute to behavioral performance, even in the extreme case of completely aphantasic individuals. Our data thus suggest that working memory signals in early visual cortex can be present even in the (near) absence of phenomenal imagery.
Affiliation(s)
- Simon Weber
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité ‐ Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt‐Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Thomas Christophel
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité ‐ Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
- Department of Psychology, Humboldt‐Universität zu Berlin, Berlin, Germany
- Kai Görgen
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité ‐ Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Joram Soch
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité ‐ Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
- Institute of Psychology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- John‐Dylan Haynes
- Bernstein Center for Computational Neuroscience Berlin and Berlin Center for Advanced Neuroimaging, Charité ‐ Universitätsmedizin Berlin, corporate member of the Freie Universität Berlin and Humboldt‐Universität zu Berlin, Berlin, Germany
- Research Training Group “Extrospection” and Berlin School of Mind and Brain, Humboldt‐Universität zu Berlin, Berlin, Germany
- Research Cluster of Excellence “Science of Intelligence”, Technische Universität Berlin, Berlin, Germany
- Department of Psychology, Humboldt‐Universität zu Berlin, Berlin, Germany
- Collaborative Research Center “Volition and Cognitive Control”, Technische Universität Dresden, Dresden, Germany
15. Barnett B, Andersen LM, Fleming SM, Dijkstra N. Identifying content-invariant neural signatures of perceptual vividness. PNAS Nexus 2024; 3:pgae061. [PMID: 38415219] [PMCID: PMC10898512] [DOI: 10.1093/pnasnexus/pgae061]
Abstract
Some conscious experiences are more vivid than others. Although perceptual vividness is a key component of human consciousness, how variation in this magnitude property is registered by the human brain is unknown. A striking feature of neural codes for magnitude in other psychological domains, such as number or reward, is that the magnitude property is represented independently of its sensory features. To test whether perceptual vividness also covaries with neural codes that are invariant to sensory content, we reanalyzed existing magnetoencephalography and functional MRI data from two distinct studies which quantified perceptual vividness via subjective ratings of awareness and visibility. Using representational similarity and decoding analyses, we find evidence for content-invariant neural signatures of perceptual vividness distributed across visual, parietal, and frontal cortices. Our findings indicate that the neural correlates of subjective vividness may share similar properties to magnitude codes in other cognitive domains.
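A minimal representational similarity sketch of the kind of analysis described here, using simulated patterns and an assumed vividness-magnitude model RDM (not the reanalyzed MEG/fMRI datasets), is shown below.

```python
# A minimal RSA sketch with simulated patterns and an assumed vividness-magnitude
# model RDM; it does not use the reanalyzed MEG/fMRI data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_conditions, n_features = 8, 100
vividness = np.linspace(1, 4, n_conditions)        # assumed vividness rating per condition

# Simulated patterns in which a shared "magnitude" axis scales with vividness
magnitude_axis = rng.normal(size=n_features)
patterns = vividness[:, None] * magnitude_axis + rng.normal(size=(n_conditions, n_features))

neural_rdm = pdist(patterns, metric="euclidean")            # condition-by-condition dissimilarities
model_rdm = pdist(vividness[:, None], metric="euclidean")   # |vividness_i - vividness_j|

rho, _ = spearmanr(neural_rdm, model_rdm)
print(f"model-neural RDM correlation: rho = {rho:.2f}")
```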
Affiliation(s)
- Benjy Barnett
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Lau M Andersen
- Aarhus Institute of Advanced Studies, 8000 Aarhus C, Denmark
- Center of Functionally Integrative Neuroscience, 8000 Aarhus C, Denmark
- Department for Linguistics, Cognitive Science and Semiotics, Aarhus University, 8000 Aarhus C, Denmark
- Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London WC1B 5EH, UK
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
16. Hu Y, Yu Q. Spatiotemporal dynamics of self-generated imagery reveal a reverse cortical hierarchy from cue-induced imagery. Cell Rep 2023; 42:113242. [PMID: 37831604] [DOI: 10.1016/j.celrep.2023.113242]
Abstract
Visual imagery allows for the construction of rich internal experience in our mental world. However, it has remained poorly understood how imagery experience arises volitionally as opposed to being cue driven. Here, using electroencephalography and functional magnetic resonance imaging, we systematically investigate the spatiotemporal dynamics of self-generated imagery by having participants volitionally imagine one of the orientations from a learned pool. We contrast self-generated imagery with cue-induced imagery, in which participants imagined line orientations based on associative cues acquired previously. Our results reveal overlapping neural signatures of cue-induced and self-generated imagery. Yet, these neural signatures display substantially different sensitivities to the two types of imagery: self-generated imagery is supported by an enhanced involvement of the anterior cortex in representing imagery contents, whereas cue-induced imagery is supported by enhanced imagery representations in the posterior visual cortex. These results jointly support a reverse cortical hierarchy in generating and maintaining imagery contents in self-generated versus externally cued imagery.
Affiliation(s)
- Yiheng Hu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qing Yu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China.
17. Gjorgieva E, Morales-Torres R, Cabeza R, Woldorff MG. Neural retrieval processes occur more rapidly for visual mental images that were previously encoded with high-vividness. Cereb Cortex 2023; 33:10234-10244. [PMID: 37526263] [DOI: 10.1093/cercor/bhad278]
Abstract
Visual mental imagery refers to our ability to experience visual images in the absence of sensory stimulation. Studies have shown that visual mental imagery can improve episodic memory. However, we have limited understanding of the neural mechanisms underlying this improvement. Using electroencephalography, we examined the neural processes associated with the retrieval of previously generated visual mental images, focusing on how vividness at generation can modulate retrieval processes. Participants viewed word stimuli referring to common objects, forming a visual mental image of each word and rating the vividness of the mental image. This was followed by a surprise old/new recognition task. We compared retrieval performance for items rated as high- versus low-vividness at encoding. High-vividness items were retrieved with faster reaction times and higher confidence ratings in the memory judgment. When confidence was controlled for, neural measures indicated that high-vividness items produced an earlier decrease in alpha-band activity at retrieval compared with low-vividness items, suggesting earlier memory reinstatement. Even when low-vividness items were remembered with high confidence, they were not retrieved as quickly as high-vividness items. These results indicate that when highly vivid mental images are encoded, their retrieval proceeds more rapidly relative to low-vividness items.
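The alpha-latency comparison could be approximated as in the sketch below, which uses simulated traces and an arbitrary power-drop threshold rather than the study's actual measure.

```python
# An approximation only: estimate when trial-averaged alpha power first drops below
# a pre-stimulus baseline for two simulated conditions. The threshold is arbitrary
# and the traces are synthetic, so this is a crude proxy for the reported effect.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(6)
sfreq = 250.0
times = np.arange(-0.5, 1.5, 1 / sfreq)

def simulated_trial(desync_onset):
    envelope = np.where(times < desync_onset, 1.0, 0.4)        # alpha power drop after onset
    return np.sin(2 * np.pi * 10 * times) * envelope + rng.normal(scale=0.2, size=times.size)

def desync_latency(desync_onset, n_trials=30, threshold=0.7):
    b, a = butter(4, [8 / (sfreq / 2), 12 / (sfreq / 2)], btype="band")
    power = np.zeros(times.size)
    for _ in range(n_trials):
        power += np.abs(hilbert(filtfilt(b, a, simulated_trial(desync_onset)))) ** 2
    power /= n_trials
    baseline = power[times < 0].mean()
    below = np.where((times > 0) & (power < threshold * baseline))[0]
    return times[below[0]] if below.size else np.nan

print("high-vividness alpha decrease at ~", round(desync_latency(0.4), 2), "s")
print("low-vividness alpha decrease at ~", round(desync_latency(0.7), 2), "s")
```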
Affiliation(s)
- Eva Gjorgieva
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Ricardo Morales-Torres
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Roberto Cabeza
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Marty G Woldorff
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke Institute for Brain Sciences, Duke University, Durham, NC 27708, United States
- Department of Psychiatry, Duke University, Durham, NC 27708, United States
18. Li S, Zeng X, Shao Z, Yu Q. Neural Representations in Visual and Parietal Cortex Differentiate between Imagined, Perceived, and Illusory Experiences. J Neurosci 2023; 43:6508-6524. [PMID: 37582626] [PMCID: PMC10513072] [DOI: 10.1523/jneurosci.0592-23.2023]
Abstract
Humans constantly receive massive amounts of information, both perceived from the external environment and imagined from the internal world. To function properly, the brain needs to correctly identify the origin of information being processed. Recent work has suggested common neural substrates for perception and imagery. However, it has remained unclear how the brain differentiates between external and internal experiences with shared neural codes. Here we tested this question in human participants (male and female) by systematically investigating the neural processes underlying the generation and maintenance of visual information from voluntary imagery, veridical perception, and illusion. The inclusion of illusion allowed us to differentiate between objective and subjective internality: while illusion has an objectively internal origin and can be viewed as involuntary imagery, it is also subjectively perceived as having an external origin like perception. Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, we observed superior orientation representations in parietal cortex during imagery compared with perception, and conversely in early visual cortex. This imagery dominance gradually developed along a posterior-to-anterior cortical hierarchy from early visual to parietal cortex, emerged in the early epoch of imagery and sustained into the delay epoch, and persisted across varied imagined contents. Moreover, representational strength of illusion was more comparable to imagery in early visual cortex, but more comparable to perception in parietal cortex, suggesting content-specific representations in parietal cortex differentiate between subjectively internal and external experiences, as opposed to early visual cortex. These findings together support a domain-general engagement of parietal cortex in internally generated experience.
SIGNIFICANCE STATEMENT: How does the brain differentiate between imagined and perceived experiences? Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, the current study revealed enhanced stimulus-specific representations in visual imagery originating from parietal cortex, supporting the subjective experience of imagery. This neural principle was further validated by evidence from visual illusion, wherein illusion resembled perception and imagery at different levels of cortical hierarchy. Our findings provide direct evidence for the critical role of parietal cortex as a domain-general region for content-specific imagery, and offer new insights into the neural mechanisms underlying the differentiation between subjectively internal and external experiences.
Affiliation(s)
- Siyi Li
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Xuemei Zeng
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- Zhujun Shao
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Qing Yu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
19. Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254] [DOI: 10.1146/annurev-vision-100120-025301]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
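A generic time-resolved decoding example of the kind surveyed in this review is sketched below on simulated data; every parameter is invented and no real dataset is used.

```python
# A generic, minimal time-resolved decoding example on simulated data; it is not
# taken from any of the studies discussed in the review.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_times = 200, 32, 120
labels = rng.integers(0, 2, n_trials)

data = rng.normal(size=(n_trials, n_channels, n_times))
onset = 40                                          # stimulus information appears here
data[labels == 1, 0, onset:] += 0.8                 # class difference on one channel

# Train and test a classifier separately at every time point
accuracy = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), data[:, :, ti], labels, cv=5).mean()
    for ti in range(n_times)
])
print("peak decoding at sample", int(accuracy.argmax()), "accuracy", round(float(accuracy.max()), 2))
```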
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
20. Newton‐Fenner A, Hewitt D, Henderson J, Fallon N, Gu Y, Gorelkina O, Giesbrecht T, Stancak A. A comparison of reward processing during Becker-DeGroot-Marschak and Vickrey auctions: An ERP study. Psychophysiology 2023; 60:e14313. [PMID: 37076995] [PMCID: PMC10909440] [DOI: 10.1111/psyp.14313]
Abstract
Vickrey auctions (VA) and Becker-DeGroot-Marschak auctions (BDM) are strategically equivalent demand-revealing mechanisms, differentiated only by a human opponent in the VA and a random-number-generator opponent in the BDM. Game parameters are such that players are incentivized to reveal their private subjective values (SVs), and behavior should therefore be identical in both tasks. However, this has repeatedly been shown not to be the case. In this study, the neural correlates of outcome feedback processing during the VA and BDM were directly compared using electroencephalography. Twenty-eight healthy participants bid for household products, which were then divided into high- and low-SV categories. The VA included a deceptive human-opponent manipulation to induce a social environment, although in reality a random number generator was used in both tasks. A P3 component peaking at 336 ms over midline parietal sites showed more positive amplitudes for high bid values, and for win outcomes in the VA but not the BDM. Both auctions also elicited a Reward Positivity potential, maximal at 275 ms along the central midline electrodes, that was not modulated by auction task or SV. Further, an exploratory N170 potential at right occipitotemporal electrodes and a vertex positive potential component were stronger in the VA relative to the BDM. The results point to an enhanced cortical response to bid outcomes during the VA task in a potential component associated with emotional control, and to the occurrence of face-sensitive potentials in the VA but not the BDM auction. These findings suggest modulation of bid-outcome processing by the social-competitive aspect of auction tasks. Directly comparing two prominent auction paradigms affords the opportunity to isolate the impact of social environment on competitive, risky decision-making. The findings suggest that feedback processing as early as 176 ms is facilitated by the presence of a human competitor, and that later processing is modulated by social context and subjective value.
Affiliation(s)
- A. Newton‐Fenner
- Department of Psychology, University of Liverpool, Liverpool, UK
- Institute of Risk and Uncertainty, University of Liverpool, Liverpool, UK
- D. Hewitt
- Department of Psychology, University of Liverpool, Liverpool, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- J. Henderson
- Department of Psychology, University of Liverpool, Liverpool, UK
- N. Fallon
- Department of Psychology, University of Liverpool, Liverpool, UK
- Y. Gu
- Management School, University of Liverpool, Liverpool, UK
- Henley Business School, University of Reading, Reading, UK
- O. Gorelkina
- Management School, University of Liverpool, Liverpool, UK
- A. Stancak
- Department of Psychology, University of Liverpool, Liverpool, UK
- Institute of Risk and Uncertainty, University of Liverpool, Liverpool, UK
21. Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. [PMID: 37621984] [PMCID: PMC10445666] [DOI: 10.1093/nc/niad018]
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
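A toy sketch of such a hierarchical competition, with invented parameters and a much-simplified update rule (not the authors' model code), is given below. With bottom-up input present, the imagery stream dominates only the upper levels; with no sensory input, it dominates throughout, in line with the qualitative prediction described above.

```python
# A toy sketch of a hierarchical competition between bottom-up (sensory) and
# top-down (imagined) drive; parameters and update rule are invented, not the
# authors' model. Level 0 is the lowest (sensory) level; the top is most abstract.
import numpy as np

def run_hierarchy(sensory_drive, imagery_drive, n_levels=4, n_steps=300,
                  feedforward=0.6, feedback=0.4, rate=0.1):
    sensory = np.zeros(n_levels)     # activity attributable to bottom-up input
    imagined = np.zeros(n_levels)    # activity attributable to top-down input
    for _ in range(n_steps):
        for lvl in range(n_levels):
            bottom_up = sensory_drive if lvl == 0 else feedforward * sensory[lvl - 1]
            top_down = imagery_drive if lvl == n_levels - 1 else feedback * imagined[lvl + 1]
            total = bottom_up + top_down + 1e-9
            # Competition: the two streams share the level's limited response
            sensory[lvl] += rate * (bottom_up / total - sensory[lvl])
            imagined[lvl] += rate * (top_down / total - imagined[lvl])
    return sensory, imagined

for label, drive in [("with sensory input", 1.0), ("without sensory input", 0.0)]:
    s, i = run_hierarchy(sensory_drive=drive, imagery_drive=1.0)
    print(label, "-> imagery share per level:", np.round(i / (i + s + 1e-9), 2))
```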
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia
- Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
22. Ramon C, Graichen U, Gargiulo P, Zanow F, Knösche TR, Haueisen J. Spatiotemporal phase slip patterns for visual evoked potentials, covert object naming tasks, and insight moments extracted from 256 channel EEG recordings. Front Integr Neurosci 2023; 17:1087976. [PMID: 37384237] [PMCID: PMC10293627] [DOI: 10.3389/fnint.2023.1087976]
Abstract
Phase slips arise from state transitions of the coordinated activity of cortical neurons, which can be extracted from EEG data. Phase slip rates (PSRs) were studied in high-density (256-channel) EEG data, sampled at 16.384 kHz, from five adult subjects during covert visual object naming tasks. Artifact-free data from 29 trials were averaged for each subject. The analysis looked for phase slips in the theta (4-7 Hz), alpha (7-12 Hz), beta (12-30 Hz), and low gamma (30-49 Hz) bands. The phase was calculated with the Hilbert transform, then unwrapped and detrended to estimate phase slip rates in a 1.0 ms wide stepping window with a step size of 0.06 ms. Spatiotemporal plots of the PSRs were made using a montage layout of 256 equidistant electrode positions. The spatiotemporal profiles of the EEG and PSRs during the stimulus and the first second of the post-stimulus period were examined in detail to study the visual evoked potentials and the different stages of visual object recognition in visual, language, and memory areas. The areas of PSR activity differed from the areas of EEG activity during the stimulus and post-stimulus periods. Different stages of insight during the covert object naming tasks were examined from the PSRs, and the 'Eureka' moment was found to occur at about 512 ± 21 ms. Overall, these results indicate that information about cortical phase transitions can be derived from measured EEG data and used in a complementary fashion to study the cognitive behavior of the brain.
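A simplified sketch of the phase-slip idea follows, on one simulated channel with an arbitrary band, window, and jump criterion; it is not the authors' code or their 16.384 kHz data.

```python
# A simplified sketch of the phase-slip idea on one simulated channel; the band,
# window, and jump criterion are arbitrary, and it is not the authors' code or data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, detrend

rng = np.random.default_rng(8)
sfreq = 1000.0
t = np.arange(0, 5, 1 / sfreq)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
eeg[2500:] = np.sin(2 * np.pi * 10 * t[2500:] + np.pi / 2) + 0.5 * rng.normal(size=t.size - 2500)

# Alpha-band phase: filter, Hilbert transform, unwrap, remove the linear phase ramp
b, a = butter(4, [7 / (sfreq / 2), 12 / (sfreq / 2)], btype="band")
residual = detrend(np.unwrap(np.angle(hilbert(filtfilt(b, a, eeg)))))

# Flag short windows containing an abrupt shift in the residual phase
window, step, jump = 100, 20, 0.5                  # samples, samples, radians
flagged = np.array([np.ptp(residual[i:i + window]) > jump
                    for i in range(0, residual.size - window, step)])
print("fraction of windows flagged as containing a phase slip:", round(flagged.mean(), 3))
```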
Affiliation(s)
- Ceon Ramon
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Regional Epilepsy Center, Harborview Medical Center, University of Washington, Seattle, WA, United States
- Uwe Graichen
- Department of Biostatistics and Data Science, Karl Landsteiner University of Health Sciences, Krems an der Donau, Austria
- Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Reykjavik University, Reykjavik, Iceland
- Department of Science, Landspitali University Hospital, Reykjavik, Iceland
- Thomas R. Knösche
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jens Haueisen
- Institute of Biomedical Engineering and Informatics, Technische Universität Ilmenau, Ilmenau, Germany
23. Proverbio AM, Tacchini M, Jiang K. What do you have in mind? ERP markers of visual and auditory imagery. Brain Cogn 2023; 166:105954. [PMID: 36657242] [DOI: 10.1016/j.bandc.2023.105954]
Abstract
This study aimed to investigate the psychophysiological markers of imagery processes through EEG/ERP recordings. Visual and auditory stimuli representing 10 different semantic categories were shown to 30 healthy participants. After a given interval, and prompted by a light signal, participants were asked to activate a mental image corresponding to the semantic category while synchronized electrical potentials were recorded. Unprecedented electrophysiological markers of imagination were recorded in the absence of sensory stimulation. The following peaks were identified at specific scalp sites and latencies during imagination of infants (centroparietal positivity, CPP, and late CPP), human faces (anterior negativity, AN), animals (anterior positivity, AP), music (P300-like), speech (N400-like), affective vocalizations (P2-like), and sensory (visual vs auditory) modality (PN300). Overall, the perception and imagery conditions shared some common electro/cortical markers, but during imagery the category-dependent modulation of ERPs occurred at longer latencies and more anterior sites than in the perceptual condition. These ERP markers might be valuable tools for BCI systems (pattern recognition, classification, or AI algorithms) applied to patients affected by disorders of consciousness (e.g., in a vegetative or comatose state) or locked-in patients (e.g., patients with spinal injury or amyotrophic lateral sclerosis).
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy.
| | - Marta Tacchini
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy
| | - Kaijun Jiang
- Cognitive Electrophysiology lab, Dept. of Psychology, University of Milano-Bicocca, Italy; Department of Psychology, University of Jyväskylä, Finland
| |
Collapse
|
24
|
Soyuhos O, Baldauf D. Functional connectivity fingerprints of the frontal eye field and inferior frontal junction suggest spatial versus nonspatial processing in the prefrontal cortex. Eur J Neurosci 2023; 57:1114-1140. [PMID: 36789470 DOI: 10.1111/ejn.15936] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 01/28/2023] [Accepted: 02/08/2023] [Indexed: 02/16/2023]
Abstract
Neuroimaging evidence suggests that the frontal eye field (FEF) and inferior frontal junction (IFJ) govern the encoding of spatial and nonspatial (such as feature- or object-based) representations, respectively, both during visual attention and working memory tasks. However, it is still unclear whether such contrasting functional segregation is also reflected in their underlying functional connectivity patterns. Here, we hypothesized that FEF has predominant functional coupling with spatiotopically organized regions in the dorsal ('where') visual stream whereas IFJ has predominant functional connectivity with the ventral ('what') visual stream. We applied seed-based functional connectivity analyses to resting-state magnetoencephalography (MEG) recordings, which offer high temporal resolution. We parcellated the brain according to the multimodal Glasser atlas and tested, for various frequency bands, whether the spontaneous activity of each parcel in the ventral and dorsal visual pathway has predominant functional connectivity with FEF or IFJ. The results show that FEF has a robust power correlation with the dorsal visual pathway in beta and gamma bands. In contrast, anterior IFJ (IFJa) has a strong power coupling with the ventral visual stream in delta, beta and gamma oscillations. Moreover, while FEF is phase-coupled with the superior parietal lobe in the beta band, IFJa is phase-coupled with the middle and inferior temporal cortex in delta and gamma oscillations. We argue that these intrinsic connectivity fingerprints are congruent with each brain region's function. Therefore, we conclude that FEF and IFJ have dissociable connectivity patterns that fit their respective functional roles in spatial versus nonspatial top-down attention and working memory control.
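Seed-based connectivity of this kind is often computed from band-limited analytic signals: an amplitude-envelope (power) correlation and a phase-coupling index between a seed and every other parcel. The sketch below is a simplified, hypothetical version in Python; it uses a plain Pearson power correlation and the phase-locking value, whereas the study may have used different or leakage-corrected metrics, and it does not implement the Glasser parcellation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def seed_connectivity(data, fs, seed, band):
    """Band-limited power correlation and phase coupling with a seed channel.

    data : array (n_channels, n_times) of resting-state sensor or parcel series
    seed : index of the seed (e.g. an FEF or IFJ parcel), band : (lo, hi) in Hz.
    Returns (power_corr, plv), each of shape (n_channels,).
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    analytic = hilbert(filtfilt(b, a, data, axis=-1), axis=-1)
    power = np.abs(analytic) ** 2
    phase = np.angle(analytic)
    # Amplitude-envelope (power) correlation with the seed
    z = (power - power.mean(-1, keepdims=True)) / power.std(-1, keepdims=True)
    power_corr = (z * z[seed]).mean(-1)
    # Phase-locking value with the seed
    plv = np.abs(np.exp(1j * (phase - phase[seed])).mean(-1))
    return power_corr, plv
```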
Collapse
Affiliation(s)
- Orhan Soyuhos
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy.,Center for Neuroscience, University of California, Davis, California, USA
| | - Daniel Baldauf
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
| |
Collapse
|
25
|
Corriveau A, Kidder A, Teichmann L, Wardle SG, Baker CI. Sustained neural representations of personally familiar people and places during cued recall. Cortex 2023; 158:71-82. [PMID: 36459788 PMCID: PMC9840701 DOI: 10.1016/j.cortex.2022.08.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/28/2022] [Accepted: 08/29/2022] [Indexed: 01/18/2023]
Abstract
The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.
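The time-resolved decoding and the temporal-generalization analysis mentioned here follow a standard recipe: fit a classifier on sensor patterns at one time point and test it at every other time point, within cross-validation. The following Python/scikit-learn sketch is a generic implementation under assumed inputs (epoched MEG data and binary labels), not the authors' pipeline; it is written for clarity rather than speed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def temporal_generalization(X, y, n_splits=5):
    """Train at each time point, test at every time point (cross-validated).

    X : (n_trials, n_channels, n_times) epoched MEG data aligned to the retro-cue
    y : (n_trials,) binary labels (e.g. person vs. place)
    Returns an accuracy matrix of shape (n_times_train, n_times_test).
    """
    n_times = X.shape[-1]
    acc = np.zeros((n_times, n_times))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in cv.split(X[:, :, 0], y):
        for t_train in range(n_times):
            clf = make_pipeline(StandardScaler(),
                                LogisticRegression(max_iter=1000))
            clf.fit(X[train, :, t_train], y[train])
            for t_test in range(n_times):
                acc[t_train, t_test] += clf.score(X[test, :, t_test], y[test])
    return acc / n_splits
```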
Collapse
Affiliation(s)
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychology, The University of Chicago, Chicago, IL 60637, USA.
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| |
Collapse
|
26
|
Legrand N, Etard O, Viader F, Clochon P, Doidy F, Eustache F, Gagnepain P. Attentional capture mediates the emergence and suppression of intrusive memories. iScience 2022; 25:105516. [PMID: 36419855 PMCID: PMC9676635 DOI: 10.1016/j.isci.2022.105516] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 07/20/2022] [Accepted: 11/02/2022] [Indexed: 11/07/2022] Open
Abstract
Intrusive memories hijack consciousness and their control may lead to forgetting. However, the contribution of reflexive attention to qualifying a memory signal as interfering is unknown. We used machine learning to decode the brain's electrical activity and pinpoint the otherwise hidden emergence of intrusive memories reported during a memory suppression task. Importantly, the algorithm was trained on an independent attentional model of visual activity, mimicking either the abrupt and interfering appearance of visual scenes into conscious awareness or their deliberate exploration. Intrusions of memories into conscious awareness were decoded above chance. The decoding accuracy increased when the algorithm was trained using a model of reflexive attention. Conscious detection of intrusive activity decoded from the brain signal was central to the future silencing of suppressed memories and later forgetting. Unwanted memories require the reflexive orienting of attention and access to consciousness to be suppressed effectively by inhibitory control.
Collapse
Affiliation(s)
- Nicolas Legrand
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| | - Olivier Etard
- Normandie University, UNICAEN, INSERM, COMETE, CYCERON, CHU Caen, 14000 Caen, France
| | - Fausto Viader
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| | - Patrice Clochon
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| | - Franck Doidy
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| | - Francis Eustache
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| | - Pierre Gagnepain
- Normandie University, UNICAEN, PSL Research University, EPHE, INSERM, U1077, CHU de Caen, Neuropsychologie et Imagerie de la Mémoire Humaine, Centre Cyceron, Caen, France
| |
Collapse
|
27
|
Gifford AT, Dwivedi K, Roig G, Cichy RM. A large and rich EEG dataset for modeling human visual object recognition. Neuroimage 2022; 264:119754. [PMID: 36400378 PMCID: PMC9771828 DOI: 10.1016/j.neuroimage.2022.119754] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Revised: 09/14/2022] [Accepted: 11/14/2022] [Indexed: 11/16/2022] Open
Abstract
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to properly train, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the recorded EEG data image conditions in a zero-shot fashion, using EEG synthesized responses to hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions as well as the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions well generalize to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
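A linearizing encoding model of the type described here regresses brain responses onto a feature representation of the images, and zero-shot identification then matches each measured response to the best-correlating synthesized response among many candidates. The sketch below assumes the features are some image embedding (for instance DNN activations) and uses ridge regression with an arbitrary penalty; it is an illustration of the idea, not the released analysis code.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(features_train, eeg_train, alpha=1e3):
    """Linearizing encoding model: map image features to (flattened) EEG responses.

    features_train : (n_train_images, n_features), e.g. DNN activations (assumed)
    eeg_train      : (n_train_images, n_channels * n_times) averaged responses
    """
    return Ridge(alpha=alpha).fit(features_train, eeg_train)

def zero_shot_identify(model, candidate_features, eeg_test):
    """Match each test response to the candidate image whose synthesized
    response correlates with it most strongly (zero-shot identification)."""
    synth = model.predict(candidate_features)                 # (n_candidates, n_out)
    z = lambda m: (m - m.mean(1, keepdims=True)) / m.std(1, keepdims=True)
    corr = z(eeg_test) @ z(synth).T / eeg_test.shape[1]       # Pearson r matrix
    return corr.argmax(axis=1)                                # best candidate index
```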
Collapse
Affiliation(s)
- Alessandro T Gifford
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Charité - Universitätsmedizin Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany.
| | - Kshitij Dwivedi
- Department of Computer Science, Goethe Universität, Frankfurt am Main, Germany
| | - Gemma Roig
- Department of Computer Science, Goethe Universität, Frankfurt am Main, Germany
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Charité - Universitätsmedizin Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
28
|
Bo K, Cui L, Yin S, Hu Z, Hong X, Kim S, Keil A, Ding M. Decoding the temporal dynamics of affective scene processing. Neuroimage 2022; 261:119532. [PMID: 35931307 DOI: 10.1016/j.neuroimage.2022.119532] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2022] [Revised: 07/01/2022] [Accepted: 08/01/2022] [Indexed: 10/31/2022] Open
Abstract
Natural images containing affective scenes are used extensively to investigate the neural mechanisms of visual emotion processing. Functional fMRI studies have shown that these images activate a large-scale distributed brain network that encompasses areas in visual, temporal, and frontal cortices. The underlying spatial and temporal dynamics, however, remain to be better characterized. We recorded simultaneous EEG-fMRI data while participants passively viewed affective images from the International Affective Picture System (IAPS). Applying multivariate pattern analysis to decode EEG data, and representational similarity analysis to fuse EEG data with simultaneously recorded fMRI data, we found that: (1) ∼80 ms after picture onset, perceptual processing of complex visual scenes began in early visual cortex, proceeding to ventral visual cortex at ∼100 ms, (2) between ∼200 and ∼300 ms (pleasant pictures: ∼200 ms; unpleasant pictures: ∼260 ms), affect-specific neural representations began to form, supported mainly by areas in occipital and temporal cortices, and (3) affect-specific neural representations were stable, lasting up to ∼2 s, and exhibited temporally generalizable activity patterns. These results suggest that affective scene representations in the brain are formed temporally in a valence-dependent manner and may be sustained by recurrent neural interactions among distributed brain areas.
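The EEG-fMRI fusion step relies on representational similarity analysis: build a representational dissimilarity matrix (RDM) from the EEG sensor patterns at each time point and correlate it with the RDM of an fMRI region of interest. A minimal Python sketch is given below under assumed condition-averaged inputs; the distance metric and correlation choices are illustrative and need not match the paper exactly.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def eeg_fmri_fusion(eeg, fmri_roi_patterns):
    """Correlate the time-resolved EEG RDM with an fMRI ROI's RDM.

    eeg               : (n_conditions, n_channels, n_times) condition-averaged EEG
    fmri_roi_patterns : (n_conditions, n_voxels) response patterns from one ROI
    Returns an array (n_times,) of Spearman correlations: the time course of how
    well the ROI's representational geometry matches the EEG geometry.
    """
    fmri_rdm = pdist(fmri_roi_patterns, metric="correlation")
    fusion = np.empty(eeg.shape[-1])
    for t in range(eeg.shape[-1]):
        eeg_rdm = pdist(eeg[:, :, t], metric="correlation")
        fusion[t] = spearmanr(eeg_rdm, fmri_rdm)[0]
    return fusion
```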
Collapse
Affiliation(s)
- Ke Bo
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| | - Lihan Cui
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
| | - Siyang Yin
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
| | - Zhenhong Hu
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
| | - Xiangfei Hong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
| | - Sungkean Kim
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Human-Computer Interaction, Hanyang University, Ansan, Republic of Korea
| | - Andreas Keil
- Department of Psychology, University of Florida, Gainesville, FL 32611, USA.
| | - Mingzhou Ding
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA.
| |
Collapse
|
29
|
Dawes AJ, Keogh R, Robuck S, Pearson J. Memories with a blind mind: Remembering the past and imagining the future with aphantasia. Cognition 2022; 227:105192. [PMID: 35752014 DOI: 10.1016/j.cognition.2022.105192] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 05/26/2022] [Accepted: 05/28/2022] [Indexed: 11/03/2022]
Abstract
Our capacity to re-experience the past and simulate the future is thought to depend heavily on visual imagery, which allows us to construct complex sensory representations in the absence of sensory stimulation. There are large individual differences in visual imagery ability, but their impact on autobiographical memory and future prospection remains poorly understood. Research in this field assumes the normative use of visual imagery as a cognitive tool to simulate the past and future, however some individuals lack the ability to visualise altogether (a condition termed "aphantasia"). Aphantasia represents a rare and naturally occurring knock-out model for examining the role of visual imagery in episodic memory recall. Here, we assessed individuals with aphantasia on an adapted form of the Autobiographical Interview, a behavioural measure of the specificity and richness of episodic details underpinning the memory of events. Aphantasic participants generated significantly fewer episodic details than controls for both past and future events. This effect was most pronounced for novel future events, driven by selective reductions in visual detail retrieval, accompanied by comparatively reduced ratings of the phenomenological richness of simulated events, and paralleled by quantitative linguistic markers of reduced perceptual language use in aphantasic participants compared to those with visual imagery. Our findings represent the first systematic evidence (using combined objective and subjective data streams) that aphantasia is associated with a diminished ability to re-experience the past and simulate the future, indicating that visual imagery is an important cognitive tool for the dynamic retrieval and recombination of episodic details during mental simulation.
Collapse
Affiliation(s)
- Alexei J Dawes
- School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia.
| | - Rebecca Keogh
- School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia; School of Psychological Sciences, Macquarie University, Sydney, New South Wales, Australia
| | - Sarah Robuck
- School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia
| | - Joel Pearson
- School of Psychology, The University of New South Wales, Sydney, New South Wales, Australia
| |
Collapse
|
30
|
Bruera A, Poesio M. Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics. Front Artif Intell 2022; 5:796793. [PMID: 35280237 PMCID: PMC8905499 DOI: 10.3389/frai.2022.796793] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 01/25/2022] [Indexed: 11/23/2022] Open
Abstract
Semantic knowledge about individual entities (i.e., the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, when compared with knowledge about generic entities (the referents of common nouns such as politician). We investigate the semantic representations of individual entities in the brain; and for the first time we approach this question using both neural data, in the form of newly-acquired EEG data, and distributional models of word meaning, employing them to isolate semantic information regarding individual entities in the brain. We ran two sets of analyses. The first set of analyses is only concerned with the evoked responses to individual entities and their categories. We find that it is possible to classify them according to both their coarse and their fine-grained category at appropriate timepoints, but that it is hard to map representational information learned from individuals to their categories. In the second set of analyses, we learn to decode from evoked responses to distributional word vectors. These results indicate that such a mapping can be learnt successfully: this counts not only as a demonstration that representations of individuals can be discriminated in EEG responses, but also as a first brain-based validation of distributional semantic models as representations of individual entities. Finally, in-depth analyses of the decoder performance provide additional evidence that the referents of proper names and categories have little in common when it comes to their representation in the brain.
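Learning a map from evoked responses to distributional word vectors, as in the second set of analyses, is commonly done with a cross-validated linear regression scored by pairwise (2 vs 2) accuracy. The sketch below is a hypothetical implementation of that general scheme in Python/scikit-learn; the regression model, the leave-one-out scheme, and the 2 vs 2 scoring are assumptions rather than a description of the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import LeaveOneOut

def decode_to_word_vectors(eeg, vectors):
    """Linear map from evoked EEG responses to word embeddings, scored by
    leave-one-out predictions and pairwise (2 vs 2) accuracy.

    eeg     : (n_entities, n_features) evoked response per entity (e.g. a
              flattened channel x time window), vectors : (n_entities, n_dims).
    """
    preds = np.zeros_like(vectors, dtype=float)
    for train, test in LeaveOneOut().split(eeg):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(eeg[train], vectors[train])
        preds[test] = model.predict(eeg[test])

    def r(a, b):                                   # Pearson correlation
        return np.corrcoef(a, b)[0, 1]

    n, hits, total = len(vectors), 0, 0
    for i in range(n):                             # 2 vs 2 test: correct if the
        for j in range(i + 1, n):                  # matched pairing beats the swap
            match = r(preds[i], vectors[i]) + r(preds[j], vectors[j])
            mismatch = r(preds[i], vectors[j]) + r(preds[j], vectors[i])
            hits += match > mismatch
            total += 1
    return hits / total
```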
Collapse
Affiliation(s)
- Andrea Bruera
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
| | | |
Collapse
|
31
|
Dijkstra N, Kok P, Fleming SM. Perceptual reality monitoring: Neural mechanisms dissociating imagination from reality. Neurosci Biobehav Rev 2022; 135:104557. [PMID: 35122782 DOI: 10.1016/j.neubiorev.2022.104557] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 01/12/2022] [Accepted: 01/30/2022] [Indexed: 01/21/2023]
Abstract
There is increasing evidence that imagination relies on similar neural mechanisms as externally triggered perception. This overlap presents a challenge for perceptual reality monitoring: deciding what is real and what is imagined. Here, we explore how perceptual reality monitoring might be implemented in the brain. We first describe sensory and cognitive factors that could dissociate imagery and perception and conclude that no single factor unambiguously signals whether an experience is internally or externally generated. We suggest that reality monitoring is implemented by higher-level cortical circuits that evaluate first-order sensory and cognitive factors to determine the source of sensory signals. According to this interpretation, perceptual reality monitoring shares core computations with metacognition. This multi-level architecture might explain several types of source confusion as well as dissociations between simply knowing whether something is real and actually experiencing it as real. We discuss avenues for future research to further our understanding of perceptual reality monitoring, an endeavour that has important implications for our understanding of clinical symptoms as well as general cognitive function.
Collapse
Affiliation(s)
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom.
| | - Peter Kok
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom
| | - Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom; Max Planck UCL Centre for Computational Psychiatry and Aging Research, University College London, United Kingdom; Department of Experimental Psychology, University College London, United Kingdom
| |
Collapse
|
32
|
Eisenhauer S, Gagl B, Fiebach CJ. Predictive pre-activation of orthographic and lexical-semantic representations facilitates visual word recognition. Psychophysiology 2021; 59:e13970. [PMID: 34813664 DOI: 10.1111/psyp.13970] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 09/24/2021] [Accepted: 10/26/2021] [Indexed: 11/30/2022]
Abstract
To a crucial extent, the efficiency of reading results from the fact that visual word recognition is faster in predictive contexts. Predictive coding models suggest that this facilitation results from pre-activation of predictable stimulus features across multiple representational levels before stimulus onset. Still, it is not sufficiently understood which aspects of the rich set of linguistic representations that are activated during reading-visual, orthographic, phonological, and/or lexical-semantic-contribute to context-dependent facilitation. To investigate in detail which linguistic representations are pre-activated in a predictive context and how they affect subsequent stimulus processing, we combined a well-controlled repetition priming paradigm, including words and pseudowords (i.e., pronounceable nonwords), with behavioral and magnetoencephalography measurements. For statistical analysis, we used linear mixed modeling, which we found had a higher statistical power compared to conventional multivariate pattern decoding analysis. Behavioral data from 49 participants indicate that word predictability (i.e., context present vs. absent) facilitated orthographic and lexical-semantic, but not visual or phonological processes. Magnetoencephalography data from 38 participants show sustained activation of orthographic and lexical-semantic representations in the interval before processing the predicted stimulus, suggesting selective pre-activation at multiple levels of linguistic representation as proposed by predictive coding. However, we found more robust lexical-semantic representations when processing predictable in contrast to unpredictable letter strings, and pre-activation effects mainly resembled brain responses elicited when processing the expected letter string. This finding suggests that pre-activation did not result in "explaining away" predictable stimulus features, but rather in a "sharpening" of brain responses involved in word processing.
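For readers unfamiliar with the linear mixed modeling approach mentioned here, the sketch below shows what such a trial-level model can look like in Python with statsmodels. The data, column names, and formula are placeholders (the study's dependent variables were behavioral and MEG measures, and it may have used random-effects structures beyond a by-subject intercept, which statsmodels handles less directly than lme4).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic trial-level data standing in for the real dataset; the columns
# (subject, predictable, repetition, rt) are illustrative, not the study's variables.
rng = np.random.default_rng(0)
n_subj, n_trials = 20, 80
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "predictable": np.tile(np.repeat([0, 1], n_trials // 2), n_subj),
    "repetition": rng.integers(0, 2, n_subj * n_trials),
})
subj_intercept = rng.normal(0, 30, n_subj)[df["subject"]]
df["rt"] = (600 - 40 * df["predictable"] - 25 * df["repetition"]
            + subj_intercept + rng.normal(0, 60, len(df)))

# Random intercept for subject; predictability and repetition as fixed effects.
model = smf.mixedlm("rt ~ predictable * repetition", data=df, groups=df["subject"])
print(model.fit().summary())
```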
Collapse
Affiliation(s)
- Susanne Eisenhauer
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
| | - Benjamin Gagl
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany.,Center for Individual Development and Adaptive Education of Children at Risk (IDeA), Frankfurt am Main, Germany
| | - Christian J Fiebach
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany.,Center for Individual Development and Adaptive Education of Children at Risk (IDeA), Frankfurt am Main, Germany.,Brain Imaging Center, Goethe University Frankfurt, Frankfurt am Main, Germany
| |
Collapse
|
33
|
Schwartz R, Rozier C, Seidel Malkinson T, Lehongre K, Adam C, Lambrecq V, Navarro V, Naccache L, Axelrod V. Comparing stimulus-evoked and spontaneous response of the face-selective multi-units in the human posterior fusiform gyrus. Neurosci Conscious 2021; 2021:niab033. [PMID: 34667640 PMCID: PMC8520048 DOI: 10.1093/nc/niab033] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 08/03/2021] [Accepted: 09/02/2021] [Indexed: 11/23/2022] Open
Abstract
The stimulus-evoked neural response is a widely explored phenomenon. Conscious awareness is associated in many cases with the corresponding selective stimulus-evoked response. For example, conscious awareness of a face stimulus is associated with or accompanied by stimulus-evoked activity in the fusiform face area (FFA). In addition to the stimulus-evoked response, spontaneous (i.e. task-unrelated) activity in the brain is also abundant. Notably, spontaneous activity is considered unconscious. For example, spontaneous activity in the FFA is not associated with conscious awareness of a face. The question is: what is the difference at the neural level between stimulus-evoked activity in a case that this activity is associated with conscious awareness of some content (e.g. activity in the FFA in response to fully visible face stimuli) and spontaneous activity in that same region of the brain? To answer this question, in the present study, we had a rare opportunity to record two face-selective multi-units in the vicinity of the FFA in a human patient. We compared multi-unit face-selective task-evoked activity with spontaneous prestimulus and a resting-state activity. We found that when activity was examined over relatively long temporal windows (e.g. 100–200 ms), face-selective stimulus-evoked firing in the recorded multi-units was much higher than the spontaneous activity. In contrast, when activity was examined over relatively short windows, we found many cases of high firing rates within the spontaneous activity that were comparable to stimulus-evoked activity. Our results thus indicate that the sustained activity is what might differentiate between stimulus-evoked activity that is associated with conscious awareness and spontaneous activity.
Collapse
Affiliation(s)
- Rina Schwartz
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 52900, Israel
| | - Camille Rozier
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Tal Seidel Malkinson
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Katia Lehongre
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Claude Adam
- Neurology Department, AP-HP, GH Pitie-Salpêtrière-Charles Foix, Epilepsy Unit, 47-83 boulevard de l'Hôpital, Paris 75013, France
| | - Virginie Lambrecq
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Vincent Navarro
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Lionel Naccache
- Institut National de la Santé et de la Recherche Médicale Unité 1127, Centre National de la Recherche Scientifique Unité Mixte de Recherche (UMR) 7225, Université Pierre-et-Marie-Curie Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle Épinière ICM, Paris 75013, France
| | - Vadim Axelrod
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 52900, Israel
| |
Collapse
|
34
|
Lowe MX, Mohsenzadeh Y, Lahner B, Charest I, Oliva A, Teng S. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn Neuropsychol 2021; 38:468-489. [PMID: 35729704 PMCID: PMC10589059 DOI: 10.1080/02643294.2022.2085085] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Revised: 03/31/2022] [Accepted: 05/25/2022] [Indexed: 10/17/2022]
Abstract
How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.
Collapse
Affiliation(s)
- Matthew X. Lowe
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Unlimited Sciences, Colorado Springs, CO
| | - Yalda Mohsenzadeh
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- The Brain and Mind Institute, The University of Western Ontario, London, ON, Canada
- Department of Computer Science, The University of Western Ontario, London, ON, Canada
| | - Benjamin Lahner
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
| | - Ian Charest
- Département de Psychologie, Université de Montréal, Montréal, Québec, Canada
- Center for Human Brain Health, University of Birmingham, UK
| | - Aude Oliva
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
| | - Santani Teng
- Computer Science and Artificial Intelligence Lab (CSAIL), MIT, Cambridge, MA
- Smith-Kettlewell Eye Research Institute (SKERI), San Francisco, CA
| |
Collapse
|
35
|
Dijkstra N, van Gaal S, Geerligs L, Bosch SE, van Gerven MAJ. No Evidence for Neural Overlap between Unconsciously Processed and Imagined Stimuli. eNeuro 2021; 8:ENEURO.0228-21.2021. [PMID: 34593516 PMCID: PMC8577044 DOI: 10.1523/eneuro.0228-21.2021] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 11/23/2022] Open
Abstract
Visual representations can be generated via feedforward or feedback processes. The extent to which these processes result in overlapping representations remains unclear. Previous work has shown that imagined stimuli elicit similar representations as perceived stimuli throughout the visual cortex. However, while representations during imagery are indeed only caused by feedback processing, neural processing during perception is an interplay of both feedforward and feedback processing. This means that any representational overlap could be because of overlap in feedback processes. In the current study, we aimed to investigate this issue by characterizing the overlap between feedforward- and feedback-initiated category representations during imagined stimuli, conscious perception, and unconscious processing using fMRI in humans of either sex. While all three conditions elicited stimulus representations in left lateral occipital cortex (LOC), significant similarities were observed only between imagery and conscious perception in this area. Furthermore, connectivity analyses revealed stronger connectivity between frontal areas and left LOC during conscious perception and in imagery compared with unconscious processing. Together, these findings can be explained by the idea that long-range feedback modifies visual representations, thereby reducing representational overlap between purely feedforward- and feedback-initiated stimulus representations measured by fMRI. Neural representations influenced by feedback, either stimulus driven (perception) or purely internally driven (imagery), are, however, relatively similar.
Collapse
Affiliation(s)
- Nadine Dijkstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
| | - Simon van Gaal
- Department of Psychology, Brain & Cognition, University of Amsterdam, 1000 GG, Amsterdam, The Netherlands
| | - Linda Geerligs
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
| | - Sander E Bosch
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
| | - Marcel A J van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
| |
Collapse
|
36
|
Wischnewski M, Peelen MV. Causal neural mechanisms of context-based object recognition. eLife 2021; 10:69736. [PMID: 34374647 PMCID: PMC8354632 DOI: 10.7554/elife.69736] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Accepted: 07/26/2021] [Indexed: 12/05/2022] Open
Abstract
Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.
Collapse
Affiliation(s)
- Miles Wischnewski
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands.,Department of Biomedical Engineering, University of Minnesota, Minneapolis, United States
| | - Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
37
|
Lanfranco RC, Rivera-Rei Á, Huepe D, Ibáñez A, Canales-Johnson A. Beyond imagination: Hypnotic visual hallucination induces greater lateralised brain activity than visual mental imagery. Neuroimage 2021; 239:118282. [PMID: 34146711 DOI: 10.1016/j.neuroimage.2021.118282] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 06/10/2021] [Accepted: 06/16/2021] [Indexed: 01/17/2023] Open
Abstract
Hypnotic suggestions can produce a broad range of perceptual experiences, including hallucinations. Visual hypnotic hallucinations differ in many ways from regular mental images. For example, they are usually experienced as automatic, vivid, and real images, typically compromising the sense of reality. While both hypnotic hallucination and mental imagery are believed to mainly rely on the activation of the visual cortex via top-down mechanisms, it is unknown how they differ in the neural processes they engage. Here we used an adaptation paradigm to test and compare top-down processing between hypnotic hallucination, mental imagery, and visual perception in very highly hypnotisable individuals whose ability to hallucinate was assessed. By measuring the N170/VPP event-related complex and using multivariate decoding analysis, we found that hypnotic hallucination of faces involves greater top-down activation of sensory processing through lateralised neural mechanisms in the right hemisphere compared to mental imagery. Our findings suggest that the neural signatures that distinguish hypnotically hallucinated faces from imagined faces lie in the right brain hemisphere.
Collapse
Affiliation(s)
- Renzo C Lanfranco
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom; Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
| | - Álvaro Rivera-Rei
- Latin American Brain Health Institute (BrainLat) & Center for Social and Cognitive Neuroscience, Universidad Adolfo Ibáñez, Santiago, Chile
| | - David Huepe
- Latin American Brain Health Institute (BrainLat) & Center for Social and Cognitive Neuroscience, Universidad Adolfo Ibáñez, Santiago, Chile
| | - Agustín Ibáñez
- Latin American Brain Health Institute (BrainLat) & Center for Social and Cognitive Neuroscience, Universidad Adolfo Ibáñez, Santiago, Chile; Cognitive Neuroscience Center, Universidad de San Andrés, Buenos Aires, Argentina; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Global Brain Health Institute, University of California San Francisco, San Francisco, United States of America, and Trinity College Dublin, Dublin, Ireland
| | - Andrés Canales-Johnson
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom; Vicerrectoría de Investigación y Posgrado, Universidad Católica del Maule, Talca, Chile.
| |
Collapse
|
38
|
Yu Q, Postle BR. The Neural Codes Underlying Internally Generated Representations in Visual Working Memory. J Cogn Neurosci 2021; 33:1142-1157. [PMID: 34428785 PMCID: PMC8594925 DOI: 10.1162/jocn_a_01702] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Humans can construct rich subjective experience even when no information is available in the external world. Here, we investigated the neural representation of purely internally generated stimulus-like information during visual working memory. Participants performed delayed recall of oriented gratings embedded in noise with varying contrast during fMRI scanning. Their trialwise behavioral responses provided an estimate of their mental representation of the to-be-reported orientation. We used multivariate inverted encoding models to reconstruct the neural representations of orientation in reference to the response. We found that response orientation could be successfully reconstructed from activity in early visual cortex, even on 0% contrast trials when no orientation information was actually presented, suggesting the existence of a purely internally generated neural code in early visual cortex. In addition, cross-generalization and multidimensional scaling analyses demonstrated that information derived from internal sources was represented differently from typical working memory representations, which receive influences from both external and internal sources. Similar results were also observed in intraparietal sulcus, with slightly different cross-generalization patterns. These results suggest a potential mechanism for how externally driven and internally generated information is maintained in working memory.
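The inverted encoding model (IEM) used here first expresses each trial as a weighted sum of idealized orientation channels, estimates channel-to-voxel weights from training data, and then inverts that mapping to reconstruct channel responses for held-out data. The following NumPy sketch shows the standard two-step least-squares version; the half-rectified cosine basis raised to a power is common practice and an assumption here, not necessarily the paper's exact basis set.

```python
import numpy as np

def inverted_encoding_model(train_data, train_ori, test_data, n_chans=8):
    """Two-step IEM: estimate channel weights, then invert them to reconstruct
    orientation channel responses for held-out trials.

    train_data, test_data : (n_trials, n_voxels) activity patterns
    train_ori             : (n_trials,) orientations in degrees (0-180)
    """
    centers = np.arange(0, 180, 180 / n_chans)

    def basis(ori):
        d = (ori[:, None] - centers[None, :] + 90) % 180 - 90   # wrap to +/-90 deg
        return np.maximum(0, np.cos(np.deg2rad(d))) ** (n_chans - 1)

    C_train = basis(np.asarray(train_ori, dtype=float))
    # Encoding step: train_data ~ C_train @ W, solve for W (n_chans, n_voxels)
    W = np.linalg.lstsq(C_train, train_data, rcond=None)[0]
    # Inversion step: reconstruct channel responses for the test patterns
    C_test = np.linalg.lstsq(W.T, test_data.T, rcond=None)[0].T
    return C_test, centers
```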
Collapse
Affiliation(s)
- Qing Yu
- Chinese Academy of Sciences, Shanghai, China
| | | |
Collapse
|
39
|
Koenig-Robert R, Pearson J. Why do imagery and perception look and feel so different? Philos Trans R Soc Lond B Biol Sci 2021; 376:20190703. [PMID: 33308061 PMCID: PMC7741076 DOI: 10.1098/rstb.2019.0703] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/12/2020] [Indexed: 12/16/2022] Open
Abstract
Despite the past few decades of research providing convincing evidence of the similarities in function and neural mechanisms between imagery and perception, for most of us the experience of the two is undeniably different. Why? Here, we review and discuss the differences between imagery and perception and the possible underlying causes of these differences, from function to neural mechanisms. Specifically, we discuss the directional flow of information (top-down versus bottom-up), the differences in targeted cortical layers in primary visual cortex and possible different neural mechanisms of modulation versus excitation. For the first time in history, neuroscience is beginning to shed light on this long-held mystery of why imagery and perception look and feel so different. This article is part of the theme issue 'Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation'.
Collapse
Affiliation(s)
| | - Joel Pearson
- School of Psychology, The University of New South Wales, Sydney, Australia
| |
Collapse
|
40
|
Gurtner LM, Hartmann M, Mast FW. Eye movements during visual imagery and perception show spatial correspondence but have unique temporal signatures. Cognition 2021; 210:104597. [PMID: 33508576 DOI: 10.1016/j.cognition.2021.104597] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 01/07/2021] [Accepted: 01/08/2021] [Indexed: 11/20/2022]
Abstract
Eye fixation patterns during mental imagery are similar to those during perception of the same picture, suggesting that oculomotor mechanisms play a role in mental imagery (i.e., the "looking at nothing" effect). Previous research has focused on the spatial similarities of eye movements during perception and mental imagery. The primary aim of this study was to assess whether the spatial similarity translates to the temporal domain. We used recurrence quantification analysis (RQA) to assess the temporal structure of eye fixations in visual perception and mental imagery and we compared the temporal as well as the spatial characteristics in mental imagery with perception by means of Bayesian hierarchical regression models. We further investigated how person and picture-specific characteristics contribute to eye movement behavior in mental imagery. Working memory capacity and mental imagery abilities were assessed to either predict gaze dynamics in visual imagery or to moderate a possible correspondence between spatial or temporal gaze dynamics in perception and mental imagery. We were able to show the spatial similarity of fixations between visual perception and imagery and we provide first evidence for its moderation by working memory capacity. Interestingly, the temporal gaze dynamics in mental imagery were unrelated to those in perception and their variance between participants was not explained by variance in visuo-spatial working memory capacity or vividness of mental images. The semantic content of the imagined pictures was the only meaningful predictor of temporal gaze dynamics. The spatial correspondence reflects shared spatial structure of mental images and perceived pictures, while the unique temporal gaze behavior could be driven by generation, maintenance and protection processes specific to visual imagery. The unique temporal gaze dynamics offer a window to new insights into the genuine process of mental imagery independent of its similarity to perception.
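Recurrence quantification analysis (RQA) of gaze treats two fixations as recurrent when they land within a small distance of each other, builds a recurrence matrix over the fixation sequence, and summarizes its structure with measures such as the recurrence rate. The sketch below shows the most basic of these measures in Python; the pixel radius is an illustrative assumption, and the study reports additional RQA measures (e.g., determinism, laminarity) not computed here.

```python
import numpy as np

def fixation_recurrence(fix_xy, radius=64):
    """Recurrence matrix and recurrence rate for one trial's fixation sequence.

    fix_xy : (n_fixations, 2) fixation coordinates in pixels, in temporal order.
    radius : distance (pixels) under which two fixations count as recurrent;
             the value here is an illustrative assumption.
    """
    d = np.linalg.norm(fix_xy[:, None, :] - fix_xy[None, :, :], axis=-1)
    rec = d <= radius                          # boolean recurrence matrix
    iu = np.triu_indices(len(fix_xy), k=1)     # recurrences above the diagonal
    recurrence_rate = 100.0 * rec[iu].mean()   # REC, in percent
    return rec, recurrence_rate
```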
Collapse
Affiliation(s)
- Lilla M Gurtner
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland.
| | - Matthias Hartmann
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland; Faculty of Psychology, UniDistance Suisse, Überlandstrasse 12, 3900 Brig, Switzerland
| | - Fred W Mast
- Department of Psychology, University of Bern, Fabrikstrasse 8, 3012 Bern, Switzerland
| |
Collapse
|
41
|
Canales-Johnson A, Lanfranco RC, Morales JP, Martínez-Pernía D, Valdés J, Ezquerro-Nassar A, Rivera-Rei Á, Ibanez A, Chennu S, Bekinschtein TA, Huepe D, Noreika V. In your phase: neural phase synchronisation underlies visual imagery of faces. Sci Rep 2021; 11:2401. [PMID: 33504828 PMCID: PMC7840739 DOI: 10.1038/s41598-021-81336-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Accepted: 01/05/2021] [Indexed: 01/15/2023] Open
Abstract
Mental imagery is the process through which we retrieve and recombine information from our memory to elicit the subjective impression of “seeing with the mind’s eye”. In the social domain, we imagine other individuals while recalling our encounters with them or modelling alternative social interactions in future. Many studies using imaging and neurophysiological techniques have shown several similarities in brain activity between visual imagery and visual perception, and have identified frontoparietal, occipital and temporal neural components of visual imagery. However, the neural connectivity between these regions during visual imagery of socially relevant stimuli has not been studied. Here we used electroencephalography to investigate neural connectivity and its dynamics between frontal, parietal, occipital and temporal electrodes during visual imagery of faces. We found that voluntary visual imagery of faces is associated with long-range phase synchronisation in the gamma frequency range between frontoparietal electrode pairs and between occipitoparietal electrode pairs. In contrast, no effect of imagery was observed in the connectivity between occipitotemporal electrode pairs. Gamma range synchronisation between occipitoparietal electrode pairs predicted subjective ratings of the contour definition of imagined faces. Furthermore, we found that visual imagery of faces is associated with an increase of short-range frontal synchronisation in the theta frequency range, which temporally preceded the long-range increase in the gamma synchronisation. We speculate that the local frontal synchrony in the theta frequency range might be associated with an effortful top-down mnemonic reactivation of faces. In contrast, the long-range connectivity in the gamma frequency range along the fronto-parieto-occipital axis might be related to the endogenous binding and subjective clarity of facial visual features.
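Long-range phase synchronisation of the kind reported here is often indexed by the phase-locking value (PLV): the across-trial consistency of the phase difference between two electrodes in a narrow band. The Python sketch below is a generic PLV computation under assumed epoched inputs; the band edges are placeholders, and the study's exact synchrony metric and statistics may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def pairwise_plv(epochs, fs, band, ch_a, ch_b):
    """Across-trial phase-locking value between two electrodes in one band.

    epochs : (n_trials, n_channels, n_times) EEG epochs (e.g. imagery trials)
    band   : (lo, hi) in Hz, e.g. (30, 45) for a gamma range (illustrative).
    Returns PLV as a function of time, shape (n_times,).
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")
    filt = filtfilt(b, a, epochs, axis=-1)
    phase = np.angle(hilbert(filt, axis=-1))
    dphi = phase[:, ch_a, :] - phase[:, ch_b, :]     # (n_trials, n_times)
    return np.abs(np.exp(1j * dphi).mean(axis=0))    # phase consistency across trials
```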
Collapse
Affiliation(s)
- Andrés Canales-Johnson
- Department of Psychology, University of Cambridge, Downing Site, Cambridge, CB2 3EB, UK; Vicerrectoría de Investigación y Posgrado, Universidad Católica del Maule, Talca, Chile.
| | - Renzo C Lanfranco
- Department of Psychology, University of Edinburgh, Edinburgh, UK.,Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
| | - Juan Pablo Morales
- Facultad de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
| | | | - Joaquín Valdés
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile
| | | | | | - Agustín Ibanez
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile; Center for Social and Cognitive Neuroscience (CSCN), Latin American Institute of Brain Health (BrainLat), Universidad Adolfo Ibáñez, Santiago, Chile; National Scientific and Technical Research Council (CONICET), Buenos Aires, Argentina; Universidad Autónoma del Caribe, Barranquilla, Colombia; Cognitive Neuroscience Center (CNC), Universidad de San Andrés, Buenos Aires, Argentina; Global Brain Health Institute (GBHI), University of California San Francisco (UCSF), San Francisco, USA
| | - Srivas Chennu
- School of Computing, University of Kent, Chatham Maritime, UK.,Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
| | | | - David Huepe
- Escuela de Psicología, Universidad Adolfo Ibáñez, Santiago, Chile
| | - Valdas Noreika
- Department of Psychology, University of Cambridge, Downing Site, Cambridge, CB2 3EB, UK.,Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
| |
Collapse
|
42
|
Spagna A, Hajhajate D, Liu J, Bartolomeo P. Visual mental imagery engages the left fusiform gyrus, but not the early visual cortex: A meta-analysis of neuroimaging evidence. Neurosci Biobehav Rev 2021; 122:201-217. [PMID: 33422567 DOI: 10.1016/j.neubiorev.2020.12.029] [Citation(s) in RCA: 90] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Revised: 12/03/2020] [Accepted: 12/23/2020] [Indexed: 12/13/2022]
Abstract
The dominant neural model of visual mental imagery (VMI) stipulates that memories from the medial temporal lobe acquire sensory features in early visual areas. However, neurological patients with damage restricted to the occipital cortex typically show perfectly vivid VMI, while more anterior damages extending into the temporal lobe, especially in the left hemisphere, often cause VMI impairments. Here we present two major results reconciling neuroimaging findings in neurotypical subjects with the performance of brain-damaged patients: (1) A large-scale meta-analysis of 46 fMRI studies, of which 27 investigated specifically visual mental imagery, revealed that VMI engages fronto-parietal networks and a well-delimited region in the left fusiform gyrus. (2) A Bayesian analysis showed no evidence for imagery-related activity in early visual cortices. We propose a revised neural model of VMI that draws inspiration from recent cytoarchitectonic and lesion studies, whereby fronto-parietal networks initiate, modulate, and maintain activity in a core temporal network centered on the fusiform imagery node, a high-level visual region in the left fusiform gyrus.
Collapse
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Columbia University in the City of New York, NY, 10027, USA; Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, F-75013, Paris, France
| | - Dounia Hajhajate
- Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, F-75013, Paris, France
| | - Jianghao Liu
- Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, F-75013, Paris, France; Dassault Systèmes, Vélizy-Villacoublay, France
| | - Paolo Bartolomeo
- Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Paris Brain Institute, ICM, Hôpital de la Pitié-Salpêtrière, F-75013, Paris, France.
| |
Collapse
|
43
|
Zuure MB, Cohen MX. Narrowband multivariate source separation for semi-blind discovery of experiment contrasts. J Neurosci Methods 2020; 350:109063. [PMID: 33370560 DOI: 10.1016/j.jneumeth.2020.109063] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 11/29/2020] [Accepted: 12/22/2020] [Indexed: 11/16/2022]
Abstract
BACKGROUND Electrophysiological recordings contain mixtures of signals from distinct neural sources, impeding a straightforward interpretation of the sensor-level data. This mixing is particularly detrimental when distinct sources resonate in overlapping frequencies. Fortunately, the mixing is linear and instantaneous. Multivariate source separation methods may therefore successfully separate statistical sources, even with overlapping spatial distributions. NEW METHOD We demonstrate a feature-guided multivariate source separation method that is tuned to narrowband frequency content as well as binary condition differences. This method - comparison scanning generalized eigendecomposition, csGED - harnesses the covariance structure of multichannel data to find directions (i.e., eigenvectors) that maximally separate two subsets of data. To drive condition specificity and frequency specificity, our data subsets were taken from different task conditions and narrowband-filtered prior to applying GED. RESULTS To validate the method, we simulated MEG data in two conditions with shared noise characteristics and unique signal. csGED outperformed the best sensor at reconstructing the ground truth signals, even in the presence of large amounts of noise. We next applied csGED to a published empirical MEG dataset on visual perception vs. imagery. csGED identified sources in alpha, beta, and gamma bands, and successfully separated distinct networks in the same frequency band. COMPARISON WITH EXISTING METHOD(S) GED is a flexible feature-guided decomposition method that has previously successfully been applied. Our combined frequency- and condition-tuning is a novel adaptation that extends the power of GED in cognitive electrophysiology. CONCLUSIONS We demonstrate successful condition-specific source separation by applying csGED to simulated and empirical data.
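The core of csGED is a generalized eigendecomposition of two covariance matrices, each estimated from narrowband-filtered data of one condition: the eigenvector with the largest generalized eigenvalue is the spatial filter that maximizes power in condition A relative to condition B. A compact Python sketch of this step is given below; the filter order, shrinkage regularization, and input layout are assumptions, and the full method scans this procedure across many frequency bands.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def cs_ged(data_a, data_b, fs, band, shrink=0.01):
    """Comparison GED for one frequency band: spatial filters separating A from B.

    data_a, data_b : (n_channels, n_times) concatenated data from each condition
    band           : (lo, hi) in Hz; shrink regularizes the reference covariance.
    Returns eigenvalues (descending) and spatial filters (columns).
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="band")

    def cov(x):
        xf = filtfilt(b, a, x, axis=-1)
        xf = xf - xf.mean(axis=-1, keepdims=True)
        return xf @ xf.T / xf.shape[-1]

    S = cov(data_a)                          # "signal" covariance: condition A
    R = cov(data_b)                          # "reference" covariance: condition B
    R = R + shrink * np.mean(np.diag(R)) * np.eye(R.shape[0])
    evals, evecs = eigh(S, R)                # generalized eigendecomposition
    order = np.argsort(evals)[::-1]          # largest A-to-B power ratio first
    return evals[order], evecs[:, order]

# The top component's time course is filters[:, 0] @ (band-passed channel data).
```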
Affiliation(s)
- Marrit B Zuure
- Radboud University, Donders Centre for Neuroscience, the Netherlands
| | - Michael X Cohen
- Radboud University, Donders Centre for Neuroscience, the Netherlands; Radboud University Medical Center, Donders Centre for Medical Neuroscience, the Netherlands.
| |
|
44
|
Feuerriegel D, Blom T, Hogendoorn H. Predictive activation of sensory representations as a source of evidence in perceptual decision-making. Cortex 2020; 136:140-146. [PMID: 33461733 DOI: 10.1016/j.cortex.2020.12.008] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Revised: 10/15/2020] [Accepted: 12/13/2020] [Indexed: 11/29/2022]
Abstract
Our brains can represent expected future states of our sensory environment. Recent work has shown that, when we expect a specific stimulus to appear at a specific time, we can predictively generate neural representations of that stimulus even before it is physically presented. These observations raise two exciting questions: Are pre-activated sensory representations used for perceptual decision-making? And, do we transiently perceive an expected stimulus that does not actually appear? To address these questions, we propose that pre-activated neural representations provide sensory evidence that is used for perceptual decision-making. This can be understood within the framework of the Diffusion Decision Model as an early accumulation of decision evidence in favour of the expected percept. Our proposal makes novel predictions relating to expectation effects on neural markers of decision evidence accumulation, and also provides an explanation for why we sometimes perceive stimuli that are expected, but do not appear.
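The proposal lends itself to a simple simulation: if predictive pre-activation acts as decision evidence, it can be modelled as a head start toward the expected boundary in a Diffusion Decision Model accumulator. The sketch below is one hedged way to express that idea; the head-start mechanism and all parameter values are assumptions for illustration, not the authors' model.

```python
# Minimal sketch: predictive pre-activation modelled as evidence accumulated
# before stimulus onset in a two-boundary Diffusion Decision Model. Parameter
# values and the head-start mechanism are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def simulate_ddm(drift, head_start=0.0, boundary=1.0, noise=1.0,
                 dt=0.001, max_t=3.0):
    """Return (choice, reaction time) for one trial of a two-boundary DDM."""
    x, t = head_start, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= boundary else 0), t

def mean_rt_and_acc(head_start, n=2000, drift=1.0):
    out = np.array([simulate_ddm(drift, head_start) for _ in range(n)])
    return out[:, 1].mean(), out[:, 0].mean()

rt_neutral, acc_neutral = mean_rt_and_acc(head_start=0.0)
rt_expected, acc_expected = mean_rt_and_acc(head_start=0.3)  # pre-activated evidence
print(f"neutral : RT={rt_neutral:.3f}s  choice-rate={acc_neutral:.2f}")
print(f"expected: RT={rt_expected:.3f}s  choice-rate={acc_expected:.2f}")
```

Under these toy settings, trials with a head start terminate earlier and more often at the expected boundary, which is the qualitative pattern the proposal predicts for expected stimuli.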
Affiliation(s)
- Daniel Feuerriegel
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia.
| | - Tessel Blom
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia
| | - Hinze Hogendoorn
- Melbourne School of Psychological Sciences, The University of Melbourne, Australia
| |
|
45
|
Dijkstra N, Ambrogioni L, Vidaurre D, van Gerven M. Neural dynamics of perceptual inference and its reversal during imagery. eLife 2020; 9:e53588. [PMID: 32686645 PMCID: PMC7371419 DOI: 10.7554/elife.53588] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Accepted: 06/30/2020] [Indexed: 12/27/2022] Open
Abstract
After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.
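One standard way to look for the kind of reversal described here is a temporal-generalization analysis, in which a classifier trained at one time point is tested at all others. The sketch below illustrates that generic analysis on simulated data; it is not the authors' pipeline, and the data shapes and classifier choice are placeholders.

```python
# Illustrative temporal-generalization sketch: train a decoder at each time
# point and test it at every other time point. Reversed processing would show
# up as systematic off-diagonal structure. Data are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 100, 30, 40          # hypothetical MEG epochs
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                    # two stimulus classes

train, test = np.arange(0, 70), np.arange(70, 100)  # simple split for the sketch
tg = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        tg[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

# Compare decoding on the diagonal (same train/test time) with generalization
# to other time points; with random data both hover around chance (0.5).
off_diag = tg[~np.eye(n_times, dtype=bool)]
print("diagonal decoding    :", np.diag(tg).mean().round(2))
print("off-diagonal decoding:", off_diag.mean().round(2))
```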
Affiliation(s)
- Nadine Dijkstra
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
| | - Luca Ambrogioni
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| | - Diego Vidaurre
- Oxford Centre for Human Brain Activity, Oxford University, Oxford, United Kingdom
- Department of Clinical Health, Aarhus University, Aarhus, Denmark
| | - Marcel van Gerven
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| |
|
46
|
Xie S, Kaiser D, Cichy RM. Visual Imagery and Perception Share Neural Representations in the Alpha Frequency Band. Curr Biol 2020; 30:2621-2627.e5. [PMID: 32531274 PMCID: PMC7342016 DOI: 10.1016/j.cub.2020.04.074] [Citation(s) in RCA: 61] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Revised: 04/06/2020] [Accepted: 04/27/2020] [Indexed: 11/21/2022]
Abstract
To behave adaptively with sufficient flexibility, biological organisms must cognize beyond immediate reaction to a physically present stimulus. For this, humans use visual mental imagery [1, 2], the ability to conjure up a vivid internal experience from memory that stands in for the percept of the stimulus. Visually imagined contents subjectively mimic perceived contents, suggesting that imagery and perception share common neural mechanisms. Using multivariate pattern analysis on human electroencephalography (EEG) data, we compared the oscillatory time courses of mental imagery and perception of objects. We found that representations shared between imagery and perception emerged specifically in the alpha frequency band. These representations were present in posterior, but not anterior, electrodes, suggesting an origin in parieto-occipital cortex. Comparison of the shared representations to computational models using representational similarity analysis revealed a relationship to later layers of deep neural networks trained on object representations, but not auditory or semantic models, suggesting representations of complex visual features as the basis of commonality. Together, our results identify and characterize alpha oscillations as a cortical signature of representations shared between visual mental imagery and perception. Highlights: perception and imagery share neural representations in the alpha frequency band; shared representations stem from parieto-occipital sources; modeling suggests contents of shared representations are complex visual features.
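The model comparison mentioned in the abstract rests on representational similarity analysis. As a rough illustration of that step, the sketch below correlates a neural representational dissimilarity matrix (RDM) with a model RDM; the patterns are random placeholders, and the distance and correlation choices are assumptions rather than the authors' exact settings.

```python
# Minimal RSA sketch: build RDMs from object-wise response patterns and
# rank-correlate them. All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_objects = 12

# Stand-ins: object-wise response patterns (objects x features).
neural_patterns = rng.standard_normal((n_objects, 64))   # e.g. alpha-band EEG patterns
model_patterns = rng.standard_normal((n_objects, 256))   # e.g. one DNN layer's activations

# RDMs as condensed vectors of pairwise correlation distances.
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_patterns, metric="correlation")

# Rank correlation between RDMs quantifies representational similarity;
# repeating this across layers or models yields the comparison described above.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"neural-model RDM correlation: rho={rho:.3f}, p={p:.3f}")
```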
Affiliation(s)
- Siying Xie
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany.
| | - Daniel Kaiser
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin 10099, Germany; Bernstein Centre for Computational Neuroscience Berlin, Humboldt-Universität zu Berlin, Unter den Linden 6, Berlin 10099, Germany.
| |
|
47
|
Generative Feedback Explains Distinct Brain Activity Codes for Seen and Mental Images. Curr Biol 2020; 30:2211-2224.e6. [DOI: 10.1016/j.cub.2020.04.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2019] [Revised: 02/03/2020] [Accepted: 04/06/2020] [Indexed: 11/21/2022]
|
48
|
Bone MB, Ahmad F, Buchsbaum BR. Feature-specific neural reactivation during episodic memory. Nat Commun 2020; 11:1945. [PMID: 32327642 PMCID: PMC7181630 DOI: 10.1038/s41467-020-15763-2] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Accepted: 03/12/2020] [Indexed: 12/04/2022] Open
Abstract
We present a multi-voxel analytical approach, feature-specific informational connectivity (FSIC), that leverages hierarchical representations from a neural network to decode neural reactivation in fMRI data collected while participants performed an episodic visual recall task. We show that neural reactivation associated with low-level (e.g. edges), high-level (e.g. facial features), and semantic (e.g. “terrier”) features occurs throughout the dorsal and ventral visual streams and extends into the frontal cortex. Moreover, we show that reactivation of both low- and high-level features correlates with the vividness of the memory, whereas only reactivation of low-level features correlates with recognition accuracy when the lure and target images are semantically similar. In addition to demonstrating the utility of FSIC for mapping feature-specific reactivation, these findings resolve the contributions of low- and high-level features to the vividness of visual memories and challenge a strict interpretation of the posterior-to-anterior visual hierarchy. Memory recollection involves reactivation of neural activity that occurred during the recalled experience. Here, the authors show that neural reactivation can be decomposed into visual-semantic features, is widely synchronized throughout the brain, and predicts memory vividness and accuracy.
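The connectivity idea behind FSIC can be approximated, in a generic form, by correlating trial-by-trial decoding evidence for the same feature across two regions. The sketch below shows that generic informational-connectivity computation on simulated data; it is not the published FSIC pipeline, and the ROI patterns, classifier, and evidence measure are assumptions.

```python
# Illustrative sketch of informational connectivity: correlate trial-by-trial
# classifier evidence for the same feature across two ROIs. This is a generic
# reconstruction on placeholder data, not the authors' FSIC method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_trials = 200
y = rng.integers(0, 2, n_trials)                   # feature present/absent per trial

# Hypothetical voxel patterns from two ROIs, weakly carrying the same signal.
signal = y[:, None].astype(float)
roi_a = 0.5 * signal + rng.standard_normal((n_trials, 50))
roi_b = 0.5 * signal + rng.standard_normal((n_trials, 80))

def trialwise_evidence(X, y):
    """Cross-validated classifier probability assigned to the true class, per trial."""
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")
    return proba[np.arange(len(y)), y]

ev_a = trialwise_evidence(roi_a, y)
ev_b = trialwise_evidence(roi_b, y)

# Shared trial-by-trial fluctuations in feature evidence between the regions.
r, p = pearsonr(ev_a, ev_b)
print(f"informational connectivity between ROIs: r={r:.3f}, p={p:.3g}")
```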
Affiliation(s)
- Michael B Bone
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada.
| | - Fahad Ahmad
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada
| | - Bradley R Buchsbaum
- Rotman Research Institute at Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, M5S 1A1, Canada
| |
|
49
|
Fan X, Wang F, Shao H, Zhang P, He S. The bottom-up and top-down processing of faces in the human occipitotemporal cortex. eLife 2020; 9:48764. [PMID: 31934855 PMCID: PMC7000216 DOI: 10.7554/elife.48764] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Accepted: 01/10/2020] [Indexed: 01/07/2023] Open
Abstract
Although face processing has been studied extensively, the dynamics of how face-selective cortical areas are engaged remain unclear. Here, we uncovered the timing of activation in core face-selective regions using functional Magnetic Resonance Imaging and Magnetoencephalography in humans. Processing of normal faces started in the posterior occipital areas and then proceeded to anterior regions. This bottom-up processing sequence was also observed when internal facial features were misarranged. However, processing of two-tone Mooney faces lacking explicit prototypical facial features engaged top-down projection from the right posterior fusiform face area to the right occipital face area. Further, face-specific responses elicited by contextual cues alone emerged simultaneously in the right ventral face-selective regions, suggesting parallel contextual facilitation. Together, our findings chronicle the precise timing of bottom-up, top-down, and context-facilitated processing sequences in the occipital-temporal face network, highlighting the importance of top-down operations, especially when faced with incomplete or ambiguous input.
Affiliation(s)
- Xiaoxu Fan
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Fan Wang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Hanyu Shao
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
| | - Peng Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Minnesota, Minneapolis, United States
| |
|
50
|
A Neural Chronometry of Memory Recall. Trends Cogn Sci 2019; 23:1071-1085. [DOI: 10.1016/j.tics.2019.09.011] [Citation(s) in RCA: 58] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 09/13/2019] [Accepted: 09/25/2019] [Indexed: 12/23/2022]
|