1. Linde-Domingo J, Kerrén C. Evolving Engrams Demand Changes in Effective Cues. Hippocampus 2025; 35:e70015. PMID: 40331490; PMCID: PMC12056888; DOI: 10.1002/hipo.70015.
Abstract
A longstanding principle in episodic memory research, known as the encoding specificity hypothesis, holds that an effective retrieval cue should closely match the original encoding conditions. This principle assumes that a successful retrieval cue remains static over time. Despite the broad acceptance of this idea, it conflicts with one of the most well-established findings in memory research: the dynamic, ever-changing nature of episodic memories. In this article, we propose that the most effective retrieval cue should engage with the current state of the memory, which may have shifted significantly since encoding. By redefining the criteria for successful recall, we challenge a core principle of the field and open new avenues for exploring memory accessibility, offering fresh insights into both theoretical and applied domains.
Affiliation(s)
- Juan Linde‐Domingo
- Department of Experimental Psychology, University of Granada, Granada, Spain
- Mind, Brain and Behavior Research Center, University of Granada, Granada, Spain
- Casper Kerrén
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
2. Woodry R, Curtis CE, Winawer J. Feedback scales the spatial tuning of cortical responses during both visual working memory and long-term memory. J Neurosci 2025; 45:e0681242025. PMID: 40086873; PMCID: PMC12019112; DOI: 10.1523/jneurosci.0681-24.2025.
Abstract
Perception, working memory, and long-term memory each evoke neural responses in visual cortex. While previous neuroimaging research on the role of visual cortex in memory has largely emphasized similarities between perception and memory, we hypothesized that responses in visual cortex would differ depending on the origins of the inputs. Using fMRI, we quantified spatial tuning in visual cortex while participants (both sexes) viewed, maintained in working memory, or retrieved from long-term memory a peripheral target. In each condition, BOLD responses were spatially tuned and aligned with the target's polar angle in all measured visual field maps including V1. As expected given the increasing sizes of receptive fields, polar angle tuning during perception increased in width up the visual hierarchy from V1 to V2, V3, hV4, and beyond. In stark contrast, the tuned responses were broad across the visual hierarchy during long-term memory (replicating a prior result) and during working memory. This pattern is consistent with the idea that mnemonic responses in V1 stem from top-down sources, even when the stimulus was recently viewed and is held in working memory. Moreover, in long-term memory, trial-to-trial biases in these tuned responses (clockwise or counterclockwise of target) predicted matched biases in memory, suggesting that the reinstated cortical responses influence memory-guided behavior. We conclude that feedback widens spatial tuning in visual cortex during memory, where earlier visual maps inherit broader tuning from later maps, thereby impacting the precision of memory.
Significance Statement: We demonstrate that remembering a visual stimulus evokes responses in visual cortex that differ in spatial extent compared to seeing the same stimulus. Perception evokes tuned responses in early visual areas that increase in size up the visual hierarchy. Prior work showed that feedback inputs associated with long-term memory originate from later visual areas with larger receptive fields, resulting in uniformly wide spatial tuning even in primary visual cortex. We replicate these results and show that the same pattern holds when maintaining in working memory a recently viewed stimulus. That trial-to-trial difficulty is reflected in the accuracy and precision of these representations suggests that visual cortex is flexibly used for processing visuospatial information, regardless of where that information originates.
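As a concrete illustration of what "tuning width" means here, the short sketch below fits a von Mises function to a simulated polar-angle response profile and reports its full width at half maximum; the data, parameter values, and the choice of fitting approach are assumptions for illustration, not the study's analysis.

```python
# Illustrative sketch of one way to quantify polar-angle tuning width (not the authors'
# pipeline): fit a von Mises function to a response profile over polar angle and report
# its full width at half maximum (FWHM). Data and parameter values are simulated.
import numpy as np
from scipy.optimize import curve_fit

def von_mises(theta, amp, mu, kappa, baseline):
    return baseline + amp * np.exp(kappa * (np.cos(theta - mu) - 1.0))

angles = np.linspace(-np.pi, np.pi, 32)                 # polar-angle distance from target
rng = np.random.default_rng(0)
responses = von_mises(angles, 1.0, 0.0, 3.0, 0.1)       # simulated tuned "BOLD" profile
responses += 0.1 * rng.normal(size=angles.size)         # measurement noise

params, _ = curve_fit(von_mises, angles, responses, p0=[1.0, 0.0, 2.0, 0.0])
amp, mu, kappa, baseline = params

# FWHM of the baseline-subtracted tuning curve: solve exp(kappa*(cos(d)-1)) = 0.5 for d
half_width = np.arccos(1.0 + np.log(0.5) / kappa)
print(f"estimated tuning width (FWHM): {np.degrees(2 * half_width):.1f} deg")
```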
Affiliation(s)
- Robert Woodry
- Department of Psychology, New York University, New York City, New York 10003
- Clayton E. Curtis
- Department of Psychology, New York University, New York City, New York 10003
- Center for Neural Science, New York University, New York City, New York 10003
- Jonathan Winawer
- Department of Psychology, New York University, New York City, New York 10003
- Center for Neural Science, New York University, New York City, New York 10003
3. Rau EMB, Fellner MC, Heinen R, Zhang H, Yin Q, Vahidi P, Kobelt M, Asano E, Kim-McManus O, Sattar S, Lin JJ, Auguste KI, Chang EF, King-Stephens D, Weber PB, Laxer KD, Knight RT, Johnson EL, Ofen N, Axmacher N. Reinstatement and transformation of memory traces for recognition. Sci Adv 2025; 11:eadp9336. PMID: 39970226; PMCID: PMC11838014; DOI: 10.1126/sciadv.adp9336.
Abstract
Episodic memory relies on the formation and retrieval of content-specific memory traces. In addition to their veridical reactivation, previous studies have indicated that traces may undergo substantial transformations. However, the exact time course and regional distribution of reinstatement and transformation during recognition memory have remained unclear. We applied representational similarity analysis to human intracranial electroencephalography to track the spatiotemporal dynamics underlying the reinstatement and transformation of memory traces. Specifically, we examined how reinstatement and transformation of item-specific representations across occipital, ventral visual, and lateral parietal cortices contribute to successful memory formation and recognition. Our findings suggest that reinstatement in temporal cortex and transformation in parietal cortex coexist and provide complementary strategies for recognition. Further, we find that generalization and differentiation of neural representations contribute to memory and probe memory-specific correspondence with deep neural network (DNN) model features. Our results suggest that memory formation is particularly supported by generalized and mnemonic representational formats beyond the visual features of a DNN.
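As an aside for readers unfamiliar with the core technique named above, the sketch below shows a minimal representational similarity analysis: build representational dissimilarity matrices (RDMs) from two sets of item-wise patterns and rank-correlate them. The arrays, sizes, and variable names are placeholder assumptions for illustration, not the study's data or analysis code.

```python
# Minimal RSA sketch (illustrative only): correlate the representational geometry
# of hypothetical neural patterns with that of hypothetical DNN features.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
neural_patterns = rng.normal(size=(40, 64))   # placeholder: 40 items x 64 recording channels
dnn_features = rng.normal(size=(40, 512))     # placeholder: 40 items x 512 DNN units

# Representational dissimilarity matrices: pairwise correlation distance between items
neural_rdm = pdist(neural_patterns, metric="correlation")
dnn_rdm = pdist(dnn_features, metric="correlation")

# Compare the two geometries with a rank correlation; higher rho = closer correspondence
rho, p = spearmanr(neural_rdm, dnn_rdm)
print(f"RSA correspondence: rho = {rho:.3f}, p = {p:.3f}")
```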
Affiliation(s)
- Elias M. B. Rau
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Marie-Christin Fellner
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Rebekka Heinen
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Hui Zhang
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Qin Yin
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Parisa Vahidi
- Life-Span Cognitive Neuroscience Program, Institute of Gerontology, Wayne State University, Detroit, MI, USA
- Department of Psychology, College of Liberal Arts and Sciences, Wayne State University, Detroit, MI, USA
- Malte Kobelt
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
- Eishi Asano
- Departments of Pediatrics and Neurology, Children’s Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, MI, USA
- Olivia Kim-McManus
- Department of Neurosciences, University of California, San Diego, San Diego, CA, USA
- Division of Child Neurology, Rady Children’s Hospital, San Diego, CA, USA
- Shifteh Sattar
- Division of Child Neurology, Rady Children’s Hospital, San Diego, CA, USA
- Jack J. Lin
- Department of Neurology, University of California, Davis, Davis, CA, USA
- Kurtis I. Auguste
- Department of Pediatric Neurosurgery, Benioff Children's Hospital, Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, CA, USA
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Edward F. Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- David King-Stephens
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, USA
- Department of Neurology, Yale School of Medicine, New Haven, CT, USA
- Peter B. Weber
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, USA
- Kenneth D. Laxer
- Department of Neurology and Neurosurgery, California Pacific Medical Center, San Francisco, CA, USA
- Robert T. Knight
- Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Elizabeth L. Johnson
- Departments of Medical Social Sciences and Pediatrics, Northwestern University, Chicago, IL, USA
- Department of Psychology, Northwestern University, Evanston, IL, USA
- Noa Ofen
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Life-Span Cognitive Neuroscience Program, Institute of Gerontology, Wayne State University, Detroit, MI, USA
- Department of Psychology, School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX, USA
- Nikolai Axmacher
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
4. Woodry R, Curtis CE, Winawer J. Feedback scales the spatial tuning of cortical responses during both visual working memory and long-term memory. bioRxiv [Preprint] 2024:2024.04.11.589111. PMID: 38659957; PMCID: PMC11042180; DOI: 10.1101/2024.04.11.589111.
Abstract
Perception, working memory, and long-term memory each evoke neural responses in visual cortex. While previous neuroimaging research on the role of visual cortex in memory has largely emphasized similarities between perception and memory, we hypothesized that responses in visual cortex would differ depending on the origins of the inputs. Using fMRI, we quantified spatial tuning in visual cortex while participants (both sexes) viewed, maintained in working memory, or retrieved from long-term memory a peripheral target. In each condition, BOLD responses were spatially tuned and aligned with the target's polar angle in all measured visual field maps including V1. As expected given the increasing sizes of receptive fields, polar angle tuning during perception increased in width up the visual hierarchy from V1 to V2, V3, hV4, and beyond. In stark contrast, the tuned responses were broad across the visual hierarchy during long-term memory (replicating a prior result) and during working memory. This pattern is consistent with the idea that mnemonic responses in V1 stem from top-down sources, even when the stimulus was recently viewed and is held in working memory. Moreover, in long-term memory, trial-to-trial biases in these tuned responses (clockwise or counterclockwise of target) predicted matched biases in memory, suggesting that the reinstated cortical responses influence memory-guided behavior. We conclude that feedback widens spatial tuning in visual cortex during memory, where earlier visual maps inherit broader tuning from later maps, thereby impacting the precision of memory.
Affiliation(s)
- Robert Woodry
- Department of Psychology, New York University, New York City, NY 10003
- Clayton E. Curtis
- Department of Psychology, New York University, New York City, NY 10003
- Center for Neural Science, New York University, New York City, NY 10003
- Jonathan Winawer
- Department of Psychology, New York University, New York City, NY 10003
- Center for Neural Science, New York University, New York City, NY 10003
5. Qi H, Liu C. Metacontrol Regulates Creative Thinking: An EEG Complexity Analysis Based on Multiscale Entropy. Brain Sci 2024; 14:1094. PMID: 39595857; PMCID: PMC11592368; DOI: 10.3390/brainsci14111094.
Abstract
Previous studies have shown that creative thinking is associated with metacontrol, but the neural basis of this relationship is unknown. The present study explored the neural basis of both by assessing EEG complexity through multiscale entropy. Subjects completed a metacontrol task and an Alternative Uses Task, were grouped according to task performance, and their EEG was analysed with multiscale entropy. The results showed that, at high time scales, EEG complexity was significantly higher in the high-metacontrol and high-creativity groups than in the low-metacontrol and low-creativity groups, respectively. The metacontrol adaptability score and the Alternative Uses Task score were significantly and positively correlated with EEG complexity at multiple electrode sites. These results suggest that metacontrol and creativity depend on the activation of long-duration neural networks.
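Multiscale entropy, the EEG complexity measure used in this study, coarse-grains a signal at increasing time scales and computes sample entropy at each scale. The sketch below illustrates the computation on synthetic data; the parameter choices (m = 2, r = 0.15 x SD, five scales) are illustrative assumptions rather than the paper's settings.

```python
# Minimal multiscale entropy (MSE) sketch on synthetic data (illustrative only):
# coarse-grain the signal at increasing time scales and compute sample entropy at each.
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn = -ln(A/B): B counts template matches of length m, A of length m+1."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * np.std(x)          # tolerance; 0.15*SD is an assumed, typical choice
    n = len(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= r) - 1                           # exclude self-match
        return count

    a, b = count_matches(m + 1), count_matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(signal, max_scale=5):
    """Average non-overlapping windows of length `scale`, then take SampEn of the result."""
    out = []
    for scale in range(1, max_scale + 1):
        n = len(signal) // scale
        coarse = signal[:n * scale].reshape(n, scale).mean(axis=1)
        out.append(sample_entropy(coarse))
    return np.array(out)

rng = np.random.default_rng(1)
eeg_like = rng.normal(size=2000)      # stand-in for a single EEG channel
print(multiscale_entropy(eeg_like))   # values that stay high at larger scales = more complexity
```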
Affiliation(s)
- Chunlei Liu
- School of Psychology, Qufu Normal University, Qufu 273165, China;
6. Douville CO. Reality and imagination intertwined: A sensorimotor paradox interpretation. Biosystems 2024; 246:105350. PMID: 39433120; DOI: 10.1016/j.biosystems.2024.105350.
Abstract
As a hypothesis on the origins of mind and language, the evolutionary theory of the sensorimotor paradox suggests that capacities for imagination, self-representation and abstraction would operate from a dissociation in what is known as the forward model. In some studies, sensory perception is understood as a system of prediction and confirmation (feedforward and feedback processes) that would share common yet distinct and overlapping neural networks with mental imagery. The latter would then mostly operate through internal feedback processes. The hypothesis of our theory is that dissociation and parallelism between those processes would make it less likely for imaginary prediction to match and simultaneously coincide with any sensory feedback, contradicting the stimulus/response pattern. The gap between the two and the effort required to maintain this gap, born from the development of bipedal stance and a radical change to our relation to our own hands, would be the very structural foundation of our capacity to elaborate abstract thoughts, by partially blocking and inhibiting motor action. Mental imagery would structurally be dissociated from perception, though maintaining an intricate relation of interdependence. Moreover, the content of the images would be subordinate to their function as emotional regulators, prioritising consistency with some global, conditional and socially learnt body-image. As a higher-level and proto-aesthetic function, we can speculate that the action and instrumentalisation of dissociating imagination from perception would become the actual prediction and their coordination, the expected feedback.
7. Pacheco-Estefan D, Fellner MC, Kunz L, Zhang H, Reinacher P, Roy C, Brandt A, Schulze-Bonhage A, Yang L, Wang S, Liu J, Xue G, Axmacher N. Maintenance and transformation of representational formats during working memory prioritization. Nat Commun 2024; 15:8234. PMID: 39300141; DOI: 10.1038/s41467-024-52541-w.
Abstract
Visual working memory (VWM) depends both on material-specific brain areas in the ventral visual stream (VVS) that support the maintenance of stimulus representations and on regions in the prefrontal cortex (PFC) that control these representations. How executive control prioritizes working memory contents and whether this affects their representational formats remains an open question, however. Here, we analyzed intracranial EEG (iEEG) recordings in epilepsy patients with electrodes in VVS and PFC who performed a multi-item working memory task involving a retro-cue. We employed Representational Similarity Analysis (RSA) with various Deep Neural Network (DNN) architectures to investigate the representational format of prioritized VWM content. While recurrent DNN representations matched PFC representations in the beta band (15-29 Hz) following the retro-cue, they corresponded to VVS representations in a lower frequency range (3-14 Hz) towards the end of the maintenance period. Our findings highlight the distinct coding schemes and representational formats of prioritized content in VVS and PFC.
Affiliation(s)
- Daniel Pacheco-Estefan
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801, Bochum, Germany.
- Marie-Christin Fellner
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801, Bochum, Germany
- Lukas Kunz
- Department of Epileptology, University Hospital Bonn, Bonn, Germany
- Hui Zhang
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801, Bochum, Germany
- Peter Reinacher
- Department of Stereotactic and Functional Neurosurgery, Medical Center - Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Fraunhofer Institute for Laser Technology, Aachen, Germany
- Charlotte Roy
- Epilepsy Center, Medical Center - Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Armin Brandt
- Epilepsy Center, Medical Center - Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Andreas Schulze-Bonhage
- Epilepsy Center, Medical Center - Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Linglin Yang
- Department of Psychiatry, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Shuang Wang
- Department of Neurology, Epilepsy Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Jing Liu
- Department of Applied Social Sciences, The Hong Kong Polytechnic University, Hong Kong, Hong Kong SAR
- Gui Xue
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, PR China
- Nikolai Axmacher
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, 44801, Bochum, Germany
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, PR China
8. Huang J, Wang T, Dai W, Li Y, Yang Y, Zhang Y, Wu Y, Zhou T, Xing D. Neuronal representation of visual working memory content in the primate primary visual cortex. Sci Adv 2024; 10:eadk3953. PMID: 38875332; PMCID: PMC11177929; DOI: 10.1126/sciadv.adk3953.
Abstract
The human ability to perceive vivid memories as if they "float" before our eyes, even in the absence of actual visual stimuli, captivates the imagination. To determine the neural substrates underlying visual memories, we investigated the neuronal representation of working memory content in the primary visual cortex of monkeys. Our study revealed that neurons exhibit unique responses to different memory contents, using firing patterns distinct from those observed during the perception of external visual stimuli. Moreover, this neuronal representation evolves with alterations in the recalled content and extends beyond the retinotopic areas typically reserved for processing external visual input. These discoveries shed light on the visual encoding of memories and indicate avenues for understanding the remarkable power of the mind's eye.
Affiliation(s)
- Jiancao Huang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Tian Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- College of Life Sciences, Beijing Normal University, Beijing 100875, China
- Weifeng Dai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yang Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yi Yang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yange Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yujie Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Tingting Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Dajun Xing
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
9. Ahn S, Adeli H, Zelinsky GJ. The attentive reconstruction of objects facilitates robust object recognition. PLoS Comput Biol 2024; 20:e1012159. PMID: 38870125; PMCID: PMC11175536; DOI: 10.1371/journal.pcbi.1012159.
Abstract
Humans are extremely robust in our ability to perceive and recognize objects: we see faces in tea stains and can recognize friends on dark streets. Yet, neurocomputational models of primate object recognition have focused on the initial feed-forward pass of processing through the ventral stream and less on the top-down feedback that likely underlies robust object perception and recognition. Aligned with the generative approach, we propose that the visual system actively facilitates recognition by reconstructing the object hypothesized to be in the image. Top-down attention then uses this reconstruction as a template to bias feedforward processing to align with the most plausible object hypothesis. Building on auto-encoder neural networks, our model makes detailed hypotheses about the appearance and location of the candidate objects in the image by reconstructing a complete object representation from potentially incomplete visual input due to noise and occlusion. The model then leverages the best object reconstruction, measured by reconstruction error, to direct the bottom-up process of selectively routing low-level features, a top-down biasing that captures a core function of attention. We evaluated our model using the MNIST-C (handwritten digits under corruptions) and ImageNet-C (real-world objects under corruptions) datasets. Not only did our model achieve superior performance on these challenging tasks designed to approximate real-world noise and occlusion viewing conditions, but it also better accounted for human behavioral reaction times and error patterns than a standard feedforward Convolutional Neural Network. Our model suggests that a complete understanding of object perception and recognition requires integrating top-down and attentional feedback, which we propose is an object reconstruction.
Affiliation(s)
- Seoyoung Ahn
- Department of Molecular and Cell Biology, University of California, Berkeley, California, United States of America
- Hossein Adeli
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York City, New York, United States of America
- Gregory J. Zelinsky
- Department of Psychology, Stony Brook University, Stony Brook, New York, United States of America
- Department of Computer Science, Stony Brook University, Stony Brook, New York, United States of America
10. Tian S, Chen L, Wang X, Li G, Fu Z, Ji Y, Lu J, Wang X, Shan S, Bi Y. Vision matters for shape representation: Evidence from sculpturing and drawing in the blind. Cortex 2024; 174:241-255. PMID: 38582629; DOI: 10.1016/j.cortex.2024.02.016.
Abstract
Shape is a property that could be perceived by vision and touch, and is classically considered to be supramodal. While there is mounting evidence for the shared cognitive and neural representation space between visual and tactile shape, previous research tended to rely on dissimilarity structures between objects and had not examined the detailed properties of shape representation in the absence of vision. To address this gap, we conducted three explicit object shape knowledge production experiments with congenitally blind and sighted participants, who were asked to produce verbal features, 3D clay models, and 2D drawings of familiar objects with varying levels of tactile exposure, including tools, large nonmanipulable objects, and animals. We found that the absence of visual experience (i.e., in the blind group) led to stronger differences in animals than in tools and large objects, suggesting that direct tactile experience of objects is essential for shape representation when vision is unavailable. For tools with rich tactile/manipulation experiences, the blind produced overall good shapes comparable to the sighted, yet also showed intriguing differences. The blind group had more variations and a systematic bias in the geometric property of tools (making them stubbier than the sighted), indicating that visual experience contributes to aligning internal representations and calibrating overall object configurations, at least for tools. Taken together, the object shape representation reflects the intricate orchestration of vision, touch and language.
Affiliation(s)
- Shuang Tian
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Lingjuan Chen
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoying Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Guochao Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Ze Fu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yufeng Ji
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jiahui Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shiguang Shan
- Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG, McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing, China; Chinese Institute for Brain Research, Beijing, China.
11. Lee Masson H, Chen J, Isik L. A shared neural code for perceiving and remembering social interactions in the human superior temporal sulcus. Neuropsychologia 2024; 196:108823. PMID: 38346576; DOI: 10.1016/j.neuropsychologia.2024.108823.
Abstract
Recognizing and remembering social information is a crucial cognitive skill. Neural patterns in the superior temporal sulcus (STS) support our ability to perceive others' social interactions. However, despite the prominence of social interactions in memory, the neural basis of remembering social interactions is still unknown. To fill this gap, we investigated the brain mechanisms underlying memory of others' social interactions during free spoken recall of a naturalistic movie. By applying machine learning-based fMRI encoding analyses to densely labeled movie and recall data we found that a subset of the STS activity evoked by viewing social interactions predicted neural responses in not only held-out movie data, but also during memory recall. These results provide the first evidence that activity in the STS is reinstated in response to specific social content and that its reactivation underlies our ability to remember others' interactions. These findings further suggest that the STS contains representations of social interactions that are not only perceptually driven, but also more abstract or conceptual in nature.
Affiliation(s)
- Haemy Lee Masson
- Department of Psychology, Durham University, Durham, DH1 3LE, United Kingdom; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States.
- Janice Chen
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, 21218, United States
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, 21218, United States.
12. Kim SG, De Martino F, Overath T. Linguistic modulation of the neural encoding of phonemes. Cereb Cortex 2024; 34:bhae155. PMID: 38687241; PMCID: PMC11059272; DOI: 10.1093/cercor/bhae155.
Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging data. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show that (i) the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that this modulation requires the incorporation of both acoustic and phonetic information. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Affiliation(s)
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Federico De Martino
- Faculty of Psychology and Neuroscience, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, Netherlands
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Duke Institute for Brain Sciences, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
13. Bi Z, Li H, Tian L. Top-down generation of low-resolution representations improves visual perception and imagination. Neural Netw 2024; 171:440-456. PMID: 38150870; DOI: 10.1016/j.neunet.2023.12.030.
Abstract
Perception or imagination requires top-down signals from high-level cortex to primary visual cortex (V1) to reconstruct or simulate the representations stimulated bottom-up by the seen images. Interestingly, top-down signals in V1 have lower spatial resolution than bottom-up representations. It is unclear why the brain uses low-resolution signals to reconstruct or simulate high-resolution representations. By modeling the top-down pathway of the visual system using the decoder of a variational auto-encoder (VAE), we reveal that low-resolution top-down signals can better reconstruct or simulate the information contained in the sparse activities of V1 simple cells, which facilitates perception and imagination. This advantage of low-resolution generation is related to helping high-level cortex form the geometry-respecting representations observed in experiments. Furthermore, we present two findings regarding this phenomenon in the context of AI-generated sketches, a style of drawings made of lines. First, we found that the quality of the generated sketches critically depends on the thickness of the lines in the sketches: thin-line sketches are harder to generate than thick-line sketches. Second, we propose a technique to generate high-quality thin-line sketches: instead of directly using original thin-line sketches, we use blurred sketches to train a VAE or GAN (generative adversarial network), and then infer the thin-line sketches from the VAE- or GAN-generated blurred sketches. Collectively, our work suggests that low-resolution top-down generation is a strategy the brain uses to improve visual perception and imagination, which inspires new sketch-generation AI techniques.
Affiliation(s)
- Zedong Bi
- Lingang Laboratory, Shanghai 200031, China.
- Haoran Li
- Department of Physics, Hong Kong Baptist University, Hong Kong, China
- Liang Tian
- Department of Physics, Hong Kong Baptist University, Hong Kong, China; Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Hong Kong, China; Institute of Systems Medicine and Health Sciences, Hong Kong Baptist University, Hong Kong, China; State Key Laboratory of Environmental and Biological Analysis, Hong Kong Baptist University, Hong Kong, China.
14. Günseli E, Foster JJ, Sutterer DW, Todorova L, Vogel EK, Awh E. Encoded and updated spatial working memories share a common representational format in alpha activity. iScience 2024; 27:108963. PMID: 38333713; PMCID: PMC10850742; DOI: 10.1016/j.isci.2024.108963.
Abstract
Working memory (WM) flexibly updates information to adapt to the dynamic environment. Here, we used alpha-band activity in the EEG to reconstruct the content of dynamic WM updates and compared this representational format to static WM content. An inverted encoding model using alpha activity precisely tracked both the initially encoded position and the updated position following an auditory cue signaling mental updating. The timing of the update, as tracked in the EEG, correlated with reaction times and saccade latency. Finally, cross-training analyses revealed a robust generalization of alpha-band reconstruction of WM contents before and after updating. These findings demonstrate that alpha activity tracks the dynamic updates to spatial WM and that the format of this activity is preserved across the encoded and updated representations. Thus, our results highlight a new approach for measuring updates to WM and show common representational formats during dynamic mental updating and static storage.
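A minimal sketch of the inverted encoding model (IEM) approach named above: model alpha-band power as a weighted sum of spatially tuned channels, estimate the weights on training trials, and invert them to reconstruct the remembered location on held-out trials. All shapes, the basis function, and the simulated data below are illustrative assumptions, not the authors' implementation.

```python
# Minimal inverted encoding model (IEM) sketch on simulated data (illustrative only):
# alpha-band power across electrodes is modeled as a weighted sum of spatially tuned
# channels; the estimated weights are inverted to reconstruct the remembered location.
import numpy as np

rng = np.random.default_rng(2)
n_train, n_test, n_electrodes, n_channels = 200, 50, 20, 8

# Basis set: narrow, non-negative tuning functions centered on 8 polar angles
centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
def channel_responses(angles):
    d = np.angle(np.exp(1j * (angles[:, None] - centers[None, :])))  # wrapped angular distance
    return np.cos(d / 2) ** 7

# Simulate "alpha power" as channel responses passed through unknown electrode weights
true_w = rng.normal(size=(n_channels, n_electrodes))
train_angles = rng.uniform(0, 2 * np.pi, n_train)
test_angles = rng.uniform(0, 2 * np.pi, n_test)
b_train = channel_responses(train_angles) @ true_w + 0.5 * rng.normal(size=(n_train, n_electrodes))
b_test = channel_responses(test_angles) @ true_w + 0.5 * rng.normal(size=(n_test, n_electrodes))

# Training: least-squares estimate of the weights W in B = C W
w_hat, *_ = np.linalg.lstsq(channel_responses(train_angles), b_train, rcond=None)

# Inversion: reconstruct channel responses for held-out trials, C_hat = B W^T (W W^T)^-1
c_hat = b_test @ w_hat.T @ np.linalg.inv(w_hat @ w_hat.T)

# Decode the remembered angle as the center of the peak reconstructed channel
decoded = centers[np.argmax(c_hat, axis=1)]
err = np.degrees(np.abs(np.angle(np.exp(1j * (decoded - test_angles)))))
print(f"median absolute decoding error: {np.median(err):.1f} deg")
```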
Affiliation(s)
- Eren Günseli
- Department of Psychology, Sabancı University, Istanbul, Turkey
- Joshua J. Foster
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
- David W. Sutterer
- Department of Psychology, University of Tennessee, Knoxville, TN, USA
- Lara Todorova
- Department of Psychology, Sabancı University, Istanbul, Turkey
- Edward K. Vogel
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
- Edward Awh
- Department of Psychology, University of Chicago, Chicago, IL, USA
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
15. Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. Nat Neurosci 2024; 27:339-347. PMID: 38168931; PMCID: PMC10923171; DOI: 10.1038/s41593-023-01512-3.
Abstract
Conventional views of brain organization suggest that regions at the top of the cortical hierarchy process internally oriented information using an abstract amodal neural code. Despite this, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here we report that retinotopic coding structures interactions between internally oriented (mnemonic) and externally oriented (perceptual) brain areas. Using functional magnetic resonance imaging, we observed robust inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. These functionally linked retinotopic populations in mnemonic and perceptual areas exhibit spatially specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually inhibitory dynamic. These results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, providing a scaffold for their dynamic interaction.
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA.
- Edward H Silson
- Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK
- Brenda D Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Caroline E Robertson
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, USA.
16. Shenyan O, Lisi M, Greenwood JA, Skipper JI, Dekker TM. Visual hallucinations induced by Ganzflicker and Ganzfeld differ in frequency, complexity, and content. Sci Rep 2024; 14:2353. PMID: 38287084; PMCID: PMC10825158; DOI: 10.1038/s41598-024-52372-1.
Abstract
Visual hallucinations can be phenomenologically divided into those of a simple or complex nature. Both simple and complex hallucinations can occur in pathological and non-pathological states, and can also be induced experimentally by visual stimulation or deprivation, for example using a high-frequency, eyes-open flicker (Ganzflicker) and perceptual deprivation (Ganzfeld). Here we leverage the differences in visual stimulation that these two techniques involve to investigate the role of bottom-up and top-down processes in shifting the complexity of visual hallucinations, and to assess whether these techniques involve a shared underlying hallucinatory mechanism despite their differences. For each technique, we measured the frequency and complexity of the hallucinations produced, utilising button presses, retrospective drawing, interviews, and questionnaires. For both experimental techniques, simple hallucinations were more common than complex hallucinations. Crucially, we found that Ganzflicker was more effective than Ganzfeld at eliciting simple hallucinations, while complex hallucinations remained equivalent across the two conditions. As a result, the likelihood that an experienced hallucination was complex was higher during Ganzfeld. Despite these differences, we found a correlation between the frequency and total time spent hallucinating in Ganzflicker and Ganzfeld conditions, suggesting some shared mechanisms between the two methodologies. We attribute the tendency to experience frequent simple hallucinations in both conditions to a shared low-level core hallucinatory mechanism, such as excitability of visual cortex, potentially amplified in Ganzflicker compared to Ganzfeld due to heightened bottom-up input. The tendency to experience complex hallucinations, in contrast, may be related to top-down processes less affected by visual stimulation.
Affiliation(s)
- Oris Shenyan
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK.
- Institute of Ophthalmology, University College London, London, UK.
- Matteo Lisi
- Department of Psychology, Royal Holloway University, London, UK
- John A Greenwood
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Jeremy I Skipper
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Tessa M Dekker
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, UK
- Institute of Ophthalmology, University College London, London, UK
17. Peters B, DiCarlo JJ, Gureckis T, Haefner R, Isik L, Tenenbaum J, Konkle T, Naselaris T, Stachenfeld K, Tavares Z, Tsao D, Yildirim I, Kriegeskorte N. How does the primate brain combine generative and discriminative computations in vision? arXiv [Preprint] 2024:arXiv:2401.06005v1. PMID: 38259351; PMCID: PMC10802669.
Abstract
Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors include scientists rooted in roughly equal numbers in each of the conceptions and motivated to overcome what might be a false dichotomy between them and engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
Affiliation(s)
- Benjamin Peters
- Zuckerman Mind Brain Behavior Institute, Columbia University
- School of Psychology & Neuroscience, University of Glasgow
- James J DiCarlo
- Department of Brain and Cognitive Sciences, MIT
- McGovern Institute for Brain Research, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Quest for Intelligence, Schwarzman College of Computing, MIT
- Ralf Haefner
- Brain and Cognitive Sciences, University of Rochester
- Center for Visual Science, University of Rochester
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University
- Joshua Tenenbaum
- Department of Brain and Cognitive Sciences, MIT
- NSF Center for Brains, Minds and Machines, MIT
- Computer Science and Artificial Intelligence Laboratory, MIT
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Natural and Artificial Intelligence, Harvard University
- Zenna Tavares
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Data Science Institute, Columbia University
- Doris Tsao
- Dept of Molecular & Cell Biology, University of California Berkeley
- Howard Hughes Medical Institute
- Ilker Yildirim
- Department of Psychology, Yale University
- Department of Statistics and Data Science, Yale University
- Nikolaus Kriegeskorte
- Zuckerman Mind Brain Behavior Institute, Columbia University
- Department of Psychology, Columbia University
- Department of Neuroscience, Columbia University
- Department of Electrical Engineering, Columbia University
18. Lv J, Shen Y, Huang Z, Zhang C, Meijiu J, Zhang H. Watching eyes effect: the impact of imagined eyes on prosocial behavior and satisfactions in the dictator game. Front Psychol 2024; 14:1292232. PMID: 38268799; PMCID: PMC10806148; DOI: 10.3389/fpsyg.2023.1292232.
Abstract
The concept of the watching eyes effect suggests that the presence of eye or eye-like cues can influence individual altruistic behavior. However, few studies have investigated the effects of imagined eyes on altruistic behaviors and the psychological measures of dictators and recipients in the dictator game. This study used a 2 (Presentation Mode: Imagined/Visual) × 2 (Cue Type: Eye/Flower) between-subjects design and measured the effects on recipients' psychological variables and the communication texts between the dictator and the recipient. The results showed that there was a significant interaction between Presentation Mode and Cue Type. In the imagined condition, the dictator exhibited more altruistic behavior than in the visual condition. However, there was no significant difference in altruistic behavior between the Imagined Eye and Imagined Flower conditions. In addition, the study found that the Cue Type had a significant main effect on the recipients' satisfaction with the allocation outcome. Notably, in the Visual Flower condition, the dictator used more egoistic norm words when communicating with the recipient than in the other conditions. This study provides novel evidence on the effect of imagined social cues on individual behavior in the dictator game, and to some extent validates the robustness of the watching eyes effect under manipulation of higher-level verbal cognitive processes. At the same time, the study is the first to explore the impacts on recipients' psychological variables and the communication texts. These efforts offer new insights into the psychological and cognitive mechanisms underlying the watching eyes effect.
Affiliation(s)
- Jieyu Lv
- Department of Psychology, Central University of Finance and Economics, Beijing, China
19. Sulfaro AA, Robinson AK, Carlson TA. Properties of imagined experience across visual, auditory, and other sensory modalities. Conscious Cogn 2024; 117:103598. PMID: 38086154; DOI: 10.1016/j.concog.2023.103598.
Abstract
Little is known about the perceptual characteristics of mental images or how they vary across sensory modalities. We conducted an exhaustive survey into how mental images are experienced across modalities, mainly targeting visual and auditory imagery of a single stimulus, the letter "O", to facilitate direct comparisons. We investigated temporal properties of mental images (e.g. onset latency, duration), spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), movement requirements (e.g. eye movements), real-imagined interactions (e.g. inner speech while reading), beliefs about imagery norms and terminologies, as well as respondent confidence. Participants also reported on the five traditional senses and their prominence during thinking, imagining, and dreaming. Overall, visual and auditory experiences dominated mental events, although auditory mental images were superior to visual mental images on almost every metric tested except regarding spatial properties. Our findings suggest that modality-specific differences in mental imagery may parallel those of other sensory neural processes.
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
- Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia; Queensland Brain Institute, The University of Queensland, St Lucia 4072, Queensland, Australia.
- Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown 2006, New South Wales, Australia.
| |
Collapse
|
20
|
Favila SE, Aly M. Hippocampal mechanisms resolve competition in memory and perception. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.10.09.561548. [PMID: 37873400 PMCID: PMC10592663 DOI: 10.1101/2023.10.09.561548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2023]
Abstract
Behaving adaptively requires selection of relevant memories and sensations and suppression of competing ones. We hypothesized that these mechanisms are linked, such that hippocampal computations that resolve competition in memory also shape the precision of sensory representations to guide selective attention. We leveraged fMRI-based pattern similarity, receptive field modeling, and eye tracking to test this hypothesis in humans performing a memory-dependent visual search task. In the hippocampus, differentiation of competing memories predicted the precision of memory-guided eye movements. In visual cortex, preparatory coding of remembered target locations predicted search successes, whereas preparatory coding of competing locations predicted search failures due to interference. These effects were linked: stronger hippocampal memory differentiation was associated with lower competitor activation in visual cortex, yielding more precise preparatory representations. These results demonstrate a role for memory differentiation in shaping the precision of sensory representations, highlighting links between mechanisms that overcome competition in memory and perception.
Affiliation(s)
- Serra E Favila
- Department of Psychology, Columbia University, New York, NY, 10027
- Mariam Aly
- Department of Psychology, Columbia University, New York, NY, 10027
21. Bai Y, Liu S, Zhu M, Wang B, Li S, Meng L, Shi X, Chen F, Jiang H, Jiang C. Perceptual Pattern of Cleft-Related Speech: A Task-fMRI Study on Typical Mandarin-Speaking Adults. Brain Sci 2023; 13:1506. PMID: 38002467; PMCID: PMC10669275; DOI: 10.3390/brainsci13111506.
Abstract
Congenital cleft lip and palate is one of the common deformities in the craniomaxillofacial region. The current study aimed to explore the perceptual pattern of cleft-related speech produced by Mandarin-speaking patients with repaired cleft palate using the task-based functional magnetic resonance imaging (task-fMRI) technique. Three blocks of speech stimuli, including hypernasal speech, the glottal stop, and typical speech, were played to 30 typical adult listeners with no history of cleft palate speech exploration. Using a randomized block design paradigm, the participants were instructed to assess the intelligibility of the stimuli. Simultaneously, fMRI data were collected. Brain activation was compared among the three types of speech stimuli. Results revealed that greater blood-oxygen-level-dependent (BOLD) responses to the cleft-related glottal stop than to typical speech were localized in the right fusiform gyrus and the left inferior occipital gyrus. The regions responding to the contrast between the glottal stop and cleft-related hypernasal speech were located in the right fusiform gyrus. More significant BOLD responses to hypernasal speech than to the glottal stop were localized in the left orbital part of the inferior frontal gyrus and middle temporal gyrus. More significant BOLD responses to typical speech than to the glottal stop were localized in the left inferior temporal gyrus, left superior temporal gyrus, left medial superior frontal gyrus, and right angular gyrus. Furthermore, there was no significant difference between hypernasal speech and typical speech. In conclusion, the typical listener would initiate different neural processes to perceive cleft-related speech. Our findings lay a foundation for exploring the perceptual pattern of patients with repaired cleft palate.
Affiliation(s)
- Yun Bai
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Shaowei Liu
- Department of Radiology, Jiangsu Province Hospital of Chinese Medicine, Affiliated Hospital of Nanjing University of Chinese Medicine, Nanjing 210004, China
| | - Mengxian Zhu
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Binbing Wang
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Sheng Li
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Liping Meng
- Department of Children’s Healthcare, Women’s Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing 210004, China
| | - Xinghui Shi
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China
| | - Hongbing Jiang
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| | - Chenghui Jiang
- Department of Oral and Maxillofacial Surgery, The Affiliated Stomatological Hospital of Nanjing Medical University, Nanjing 210029, China; (Y.B.)
- Jiangsu Province Key Laboratory of Oral Diseases, Nanjing 210029, China
- Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing 210029, China
| |
22
Steel A, Silson EH, Garcia BD, Robertson CE. A retinotopic code structures the interaction between perception and memory systems. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.15.540807. [PMID: 37292758 PMCID: PMC10245578 DOI: 10.1101/2023.05.15.540807] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Conventional views of brain organization suggest that the cortical apex processes internally-oriented information using an abstract, amodal neural code. Yet, recent reports have described the presence of retinotopic coding at the cortical apex, including the default mode network. What is the functional role of retinotopic coding atop the cortical hierarchy? Here, we report that retinotopic coding structures interactions between internally-oriented (mnemonic) and externally-oriented (perceptual) brain areas. Using fMRI, we observed robust, inverted (negative) retinotopic coding in category-selective memory areas at the cortical apex, which is functionally linked to the classic (positive) retinotopic coding in category-selective perceptual areas in high-level visual cortex. Specifically, these functionally-linked retinotopic populations in mnemonic and perceptual areas exhibit spatially-specific opponent responses during both bottom-up perception and top-down recall, suggesting that these areas are interlocked in a mutually-inhibitory dynamic. Together, these results show that retinotopic coding structures interactions between perceptual and mnemonic neural systems, thereby scaffolding their dynamic interaction.
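A hedged sketch of how the sign of a population receptive field (pRF) amplitude can capture the contrast between classic (positive) and inverted (negative) retinotopic coding described above. Parameter values and function names are invented for illustration, not the study's fitted models.

```python
# Sketch of a pRF model in which the sign of the amplitude distinguishes
# classic (positive) from inverted (negative) retinotopic coding.
# Illustrative parameter values only, not fits from the study.
import numpy as np

def gaussian_prf(x, y, x0, y0, sigma, amplitude):
    """Predicted response of a pRF to a point stimulus at (x, y)."""
    return amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

stim_x, stim_y = 4.0, 0.0                      # stimulus position (deg)
perceptual = gaussian_prf(stim_x, stim_y, x0=4, y0=0, sigma=2, amplitude=1.0)
mnemonic   = gaussian_prf(stim_x, stim_y, x0=4, y0=0, sigma=2, amplitude=-0.6)

print(f"positive pRF response: {perceptual:+.2f}")
print(f"inverted pRF response: {mnemonic:+.2f}")   # opponent, below baseline
```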
Affiliation(s)
- Adam Steel
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
| | - Edward H. Silson
- Psychology, School of Philosophy, Psychology, and Language Sciences, University of Edinburgh, Edinburgh, UK EH8 9JZ
| | - Brenda D. Garcia
- Department of Psychology and Brain Sciences, Dartmouth College, Hanover, NH, 03755
| | | |
23
Li S, Zeng X, Shao Z, Yu Q. Neural Representations in Visual and Parietal Cortex Differentiate between Imagined, Perceived, and Illusory Experiences. J Neurosci 2023; 43:6508-6524. [PMID: 37582626 PMCID: PMC10513072 DOI: 10.1523/jneurosci.0592-23.2023] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 07/09/2023] [Accepted: 08/04/2023] [Indexed: 08/17/2023] Open
Abstract
Humans constantly receive massive amounts of information, both perceived from the external environment and imagined from the internal world. To function properly, the brain needs to correctly identify the origin of information being processed. Recent work has suggested common neural substrates for perception and imagery. However, it has remained unclear how the brain differentiates between external and internal experiences with shared neural codes. Here we tested this question in human participants (male and female) by systematically investigating the neural processes underlying the generation and maintenance of visual information from voluntary imagery, veridical perception, and illusion. The inclusion of illusion allowed us to differentiate between objective and subjective internality: while illusion has an objectively internal origin and can be viewed as involuntary imagery, it is also subjectively perceived as having an external origin like perception. Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, we observed superior orientation representations in parietal cortex during imagery compared with perception, and conversely in early visual cortex. This imagery dominance gradually developed along a posterior-to-anterior cortical hierarchy from early visual to parietal cortex, emerged in the early epoch of imagery and sustained into the delay epoch, and persisted across varied imagined contents. Moreover, representational strength of illusion was more comparable to imagery in early visual cortex, but more comparable to perception in parietal cortex, suggesting content-specific representations in parietal cortex differentiate between subjectively internal and external experiences, as opposed to early visual cortex. These findings together support a domain-general engagement of parietal cortex in internally generated experience.SIGNIFICANCE STATEMENT How does the brain differentiate between imagined and perceived experiences? Combining fMRI, eye-tracking, multivariate decoding, and encoding approaches, the current study revealed enhanced stimulus-specific representations in visual imagery originating from parietal cortex, supporting the subjective experience of imagery. This neural principle was further validated by evidence from visual illusion, wherein illusion resembled perception and imagery at different levels of cortical hierarchy. Our findings provide direct evidence for the critical role of parietal cortex as a domain-general region for content-specific imagery, and offer new insights into the neural mechanisms underlying the differentiation between subjectively internal and external experiences.
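To make the multivariate decoding approach mentioned above concrete, here is a minimal, self-contained sketch of cross-validated decoding of a two-class stimulus feature from simulated voxel patterns. Data are synthetic and the pipeline is illustrative only, not the authors' analysis.

```python
# Minimal sketch of cross-validated multivariate decoding from voxel patterns
# (synthetic data; illustrative of the logic, not the authors' code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 200
labels = rng.integers(0, 2, n_trials)                 # two stimulus classes
signal = np.outer(labels - 0.5, rng.normal(size=n_voxels))
patterns = signal + rng.normal(scale=2.0, size=(n_trials, n_voxels))

acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f}")
```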
Affiliation(s)
- Siyi Li
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
| | - Xuemei Zeng
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
| | - Zhujun Shao
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Qing Yu
- Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
| |
24
Robinson AK, Quek GL, Carlson TA. Visual Representations: Insights from Neural Decoding. Annu Rev Vis Sci 2023; 9:313-335. [PMID: 36889254 DOI: 10.1146/annurev-vision-100120-025301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/10/2023]
Abstract
Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.
Affiliation(s)
- Amanda K Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia;
| | - Genevieve L Quek
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia;
| | | |
25
Li HH, Curtis CE. Neural population dynamics of human working memory. Curr Biol 2023; 33:3775-3784.e4. [PMID: 37595590 PMCID: PMC10528783 DOI: 10.1016/j.cub.2023.07.067] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 06/20/2023] [Accepted: 07/31/2023] [Indexed: 08/20/2023]
Abstract
The activity of neurons in macaque prefrontal cortex (PFC) persists during working memory (WM) delays, providing a mechanism for memory [1-11]. Although theory [11,12], including formal network models [13,14], assumes that WM codes are stable over time, PFC neurons exhibit dynamics inconsistent with these assumptions [15-19]. Recently, multivariate reanalyses revealed the coexistence of both stable and dynamic WM codes in macaque PFC [20-23]. Human EEG studies also suggest that WM might contain dynamics [24,25]. Nonetheless, how WM dynamics vary across the cortical hierarchy and which factors drive dynamics remain unknown. To elucidate WM dynamics in humans, we decoded WM content from fMRI responses across multiple cortical visual field maps [26-48]. We found coexisting stable and dynamic neural representations of WM during a memory-guided saccade task. Geometric analyses of neural subspaces revealed that early visual cortex exhibited stronger dynamics than high-level visual and frontoparietal cortex. Leveraging models of population receptive fields, we visualized and made the neural dynamics interpretable. We found that during WM delays, the V1 population initially encoded a narrowly tuned bump of activation centered on the peripheral memory target. Remarkably, this bump then spread inward toward foveal locations, forming a vector along the trajectory of the forthcoming memory-guided saccade. In other words, the neural code transformed into an abstraction of the stimulus more proximal to memory-guided behavior. Therefore, theories of WM must consider both sensory features and their task-relevant abstractions because changes in the format of memoranda naturally drive neural dynamics.
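One standard way to diagnose stable versus dynamic codes, as discussed above, is cross-temporal generalization: train a decoder at one time point and test it at every other. The sketch below simulates a purely dynamic code (the informative pattern changes across time), so accuracy is high on the diagonal and near chance off it. All data and parameters are synthetic, not the paper's analysis.

```python
# Sketch of cross-temporal decoding for diagnosing stable vs. dynamic codes
# (synthetic data; illustrative of the logic, not the paper's analysis).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_voxels, n_time = 100, 50, 6
labels = rng.integers(0, 2, n_trials)

# Dynamic code: the informative voxel pattern changes at every time point
data = np.zeros((n_trials, n_time, n_voxels))
for t in range(n_time):
    axis = rng.normal(size=n_voxels)
    data[:, t, :] = np.outer(labels - 0.5, axis) + rng.normal(size=(n_trials, n_voxels))

train, test = np.arange(0, 50), np.arange(50, 100)
gen = np.zeros((n_time, n_time))
for t_train in range(n_time):
    clf = LogisticRegression(max_iter=1000).fit(data[train, t_train], labels[train])
    for t_test in range(n_time):
        gen[t_train, t_test] = clf.score(data[test, t_test], labels[test])

print(np.round(gen, 2))   # strong diagonal, weak off-diagonal => dynamic code
```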
Affiliation(s)
- Hsin-Hung Li
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA
| | - Clayton E Curtis
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA.
| |
26
Sulfaro AA, Robinson AK, Carlson TA. Modelling perception as a hierarchical competition differentiates imagined, veridical, and hallucinated percepts. Neurosci Conscious 2023; 2023:niad018. [PMID: 37621984 PMCID: PMC10445666 DOI: 10.1093/nc/niad018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 07/03/2023] [Accepted: 07/14/2023] [Indexed: 08/26/2023] Open
Abstract
Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. We here use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
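A toy sketch of the hierarchical-competition intuition described above: at each level of a hierarchy, bottom-up (sensory) and top-down (imagined) drives compete, with sensory gain assumed strongest at low levels and imagery gain strongest at high levels. The gains and the winner-take-all rule are invented simplifications, not the paper's model.

```python
# Toy sketch of hierarchical competition between sensory input and imagery.
# Gains and the winner-take-all rule are invented for illustration only.
import numpy as np

n_levels = 5
sensory_gain = np.linspace(1.0, 0.2, n_levels)   # strong at low levels
imagery_gain = np.linspace(0.2, 1.0, n_levels)   # strong at high levels

def mix(sensory_input, imagery_input):
    """Return which signal dominates at each hierarchical level."""
    drive_s = sensory_gain * sensory_input
    drive_i = imagery_gain * imagery_input
    return np.where(drive_s >= drive_i, "sensory", "imagery")

print("eyes open :", mix(sensory_input=1.0, imagery_input=1.0))
print("eyes shut :", mix(sensory_input=0.1, imagery_input=1.0))  # hallucination-like
```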
Affiliation(s)
- Alexander A Sulfaro
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
| | - Amanda K Robinson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, QBI Building 79, The University of Queensland, St Lucia, QLD 4067, Australia
| | - Thomas A Carlson
- School of Psychology, Griffith Taylor Building, The University of Sydney, Camperdown, NSW 2006, Australia
| |
27
Wilson H, Golbabaee M, Proulx MJ, Charles S, O'Neill E. EEG-based BCI Dataset of Semantic Concepts for Imagination and Perception Tasks. Sci Data 2023; 10:386. [PMID: 37322034 PMCID: PMC10272218 DOI: 10.1038/s41597-023-02287-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 06/02/2023] [Indexed: 06/17/2023] Open
Abstract
Electroencephalography (EEG) is a widely-used neuroimaging technique in Brain Computer Interfaces (BCIs) due to its non-invasive nature, accessibility and high temporal resolution. A range of input representations has been explored for BCIs. The same semantic meaning can be conveyed in different representations, such as visual (orthographic and pictorial) and auditory (spoken words). These stimuli representations can be either imagined or perceived by the BCI user. In particular, there is a scarcity of existing open source EEG datasets for imagined visual content, and to our knowledge there are no open source EEG datasets for semantics captured through multiple sensory modalities for both perceived and imagined content. Here we present an open source multisensory imagination and perception dataset, with twelve participants, acquired with a 124-channel EEG system. The aim is for the dataset to be openly available for purposes such as BCI-related decoding and for better understanding the neural mechanisms behind perception and imagination across the sensory modalities when the semantic category is held constant.
Affiliation(s)
- Holly Wilson
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK.
| | - Mohammad Golbabaee
- Department of Engineering Mathematics, University of Bristol, Bristol, BS8 1TW, UK
| | | | - Stephen Charles
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK
| | - Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, BA2 7AY, UK.
| |
28
Kay K, Bonnen K, Denison RN, Arcaro MJ, Barack DL. Tasks and their role in visual neuroscience. Neuron 2023; 111:1697-1713. [PMID: 37040765 DOI: 10.1016/j.neuron.2023.03.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 04/13/2023]
Abstract
Vision is widely used as a model system to gain insights into how sensory inputs are processed and interpreted by the brain. Historically, careful quantification and control of visual stimuli have served as the backbone of visual neuroscience. There has been less emphasis, however, on how an observer's task influences the processing of sensory inputs. Motivated by diverse observations of task-dependent activity in the visual system, we propose a framework for thinking about tasks, their role in sensory processing, and how we might formally incorporate tasks into our models of vision.
Affiliation(s)
- Kendrick Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA.
| | - Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
| | - Rachel N Denison
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
| | - Mike J Arcaro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19146, USA
| | - David L Barack
- Departments of Neuroscience and Philosophy, University of Pennsylvania, Philadelphia, PA 19146, USA
| |
29
Olman CA. What multiplexing means for the interpretation of functional MRI data. Front Hum Neurosci 2023; 17:1134811. [PMID: 37091812 PMCID: PMC10117671 DOI: 10.3389/fnhum.2023.1134811] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Accepted: 03/20/2023] [Indexed: 04/08/2023] Open
Abstract
Despite technology advances that have enabled routine acquisition of functional MRI data with sub-millimeter resolution, the inferences that cognitive neuroscientists must make to link fMRI data to behavior are complicated. Thus, a single dataset subjected to different analyses can be interpreted in different ways. This article presents two optical analogies that can be useful for framing fMRI analyses in a way that allows for multiple interpretations of fMRI data to be valid simultaneously without undermining each other. The first is reflection: when an object is reflected in a mirrored surface, it appears as if the reflected object is sharing space with the mirrored object, but of course it is not. This analogy can be a good guide for interpreting the fMRI signal, since even at sub-millimeter resolutions the signal is determined by a mixture of local and long-range neural computations. The second is refraction. If we view an object through a multi-faceted prism or gemstone, our view will change, sometimes dramatically, depending on our viewing angle. In the same way, interpretation of fMRI data (inference of underlying neuronal activity) can and should be different depending on the analysis approach. Rather than representing a weakness of the methodology, or the superiority of one approach over the other (for example, simple regression analysis versus multi-voxel pattern analysis), this is an expected consequence of how information is multiplexed in the neural networks of the brain: multiple streams of information are simultaneously present in each location. The fact that any one analysis typically shows only one view of the data also puts some parentheses around fMRI practitioners' constant search for ground truth against which to compare their data. By holding our interpretations lightly and understanding that many interpretations of the data can all be true at the same time, we do a better job of preparing ourselves to appreciate, and eventually understand, the complexity of the brain and the behavior it produces.
Affiliation(s)
- Cheryl A. Olman
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
| |
30
Dijkstra N, Fleming SM. Subjective signal strength distinguishes reality from imagination. Nat Commun 2023; 14:1627. [PMID: 36959279 PMCID: PMC10036541 DOI: 10.1038/s41467-023-37322-1] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Accepted: 03/09/2023] [Indexed: 03/25/2023] Open
Abstract
Humans are voracious imaginers, with internal simulations supporting memory, planning and decision-making. Because the neural mechanisms supporting imagery overlap with those supporting perception, a foundational question is how reality and imagination are kept apart. One possibility is that the intention to imagine is used to identify and discount self-generated signals during imagery. Alternatively, because internally generated signals are generally weaker, sensory strength is used to index reality. Traditional psychology experiments struggle to investigate this issue as subjects can rapidly learn that real stimuli are in play. Here, we combined one-trial-per-participant psychophysics with computational modelling and neuroimaging to show that imagined and perceived signals are in fact intermixed, with judgments of reality being determined by whether this intermixed signal is strong enough to cross a reality threshold. A consequence of this account is that when virtual or imagined signals are strong enough, they become subjectively indistinguishable from reality.
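The reality-threshold idea above can be sketched in a few lines: perceived and imagined signal strengths are summed with noise, and a trial is judged "real" when the mixture exceeds a criterion. The numbers below are illustrative, not fitted parameters from the paper.

```python
# Sketch of the reality-threshold idea: perceived and imagined signals are
# intermixed, and the mixture is judged "real" if it exceeds a criterion.
# All values are illustrative, not the paper's fitted parameters.
import numpy as np

rng = np.random.default_rng(3)
n_trials = 10_000
reality_threshold = 1.5

def judge_real(stimulus_strength, imagery_strength, noise_sd=0.5):
    signal = stimulus_strength + imagery_strength + rng.normal(0, noise_sd, n_trials)
    return (signal > reality_threshold).mean()

print("perception only      :", judge_real(stimulus_strength=1.6, imagery_strength=0.0))
print("imagery only (weak)  :", judge_real(stimulus_strength=0.0, imagery_strength=0.8))
print("imagery only (vivid) :", judge_real(stimulus_strength=0.0, imagery_strength=1.8))
```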
Affiliation(s)
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, London, UK.
| | - Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, University College London, London, UK
- Department of Experimental Psychology, University College London, London, UK
| |
31
Dissociating Hippocampal and Cortical Contributions to Predictive Processing. J Neurosci 2023; 43:184-186. [PMID: 36646458 PMCID: PMC9838692 DOI: 10.1523/jneurosci.1840-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 11/07/2022] [Accepted: 11/11/2022] [Indexed: 01/13/2023] Open
32
Leisman G. On the Application of Developmental Cognitive Neuroscience in Educational Environments. Brain Sci 2022; 12:1501. [PMID: 36358427 PMCID: PMC9688360 DOI: 10.3390/brainsci12111501] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/25/2022] [Accepted: 11/01/2022] [Indexed: 09/29/2023] Open
Abstract
The paper overviews components of neurologic processing efficiencies in order to develop innovative methodologies and thinking for school-based applications and for changes in educational leadership, based on sound findings in the cognitive neurosciences applied to schools and learners. Systems science can allow us to better manage classroom-based learning and instruction on the basis of relatively easily evaluated efficiencies or inefficiencies and optimization, instead of simply examining achievement. "Medicalizing" the learning process with concepts such as "learning disability", or employing grading methods such as pass-fail, does little to aid in understanding the processes that learners employ to acquire, integrate, remember, and apply information learned. The paper endeavors to overview, and provide references to, tools that can be employed to allow a better focus on nervous system-based strategic approaches to classroom learning.
Affiliation(s)
- Gerry Leisman
- Movement and Cognition Laboratory, Department of Physical Therapy, University of Haifa, Haifa 3498838, Israel
- Department of Neurology, Universidad de Ciencias Médicas de la Habana, Havana 11300, Cuba
| |
33
Favila SE, Kuhl BA, Winawer J. Perception and memory have distinct spatial tuning properties in human visual cortex. Nat Commun 2022; 13:5864. [PMID: 36257949 PMCID: PMC9579130 DOI: 10.1038/s41467-022-33161-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2020] [Accepted: 09/06/2022] [Indexed: 11/12/2022] Open
Abstract
Reactivation of earlier perceptual activity is thought to underlie long-term memory recall. Despite evidence for this view, it is unclear whether mnemonic activity exhibits the same tuning properties as feedforward perceptual activity. Here, we leverage population receptive field models to parameterize fMRI activity in human visual cortex during spatial memory retrieval. Though retinotopic organization is present during both perception and memory, large systematic differences in tuning are also evident. Whereas there is a three-fold decline in spatial precision from early to late visual areas during perception, this pattern is not observed during memory retrieval. This difference cannot be explained by reduced signal-to-noise or poor performance on memory trials. Instead, by simulating top-down activity in a network model of cortex, we demonstrate that this property is well explained by the hierarchical structure of the visual system. Together, modeling and empirical results suggest that computational constraints imposed by visual system architecture limit the fidelity of memory reactivation in sensory cortex.
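As a hedged illustration of how spatial tuning width might be quantified in analyses like this one, the sketch below fits a Gaussian to a synthetic response profile expressed as a function of polar-angle distance from the target and reports the full width at half maximum (FWHM). Function and parameter names are invented; this is not the published pipeline.

```python
# Sketch: quantify spatial tuning width by fitting a Gaussian to a response
# profile over polar-angle distance from the target (synthetic profile;
# illustrative only, not the published pipeline).
import numpy as np
from scipy.optimize import curve_fit

angles = np.linspace(-180, 180, 37)                      # deg from target

def tuning(theta, amp, fwhm, baseline):
    sigma = fwhm / 2.355                                  # FWHM -> sigma
    return baseline + amp * np.exp(-theta**2 / (2 * sigma**2))

true = tuning(angles, amp=1.0, fwhm=60, baseline=0.1)
noisy = true + np.random.default_rng(4).normal(0, 0.05, angles.size)

params, _ = curve_fit(tuning, angles, noisy, p0=[1, 45, 0])
print(f"estimated FWHM: {params[1]:.1f} deg")
```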
Affiliation(s)
- Serra E Favila
- Department of Psychology, New York University, New York, NY, 10003, USA.
- Department of Psychology, Columbia University, New York, NY, 10027, USA.
| | - Brice A Kuhl
- Department of Psychology, University of Oregon, Eugene, OR, 97403, USA
- Institute of Neuroscience, University of Oregon, Eugene, OR, 97403, USA
| | - Jonathan Winawer
- Department of Psychology, New York University, New York, NY, 10003, USA
- Center for Neural Science, New York University, New York, NY, 10003, USA
| |
34
Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. Commun Biol 2022; 5:214. [PMID: 35304588 PMCID: PMC8933408 DOI: 10.1038/s42003-022-03137-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Accepted: 02/08/2022] [Indexed: 12/04/2022] Open
Abstract
Neural representations of visual perception are affected by mental imagery and attention. Although attention is known to modulate neural representations, it is unknown how imagery changes neural representations when imagined and perceived images semantically conflict. We hypothesized that imagining an image would activate a neural representation during its perception even while watching a conflicting image. To test this hypothesis, we developed a closed-loop system to show images inferred from electrocorticograms using a visual semantic space. The successful control of the feedback images demonstrated that the semantic vector inferred from electrocorticograms became closer to the vector of the imagined category, even while watching images from different categories. Moreover, modulation of the inferred vectors by mental imagery depended asymmetrically on the perceived and imagined categories. Shared neural representation between mental imagery and perception was still activated by the imagery under semantically conflicting perceptions depending on the semantic category. In this study, intracranial EEG recordings show that neural representations of imagined images can still be present in humans even when they are shown conflicting images.
35
Dijkstra N, Kok P, Fleming SM. Perceptual reality monitoring: Neural mechanisms dissociating imagination from reality. Neurosci Biobehav Rev 2022; 135:104557. [PMID: 35122782 DOI: 10.1016/j.neubiorev.2022.104557] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 01/12/2022] [Accepted: 01/30/2022] [Indexed: 01/21/2023]
Abstract
There is increasing evidence that imagination relies on similar neural mechanisms as externally triggered perception. This overlap presents a challenge for perceptual reality monitoring: deciding what is real and what is imagined. Here, we explore how perceptual reality monitoring might be implemented in the brain. We first describe sensory and cognitive factors that could dissociate imagery and perception and conclude that no single factor unambiguously signals whether an experience is internally or externally generated. We suggest that reality monitoring is implemented by higher-level cortical circuits that evaluate first-order sensory and cognitive factors to determine the source of sensory signals. According to this interpretation, perceptual reality monitoring shares core computations with metacognition. This multi-level architecture might explain several types of source confusion as well as dissociations between simply knowing whether something is real and actually experiencing it as real. We discuss avenues for future research to further our understanding of perceptual reality monitoring, an endeavour that has important implications for our understanding of clinical symptoms as well as general cognitive function.
Affiliation(s)
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom.
| | - Peter Kok
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom
| | - Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, United Kingdom; Max Planck UCL Centre for Computational Psychiatry and Aging Research, University College London, United Kingdom; Department of Experimental Psychology, University College London, United Kingdom
| |
36
Park S, Serences JT. Relative precision of top-down attentional modulations is lower in early visual cortex compared to mid- and high-level visual areas. J Neurophysiol 2022; 127:504-518. [PMID: 35020526 PMCID: PMC8836715 DOI: 10.1152/jn.00300.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 01/06/2022] [Accepted: 01/06/2022] [Indexed: 02/03/2023] Open
Abstract
Top-down spatial attention enhances cortical representations of behaviorally relevant visual information and increases the precision of perceptual reports. However, little is known about the relative precision of top-down attentional modulations in different visual areas, especially compared with the highly precise stimulus-driven responses that are observed in early visual cortex. For example, the precision of attentional modulations in early visual areas may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas in prefrontal cortex that generate and relay the top-down signals. Here, we used functional MRI (fMRI) and human participants to assess the precision of bottom-up spatial representations evoked by high-contrast stimuli across the visual hierarchy. Then, we examined the relative precision of top-down attentional modulations in the absence of spatially specific bottom-up drive. Whereas V1 showed the largest relative difference between the precision of top-down attentional modulations and the precision of bottom-up modulations, midlevel areas such as V4 showed relatively smaller differences between the precision of top-down and bottom-up modulations. Overall, this interaction between visual areas (e.g., V1 vs. V4) and the relative precision of top-down and bottom-up modulations suggests that the precision of top-down attentional modulations is limited by the representational fidelity of areas that generate and relay top-down feedback signals.NEW & NOTEWORTHY When the relative precision of purely top-down and bottom-up signals were compared across visual areas, early visual areas like V1 showed higher bottom-up precision compared with top-down precision. In contrast, midlevel areas showed similar levels of top-down and bottom-up precision. This result suggests that the precision of top-down attentional modulations may be limited by the relatively coarse spatial selectivity and the anatomical connectivity of the areas generating and relaying the signals.
Affiliation(s)
- Sunyoung Park
- Department of Psychology, University of California San Diego, La Jolla, California
| | - John T Serences
- Department of Psychology, University of California San Diego, La Jolla, California
- Neurosciences Graduate Program, University of California San Diego, La Jolla, California
| |
37
Allen EJ, St-Yves G, Wu Y, Breedlove JL, Prince JS, Dowdle LT, Nau M, Caron B, Pestilli F, Charest I, Hutchinson JB, Naselaris T, Kay K. A massive 7T fMRI dataset to bridge cognitive neuroscience and artificial intelligence. Nat Neurosci 2022; 25:116-126. [PMID: 34916659 DOI: 10.1038/s41593-021-00962-x] [Citation(s) in RCA: 144] [Impact Index Per Article: 48.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 10/12/2021] [Indexed: 11/09/2022]
Abstract
Extensive sampling of neural activity during rich cognitive phenomena is critical for robust understanding of brain function. Here we present the Natural Scenes Dataset (NSD), in which high-resolution functional magnetic resonance imaging responses to tens of thousands of richly annotated natural scenes were measured while participants performed a continuous recognition task. To optimize data quality, we developed and applied novel estimation and denoising techniques. Simple visual inspections of the NSD data reveal clear representational transformations along the ventral visual pathway. Further exemplifying the inferential power of the dataset, we used NSD to build and train deep neural network models that predict brain activity more accurately than state-of-the-art models from computer vision. NSD also includes substantial resting-state and diffusion data, enabling network neuroscience perspectives to constrain and enhance models of perception and memory. Given its unprecedented scale, quality and breadth, NSD opens new avenues of inquiry in cognitive neuroscience and artificial intelligence.
Affiliation(s)
- Emily J Allen
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
| | - Ghislain St-Yves
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA
- Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
| | - Yihan Wu
- Graduate Program in Cognitive Science, University of Minnesota, Minneapolis, MN, USA
| | - Jesse L Breedlove
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
| | - Jacob S Prince
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
| | - Logan T Dowdle
- Department of Neuroscience, Center for Magnetic Resonance Research (CMRR), University of Minnesota, Minneapolis, MN, USA
- Department of Neurosurgery, Center for Magnetic Resonance Research (CMRR), University of Minnesota, Minneapolis, MN, USA
| | - Matthias Nau
- National Institute of Mental Health (NIMH), Bethesda MD, USA
| | - Brad Caron
- Program in Neuroscience, Indiana University, Bloomington IN, USA
- Program in Vision Science, Indiana University, Bloomington IN, USA
| | - Franco Pestilli
- Department of Psychology, University of Texas at Austin, Austin, TX, USA
- Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Institute for Neuroscience, University of Texas at Austin, Austin, TX, USA
| | - Ian Charest
- Center for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, UK
- cerebrUM, Département de Psychologie, Université de Montréal, Montréal QC, Canada
| | | | - Thomas Naselaris
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA
- Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
| | - Kendrick Kay
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, Minneapolis, MN, USA.
| |
38
Kale A, Wu Y, Hullman J. Causal Support: Modeling Causal Inferences with Visualizations. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:1150-1160. [PMID: 34587057 DOI: 10.1109/tvcg.2021.3114824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual "insights". We formally evaluate the quality of causal inferences from visualizations by adopting causal support, a Bayesian cognition model that learns the probability of alternative causal explanations given some data, as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software.
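A minimal sketch of causal support as Bayesian model comparison: given recovery counts in a treatment and a control group, compare the marginal likelihood of a model with independent rates against a model with one shared rate. Uniform priors and grid integration are simplifying assumptions for illustration, not necessarily the authors' exact formulation.

```python
# Sketch of "causal support" as Bayesian model comparison over count data.
# Uniform priors and grid integration are simplifying assumptions.
import numpy as np
from scipy.stats import binom

k_treat, n_treat = 30, 100        # recoveries out of n in treatment group
k_ctrl,  n_ctrl  = 18, 100        # recoveries out of n in control group

p_grid = np.linspace(0.001, 0.999, 400)

# M0: one shared recovery rate for both groups
lik_m0 = binom.pmf(k_treat, n_treat, p_grid) * binom.pmf(k_ctrl, n_ctrl, p_grid)
marg_m0 = lik_m0.mean()                       # integrate over uniform prior

# M1: independent rates for treatment and control (treatment effect)
marg_m1 = (binom.pmf(k_treat, n_treat, p_grid).mean()
           * binom.pmf(k_ctrl, n_ctrl, p_grid).mean())

causal_support = np.log(marg_m1) - np.log(marg_m0)
print(f"causal support (log odds for a treatment effect): {causal_support:.2f}")
```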
39
Li HH, Sprague TC, Yoo AH, Ma WJ, Curtis CE. Joint representation of working memory and uncertainty in human cortex. Neuron 2021; 109:3699-3712.e6. [PMID: 34525327 PMCID: PMC8602749 DOI: 10.1016/j.neuron.2021.08.022] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2021] [Revised: 07/09/2021] [Accepted: 08/17/2021] [Indexed: 10/20/2022]
Abstract
Neural representations of visual working memory (VWM) are noisy, and thus, decisions based on VWM are inevitably subject to uncertainty. However, the mechanisms by which the brain simultaneously represents the content and uncertainty of memory remain largely unknown. Here, inspired by the theory of probabilistic population codes, we test the hypothesis that the human brain represents an item maintained in VWM as a probability distribution over stimulus feature space, thereby capturing both its content and uncertainty. We used a neural generative model to decode probability distributions over memorized locations from fMRI activation patterns. We found that the mean of the probability distribution decoded from retinotopic cortical areas predicted memory reports on a trial-by-trial basis. Moreover, in several of the same mid-dorsal stream areas, the spread of the distribution predicted subjective trial-by-trial uncertainty judgments. These results provide evidence that VWM content and uncertainty are jointly represented by probabilistic neural codes.
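In the spirit of the probabilistic decoding described above, the following sketch decodes a full posterior distribution over a remembered location from simulated voxel responses using simple tuning curves and Gaussian noise, then reads out both a point estimate and a spread. All tuning functions, noise levels, and names are invented for illustration.

```python
# Sketch: decode a probability distribution over a remembered location from
# voxel responses with a simple generative model (tuning curves + Gaussian
# noise). Purely illustrative of probabilistic decoding logic.
import numpy as np

rng = np.random.default_rng(5)
locations = np.linspace(0, 2 * np.pi, 180, endpoint=False)
n_voxels = 60
prefs = rng.uniform(0, 2 * np.pi, n_voxels)           # voxel preferred angles

def tuning(theta):
    """Expected response of each voxel to a location theta (von Mises-like)."""
    return np.exp(2.0 * np.cos(theta - prefs))

true_loc, noise_sd = np.pi / 3, 2.0
response = tuning(true_loc) + rng.normal(0, noise_sd, n_voxels)

# Posterior over locations (flat prior, independent Gaussian noise)
log_lik = np.array([-np.sum((response - tuning(t)) ** 2) / (2 * noise_sd**2)
                    for t in locations])
posterior = np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()

decoded = locations[posterior.argmax()]
spread = np.sqrt(np.sum(posterior * (locations - decoded) ** 2))  # crude uncertainty
print(f"decoded: {decoded:.2f} rad, true: {true_loc:.2f} rad, spread: {spread:.2f}")
```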
Affiliation(s)
- Hsin-Hung Li
- Department of Psychology, New York University, New York, NY 10003, USA
| | - Thomas C Sprague
- Department of Psychology, New York University, New York, NY 10003, USA; Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA 93106, USA
| | - Aspen H Yoo
- Department of Psychology, New York University, New York, NY 10003, USA
| | - Wei Ji Ma
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA
| | - Clayton E Curtis
- Department of Psychology, New York University, New York, NY 10003, USA; Center for Neural Science, New York University, New York, NY 10003, USA.
| |
40
Liu J, Zhang H, Yu T, Ren L, Ni D, Yang Q, Lu B, Zhang L, Axmacher N, Xue G. Transformative neural representations support long-term episodic memory. SCIENCE ADVANCES 2021; 7:eabg9715. [PMID: 34623910 PMCID: PMC8500506 DOI: 10.1126/sciadv.abg9715] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Accepted: 08/17/2021] [Indexed: 06/13/2023]
Abstract
Memory is often conceived as a dynamic process that involves substantial transformations of mental representations. However, the neural mechanisms underlying these transformations and their role in memory formation and retrieval have only started to be elucidated. Combining intracranial EEG recordings with deep neural network models, we provide a detailed picture of the representational transformations from encoding to short-term memory maintenance and long-term memory retrieval that underlie successful episodic memory. We observed substantial representational transformations during encoding. Critically, more pronounced semantic representational formats predicted better subsequent long-term memory, and this effect was mediated by more consistent item-specific representations across encoding events. The representations were further transformed right after stimulus offset, and the representations during long-term memory retrieval were more similar to those during short-term maintenance than during encoding. Our results suggest that memory representations pass through multiple stages of transformations to achieve successful long-term memory formation and recall.
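A minimal sketch of the encoding-retrieval similarity logic that underlies analyses like this one: correlate each item's feature vector at encoding with its vector at retrieval and compare same-item to different-item similarity. The data are synthetic and the transformation applied is arbitrary; this is not the authors' analysis.

```python
# Sketch of encoding-retrieval similarity (ERS) for transformed representations
# (synthetic data for illustration; not the authors' analysis).
import numpy as np

rng = np.random.default_rng(6)
n_items, n_features = 40, 300
encoding = rng.normal(size=(n_items, n_features))
retrieval = 0.6 * encoding + 0.8 * rng.normal(size=(n_items, n_features))  # transformed

def zrows(x):
    return (x - x.mean(1, keepdims=True)) / x.std(1, keepdims=True)

sim = zrows(encoding) @ zrows(retrieval).T / n_features   # item x item correlations
same = np.diag(sim).mean()
different = sim[~np.eye(n_items, dtype=bool)].mean()
print(f"same-item ERS: {same:.2f}  different-item ERS: {different:.2f}")
```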
Affiliation(s)
- Jing Liu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Hui Zhang
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum 44801, Germany
| | - Tao Yu
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
| | - Liankun Ren
- Comprehensive Epilepsy Center of Beijing, Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
| | - Duanyu Ni
- Beijing Institute of Functional Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
| | - Qinhao Yang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Baoqing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Liang Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Nikolai Axmacher
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum 44801, Germany
| | - Gui Xue
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| |
41
Hallenbeck GE, Sprague TC, Rahmati M, Sreenivasan KK, Curtis CE. Working memory representations in visual cortex mediate distraction effects. Nat Commun 2021; 12:4714. [PMID: 34354071 PMCID: PMC8342709 DOI: 10.1038/s41467-021-24973-1] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 07/13/2021] [Indexed: 11/17/2022] Open
Abstract
Although the contents of working memory can be decoded from visual cortex activity, these representations may play a limited role if they are not robust to distraction. We used model-based fMRI to estimate the impact of distracting visual tasks on working memory representations in several visual field maps in visual and frontoparietal association cortex. Here, we show distraction causes the fidelity of working memory representations to briefly dip when both the memorandum and distractor are jointly encoded by the population activities. Distraction induces small biases in memory errors which can be predicted by biases in neural decoding in early visual cortex, but not other regions. Although distraction briefly disrupts working memory representations, the widespread redundancy with which working memory information is encoded may protect against catastrophic loss. In early visual cortex, the neural representation of information in working memory and behavioral performance are intertwined, solidifying its importance in visual memory. The relative roles of visual, parietal, and frontal cortex in working memory have been actively debated. Here, the authors show that distraction impacts visual working memory representations in primary visual areas, indicating that these regions play a key role in the maintenance of working memory.
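One common way to summarize how well a decoded spatial representation points at the true target, as in model-based analyses of this kind, is a fidelity metric: project the reconstructed activation profile onto the target direction. The sketch below uses a synthetic reconstruction and is illustrative of the metric only, not the paper's code.

```python
# Sketch of a "fidelity" metric for decoded spatial representations: project
# a reconstructed activation profile onto the true target direction.
# Synthetic reconstruction; illustrative of the metric, not the paper's code.
import numpy as np

angles = np.deg2rad(np.arange(0, 360, 10))
target = np.deg2rad(90)

rng = np.random.default_rng(8)
recon = np.exp(1.5 * np.cos(angles - target)) + rng.normal(0, 0.3, angles.size)

fidelity = np.mean(recon * np.cos(angles - target))
print(f"fidelity: {fidelity:.2f}")     # drops toward 0 if the profile flattens
```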
Affiliation(s)
| | - Thomas C Sprague
- Department of Psychology, New York University, New York, NY, USA; Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA
| | - Masih Rahmati
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
| | - Kartik K Sreenivasan
- Division of Science and Mathematics, New York University Abu Dhabi, Abu Dhabi, UAE
| | - Clayton E Curtis
- Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
| |
42
43
Königsmark VT, Bergmann J, Reeder RR. The Ganzflicker experience: High probability of seeing vivid and complex pseudo-hallucinations with imagery but not aphantasia. Cortex 2021; 141:522-534. [PMID: 34172274 DOI: 10.1016/j.cortex.2021.05.007] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Revised: 02/02/2021] [Accepted: 05/09/2021] [Indexed: 02/07/2023]
Abstract
There are considerable individual differences in visual mental imagery ability across the general population, including a "blind mind's eye", or aphantasia. Recent studies have shown that imagery is linked to differences in perception in the healthy population, and clinical work has found a connection between imagery and hallucinatory experiences in neurological disorders. However, whether imagery ability is associated with anomalous perception-including hallucinations-in the general population remains unclear. In the current study, we explored the relationship between imagery ability and the anomalous perception of pseudo-hallucinations (PH) using rhythmic flicker stimulation ("Ganzflicker"). Specifically, we investigated whether the ability to generate voluntary imagery is associated with susceptibility to flicker-induced PH. We additionally explored individual differences in observed features of PH. We recruited a sample of people with aphantasia (aphants) and imagery (imagers) to view a constant red-and-black flicker for approximately 10 min. We found that imagers were more susceptible to PH, and saw more complex and vivid PH, compared to aphants. This study provides the first evidence that the ability to generate visual imagery increases the likelihood of experiencing complex and vivid anomalous percepts.
Affiliation(s)
- Varg T Königsmark
- Institute of Psychology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany
| | - Johanna Bergmann
- Department of Psychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Reshanne R Reeder
- Institute of Psychology, Otto-von-Guericke University Magdeburg, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Department of Psychology, Edge Hill University, Ormskirk, UK.
| |
44
Milton F, Fulford J, Dance C, Gaddum J, Heuerman-Williamson B, Jones K, Knight KF, MacKisack M, Winlove C, Zeman A. Behavioral and Neural Signatures of Visual Imagery Vividness Extremes: Aphantasia versus Hyperphantasia. Cereb Cortex Commun 2021; 2:tgab035. [PMID: 34296179 PMCID: PMC8186241 DOI: 10.1093/texcom/tgab035] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2021] [Revised: 04/18/2021] [Accepted: 04/20/2021] [Indexed: 12/17/2022] Open
Abstract
Although Galton recognized in the 1880s that some individuals lack visual imagery, this phenomenon was mostly neglected over the following century. We recently coined the terms "aphantasia" and "hyperphantasia" to describe visual imagery vividness extremes, unlocking a sustained surge of public interest. Aphantasia is associated with subjective impairment of face recognition and autobiographical memory. Here we report the first systematic, wide-ranging neuropsychological and brain imaging study of people with aphantasia (n = 24), hyperphantasia (n = 25), and midrange imagery vividness (n = 20). Despite equivalent performance on standard memory tests, marked group differences were measured in autobiographical memory and imagination, participants with hyperphantasia outperforming controls who outperformed participants with aphantasia. Face recognition difficulties and autistic spectrum traits were reported more commonly in aphantasia. The Revised NEO Personality Inventory highlighted reduced extraversion in the aphantasia group and increased openness in the hyperphantasia group. Resting state fMRI revealed stronger connectivity between prefrontal cortices and the visual network among hyperphantasic than aphantasic participants. In an active fMRI paradigm, there was greater anterior parietal activation among hyperphantasic and control than aphantasic participants when comparing visualization of famous faces and places with perception. These behavioral and neural signatures of visual imagery vividness extremes validate and illuminate this significant but neglected dimension of individual difference.
Affiliation(s)
- Fraser Milton
- Discipline of Psychology, University of Exeter, Exeter EX4 4QG, UK
| | - Jon Fulford
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | - Carla Dance
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | - James Gaddum
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | | | - Kealan Jones
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | - Kathryn F Knight
- Discipline of Psychology, University of Exeter, Exeter EX4 4QG, UK
| | - Matthew MacKisack
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | - Crawford Winlove
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| | - Adam Zeman
- Cognitive Neurology Research Group, University of Exeter Medical School, College House, Exeter EX1 2LU, UK
| |
45
Knapen T. Topographic connectivity reveals task-dependent retinotopic processing throughout the human brain. Proc Natl Acad Sci U S A 2021; 118:e2017032118. [PMID: 33372144 PMCID: PMC7812773 DOI: 10.1073/pnas.2017032118] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
The human visual system is organized as a hierarchy of maps that share the topography of the retina. Known retinotopic maps have been identified using simple visual stimuli under strict fixation, conditions different from everyday vision which is active, dynamic, and complex. This means that it remains unknown how much of the brain is truly visually organized. Here I demonstrate widespread stable visual organization beyond the traditional visual system, in default-mode network and hippocampus. Detailed topographic connectivity with primary visual cortex during movie-watching, resting-state, and retinotopic-mapping experiments revealed that visual-spatial representations throughout the brain are warped by cognitive state. Specifically, traditionally visual regions alternate with default-mode network and hippocampus in preferentially representing the center of the visual field. This visual role of default-mode network and hippocampus would allow these regions to interface between abstract memories and concrete sensory impressions. Together, these results indicate that visual-spatial organization is a fundamental coding principle that structures the communication between distant brain regions.
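A hedged sketch of the topographic-connectivity logic described above: assign each vertex outside visual cortex the visual-field preference of the V1 vertex whose time course it correlates with most strongly. The simulated time courses, mixing weights, and eccentricity values are invented for illustration, not the study's implementation.

```python
# Sketch of topographic ("connective-field"-style) connectivity: each target
# vertex inherits the eccentricity of its most-correlated V1 vertex.
# Synthetic data; illustrative of the logic, not the study's implementation.
import numpy as np

rng = np.random.default_rng(7)
n_time, n_v1, n_target = 200, 100, 10
v1_ts = rng.normal(size=(n_time, n_v1))
v1_eccentricity = np.linspace(0.5, 10, n_v1)            # deg, known from mapping

# Each target vertex mixes a few V1 vertices plus noise
weights = rng.random((n_v1, n_target)) ** 8              # sparse-ish mixing
target_ts = v1_ts @ weights + rng.normal(size=(n_time, n_target))

corr = np.corrcoef(v1_ts.T, target_ts.T)[:n_v1, n_v1:]   # V1 x target correlations
inherited_ecc = v1_eccentricity[corr.argmax(axis=0)]
print(np.round(inherited_ecc, 1))                        # estimated preferred eccentricity
```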
Collapse
Affiliation(s)
- Tomas Knapen
- Spinoza Centre for Neuroimaging, Royal Netherlands Academy of Sciences, Meibergdreef 75, 1105 BK Amsterdam, The Netherlands;
- Cognitive Psychology, Faculty of Behavioural and Movement Sciences, Vrije Universiteit, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
| |
Collapse
|
46
|
Favila SE, Lee H, Kuhl BA. Transforming the Concept of Memory Reactivation. Trends Neurosci 2020; 43:939-950. [PMID: 33041061 PMCID: PMC7688497 DOI: 10.1016/j.tins.2020.09.006] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Revised: 08/18/2020] [Accepted: 09/15/2020] [Indexed: 12/18/2022]
Abstract
Reactivation refers to the phenomenon wherein patterns of neural activity expressed during perceptual experience are re-expressed at a later time, a putative neural marker of memory. Reactivation of perceptual content has been observed across many cortical areas and correlates with objective and subjective expressions of memory in humans. However, because reactivation emphasizes similarities between perceptual and memory-based representations, it obscures differences in how perceptual events and memories are represented. Here, we highlight recent evidence of systematic differences in how (and where) perceptual events and memories are represented in the brain. We argue that neural representations of memories are best thought of as spatially transformed versions of perceptual representations. We consider why spatial transformations occur and identify critical questions for future research.
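Reactivation in this sense is commonly quantified as encoding-retrieval pattern similarity. The sketch below computes a simple reactivation index (same-item minus different-item correlation between encoding and retrieval patterns); it is a generic illustration of the concept, with assumed shapes and simulated data, not a reproduction of any specific analysis reviewed in the article.

```python
# Generic reactivation index: how much more similar is an item's retrieval
# pattern to its own encoding pattern than to other items' encoding patterns?
import numpy as np

def reactivation_index(encoding_patterns, retrieval_patterns):
    """encoding_patterns, retrieval_patterns : (n_items, n_voxels) arrays,
    where row i of each array corresponds to the same item.
    Returns mean same-item minus different-item correlation."""
    n_items = encoding_patterns.shape[0]
    # Mean-center and unit-normalize rows so dot products equal Pearson correlations
    ez = encoding_patterns - encoding_patterns.mean(1, keepdims=True)
    rz = retrieval_patterns - retrieval_patterns.mean(1, keepdims=True)
    ez /= np.linalg.norm(ez, axis=1, keepdims=True)
    rz /= np.linalg.norm(rz, axis=1, keepdims=True)
    corr = ez @ rz.T                         # (n_items, n_items) similarity matrix
    same = np.diag(corr).mean()              # same-item (diagonal) similarity
    diff = corr[~np.eye(n_items, dtype=bool)].mean()  # different-item similarity
    return same - diff

# Simulated example: retrieval patterns partially re-express encoding patterns
rng = np.random.default_rng(2)
enc = rng.standard_normal((40, 300))
ret = 0.5 * enc + rng.standard_normal((40, 300))
print(f"reactivation index: {reactivation_index(enc, ret):.3f}")
```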
Collapse
Affiliation(s)
- Serra E Favila
- Department of Psychology, Columbia University, New York, NY 10027, USA
| | - Hongmi Lee
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Brice A Kuhl
- Department of Psychology, University of Oregon, Eugene, OR 97403, USA.
| |
Collapse
|
47
|
Nestor A, Lee ACH, Plaut DC, Behrmann M. The Face of Image Reconstruction: Progress, Pitfalls, Prospects. Trends Cogn Sci 2020; 24:747-759. [PMID: 32674958 PMCID: PMC7429291 DOI: 10.1016/j.tics.2020.06.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Revised: 05/27/2020] [Accepted: 06/15/2020] [Indexed: 10/23/2022]
Abstract
Recent research has demonstrated that neural and behavioral data acquired in response to viewing face images can be used to reconstruct the images themselves. However, the theoretical implications, promises, and challenges of this direction of research remain unclear. We evaluate the potential of this research for elucidating the visual representations underlying face recognition. Specifically, we outline complementary and converging accounts of the visual content, the representational structure, and the neural dynamics of face processing. We illustrate how this research addresses fundamental questions in the study of normal and impaired face recognition, and how image reconstruction provides a powerful framework for uncovering face representations, for unifying multiple types of empirical data, and for facilitating both theoretical and methodological progress.
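The linear-decoding logic behind such image reconstruction can be sketched briefly: fit a regularized linear mapping from response patterns to pixel values on a training set of faces, then apply it to a held-out response pattern. The ridge solution, dimensions, and simulated data below are illustrative assumptions rather than the specific reconstruction methods the review discusses.

```python
# Minimal linear reconstruction sketch: map response patterns to pixel values
# with ridge regression, then reconstruct a held-out face. Simulated data only.
import numpy as np

def fit_linear_decoder(responses, images, alpha=1.0):
    """responses : (n_train, n_features) neural or behavioral patterns
    images : (n_train, n_pixels) vectorized face images
    Returns ridge-regression weights mapping responses -> pixels."""
    # Closed-form ridge solution: W = (R^T R + alpha * I)^-1 R^T Y
    gram = responses.T @ responses + alpha * np.eye(responses.shape[1])
    return np.linalg.solve(gram, responses.T @ images)

# Hypothetical training data generated from a hidden response-to-pixel mapping
rng = np.random.default_rng(3)
true_w = rng.standard_normal((50, 32 * 32))
train_resp = rng.standard_normal((200, 50))
train_imgs = train_resp @ true_w + 0.1 * rng.standard_normal((200, 32 * 32))

w = fit_linear_decoder(train_resp, train_imgs)
test_resp = rng.standard_normal(50)                 # held-out response pattern
reconstruction = (test_resp @ w).reshape(32, 32)    # estimated face image
print(reconstruction.shape)
```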
Collapse
Affiliation(s)
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada.
| | - Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
| | - David C Plaut
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
| | - Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
| |
Collapse
|