1. Guassi Moreira JF, Silvers JA. Multi-voxel pattern analysis for developmental cognitive neuroscientists. Dev Cogn Neurosci 2025; 73:101555. PMID: 40188575; PMCID: PMC12002837; DOI: 10.1016/j.dcn.2025.101555.
Abstract
The current prevailing approaches to analyzing task fMRI data in developmental cognitive neuroscience are brain connectivity and mass univariate task-based analyses, used either in isolation or as part of a broader analytic framework (e.g., BWAS). While these are powerful tools, it is somewhat surprising that multi-voxel pattern analysis (MVPA) is not more common in developmental cognitive neuroscience, given both its enhanced ability to probe neural population codes and its greater sensitivity relative to the mass univariate approach. Omitting MVPA methods might represent a missed opportunity to leverage a suite of tools that are uniquely poised to reveal mechanisms underlying brain development. The goal of this review is to spur awareness and adoption of MVPA in developmental cognitive neuroscience by providing a practical introduction to foundational MVPA concepts. We begin by defining MVPA and explaining why examining multi-voxel patterns of brain activity can aid in understanding the developing human brain. We then survey four different types of MVPA: decoding, representational similarity analysis (RSA), pattern expression, and voxel-wise encoding models. Each variant of MVPA is presented with a conceptual overview of the method, followed by practical considerations and subvariants thereof. We go on to highlight the types of developmental questions that can be answered by MVPA, discuss practical matters in MVPA implementation germane to developmental cognitive neuroscientists, and make recommendations for integrating MVPA with the existing analytic ecosystem in the field.
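For readers new to the decoding family of MVPA, the following is a minimal Python sketch of a cross-validated decoding analysis on simulated multi-voxel patterns; the data, region size, and effect size are illustrative assumptions, not anything taken from the article.

# Minimal cross-validated decoding sketch on simulated voxel patterns (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200                 # trials x voxels in a hypothetical ROI
labels = np.repeat([0, 1], n_trials // 2)     # two task conditions
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5             # weak multivariate signal in 20 voxels

clf = make_pipeline(StandardScaler(), LinearSVC())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv)
print(f"Cross-validated decoding accuracy: {scores.mean():.2f}")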
2. Takeda K, Abe K, Kitazono J, Oizumi M. Unsupervised alignment reveals structural commonalities and differences in neural representations of natural scenes across individuals and brain areas. iScience 2025; 28:112427. PMID: 40343275; PMCID: PMC12059663; DOI: 10.1016/j.isci.2025.112427.
Abstract
Neuroscience research aims to identify universal neural mechanisms underlying sensory information encoding by comparing neural representations across individuals, typically using Representational Similarity Analysis. However, traditional methods assume direct stimulus correspondence across individuals, limiting the exploration of other possibilities. To address this, we propose an unsupervised alignment framework based on Gromov-Wasserstein Optimal Transport, which identifies correspondences between neural representations solely from internal similarity structures, without relying on stimulus labels. Applying this method to Neuropixels recordings in mice and fMRI data in humans viewing natural scenes, we found that the neural representations in the same visual cortical areas can be well aligned across individuals in an unsupervised manner. Furthermore, alignment across different brain areas is influenced by factors beyond the visual hierarchy, with higher-order visual areas aligning well with each other, but not with lower-order areas. This unsupervised approach reveals more nuanced structural commonalities and differences in neural representations than conventional methods.
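The sketch below illustrates the core idea of unsupervised alignment with Gromov-Wasserstein optimal transport, using the POT library on simulated data; the two "individuals", the dissimilarity metric, and the matching criterion are illustrative assumptions rather than the authors' pipeline.

# Unsupervised alignment of two representational geometries via Gromov-Wasserstein OT (sketch).
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n_stim, n_units = 50, 100
resp_a = rng.normal(size=(n_stim, n_units))                  # individual A: stimuli x units
resp_b = resp_a + 0.3 * rng.normal(size=(n_stim, n_units))   # individual B: similar geometry, noisier

# Within-individual dissimilarity structures; no shared stimulus labels are used.
D_a = cdist(resp_a, resp_a, metric="correlation")
D_b = cdist(resp_b, resp_b, metric="correlation")

p = np.full(n_stim, 1.0 / n_stim)                            # uniform marginals
T = ot.gromov.gromov_wasserstein(D_a, D_b, p, p, loss_fun="square_loss")

# If alignment succeeds, the transport plan concentrates on the diagonal:
# each stimulus in A is matched to the same stimulus in B.
matching_rate = np.mean(T.argmax(axis=1) == np.arange(n_stim))
print(f"Top-1 unsupervised matching rate: {matching_rate:.2f}")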
Affiliation(s)
- Ken Takeda
- Graduate School of Arts and Science, The University of Tokyo, Tokyo, Japan
- Kota Abe
- Graduate School of Arts and Science, The University of Tokyo, Tokyo, Japan
- Jun Kitazono
- Graduate School of Data Science, Yokohama City University, Kanagawa, Japan
- Masafumi Oizumi
- Graduate School of Arts and Science, The University of Tokyo, Tokyo, Japan
3. Nakai T, Kubo R, Nishimoto S. Cortical representational geometry of diverse tasks reveals subject-specific and subject-invariant cognitive structures. Commun Biol 2025; 8:713. PMID: 40341201; PMCID: PMC12062439; DOI: 10.1038/s42003-025-08134-4.
Abstract
The variability in brain function forms the basis for our uniqueness. Prior studies indicate smaller individual differences and larger inter-subject correlation (ISC) in sensorimotor areas than in the association cortex. Because these studies derive information from brain activity alone, individual differences in cognitive structures based on task-similarity relations remain unexplored. This study quantitatively evaluates these differences by integrating ISC, representational similarity analysis, and vertex-wise encoding models using functional magnetic resonance imaging across 25 cognitive tasks. ISC based on cognitive structures enables subject identification with 100% accuracy using at least 14 tasks. ISC is larger in the fronto-parietal association and higher-order visual cortices, suggesting subject-invariant cognitive structures in these regions. Principal component analysis reveals different cognitive structure configurations within these regions. This study provides evidence of individual variability and similarity in abstract cognitive structures.
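A minimal sketch of the subject-identification logic, assuming each subject's "cognitive structure" is a task-by-task representational dissimilarity matrix estimated in two independent sessions; all data are simulated and the setup is an assumption, not the study's code.

# Subject identification from task-by-task representational structures (sketch, simulated data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_subjects, n_tasks, n_vertices = 6, 25, 300

def session_rdm(stable_profile):
    """Task-by-task dissimilarities (lower triangle) for one noisy session of one subject."""
    responses = stable_profile + 0.5 * rng.normal(size=stable_profile.shape)
    return pdist(responses, metric="correlation")

profiles = rng.normal(size=(n_subjects, n_tasks, n_vertices))   # stable structure per subject
session1 = np.array([session_rdm(p) for p in profiles])
session2 = np.array([session_rdm(p) for p in profiles])

# Identification succeeds when a subject's session-1 structure matches their own session-2 structure.
similarity = np.array([[spearmanr(s1, s2)[0] for s2 in session2] for s1 in session1])
accuracy = np.mean(similarity.argmax(axis=1) == np.arange(n_subjects))
print(f"Subject identification accuracy: {accuracy:.2f}")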
Affiliation(s)
- Tomoya Nakai
- Araya Inc, Tokyo, Japan.
- Lyon Neuroscience Research Center (CRNL), INSERM U1028 - CNRS UMR5292, University of Lyon, Bron, France.
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan.
- Rieko Kubo
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan
- Graduate School of Frontier Biosciences, The University of Osaka, Suita, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan
- Graduate School of Frontier Biosciences, The University of Osaka, Suita, Japan
- Graduate School of Medicine, The University of Osaka, Suita, Japan
4. Haupt M, Garrett DD, Cichy RM. Healthy aging delays and dedifferentiates high-level visual representations. Curr Biol 2025; 35:2112-2127.e6. PMID: 40239656; DOI: 10.1016/j.cub.2025.03.062.
Abstract
Healthy aging impacts visual information processing with consequences for subsequent high-level cognition and everyday behavior, but the underlying neural changes in visual representations remain unknown. Here, we investigate the nature of representations underlying object recognition in older compared to younger adults by tracking them in time using electroencephalography (EEG), across space using functional magnetic resonance imaging (fMRI), and by probing their behavioral relevance using similarity judgments. Applying a multivariate analysis framework to combine experimental assessments, four key findings about how brain aging impacts object recognition emerge. First, aging selectively delays the formation of object representations, profoundly changing the chronometry of visual processing. Second, the delay in the formation of object representations emerges in high-level rather than low- and mid-level ventral visual cortex, supporting the theory that brain areas developing last deteriorate first. Third, aging reduces content selectivity in the high-level ventral visual cortex, indicating age-related neural dedifferentiation as the mechanism of representational change. Finally, we demonstrate that the identified representations of the aging brain are behaviorally relevant, ascertaining ecological relevance. Together, our results reveal the impact of healthy aging on the visual brain.
Affiliation(s)
- Marleen Haupt
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzallee 94, Berlin 14195, Germany.
- Douglas D Garrett
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, 10-12 Russell Square, London WC1B 5EH, UK
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, Berlin 14195, Germany; Berlin School of Mind and Brain, Faculty of Philosophy, Humboldt-Universität zu Berlin, Luisenstraße 56, Berlin 10117, Germany; Bernstein Center for Computational Neuroscience Berlin, Humboldt-Universität zu Berlin, Philippstraße 13, Berlin 10115, Germany.
5. Cheng YA, Sanayei M, Chen X, Jia K, Li S, Fang F, Watanabe T, Thiele A, Zhang RY. A neural geometry approach comprehensively explains apparently conflicting models of visual perceptual learning. Nat Hum Behav 2025; 9:1023-1040. PMID: 40164913; PMCID: PMC12106082; DOI: 10.1038/s41562-025-02149-x.
Abstract
Visual perceptual learning (VPL), defined as long-term improvement in a visual task, is considered a crucial tool for elucidating underlying visual and brain plasticity. Previous studies have proposed several neural models of VPL, including changes in neural tuning or in noise correlations. Here, to adjudicate different models, we propose that all neural changes at single units can be conceptualized as geometric transformations of population response manifolds in a high-dimensional neural space. Following this neural geometry approach, we identified neural manifold shrinkage due to reduced trial-by-trial population response variability, rather than tuning or correlation changes, as the primary mechanism of VPL. Furthermore, manifold shrinkage successfully explains VPL effects across artificial neural responses in deep neural networks, multivariate blood-oxygenation-level-dependent signals in humans and multiunit activities in monkeys. These converging results suggest that our neural geometry approach comprehensively explains a wide range of empirical results and reconciles previously conflicting models of VPL.
Affiliation(s)
- Yu-Ang Cheng
- Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Mehdi Sanayei
- Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Xing Chen
- Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA, USA
- Ke Jia
- Affiliated Mental Health Center and Hangzhou Seventh People's Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China
- Liangzhu Laboratory, MOE Frontier Science Center for Brain Science and Brain-machine Integration, State Key Laboratory of Brain-Machine Intelligence, Zhejiang University, Hangzhou, People's Republic of China
- NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University, Hangzhou, People's Republic of China
- Sheng Li
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
- Fang Fang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, People's Republic of China
- IDG/McGovern Institute for Brain Research, Peking University, Beijing, People's Republic of China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, People's Republic of China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, People's Republic of China
- Takeo Watanabe
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Alexander Thiele
- Biosciences Institute, Newcastle University, Framlington Place, Newcastle upon Tyne, UK
- Ru-Yuan Zhang
- Brain Health Institute, National Center for Mental Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine and School of Psychology, Shanghai, People's Republic of China.
6. Durkin C, Apicella M, Baldassano C, Kandel E, Shohamy D. The Beholder's Share: Bridging art and neuroscience to study individual differences in subjective experience. Proc Natl Acad Sci U S A 2025; 122:e2413871122. PMID: 40193608; PMCID: PMC12012540; DOI: 10.1073/pnas.2413871122.
Abstract
Our experience of the world is inherently subjective, shaped by individual history, knowledge, and perspective. Art offers a framework within which this subjectivity is practiced and promoted, inviting viewers to engage in interpretation. According to art theory, different forms of art, ranging from the representational to the abstract, challenge these interpretive processes in different ways. Yet, much remains unknown about how art is subjectively interpreted. In this study, we sought to elucidate the neural and cognitive mechanisms that underlie the subjective interpretation of art. Using brain imaging and written descriptions, we quantified individual variability in responses to paintings by the same artists, contrasting figurative and abstract paintings. Our findings revealed that abstract art elicited greater interindividual variability in activity within higher-order, associative brain areas, particularly those comprising the default-mode network. By contrast, no such differences were found in early visual areas, suggesting that subjective variability arises from higher cognitive processes rather than differences in sensory processing. These findings provide insight into how the brain engages with and perceives different forms of art and imbues them with subjective interpretation.
Affiliation(s)
- Celia Durkin
- Department of Psychology, Columbia University, New York, NY 10027
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY 10027
- Marc Apicella
- Department of Psychology, Columbia University, New York, NY 10027
- Eric Kandel
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, New York, NY 10027
- Daphna Shohamy
- Department of Psychology, Columbia University, New York, NY 10027
- Zuckerman Mind Brain and Behavior Institute, Columbia University, New York, NY 10027
- Kavli Institute for Brain Science, New York, NY 10027
7. Ohm T, Karjus A, Tamm MV, Schich M. fruit-SALAD: A Style Aligned Artwork Dataset to reveal similarity perception in image embeddings. Sci Data 2025; 12:254. PMID: 39939631; PMCID: PMC11821872; DOI: 10.1038/s41597-025-04529-4.
Abstract
The notion of visual similarity is essential for computer vision and for applications and studies revolving around vector embeddings of images. However, the scarcity of benchmark datasets poses a significant hurdle in exploring how these models perceive similarity. Here we introduce Style Aligned Artwork Datasets (SALAD) and an example instance, fruit-SALAD, with 10,000 images of fruit depictions. This combined semantic category and style benchmark comprises 100 instances each of 10 easy-to-recognize fruit categories, across 10 easily distinguishable styles. Leveraging a systematic pipeline of generative image synthesis, this visually diverse yet balanced benchmark demonstrates salient differences in semantic category and style similarity weights across various computational models, including machine learning models, feature extraction algorithms, and complexity measures, as well as conceptual models for reference. This meticulously designed dataset offers a controlled and balanced platform for the comparative analysis of similarity perception. The SALAD framework allows comparison of how these models perform semantic category and style recognition tasks, moving beyond anecdotal knowledge and making such comparisons robustly quantifiable and qualitatively interpretable.
Affiliation(s)
- Tillmann Ohm
- Tallinn University, School of Digital Technologies, Tallinn, Estonia.
- Tallinn University, School of Humanities, Tallinn, Estonia.
- Andres Karjus
- Estonian Business School, Tallinn, Estonia
- Tallinn University, ERA Chair of Cultural Data Analytics, Tallinn, Estonia
- Tallinn University, Baltic Film, Media and Arts School, Tallinn, Estonia
- Mikhail V Tamm
- Tallinn University, School of Digital Technologies, Tallinn, Estonia
- Tallinn University, ERA Chair of Cultural Data Analytics, Tallinn, Estonia
- Maximilian Schich
- Tallinn University, ERA Chair of Cultural Data Analytics, Tallinn, Estonia
- Tallinn University, Baltic Film, Media and Arts School, Tallinn, Estonia
8. Wang J, Lapate RC. Emotional state dynamics impacts temporal memory. Cogn Emot 2025; 39:136-155. PMID: 38898587; PMCID: PMC11655710; DOI: 10.1080/02699931.2024.2349326.
Abstract
Emotional fluctuations are ubiquitous in everyday life, but precisely how they sculpt the temporal organisation of memories remains unclear. Here, we designed a novel task - the Emotion Boundary Task - wherein participants viewed sequences of negative and neutral images surrounded by a colour border. We manipulated perceptual context (border colour), emotional-picture valence, as well as the direction of emotional-valence shifts (i.e., shifts from neutral-to-negative and negative-to-neutral events) to create events with a shared perceptual and/or emotional context. We measured memory for temporal order and temporal distances for images processed within and across events. Negative images processed within events were remembered as closer in time compared to neutral ones. In contrast, temporal distances were remembered as longer for images spanning neutral-to-negative shifts - suggesting temporal dilation in memory with the onset of a negative event following a previously-neutral state. The extent of negative-picture induced temporal dilation in memory correlated with dispositional negativity across individuals. Lastly, temporal order memory was enhanced for recently-presented negative (versus neutral) images. These findings suggest that emotional-state dynamics matters when considering emotion-temporal memory interactions: While persistent negative events may compress subjectively remembered time, dynamic shifts from neutral-to-negative events produce temporal dilation in memory, with implications for adaptive emotional functioning.
Affiliation(s)
- Jingyi Wang
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
- Regina C Lapate
- Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, USA
9. Zhang Y, Ma C, Li H, Assumpção L, Liu Y. Sophisticated perspective-takers are distinctive: Neural idiosyncrasy of functional connectivity in the mentalizing network. iScience 2024; 27:111472. PMID: 39720521; PMCID: PMC11667172; DOI: 10.1016/j.isci.2024.111472.
Abstract
Naive perspective-takers often perceive the social world in a simplistic and uniform way, whereas sophisticated ones recognize the diversity and complexity of others' minds. This commonly accepted distinction points to a possibility of greater inter-individual variability in mentalizing for sophisticated than naive perspective-takers, a difference previously overlooked in research. In the current study, participants were asked to watch a mentalizing-related movie and their neural responses, interpretations of the characters' mental states, and eye-gaze trajectories were recorded. The results provide robust and converging evidence that the neural connectomic features within the mentalizing network, eye-gaze trajectories, and interpretations of others' mental states exhibit greater inter-individual variability among sophisticated perspective-takers compared to naive ones, supporting that sophisticated perspective-takers are more distinctive while naive ones are more similar. These findings deepen our understanding of mentalizing by highlighting the idiosyncrasy and homogeneity of neural collaboration and behavioral manifestations across varying levels of perspective-taking sophistication.
Affiliation(s)
- Yu Zhang
- School of Psychology, Northeast Normal University, Changchun 130024, China
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun 130024, China
- Chao Ma
- School of Psychology, Northeast Normal University, Changchun 130024, China
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun 130024, China
- Haiming Li
- School of Psychology, Northeast Normal University, Changchun 130024, China
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun 130024, China
- Leonardo Assumpção
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians University, 80802 Munich, Germany
- Yi Liu
- School of Psychology, Northeast Normal University, Changchun 130024, China
- Jilin Provincial Key Laboratory of Cognitive Neuroscience and Brain Development, Changchun 130024, China
10. Hussain A, Walbrin J, Tochadse M, Almeida J. Primary manipulation knowledge of objects is associated with the functional coupling of pMTG and aIPS. Neuropsychologia 2024; 205:109034. PMID: 39536937; DOI: 10.1016/j.neuropsychologia.2024.109034.
Abstract
Correctly using hand-held tools and manipulable objects typically relies not only on sensory and motor-related processes, but also centrally on conceptual knowledge about how objects are typically used (e.g. grasping the handle of a kitchen knife rather than the blade avoids injury). A wealth of fMRI connectivity-related evidence demonstrates that contributions from both ventral and dorsal stream areas are important for accurate tool knowledge and use. Here, we investigate the combined role of ventral and dorsal stream areas in representing "primary" manipulation knowledge - that is, knowledge that is hypothesized to be of central importance for day-to-day object use. We operationalize primary manipulation knowledge by extracting the first dimension from a multi-dimensional scaling solution over a behavioral judgement task where subjects arranged a set of 80 manipulable objects based on their overall manipulation similarity. We then relate this dimension to representational and time-course correlations between ventral and dorsal stream areas. Our results show that functional coupling between posterior middle temporal gyrus (pMTG) and anterior intraparietal sulcus (aIPS) is uniquely related to primary manipulation knowledge about objects, and that this effect is more pronounced for objects that require precision grasping. We reason this is due to precision-grasp objects requiring more ventral/temporal information relating to object shape, material and function to allow correct finger placement and controlled manipulation. These results demonstrate the importance of functional coupling across these ventral and dorsal stream areas in service of manipulation knowledge and accurate grasp-related behavior.
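A small sketch of how a "primary" dimension can be operationalized as the first multidimensional-scaling (MDS) dimension of a behavioral dissimilarity matrix and then related to a per-object coupling measure; the simulated dissimilarities and the hypothetical coupling values are assumptions for illustration only.

# Extracting a first MDS dimension from behavioral dissimilarities and relating it to coupling (sketch).
import numpy as np
from sklearn.manifold import MDS
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_objects = 80

# Symmetric dissimilarity matrix from a behavioral arrangement task (simulated).
latent = rng.normal(size=(n_objects, 2))
dissimilarity = np.linalg.norm(latent[:, None, :] - latent[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissimilarity)
primary_dimension = embedding[:, 0]          # first MDS dimension, one value per object

# Hypothetical per-object coupling estimates (placeholder values, not real pMTG-aIPS data).
coupling = 0.4 * primary_dimension + rng.normal(size=n_objects)
r, p = pearsonr(primary_dimension, coupling)
print(f"Primary dimension vs. coupling: r = {r:.2f}, p = {p:.3f}")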
Affiliation(s)
- Akbar Hussain
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Department of Cognitive Sciences, University of California, Irvine, California 92697-5100, USA
- Jon Walbrin
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal
- Marija Tochadse
- Charité - Universitätsmedizin Berlin (Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Berlin, Germany
- Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.
11. Haupt M, Graumann M, Teng S, Kaltenbach C, Cichy R. The transformation of sensory to perceptual braille letter representations in the visually deprived brain. eLife 2024; 13:RP98148. PMID: 39630852; PMCID: PMC11616995; DOI: 10.7554/elife.98148.
Abstract
Experience-based plasticity of the human cortex mediates the influence of individual experience on cognition and behavior. The complete loss of a sensory modality is among the most extreme such experiences. Investigating such a selective, yet extreme change in experience allows for the characterization of experience-based plasticity at its boundaries. Here, we investigated information processing in individuals who lost vision at birth or early in life by probing the processing of braille letter information. We characterized the transformation of braille letter information from sensory representations depending on the reading hand to perceptual representations that are independent of the reading hand. Using a multivariate analysis framework in combination with functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and behavioral assessment, we tracked cortical braille representations in space and time, and probed their behavioral relevance. We located sensory representations in tactile processing areas and perceptual representations in sighted reading areas, with the lateral occipital complex as a connecting 'hinge' region. This elucidates the plasticity of the visually deprived brain in terms of information processing. Regarding information processing in time, we found that sensory representations emerge before perceptual representations. This indicates that even extreme cases of brain plasticity adhere to a common temporal scheme in the progression from sensory to perceptual transformations. Ascertaining behavioral relevance through perceived similarity ratings, we found that perceptual representations in sighted reading areas, but not sensory representations in tactile processing areas are suitably formatted to guide behavior. Together, our results reveal a nuanced picture of both the potentials and limits of experience-dependent plasticity in the visually deprived brain.
Affiliation(s)
- Marleen Haupt
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Monika Graumann
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Faculty of Philosophy, Humboldt-Universität zu Berlin, Berlin, Germany
- Santani Teng
- Smith-Kettlewell Eye Research Institute, San Francisco, United States
- Carina Kaltenbach
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Radoslaw Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Faculty of Philosophy, Humboldt-Universität zu Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany
12. Han J, Chauhan V, Philip R, Taylor MK, Jung H, Halchenko YO, Gobbini MI, Haxby JV, Nastase SA. Behaviorally-relevant features of observed actions dominate cortical representational geometry in natural vision. bioRxiv [Preprint] 2024:2024.11.26.624178. PMID: 39651248; PMCID: PMC11623629; DOI: 10.1101/2024.11.26.624178.
Abstract
We effortlessly extract behaviorally relevant information from dynamic visual input in order to understand the actions of others. In the current study, we develop and test a number of models to better understand the neural representational geometries supporting action understanding. Using fMRI, we measured brain activity as participants viewed a diverse set of 90 different video clips depicting social and nonsocial actions in real-world contexts. We developed five behavioral models using arrangement tasks: two models reflecting behavioral judgments of the purpose (transitivity) and the social content (sociality) of the actions depicted in the video stimuli; and three models reflecting behavioral judgments of the visual content (people, objects, and scene) depicted in still frames of the stimuli. We evaluated how well these models predict neural representational geometry and tested them against semantic models based on verb and nonverb embeddings and visual models based on gaze and motion energy. Our results revealed that behavioral judgments of similarity better reflect neural representational geometry than semantic or visual models throughout much of cortex. The sociality and transitivity models in particular captured a large portion of unique variance throughout the action observation network, extending into regions not typically associated with action perception, like ventral temporal cortex. Overall, our findings expand the action observation network and indicate that the social content and purpose of observed actions are predominant in cortical representation.
13. Contier O, Baker CI, Hebart MN. Distributed representations of behaviour-derived object dimensions in the human visual system. Nat Hum Behav 2024; 8:2179-2193. PMID: 39251723; PMCID: PMC11576512; DOI: 10.1038/s41562-024-01980-y.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioural goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgements directly onto the brain. Our results reveal broadly distributed representations of behaviourally relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behaviour-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- Oliver Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Max Planck School of Cognition, Leipzig, Germany.
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
14. Hong B, Tran MA, Cheng H, Arenas Rodriguez B, Li KE, Barense MD. The influence of event similarity on the detailed recall of autobiographical memories. Memory 2024:1-13. PMID: 39321317; DOI: 10.1080/09658211.2024.2406307.
Abstract
Memories for life events are thought to be organised based on their relationships with one another, affecting the order in which events are recalled such that similar events tend to be recalled together. However, less is known about how detailed recall for a given event is affected by its associations to other events. Here, we used a cued autobiographical memory recall task where participants verbally recalled events corresponding to personal photographs. Importantly, we characterised the temporal, spatial, and semantic associations between each event to assess how similarity between adjacently cued events affected detailed recall. We found that participants provided more non-episodic details for cued events when the preceding event was both semantically similar and either temporally or spatially dissimilar. However, similarity along time, space, or semantics between adjacent events did not affect the episodic details recalled. We interpret this by considering organisation at the level of a life narrative, rather than individual events. When recalling a stream of personal events, we may feel obligated to justify seeming discrepancies between adjacent events that are semantically similar, yet simultaneously temporally or spatially dissimilar - to do so, we provide additional supplementary detail to help maintain global coherence across the events in our lives.
Affiliation(s)
- Bryan Hong
- Department of Psychology, University of Toronto, Toronto, Canada
- My An Tran
- Department of Psychology, University of Toronto, Toronto, Canada
- Heidi Cheng
- Department of Psychology, University of Toronto, Toronto, Canada
- Kristen E Li
- Department of Psychology, University of Toronto, Toronto, Canada
- Morgan D Barense
- Department of Psychology, University of Toronto, Toronto, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada
15. Robinson MM, DeStefano IC, Vul E, Brady TF. Local but not global graph theoretic measures of semantic networks generalize across tasks. Behav Res Methods 2024; 56:5279-5308. PMID: 38017203; DOI: 10.3758/s13428-023-02271-6.
Abstract
"Dogs" are connected to "cats" in our minds, and "backyard" to "outdoors." Does the structure of this semantic knowledge differ across people? Network-based approaches are a popular representational scheme for thinking about how relations between different concepts are organized. Recent research uses graph theoretic analyses to examine individual differences in semantic networks for simple concepts and how they relate to other higher-level cognitive processes, such as creativity. However, it remains ambiguous whether individual differences captured via network analyses reflect true differences in measures of the structure of semantic knowledge, or differences in how people strategically approach semantic relatedness tasks. To test this, we examine the reliability of local and global metrics of semantic networks for simple concepts across different semantic relatedness tasks. In four experiments, we find that both weighted and unweighted graph theoretic representations reliably capture individual differences in local measures of semantic networks (e.g., how related pot is to pan versus lion). In contrast, we find that metrics of global structural properties of semantic networks, such as the average clustering coefficient and shortest path length, are less robust across tasks and may not provide reliable individual difference measures of how people represent simple concepts. We discuss the implications of these results and offer recommendations for researchers who seek to apply graph theoretic analyses in the study of individual differences in semantic memory.
Affiliation(s)
- Maria M Robinson
- Department of Psychology, University of California, San Diego, CA, USA.
- Edward Vul
- Department of Psychology, University of California, San Diego, CA, USA
- Timothy F Brady
- Department of Psychology, University of California, San Diego, CA, USA
16. Contier O, Baker CI, Hebart MN. Distributed representations of behavior-derived object dimensions in the human visual system. bioRxiv [Preprint] 2024:2023.08.23.553812. PMID: 37662312; PMCID: PMC10473665; DOI: 10.1101/2023.08.23.553812.
Abstract
Object vision is commonly thought to involve a hierarchy of brain regions processing increasingly complex image features, with high-level visual cortex supporting object recognition and categorization. However, object vision supports diverse behavioral goals, suggesting basic limitations of this category-centric framework. To address these limitations, we mapped a series of dimensions derived from a large-scale analysis of human similarity judgments directly onto the brain. Our results reveal broadly distributed representations of behaviorally-relevant information, demonstrating selectivity to a wide variety of novel dimensions while capturing known selectivities for visual features and categories. Behavior-derived dimensions were superior to categories at predicting brain responses, yielding mixed selectivity in much of visual cortex and sparse selectivity in category-selective clusters. This framework reconciles seemingly disparate findings regarding regional specialization, explaining category selectivity as a special case of sparse response profiles among representational dimensions, suggesting a more expansive view on visual processing in the human brain.
Affiliation(s)
- O Contier
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Max Planck School of Cognition, Leipzig, Germany
- C I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda MD, USA
- M N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Medicine, Justus Liebig University Giessen, Giessen, Germany
17. Lützow Holm E, Fernández Slezak D, Tagliazucchi E. Contribution of low-level image statistics to EEG decoding of semantic content in multivariate and univariate models with feature optimization. Neuroimage 2024; 293:120626. PMID: 38677632; DOI: 10.1016/j.neuroimage.2024.120626.
Abstract
Spatio-temporal patterns of evoked brain activity contain information that can be used to decode and categorize the semantic content of visual stimuli. However, this procedure can be biased by low-level image features independently of the semantic content present in the stimuli, prompting the need to understand the robustness of different models regarding these confounding factors. In this study, we trained machine learning models to distinguish between concepts included in the publicly available THINGS-EEG dataset using electroencephalography (EEG) data acquired during a rapid serial visual presentation paradigm. We investigated the contribution of low-level image features to decoding accuracy in a multivariate model, utilizing broadband data from all EEG channels. Additionally, we explored a univariate model obtained through data-driven feature selection applied to the spatial and frequency domains. While the univariate models exhibited better decoding accuracy, their predictions were less robust to the confounding effect of low-level image statistics. Notably, some of the models maintained their accuracy even after random replacement of the training dataset with semantically unrelated samples that presented similar low-level content. In conclusion, our findings suggest that model optimization impacts sensitivity to confounding factors, regardless of the resulting classification performance. Therefore, the choice of EEG features for semantic decoding should ideally be informed by criteria beyond classifier performance, such as the neurobiological mechanisms under study.
Affiliation(s)
- Eric Lützow Holm
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Institute of Applied and Interdisciplinary Physics and Department of Physics, University of Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina.
- Diego Fernández Slezak
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina; Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-Universidad de Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina
- Enzo Tagliazucchi
- National Scientific and Technical Research Council (CONICET), Godoy Cruz 2290, CABA 1425, Argentina; Institute of Applied and Interdisciplinary Physics and Department of Physics, University of Buenos Aires, Pabellón 1, Ciudad Universitaria, CABA 1425, Argentina; Latin American Brain Health (BrainLat), Universidad Adolfo Ibáñez, Av. Diag. Las Torres 2640, Peñalolén 7941169, Santiago Región Metropolitana, Chile.
18. Caplette L, Turk-Browne NB. Computational reconstruction of mental representations using human behavior. Nat Commun 2024; 15:4183. PMID: 38760341; PMCID: PMC11101448; DOI: 10.1038/s41467-024-48114-6.
Abstract
Revealing how the mind represents information is a longstanding goal of cognitive science. However, there is currently no framework for reconstructing the broad range of mental representations that humans possess. Here, we ask participants to indicate what they perceive in images made of random visual features in a deep neural network. We then infer associations between the semantic features of their responses and the visual features of the images. This allows us to reconstruct the mental representations of multiple visual concepts, both those supplied by participants and other concepts extrapolated from the same semantic space. We validate these reconstructions in separate participants and further generalize our approach to predict behavior for new stimuli and in a new task. Finally, we reconstruct the mental representations of individual observers and of a neural network. This framework enables a large-scale investigation of conceptual representations.
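One simple way to associate a semantic response space with a visual feature space is a ridge regression mapping, sketched below on random stand-in features; this is an illustrative assumption and not the authors' exact reconstruction procedure.

# Linking semantic response features to visual image features with ridge regression (sketch).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_trials, n_visual, n_semantic = 500, 256, 300

visual_feats = rng.normal(size=(n_trials, n_visual))          # features of shown images
mapping = rng.normal(size=(n_visual, n_semantic)) / 16.0
semantic_feats = visual_feats @ mapping + 0.5 * rng.normal(size=(n_trials, n_semantic))

# Learn the mapping from the semantic response space back to visual features.
model = Ridge(alpha=10.0).fit(semantic_feats, visual_feats)

# "Reconstruct" the visual representation of a novel concept from its semantic
# embedding alone (here a random placeholder vector).
novel_concept = rng.normal(size=(1, n_semantic))
reconstruction = model.predict(novel_concept)
print("Reconstructed visual-feature vector shape:", reconstruction.shape)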
Affiliation(s)
- Nicholas B Turk-Browne
- Department of Psychology, Yale University, New Haven, CT, USA
- Wu Tsai Institute, Yale University, New Haven, CT, USA
19. Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, Charest I. Neural computations in prosopagnosia. Cereb Cortex 2024; 34:bhae211. PMID: 38795358; PMCID: PMC11127037; DOI: 10.1093/cercor/bhae211.
Abstract
We report an investigation of the neural processes involved in the processing of faces and objects in brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). We found that the computations underlying PS's brain activity bore a closer resemblance to early layers of a visual DNN than did those of controls. However, the brain representations in neurotypicals became more akin to those of the later layers of the model compared to PS. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
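A minimal sketch of a temporal-generalization analysis over time-resolved representational dissimilarity matrices (RDMs), computed here on simulated EEG data; the dimensions and the injected effect are illustrative assumptions.

# Temporal generalization of time-resolved RDMs (sketch, simulated EEG data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_conditions, n_sensors, n_times = 20, 64, 50
eeg = rng.normal(size=(n_conditions, n_sensors, n_times))
# Add a condition-specific pattern that persists from time point 20 onward.
eeg[:, :, 20:] += rng.normal(size=(n_conditions, n_sensors, 1))

# One RDM (lower triangle) per time point.
rdms = np.array([pdist(eeg[:, :, t], metric="correlation") for t in range(n_times)])

# Time-by-time generalization matrix of rank correlations between RDMs.
generalization = np.array([[spearmanr(rdms[t1], rdms[t2])[0] for t2 in range(n_times)]
                           for t1 in range(n_times)])
print("Generalization matrix shape:", generalization.shape)
print("Mean generalization among post-onset time points:",
      round(generalization[25:, 25:].mean(), 2))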
Affiliation(s)
- Simon Faghel-Soubeyrand
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Department of Experimental Psychology, University of Oxford, Anna Watts Building, Woodstock Rd, Oxford OX2 6GG
- Anne-Raphaelle Richoz
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Delphine Waeber
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Jessica Woodhams
- School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara
- Département de psychologie, Université de Fribourg, RM 01 bu. C-3.117, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Frédéric Gosselin
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
- Ian Charest
- Département de psychologie, Université de Montréal, 90 av. Vincent D’indy, Montreal, H2V 2S9, Canada
20. Wang G, Foxwell MJ, Cichy RM, Pitcher D, Kaiser D. Individual differences in internal models explain idiosyncrasies in scene perception. Cognition 2024; 245:105723. PMID: 38262271; DOI: 10.1016/j.cognition.2024.105723.
Abstract
According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models and how they vary across people remain unclear. Here, we use drawing as a behavioral readout of the contents of the internal models in individual participants. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings compared to renders based on others' drawings or copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on participants' own typical drawings (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences, starting from participants' personal expectations about the structure of real-world scenes.
Affiliation(s)
- Gongting Wang
- Department of Education and Psychology, Freie Universität Berlin, Germany; Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Germany
- Daniel Kaiser
- Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Germany; Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Germany.
21. Yeh 葉律君 LC, Thorat S, Peelen MV. Predicting Cued and Oddball Visual Search Performance from fMRI, MEG, and DNN Neural Representational Similarity. J Neurosci 2024; 44:e1107232024. PMID: 38331583; PMCID: PMC10957208; DOI: 10.1523/jneurosci.1107-23.2024.
Abstract
Capacity limitations in visual tasks can be observed when the number of task-related objects increases. An influential idea is that such capacity limitations are determined by competition at the neural level: two objects that are encoded by shared neural populations interfere more in behavior (e.g., visual search) than two objects encoded by separate neural populations. However, the neural representational similarity of objects varies across brain regions and across time, raising the questions of where and when competition determines task performance. Furthermore, it is unclear whether the association between neural representational similarity and task performance is common or unique across tasks. Here, we used neural representational similarity derived from fMRI, MEG, and a deep neural network (DNN) to predict performance on two visual search tasks involving the same objects and requiring the same responses but differing in instructions: cued visual search and oddball visual search. Separate groups of human participants (both sexes) viewed the individual objects in neuroimaging experiments to establish the neural representational similarity between those objects. Results showed that performance on both search tasks could be predicted by neural representational similarity throughout the visual system (fMRI), from 80 ms after onset (MEG), and in all DNN layers. Stepwise regression analysis, however, revealed task-specific associations, with unique variability in oddball search performance predicted by early/posterior neural similarity and unique variability in cued search task performance predicted by late/anterior neural similarity. These results reveal that capacity limitations in superficially similar visual search tasks may reflect competition at different stages of visual processing.
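A small sketch of estimating the unique variance explained by two correlated neural-similarity predictors by comparing full and reduced regression models, as a simple stand-in for the stepwise logic; the simulated predictors and outcome are illustrative assumptions.

# Unique variance of "early" vs. "late" neural-similarity predictors (sketch, simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_pairs = 200                                       # object pairs entering the analysis
early_sim = rng.normal(size=n_pairs)                # e.g., early/posterior similarity
late_sim = 0.4 * early_sim + rng.normal(size=n_pairs)   # partially correlated predictor
search_rt = 0.5 * early_sim + 0.3 * late_sim + rng.normal(size=n_pairs)

def r_squared(X, y):
    return LinearRegression().fit(X, y).score(X, y)

X_full = np.column_stack([early_sim, late_sim])
r2_full = r_squared(X_full, search_rt)
unique_early = r2_full - r_squared(late_sim.reshape(-1, 1), search_rt)
unique_late = r2_full - r_squared(early_sim.reshape(-1, 1), search_rt)
print(f"Unique R^2 (early): {unique_early:.3f}; unique R^2 (late): {unique_late:.3f}")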
Affiliation(s)
- Lu-Chun Yeh 葉律君
- Department of Mathematics and Computer Science, Physics, Geography, Mathematical Institute, Justus Liebig University Gießen, Gießen 35392, Germany
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
- Sushrut Thorat
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
- Institute of Cognitive Science, University of Osnabrück, Osnabrück 49090, Germany
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6525 GD, The Netherlands
22. Wang J, Lapate RC. Emotional state dynamics impacts temporal memory. bioRxiv [Preprint] 2024:2023.07.25.550412. PMID: 38464043; PMCID: PMC10925226; DOI: 10.1101/2023.07.25.550412.
Abstract
Emotional fluctuations are ubiquitous in everyday life, but precisely how they sculpt the temporal organization of memories remains unclear. Here, we designed a novel task-the Emotion Boundary Task-wherein participants viewed sequences of negative and neutral images surrounded by a color border. We manipulated perceptual context (border color), emotional valence, as well as the direction of emotional-valence shifts (i.e., shifts from neutral-to-negative and negative-to-neutral events) to create encoding events comprised of image sequences with a shared perceptual and/or emotional context. We measured memory for temporal order and subjectively remembered temporal distances for images processed within and across events. Negative images processed within events were remembered as closer in time compared to neutral ones. In contrast, temporal distance was remembered as longer for images spanning neutral-to-negative shifts-suggesting temporal dilation in memory with the onset of a negative event following a previously-neutral state. The extent of this negative-picture induced temporal dilation in memory correlated with dispositional negativity across individuals. Lastly, temporal order memory was enhanced for recently presented negative (compared to neutral) images. These findings suggest that emotional-state dynamics matters when considering emotion-temporal memory interactions: While persistent negative events may compress subjectively remembered time, dynamic shifts from neutral to negative events produce temporal dilation in memory, which may be relevant for adaptive emotional functioning.
Collapse
|
23
|
Faghel-Soubeyrand S, Ramon M, Bamps E, Zoia M, Woodhams J, Richoz AR, Caldara R, Gosselin F, Charest I. Decoding face recognition abilities in the human brain. PNAS NEXUS 2024; 3:pgae095. [PMID: 38516275 PMCID: PMC10957238 DOI: 10.1093/pnasnexus/pgae095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 02/20/2024] [Indexed: 03/23/2024]
Abstract
Why are some individuals better at recognizing faces? Uncovering the neural mechanisms supporting face recognition ability has proven elusive. To tackle this challenge, we used a multimodal data-driven approach combining neuroimaging, computational modeling, and behavioral tests. We recorded the high-density electroencephalographic brain activity of individuals with extraordinary face recognition abilities-super-recognizers-and typical recognizers in response to diverse visual stimuli. Using multivariate pattern analyses, we decoded face recognition abilities from 1 s of brain activity with up to 80% accuracy. To better understand the mechanisms subtending this decoding, we compared representations in the brains of our participants with those in artificial neural network models of vision and semantics, as well as with those involved in human judgments of shape and meaning similarity. Compared to typical recognizers, we found stronger associations between early brain representations of super-recognizers and midlevel representations of vision models as well as shape similarity judgments. Moreover, we found stronger associations between late brain representations of super-recognizers and representations of the artificial semantic model as well as meaning similarity judgments. Overall, these results indicate that important individual variations in brain processing, including neural computations extending beyond purely visual processes, support differences in face recognition abilities. They provide the first empirical evidence for an association between semantic computations and face recognition abilities. We believe that such multimodal data-driven approaches will likely play a critical role in further revealing the complex nature of idiosyncratic face recognition in the human brain.
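As a rough illustration of the decoding step described above, the following sketch runs a cross-validated classifier on simulated EEG patterns to separate two hypothetical groups of observers; the data shapes and classifier choice are assumptions, not the authors' pipeline.

    # Minimal sketch (simulated data): cross-validated decoding of recognition-
    # ability group from multivariate EEG patterns.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_obs, n_features = 200, 64 * 20           # e.g., channels x time points
    X = rng.normal(size=(n_obs, n_features))   # hypothetical EEG patterns
    y = rng.integers(0, 2, size=n_obs)         # 0 = typical, 1 = super-recognizer

    clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"cross-validated decoding accuracy: {acc.mean():.2f}")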
Collapse
Affiliation(s)
- Simon Faghel-Soubeyrand
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
| | - Meike Ramon
- Institute of Psychology, University of Lausanne, Lausanne CH-1015, Switzerland
| | - Eva Bamps
- Center for Contextual Psychiatry, Department of Neurosciences, KU Leuven, Leuven ON5, Belgium
| | - Matteo Zoia
- Department for Biomedical Research, University of Bern, Bern 3008, Switzerland
| | - Jessica Woodhams
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
- School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
| | | | - Roberto Caldara
- Département de Psychologie, Université de Fribourg, Fribourg CH-1700, Switzerland
| | - Frédéric Gosselin
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
| | - Ian Charest
- Département de psychologie, Université de Montréal, Montréal, Québec H2V 2S9, Canada
| |
Collapse
|
24
|
Zheng XY, Hebart MN, Grill F, Dolan RJ, Doeller CF, Cools R, Garvert MM. Parallel cognitive maps for multiple knowledge structures in the hippocampal formation. Cereb Cortex 2024; 34:bhad485. [PMID: 38204296 PMCID: PMC10839836 DOI: 10.1093/cercor/bhad485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 11/27/2023] [Accepted: 11/30/2023] [Indexed: 01/12/2024] Open
Abstract
The hippocampal-entorhinal system uses cognitive maps to represent spatial knowledge and other types of relational information. However, objects can often be characterized by different types of relations simultaneously. How does the hippocampal formation handle the embedding of stimuli in multiple relational structures that differ vastly in their mode and timescale of acquisition? Does the hippocampal formation integrate different stimulus dimensions into one conjunctive map or is each dimension represented in a parallel map? Here, we reanalyzed human functional magnetic resonance imaging data from Garvert et al. (2017) that had previously revealed a map in the hippocampal formation coding for a newly learnt transition structure. Using functional magnetic resonance imaging adaptation analysis, we found that the degree of representational similarity in the bilateral hippocampus also decreased as a function of the semantic distance between presented objects. Importantly, while both map-like structures localized to the hippocampal formation, the semantic map was located in more posterior regions of the hippocampal formation than the transition structure and thus anatomically distinct. This finding supports the idea that the hippocampal-entorhinal system forms parallel cognitive maps that reflect the embedding of objects in diverse relational structures.
Collapse
Affiliation(s)
- Xiaochen Y Zheng
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands
| | - Martin N Hebart
- Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Department of Medicine, Justus Liebig University, 35390, Giessen, Germany
| | - Filip Grill
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands
- Radboud University Medical Center, Department of Neurology, 6525 GA, Nijmegen, the Netherlands
| | - Raymond J Dolan
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Max Planck University College London Centre for Computational Psychiatry and Ageing Research, University College London, London WC1B 5EH, United Kingdom
| | - Christian F Doeller
- Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Kavli Institute for Systems Neuroscience, Centre for Neural Computation, The Egil and Pauline Braathen and Fred Kavli Centre for Cortical Microcircuits, Jebsen Centre for Alzheimer's Disease, NTNU, 7491, Trondheim, Norway
- Wilhelm Wundt Institute of Psychology, Leipzig University, 04109, Leipzig, Germany
| | - Roshan Cools
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, the Netherlands
- Radboud University Medical Center, Department of Psychiatry, 6525 GA, Nijmegen, the Netherlands
| | - Mona M Garvert
- Max-Planck-Institute for Human Cognitive and Brain Sciences, 04103, Leipzig, Germany
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, 14195, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
- Faculty of Human Sciences, Julius-Maximilians-Universität Würzburg, Würzburg, Germany
| |
Collapse
|
25
|
Jiahui G, Feilong M, Visconti di Oleggio Castello M, Nastase SA, Haxby JV, Gobbini MI. Modeling naturalistic face processing in humans with deep convolutional neural networks. Proc Natl Acad Sci U S A 2023; 120:e2304085120. [PMID: 37847731 PMCID: PMC10614847 DOI: 10.1073/pnas.2304085120] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Accepted: 09/11/2023] [Indexed: 10/19/2023] Open
Abstract
Deep convolutional neural networks (DCNNs) trained for face identification can rival and even exceed human-level performance. The ways in which the internal face representations in DCNNs relate to human cognitive representations and brain activity are not well understood. Nearly all previous studies focused on static face image processing with rapid display times and ignored the processing of naturalistic, dynamic information. To address this gap, we developed the largest naturalistic dynamic face stimulus set in human neuroimaging research (700+ naturalistic video clips of unfamiliar faces). We used this naturalistic dataset to compare representational geometries estimated from DCNNs, behavioral responses, and brain responses. We found that DCNN representational geometries were consistent across architectures, cognitive representational geometries were consistent across raters in a behavioral arrangement task, and neural representational geometries in face areas were consistent across brains. Representational geometries in late, fully connected DCNN layers, which are optimized for individuation, were much more weakly correlated with cognitive and neural geometries than were geometries in late-intermediate layers. The late-intermediate face-DCNN layers successfully matched cognitive representational geometries, as measured with a behavioral arrangement task that primarily reflected categorical attributes, and correlated with neural representational geometries in known face-selective topographies. Our study suggests that current DCNNs successfully capture neural cognitive processes for categorical attributes of faces but less accurately capture individuation and dynamic features.
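The second-order comparison of representational geometries described above can be sketched as follows, assuming simulated activation patterns: a dissimilarity matrix is computed for each data source and their vectorized upper triangles are correlated.

    # Minimal sketch (simulated data, not the authors' pipeline): compare the
    # representational geometry of a DCNN layer with that of brain responses
    # to the same stimuli.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    n_videos = 60
    dcnn_acts = rng.normal(size=(n_videos, 4096))   # hypothetical layer activations
    brain_pats = rng.normal(size=(n_videos, 500))   # hypothetical voxel patterns

    # Representational dissimilarity matrices (vectorized upper triangles)
    rdm_dcnn = pdist(dcnn_acts, metric="correlation")
    rdm_brain = pdist(brain_pats, metric="correlation")

    # Second-order similarity between the two geometries
    rho, _ = spearmanr(rdm_dcnn, rdm_brain)
    print(f"DCNN-brain RDM correlation: rho={rho:.2f}")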
Collapse
Affiliation(s)
- Guo Jiahui
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
| | - Ma Feilong
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
| | | | - Samuel A. Nastase
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
| | - James V. Haxby
- Center for Cognitive Neuroscience, Dartmouth College, Hanover, NH 03755
| | - M. Ida Gobbini
- Department of Medical and Surgical Sciences, University of Bologna, Bologna 40138, Italy
- Istituti di Ricovero e Cura a Carattere Scientifico, Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
| |
Collapse
|
26
|
Yang J, Ganea N, Kanazawa S, Yamaguchi MK, Bhattacharya J, Bremner AJ. Cortical signatures of visual body representation develop in human infancy. Sci Rep 2023; 13:14696. [PMID: 37679386 PMCID: PMC10484977 DOI: 10.1038/s41598-023-41604-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 08/28/2023] [Indexed: 09/09/2023] Open
Abstract
Human infants cannot report their experiences, limiting what we can learn about their bodily awareness. However, visual cortical responses to the body, linked to visual awareness and selective attention in adults, can be easily measured in infants and provide a promising marker of bodily awareness in early life. We presented 4- and 8-month-old infants with a flickering (7.5 Hz) video of a hand being stroked and recorded steady-state visual evoked potentials (SSVEPs). In half of the trials, the infants also received tactile stroking synchronously with visual stroking. The 8-month-old, but not the 4-month-old infants, showed a significant enhancement of SSVEP responses when they received tactile stimulation concurrent with the visually observed stroking. Follow-up experiments showed that this enhancement did not occur when the visual hand was presented in an incompatible posture with the infant's own body or when the visual stimulus was a body-irrelevant video. Our findings provide a novel insight into the development of bodily self-awareness in the first year of life.
Collapse
Affiliation(s)
- Jiale Yang
- School of Psychology, Chukyo University, Nagoya, Japan.
| | - Natasa Ganea
- Child Study Center, Yale University, New Haven, CT, USA
| | - So Kanazawa
- Department of Psychology, Japan Women's University, Tokyo, Japan
| | | | | | - Andrew J Bremner
- Centre for Developmental Science, School of Psychology, University of Birmingham, Birmingham, UK
| |
Collapse
|
27
|
Peelen MV, Downing PE. Testing cognitive theories with multivariate pattern analysis of neuroimaging data. Nat Hum Behav 2023; 7:1430-1441. [PMID: 37591984 PMCID: PMC7616245 DOI: 10.1038/s41562-023-01680-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 07/12/2023] [Indexed: 08/19/2023]
Abstract
Multivariate pattern analysis (MVPA) has emerged as a powerful method for the analysis of functional magnetic resonance imaging, electroencephalography and magnetoencephalography data. The new approaches to experimental design and hypothesis testing afforded by MVPA have made it possible to address theories that describe cognition at the functional level. Here we review a selection of studies that have used MVPA to test cognitive theories from a range of domains, including perception, attention, memory, navigation, emotion, social cognition and motor control. This broad view reveals properties of MVPA that make it suitable for understanding the 'how' of human cognition, such as the ability to test predictions expressed at the item or event level. It also reveals limitations and points to future directions.
Collapse
Affiliation(s)
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands.
| | - Paul E Downing
- Cognitive Neuroscience Institute, Department of Psychology, Bangor University, Bangor, UK.
| |
Collapse
|
28
|
Roth ZN, Merriam EP. Representations in human primary visual cortex drift over time. Nat Commun 2023; 14:4422. [PMID: 37479723 PMCID: PMC10361968 DOI: 10.1038/s41467-023-40144-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 07/13/2023] [Indexed: 07/23/2023] Open
Abstract
Primary sensory regions are believed to instantiate stable neural representations, yet a number of recent rodent studies suggest instead that representations drift over time. To test whether sensory representations are stable in human visual cortex, we analyzed a large longitudinal dataset of fMRI responses to images of natural scenes. We fit the fMRI responses using an image-computable encoding model and tested how well the model generalized across sessions. We found systematic changes in model fits that exhibited cumulative drift over many months. Convergent analyses pinpoint changes in neural responsivity as the source of the drift, while population-level representational dissimilarities between visual stimuli were unchanged. These observations suggest that downstream cortical areas may read out a stable representation, even as representations within V1 exhibit drift.
Collapse
Affiliation(s)
- Zvi N Roth
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA.
| | - Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, NIH, Bethesda, MD, USA
| |
Collapse
|
29
|
Shao X, Li A, Chen C, Loftus EF, Zhu B. Cross-stage neural pattern similarity in the hippocampus predicts false memory derived from post-event inaccurate information. Nat Commun 2023; 14:2299. [PMID: 37085518 PMCID: PMC10121656 DOI: 10.1038/s41467-023-38046-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Accepted: 04/11/2023] [Indexed: 04/23/2023] Open
Abstract
The misinformation effect occurs when people's memory of an event is altered by subsequent inaccurate information. No study has systematically tested theories about the dynamics of human hippocampal representations during the three stages of misinformation-induced false memory. This study replicates behavioral results of the misinformation effect, and investigates the cross-stage pattern similarity in the hippocampus and cortex using functional magnetic resonance imaging. Results show item-specific hippocampal pattern similarity between original-event and post-event stages. During the memory-test stage, hippocampal representations of original information are weakened for true memory, whereas hippocampal representations of misinformation compete with original information to create false memory. When false memory occurs, this conflict is resolved by the lateral prefrontal cortex. Individuals' memory traces of post-event information in the hippocampus predict false memory, whereas original information in the lateral parietal cortex predicts true memory. These findings support the multiple-trace model, and emphasize the reconstructive nature of human memory.
Collapse
Affiliation(s)
- Xuhao Shao
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China
- Institute of Developmental Psychology, Beijing Normal University, 100875, Beijing, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, 100875, Beijing, China
- IDG/McGovern Institute for Brain Research, Beijing Normal University, 100875, Beijing, China
| | - Ao Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China
| | - Chuansheng Chen
- Department of Psychological Science, University of California, Irvine, CA, 92697, USA
| | - Elizabeth F Loftus
- Department of Psychological Science, University of California, Irvine, CA, 92697, USA
| | - Bi Zhu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, 100875, Beijing, China.
- Institute of Developmental Psychology, Beijing Normal University, 100875, Beijing, China.
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, 100875, Beijing, China.
- IDG/McGovern Institute for Brain Research, Beijing Normal University, 100875, Beijing, China.
| |
Collapse
|
30
|
Dima DC, Hebart MN, Isik L. A data-driven investigation of human action representations. Sci Rep 2023; 13:5171. [PMID: 36997625 PMCID: PMC10063663 DOI: 10.1038/s41598-023-32192-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Accepted: 03/23/2023] [Indexed: 04/01/2023] Open
Abstract
Understanding actions performed by others requires us to integrate different types of information about people, scenes, objects, and their interactions. What organizing dimensions does the mind use to make sense of this complex action space? To address this question, we collected intuitive similarity judgments across two large-scale sets of naturalistic videos depicting everyday actions. We used cross-validated sparse non-negative matrix factorization to identify the structure underlying action similarity judgments. A low-dimensional representation, consisting of nine to ten dimensions, was sufficient to accurately reconstruct human similarity judgments. The dimensions were robust to stimulus set perturbations and reproducible in a separate odd-one-out experiment. Human labels mapped these dimensions onto semantic axes relating to food, work, and home life; social axes relating to people and emotions; and one visual axis related to scene setting. While highly interpretable, these dimensions did not share a clear one-to-one correspondence with prior hypotheses of action-relevant dimensions. Together, our results reveal a low-dimensional set of robust and interpretable dimensions that organize intuitive action similarity judgments and highlight the importance of data-driven investigations of behavioral representations.
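A minimal sketch of the factorization idea, assuming a simulated similarity matrix: plain non-negative matrix factorization recovers a low-dimensional embedding, whereas the study used a cross-validated sparse variant with dimensionality selection that is not reproduced here.

    # Minimal sketch (simulated data): low-dimensional, non-negative embedding
    # of a stimulus-by-stimulus similarity matrix.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(3)
    n_videos, n_dims_true = 80, 9
    W_true = rng.random((n_videos, n_dims_true))
    sim = W_true @ W_true.T                       # hypothetical similarity matrix
    sim /= sim.max()

    model = NMF(n_components=9, init="nndsvda", max_iter=1000, random_state=0)
    embedding = model.fit_transform(sim)          # stimuli x dimensions
    recon = embedding @ model.components_
    print("reconstruction r:", np.corrcoef(sim.ravel(), recon.ravel())[0, 1])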
Collapse
Affiliation(s)
- Diana C Dima
- Department of Cognitive Science, Johns Hopkins University, Baltimore, USA.
- Department of Computer Science, Western University, London, Canada.
| | - Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, USA
| |
Collapse
|
31
|
Halpern DJ, Tubridy S, Davachi L, Gureckis TM. Identifying causal subsequent memory effects. Proc Natl Acad Sci U S A 2023; 120:e2120288120. [PMID: 36952384 PMCID: PMC10068819 DOI: 10.1073/pnas.2120288120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Accepted: 12/12/2022] [Indexed: 03/24/2023] Open
Abstract
Over 40 y of accumulated research has detailed associations between neuroimaging signals measured during a memory encoding task and later memory performance, across a variety of brain regions, measurement tools, statistical approaches, and behavioral tasks. But the interpretation of these subsequent memory effects (SMEs) remains unclear: if the identified signals reflect cognitive and neural mechanisms of memory encoding, then the underlying neural activity must be causally related to future memory. However, almost all previous SME analyses do not control for potential confounders of this causal interpretation, such as serial position and item effects. We collect a large fMRI dataset and use an experimental design and analysis approach that allows us to statistically adjust for nearly all known exogenous confounding variables. We find that, using standard approaches without adjustment, we replicate several univariate and multivariate subsequent memory effects and are able to predict memory performance across people. However, we are unable to identify any signal that reliably predicts subsequent memory after adjusting for confounding variables, bringing into doubt the causal status of these effects. We apply the same approach to subjects' judgments of learning collected following an encoding period and show that these behavioral measures of mnemonic status do predict memory after adjustments, suggesting that it is possible to measure signals near the time of encoding that reflect causal mechanisms but that existing neuroimaging measures, at least in our data, may not have the precision and specificity to do so.
Collapse
Affiliation(s)
- David J. Halpern
- Department of Psychology, New York University, New York, NY 10003
| | - Shannon Tubridy
- Department of Psychology, New York University, New York, NY 10003
| | - Lila Davachi
- Department of Psychology, Columbia University, New York, NY 10027
| | - Todd M. Gureckis
- Department of Psychology, New York University, New York, NY 10003
| |
Collapse
|
32
|
de Bruin D, van Baar JM, Rodríguez PL, FeldmanHall O. Shared neural representations and temporal segmentation of political content predict ideological similarity. SCIENCE ADVANCES 2023; 9:eabq5920. [PMID: 36724226 PMCID: PMC9891706 DOI: 10.1126/sciadv.abq5920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 01/03/2023] [Indexed: 06/18/2023]
Abstract
Despite receiving the same sensory input, opposing partisans often interpret political content in disparate ways. Jointly analyzing controlled and naturalistic functional magnetic resonance imaging data, we uncover the neurobiological mechanisms explaining how these divergent political viewpoints arise. Individuals who share an ideology have more similar neural representations of political words, experience greater neural synchrony during naturalistic political content, and temporally segment real-world information into the same meaningful units. In the striatum and amygdala, increasing intersubject similarity in neural representations of political concepts during a word reading task predicts enhanced synchronization of blood oxygen level-dependent time courses when viewing real-time, inflammatory political videos, revealing that polarization can arise from differences in the brain's affective valuations of political concepts. Together, this research shows that political ideology is shaped by semantic representations of political concepts processed in an environment free of any polarizing agenda and that these representations bias how real-world political information is construed into a polarized perspective.
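The intersubject representational analysis described above can be sketched as follows with simulated data: each participant's word-by-word neural dissimilarity structure is computed, pairwise similarity between participants' structures is derived, and that similarity is related to how close the participants are on a hypothetical ideology score.

    # Minimal sketch (simulated data): do participants with more similar neural
    # representations of political words also have more similar ideology scores?
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    n_subj, n_words, n_voxels = 30, 40, 300
    patterns = rng.normal(size=(n_subj, n_words, n_voxels))  # hypothetical betas
    ideology = rng.normal(size=n_subj)                        # hypothetical scores

    word_rdms = np.stack([pdist(p, metric="correlation") for p in patterns])
    neural_sim = 1 - pdist(word_rdms, metric="correlation")   # subject-pair similarity
    ideo_sim = -pdist(ideology[:, None], metric="euclidean")  # closer = more similar

    rho, p = spearmanr(neural_sim, ideo_sim)
    print(f"neural vs ideological similarity: rho={rho:.2f}, p={p:.3f}")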
Collapse
Affiliation(s)
- Daantje de Bruin
- Department of Cognitive, Linguistic, Psychological Sciences, Brown University, Providence, RI, USA
| | - Jeroen M. van Baar
- Department of Cognitive, Linguistic, Psychological Sciences, Brown University, Providence, RI, USA
- Trimbos Institute, Netherlands Institute of Mental Health and Addiction, Utrecht, Netherlands
| | - Pedro L. Rodríguez
- Center for Data Science, New York University, New York, NY, USA
- International Faculty, Instituto de Estudios Superiores de Administración, Caracas, Venezuela
| | - Oriel FeldmanHall
- Department of Cognitive, Linguistic, Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
| |
Collapse
|
33
|
Donhauser PW, Klein D. Audio-Tokens: A toolbox for rating, sorting and comparing audio samples in the browser. Behav Res Methods 2023; 55:508-515. [PMID: 35297013 PMCID: PMC10027774 DOI: 10.3758/s13428-022-01803-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/19/2022] [Indexed: 12/30/2022]
Abstract
Here we describe a JavaScript toolbox to perform online rating studies with auditory material. The main feature of the toolbox is that audio samples are associated with visual tokens on the screen that control audio playback and can be manipulated depending on the type of rating. This allows the collection of single- and multidimensional feature ratings, as well as categorical and similarity ratings. The toolbox (github.com/pwdonh/audio_tokens) can be used via a plugin for the widely used jsPsych, as well as using plain JavaScript for custom applications. We expect the toolbox to be useful in psychological research on speech and music perception, as well as for the curation and annotation of datasets in machine learning.
Collapse
Affiliation(s)
- Peter W Donhauser
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada.
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
| | - Denise Klein
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada.
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, H3G 2A8, Canada.
| |
Collapse
|
34
|
Kob L. Exploring the role of structuralist methodology in the neuroscience of consciousness: a defense and analysis. Neurosci Conscious 2023; 2023:niad011. [PMID: 37205986 PMCID: PMC10191193 DOI: 10.1093/nc/niad011] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 02/27/2023] [Accepted: 04/13/2023] [Indexed: 05/21/2023] Open
Abstract
Traditional contrastive analysis has been the foundation of consciousness science, but its limitations due to the lack of a reliable method for measuring states of consciousness have prompted the exploration of alternative approaches. Structuralist theories have gained attention as an alternative that focuses on the structural properties of phenomenal experience and seeks to identify their neural encoding via structural similarities between quality spaces and neural state spaces. However, the intertwining of philosophical assumptions about structuralism and structuralist methodology may pose a challenge to those who are skeptical of the former. In this paper, I offer an analysis and defense of structuralism as a methodological approach in consciousness science, which is partly independent of structuralist assumptions on the nature of consciousness. By doing so, I aim to make structuralist methodology more accessible to a broader scientific and philosophical audience. I situate methodological structuralism in the context of questions concerning mental representation, psychophysical measurement, holism, and functional relevance of neural processes. Finally, I analyze the relationship between the structural approach and the distinction between conscious and unconscious states.
Collapse
Affiliation(s)
- Lukas Kob
- Philosophy Department, Otto-von-Guericke University, Zschokkestraße 32, Magdeburg 39104, Germany. Corresponding author.
| |
Collapse
|
35
|
Corriveau A, Kidder A, Teichmann L, Wardle SG, Baker CI. Sustained neural representations of personally familiar people and places during cued recall. Cortex 2023; 158:71-82. [PMID: 36459788 PMCID: PMC9840701 DOI: 10.1016/j.cortex.2022.08.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/28/2022] [Accepted: 08/29/2022] [Indexed: 01/18/2023]
Abstract
The recall and visualization of people and places from memory is an everyday occurrence, yet the neural mechanisms underpinning this phenomenon are not well understood. In particular, the temporal characteristics of the internal representations generated by active recall are unclear. Here, we used magnetoencephalography (MEG) and multivariate pattern analysis to measure the evolving neural representation of familiar places and people across the whole brain when human participants engage in active recall. To isolate self-generated imagined representations, we used a retro-cue paradigm in which participants were first presented with two possible labels before being cued to recall either the first or second item. We collected personalized labels for specific locations and people familiar to each participant. Importantly, no visual stimuli were presented during the recall period, and the retro-cue paradigm allowed the dissociation of responses associated with the labels from those corresponding to the self-generated representations. First, we found that following the retro-cue it took on average ∼1000 ms for distinct neural representations of freely recalled people or places to develop. Second, we found distinct representations of personally familiar concepts throughout the 4 s recall period. Finally, we found that these representations were highly stable and generalizable across time. These results suggest that self-generated visualizations and recall of familiar places and people are subserved by a stable neural mechanism that operates relatively slowly when under conscious control.
Collapse
Affiliation(s)
- Anna Corriveau
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychology, The University of Chicago, Chicago, IL 60637, USA.
| | - Alexis Kidder
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA.
| | - Lina Teichmann
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Susan G Wardle
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| | - Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, 20814, USA
| |
Collapse
|
36
|
Gifford AT, Dwivedi K, Roig G, Cichy RM. A large and rich EEG dataset for modeling human visual object recognition. Neuroimage 2022; 264:119754. [PMID: 36400378 PMCID: PMC9771828 DOI: 10.1016/j.neuroimage.2022.119754] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Revised: 09/14/2022] [Accepted: 11/14/2022] [Indexed: 11/16/2022] Open
Abstract
The human brain achieves visual object recognition through multiple stages of linear and nonlinear transformations operating at a millisecond scale. To predict and explain these rapid transformations, computational neuroscientists employ machine learning modeling techniques. However, state-of-the-art models require massive amounts of data to properly train, and to the present day there is a lack of vast brain datasets which extensively sample the temporal dynamics of visual object recognition. Here we collected a large and rich dataset of high temporal resolution EEG responses to images of objects on a natural background. This dataset includes 10 participants, each with 82,160 trials spanning 16,740 image conditions. Through computational modeling we established the quality of this dataset in five ways. First, we trained linearizing encoding models that successfully synthesized the EEG responses to arbitrary images. Second, we correctly identified the recorded EEG data image conditions in a zero-shot fashion, using EEG synthesized responses to hundreds of thousands of candidate image conditions. Third, we show that both the high number of conditions as well as the trial repetitions of the EEG dataset contribute to the trained models' prediction accuracy. Fourth, we built encoding models whose predictions well generalize to novel participants. Fifth, we demonstrate full end-to-end training of randomly initialized DNNs that output EEG responses for arbitrary input images. We release this dataset as a tool to foster research in visual neuroscience and computer vision.
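A minimal sketch of a linearizing encoding model in the spirit described above, using simulated image features and EEG responses: ridge regression maps features to responses, and prediction accuracy is evaluated on held-out images. The feature source, dimensions, and regularization are assumptions, not the released pipeline.

    # Minimal sketch (simulated data): linearizing encoding model from image
    # features to EEG responses, evaluated on held-out images.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n_images, n_feat, n_eeg = 500, 300, 64 * 20            # images, features, chan*time
    feats = rng.normal(size=(n_images, n_feat))            # hypothetical image features
    W = rng.normal(size=(n_feat, n_eeg)) * 0.1
    eeg = feats @ W + rng.normal(size=(n_images, n_eeg))   # hypothetical EEG responses

    Xtr, Xte, ytr, yte = train_test_split(feats, eeg, test_size=0.2, random_state=0)
    enc = Ridge(alpha=10.0).fit(Xtr, ytr)
    pred = enc.predict(Xte)

    # Per-output (channel x time point) prediction accuracy on held-out images
    r = [np.corrcoef(pred[:, j], yte[:, j])[0, 1] for j in range(n_eeg)]
    print("mean encoding accuracy r =", np.mean(r).round(3))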
Collapse
Affiliation(s)
- Alessandro T Gifford
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Charité - Universitätsmedizin Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany.
| | - Kshitij Dwivedi
- Department of Computer Science, Goethe Universität, Frankfurt am Main, Germany
| | - Gemma Roig
- Department of Computer Science, Goethe Universität, Frankfurt am Main, Germany
| | - Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Charité - Universitätsmedizin Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
37
|
Generalization and Idiosyncrasy: Two Sides of the Same Brain. J Neurosci 2022; 42:8755-8757. [PMID: 36418180 PMCID: PMC9698678 DOI: 10.1523/jneurosci.1427-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 09/21/2022] [Accepted: 09/23/2022] [Indexed: 12/29/2022] Open
|
38
|
Ansteeg L, Leoné F, Dijkstra T. Characterizing the semantic and form-based similarity spaces of the mental lexicon by means of the multi-arrangement method. Front Psychol 2022; 13:945094. [PMID: 36033027 PMCID: PMC9407019 DOI: 10.3389/fpsyg.2022.945094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Accepted: 07/25/2022] [Indexed: 12/04/2022] Open
Abstract
Collecting human similarity judgments is instrumental to measuring and modeling neurocognitive representations (e.g., through representational similarity analysis) and has been made more efficient by the multi-arrangement task. While this task has been tested for collecting semantic similarity judgments, it is unclear whether it also lends itself to phonological and orthographic similarity judgments of words. We have extended the task to include these lexical modalities and compared the results between modalities and against computational models. We find that similarity judgments can be collected for all three modalities, although word forms were considered more difficult to sort and resulted in less consistent inter- and intra-rater agreement than semantics. For all three modalities we can construct stable group-level representational similarity matrices. However, these do not capture significant idiosyncratic similarity information unique to each participant. We discuss the potential underlying causes for differences between modalities and their effect on the application of the multi-arrangement task.
Collapse
|
39
|
Kaiser D. Characterizing Dynamic Neural Representations of Scene Attractiveness. J Cogn Neurosci 2022; 34:1988-1997. [PMID: 35802607 DOI: 10.1162/jocn_a_01891] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Aesthetic experiences during natural vision are varied: They can arise from viewing scenic landscapes, interesting architecture, or attractive people. Recent research in the field of neuroaesthetics has taught us a lot about where in the brain such aesthetic experiences are represented. Much less is known about when such experiences arise during the cortical processing cascade. Particularly, the dynamic neural representation of perceived attractiveness for rich natural scenes is not well understood. Here, I present data from an EEG experiment, in which participants provided attractiveness judgments for a set of diverse natural scenes. Using multivariate pattern analysis, I demonstrate that scene attractiveness is mirrored in early brain signals that arise within 200 msec of vision, suggesting that the aesthetic appeal of scenes is first resolved during perceptual processing. In more detailed analyses, I show that even such early neural correlates of scene attractiveness are partly related to interindividual variation in aesthetic preferences and that they generalize across scene contents. Together, these results characterize the time-resolved neural dynamics that give rise to aesthetic experiences in complex natural environments.
Collapse
|
40
|
Structure of visual biases revealed by individual differences. Vision Res 2022; 195:108014. [DOI: 10.1016/j.visres.2022.108014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 01/14/2022] [Accepted: 01/19/2022] [Indexed: 11/21/2022]
|
41
|
Ferko KM, Blumenthal A, Martin CB, Proklova D, Minos AN, Saksida LM, Bussey TJ, Khan AR, Köhler S. Activity in perirhinal and entorhinal cortex predicts perceived visual similarities among category exemplars with highest precision. eLife 2022; 11:66884. [PMID: 35311645 PMCID: PMC9020819 DOI: 10.7554/elife.66884] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 03/17/2022] [Indexed: 01/22/2023] Open
Abstract
Vision neuroscience has made great strides in understanding the hierarchical organization of object representations along the ventral visual stream (VVS). How VVS representations capture fine-grained visual similarities between objects that observers subjectively perceive has received limited examination so far. In the current study, we addressed this question by focussing on perceived visual similarities among subordinate exemplars of real-world categories. We hypothesized that these perceived similarities are reflected with highest fidelity in neural activity patterns downstream from inferotemporal regions, namely in perirhinal (PrC) and anterolateral entorhinal cortex (alErC) in the medial temporal lobe. To address this issue with functional magnetic resonance imaging (fMRI), we administered a modified 1-back task that required discrimination between category exemplars as well as categorization. Further, we obtained observer-specific ratings of perceived visual similarities, which predicted behavioural discrimination performance during scanning. As anticipated, we found that activity patterns in PrC and alErC predicted the structure of perceived visual similarity relationships among category exemplars, including its observer-specific component, with higher precision than any other VVS region. Our findings provide new evidence that subjective aspects of object perception that rely on fine-grained visual differentiation are reflected with highest fidelity in the medial temporal lobe.
Collapse
Affiliation(s)
- Kayla M Ferko
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
| | - Anna Blumenthal
- Brain and Mind Institute, University of Western Ontario, London, Canada; Cervo Brain Research Center, University of Laval, Quebec, Canada
| | - Chris B Martin
- Department of Psychology, Florida State University, Tallahassee, United States
| | - Daria Proklova
- Brain and Mind Institute, University of Western Ontario, London, Canada
| | - Alexander N Minos
- Brain and Mind Institute, University of Western Ontario, London, Canada
| | - Lisa M Saksida
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; Department of Physiology and Pharmacology, University of Western Ontario, London, Canada
| | - Timothy J Bussey
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; Department of Physiology and Pharmacology, University of Western Ontario, London, Canada
| | - Ali R Khan
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; School of Biomedical Engineering, University of Western Ontario, London, Canada; Department of Medical Biophysics, University of Western Ontario, London, Canada
| | - Stefan Köhler
- Brain and Mind Institute, University of Western Ontario, London, Canada; Department of Psychology, University of Western Ontario, London, Canada
| |
Collapse
|
42
|
Verheyen S, White A, Storms G. A Comparison of the Spatial Arrangement Method and the Total-Set Pairwise Rating Method for Obtaining Similarity Data in the Conceptual Domain. MULTIVARIATE BEHAVIORAL RESEARCH 2022; 57:356-384. [PMID: 33327792 DOI: 10.1080/00273171.2020.1857216] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We compare two methods for obtaining similarity data in the conceptual domain. In the Spatial Arrangement Method (SpAM), participants organize stimuli on a computer screen so that the distance between stimuli represents their perceived dissimilarity. In the Total-Set Pairwise Rating Method (PRaM), participants rate the (dis)similarity of all pairs of stimuli on a Likert scale. In each of three studies, we had participants indicate the similarity of four sets of conceptual stimuli with either PRaM or SpAM. Studies 1 and 2 confirm two caveats that have been raised for SpAM. (i) While SpAM takes significantly less time to complete than PRaM, it yields less reliable data than PRaM does. (ii) Because of the spatial manner in which similarity is measured in SpAM, the method is biased against feature representations. Despite these differences, averaging SpAM and PRaM dissimilarity data across participants yields comparable aggregate data. Study 3 shows that by having participants only judge half of the pairs in PRaM, its duration can be significantly reduced, without affecting the dissimilarity distribution, but at the cost of lower reliability. Having participants arrange multiple subsets of the stimuli does not do away with the spatial bias of SpAM.
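The two measurement ideas can be sketched with simulated data: SpAM dissimilarities are Euclidean distances between on-screen item positions, PRaM dissimilarities are pairwise Likert ratings, and the two pairwise structures are compared directly. The item counts and scales below are assumptions.

    # Minimal sketch (simulated data): compare SpAM-derived distances with
    # PRaM pairwise ratings over the same item pairs.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(6)
    n_items = 20
    xy = rng.random((n_items, 2))                    # hypothetical SpAM positions
    spam_dissim = pdist(xy, metric="euclidean")      # inter-item screen distances

    n_pairs = n_items * (n_items - 1) // 2
    pram_dissim = rng.integers(1, 8, size=n_pairs).astype(float)  # 7-point ratings

    rho, _ = spearmanr(spam_dissim, pram_dissim)
    print(f"SpAM vs PRaM dissimilarity agreement: rho={rho:.2f}")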
Collapse
|
43
|
Taylor J, Xu Y. Representation of Color, Form, and their Conjunction across the Human Ventral Visual Pathway. Neuroimage 2022; 251:118941. [PMID: 35122966 PMCID: PMC9014861 DOI: 10.1016/j.neuroimage.2022.118941] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 01/25/2022] [Indexed: 11/25/2022] Open
Abstract
Despite decades of research, our understanding of the relationship between color and form processing in the primate ventral visual pathway remains incomplete. Using fMRI multivoxel pattern analysis, we examined coding of color and form, using a simple form feature (orientation) and a mid-level form feature (curvature), in human ventral visual processing regions. We found that both color and form could be decoded from activity in early visual areas V1 to V4, as well as in the posterior color-selective region and shape-selective regions in ventral and lateral occipitotemporal cortex defined based on their univariate selectivity to color or shape, respectively (the central color region only showed color but not form decoding). Meanwhile, decoding biases towards one feature or the other existed in the color- and shape-selective regions, consistent with their univariate feature selectivity reported in past studies. Additional extensive analyses show that while all these regions contain independent (linearly additive) coding for both features, several early visual regions also encode the conjunction of color and the simple, but not the complex, form feature in a nonlinear, interactive manner. Taken together, the results show that color and form are encoded in a biased distributed and largely independent manner across ventral visual regions in the human brain.
Collapse
Affiliation(s)
- JohnMark Taylor
- Visual Inference Laboratory, Zuckerman Institute, Columbia University.
| | - Yaoda Xu
- Department of Psychology, Yale University
| |
Collapse
|
44
|
Hakim N, Awh E, Vogel EK, Rosenberg MD. Inter-electrode correlations measured with EEG predict individual differences in cognitive ability. Curr Biol 2021; 31:4998-5008.e6. [PMID: 34637747 DOI: 10.1016/j.cub.2021.09.036] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 07/07/2021] [Accepted: 09/15/2021] [Indexed: 02/07/2023]
Abstract
Human brains share a broadly similar functional organization with consequential individual variation. This duality in brain function has primarily been observed when using techniques that consider the spatial organization of the brain, such as MRI. Here, we ask whether these common and unique signals of cognition are also present in temporally sensitive but spatially insensitive neural signals. To address this question, we compiled electroencephalogram (EEG) data from individuals of both sexes while they performed multiple working memory tasks at two different data-collection sites (n = 171 and 165). Results revealed that trial-averaged EEG activity exhibited inter-electrode correlations that were stable within individuals and unique across individuals. Furthermore, models based on these inter-electrode correlations generalized across datasets to predict participants' working memory capacity and general fluid intelligence. Thus, inter-electrode correlation patterns measured with EEG provide a signature of working memory and fluid intelligence in humans and a new framework for characterizing individual differences in cognitive abilities.
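A minimal sketch of the inter-electrode correlation approach, assuming simulated trial-averaged EEG: each participant's electrode-by-electrode correlation matrix is vectorized and used to predict a behavioral capacity estimate across people under cross-validation.

    # Minimal sketch (simulated data): predict working memory capacity from
    # inter-electrode correlation patterns across participants.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(7)
    n_subj, n_elec, n_time = 100, 30, 500
    erps = rng.normal(size=(n_subj, n_elec, n_time))   # hypothetical trial averages
    capacity = rng.normal(loc=3, size=n_subj)          # hypothetical K estimates

    iu = np.triu_indices(n_elec, k=1)
    features = np.stack([np.corrcoef(e)[iu] for e in erps])  # subj x electrode pairs

    pred = cross_val_predict(RidgeCV(alphas=[0.1, 1, 10, 100]), features,
                             capacity, cv=10)
    print("prediction r:", np.corrcoef(pred, capacity)[0, 1].round(3))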
Collapse
Affiliation(s)
- Nicole Hakim
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA; Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA.
| | - Edward Awh
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA; Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA; Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
| | - Edward K Vogel
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA; Institute for Mind and Biology, University of Chicago, Chicago, IL 60637, USA; Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
| | - Monica D Rosenberg
- Department of Psychology, University of Chicago, Chicago, IL 60637, USA; Neuroscience Institute, University of Chicago, Chicago, IL 60637, USA
| |
Collapse
|
45
|
Abstract
Is Mr. Hyde more similar to his alter ego Dr. Jekyll, because of their physical identity, or to Jack the Ripper, because both evoke fear and loathing? The relative weight of emotional and visual dimensions in similarity judgements is still unclear. We expected an asymmetric effect of these dimensions on similarity perception, such that faces that express the same or similar feeling are judged as more similar than different emotional expressions of the same person. We selected 10 male faces with different expressions. Each face posed one neutral expression and one emotional expression (five disgust, five fear). We paired these expressions, resulting in 190 pairs, varying either in emotional expressions, physical identity, or both. Twenty healthy participants rated the similarity of paired faces on a 7-point scale. We report a symmetric effect of emotional expression and identity on similarity judgements, suggesting that people may perceive Mr. Hyde to be just as similar to Dr. Jekyll (identity) as to Jack the Ripper (emotion). We also observed that emotional mismatch decreased perceived similarity, suggesting that emotions play a prominent role in similarity judgements. From an evolutionary perspective, poor discrimination between emotional stimuli might endanger the individual.
Collapse
|
46
|
Ritchie JB, Lee Masson H, Bracci S, Op de Beeck HP. The unreliable influence of multivariate noise normalization on the reliability of neural dissimilarity. Neuroimage 2021; 245:118686. [PMID: 34728244 DOI: 10.1016/j.neuroimage.2021.118686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 10/21/2021] [Accepted: 10/26/2021] [Indexed: 10/19/2022] Open
Abstract
Representational similarity analysis (RSA) is a key element in the multivariate pattern analysis toolkit. The central construct of the method is the representational dissimilarity matrix (RDM), which can be generated for datasets from different modalities (neuroimaging, behavior, and computational models) and directly correlated in order to evaluate their second-order similarity. Given the inherent noisiness of neuroimaging signals it is important to evaluate the reliability of neuroimaging RDMs in order to determine whether these comparisons are meaningful. Recently, multivariate noise normalization (NNM) has been proposed as a widely applicable method for boosting signal estimates for RSA, regardless of choice of dissimilarity metrics, based on evidence that the analysis improves the within-subject reliability of RDMs (Guggenmos et al. 2018; Walther et al. 2016). We revisited this issue with three fMRI datasets and evaluated the impact of NNM on within- and between-subject reliability and RSA effect sizes using multiple dissimilarity metrics. We also assessed its impact across regions of interest from the same dataset, its interaction with spatial smoothing, and compared it to GLMdenoise, which has also been proposed as a method that improves signal estimates for RSA (Charest et al. 2018). We found that across these tests the impact of NNM was highly variable, as also seems to be the case for other analysis choices. Overall, we suggest being conservative before adding steps and complexities to the (pre)processing pipeline for RSA.
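The normalization step under discussion can be sketched as follows with simulated patterns and residuals: a shrinkage estimate of the noise covariance is inverted to a whitening matrix, applied to the condition patterns, and an RDM is computed before and after. The reliability analyses that are the focus of the paper are not reproduced here.

    # Minimal sketch (simulated data): multivariate noise normalization of
    # condition patterns before computing a representational dissimilarity matrix.
    import numpy as np
    from scipy.spatial.distance import pdist
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(8)
    n_cond, n_vox, n_resid = 40, 200, 300
    patterns = rng.normal(size=(n_cond, n_vox))     # hypothetical condition betas
    residuals = rng.normal(size=(n_resid, n_vox))   # hypothetical GLM residuals

    noise_cov = LedoitWolf().fit(residuals).covariance_
    evals, evecs = np.linalg.eigh(noise_cov)
    whitener = evecs @ np.diag(evals ** -0.5) @ evecs.T   # inverse square root
    patterns_nnm = patterns @ whitener                    # noise-normalized patterns

    rdm_raw = pdist(patterns, metric="euclidean")
    rdm_nnm = pdist(patterns_nnm, metric="euclidean")     # ~ Mahalanobis distances
    print(rdm_raw[:3], rdm_nnm[:3])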
Collapse
Affiliation(s)
- J Brendan Ritchie
- Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, 3000 Leuven, Flemish Brabant, Belgium.
| | - Haemy Lee Masson
- Department of Cognitive Science, Johns Hopkins University, Baltimore, USA
| | - Stefania Bracci
- Centre for Mind/Brain Sciences, University of Trento, Rovereto, Italy
| | - Hans P Op de Beeck
- Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, 3000 Leuven, Flemish Brabant, Belgium
| |
Collapse
|
47
|
A neural decoding algorithm that generates language from visual activity evoked by natural images. Neural Netw 2021; 144:90-100. [PMID: 34478941 DOI: 10.1016/j.neunet.2021.08.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Revised: 06/22/2021] [Accepted: 08/05/2021] [Indexed: 11/23/2022]
Abstract
Transforming neural activities into language is revolutionary for human-computer interaction as well as functional restoration of aphasia. The rapid development of artificial intelligence makes it feasible to decode the neural signals of human visual activities. In this paper, a novel Progressive Transfer Language Decoding Model (PT-LDM) is proposed to decode visual fMRI signals into phrases or sentences while natural images are being viewed. The PT-LDM consists of an image encoder, an fMRI encoder, and a language decoder. The results showed that phrases and sentences were successfully generated from visual activities. Similarity analysis showed that three commonly used evaluation metrics, BLEU, ROUGE, and CIDEr, averaged 0.182, 0.197, and 0.680, respectively, between the generated texts and the corresponding annotated texts in the testing set, significantly higher than the baseline. Moreover, we found that higher visual areas usually performed better than lower visual areas and that the contribution of visual response patterns to language decoding varied across successive time points. Our findings demonstrate that the neural representations elicited in visual cortices when scenes are being viewed already contain semantic information that can be utilized to generate human language. Our study shows the potential of language-based brain-machine interfaces in the future, especially for helping people with aphasia communicate more efficiently using fMRI signals.
Collapse
|
48
|
Lee D. Which deep learning model can best explain object representations of within-category exemplars? J Vis 2021; 21:12. [PMID: 34520508 PMCID: PMC8444465 DOI: 10.1167/jov.21.10.12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
Deep neural network (DNN) models realize human-equivalent performance in tasks such as object recognition. Recent developments in the field have enabled testing the hierarchical similarity of object representation between the human brain and DNNs. However, the representational geometry of object exemplars within a single category using DNNs is unclear. In this study, we investigate which DNN model has the greatest ability to explain invariant within-category object representations by computing the similarity between representational geometries of visual features extracted at the high-level layers of different DNN models. We also test for the invariability of within-category object representations of these models by identifying object exemplars. Our results show that transfer learning models based on ResNet50 best explained both within-category object representation and object identification. These results suggest that the invariability of object representations in deep learning depends not on deepening the neural network but on building a better transfer learning model.
Collapse
Affiliation(s)
- Dongha Lee
- Cognitive Science Research Group, Korea Brain Research Institute, Daegu, Republic of Korea.
| |
Collapse
|
49
|
|
50
|
Freund MC, Etzel JA, Braver TS. Neural Coding of Cognitive Control: The Representational Similarity Analysis Approach. Trends Cogn Sci 2021; 25:622-638. [PMID: 33895065 PMCID: PMC8279005 DOI: 10.1016/j.tics.2021.03.011] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Revised: 03/17/2021] [Accepted: 03/18/2021] [Indexed: 01/07/2023]
Abstract
Cognitive control relies on distributed and potentially high-dimensional frontoparietal task representations. Yet, the classical cognitive neuroscience approach in this domain has focused on aggregating and contrasting neural measures - either via univariate or multivariate methods - along highly abstracted, 1D factors (e.g., Stroop congruency). Here, we present representational similarity analysis (RSA) as a complementary approach that can powerfully inform representational components of cognitive control theories. We review several exemplary uses of RSA in this regard. We further show that most classical paradigms, given their factorial structure, can be optimized for RSA with minimal modification. Our aim is to illustrate how RSA can be incorporated into cognitive control investigations to shed new light on old questions.
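A minimal sketch of RSA applied to a factorial control task, with simulated data: model dissimilarity matrices are built directly from the design factors and compared with a neural dissimilarity matrix. The design, condition count, and region are hypothetical.

    # Minimal sketch (simulated data): test which design-based model RDM best
    # explains a neural RDM from a factorial cognitive-control task.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(9)
    # Hypothetical 2 x 4 design: congruency (0/1) crossed with four target words
    congruency = np.repeat([0, 1], 4)
    word = np.tile(np.arange(4), 2)

    model_congruency = pdist(congruency[:, None], metric="cityblock")        # 0/1 RDM
    model_word = (pdist(word[:, None], metric="cityblock") > 0).astype(float)

    neural_patterns = rng.normal(size=(8, 150))        # hypothetical condition betas
    neural_rdm = pdist(neural_patterns, metric="correlation")

    for name, model in [("congruency", model_congruency), ("word identity", model_word)]:
        rho, _ = spearmanr(model, neural_rdm)
        print(f"{name} model: rho={rho:.2f}")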
Collapse
Affiliation(s)
- Michael C Freund
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
| | - Joset A Etzel
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA
| | - Todd S Braver
- Department of Psychological and Brain Sciences, Washington University in St Louis, St Louis, MO 63130, USA; Department of Radiology, Washington University in St Louis, School of Medicine, St Louis, MO 63110, USA; Department of Neuroscience, Washington University in St Louis, School of Medicine, St Louis, MO 63110, USA.
| |
Collapse
|