201
Muttenthaler L, Hebart MN. THINGSvision: A Python Toolbox for Streamlining the Extraction of Activations From Deep Neural Networks. Front Neuroinform 2021; 15:679838. PMID: 34630062; PMCID: PMC8494008; DOI: 10.3389/fninf.2021.679838.
Abstract
Over the past decade, deep neural network (DNN) models have received a lot of attention due to their near-human object classification performance and their excellent prediction of signals recorded from biological visual systems. To better understand the function of these networks and relate them to hypotheses about brain activity and behavior, researchers need to extract the activations to images across different DNN layers. The abundance of different DNN variants, however, can often be unwieldy, and the task of extracting DNN activations from different layers may be non-trivial and error-prone for someone without a strong computational background. Thus, researchers in the fields of cognitive science and computational neuroscience would benefit from a library or package that supports a user in the extraction task. THINGSvision is a new Python module that aims at closing this gap by providing a simple and unified tool for extracting layer activations for a wide range of pretrained and randomly-initialized neural network architectures, even for users with little to no programming experience. We demonstrate the general utility of THINGSvision by relating extracted DNN activations to a number of functional MRI and behavioral datasets using representational similarity analysis, which can be performed as an integral part of the toolbox. Together, THINGSvision enables researchers across diverse fields to extract features in a streamlined manner for their custom image dataset, thereby improving the ease of relating DNNs, brain activity, and behavior, and improving the reproducibility of findings in these research fields.
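The kind of workflow the toolbox streamlines can be sketched with plain PyTorch and SciPy. The snippet below is a generic illustration, not THINGSvision's own API: it uses a torchvision AlexNet, an arbitrary layer choice, and random tensors as stand-ins for a preprocessed image batch.

```python
# Minimal sketch (not the THINGSvision API): extract activations from one layer of a
# torchvision AlexNet with a forward hook, then build a representational
# dissimilarity matrix (RDM) for use in representational similarity analysis.
import torch
import torchvision.models as models
from scipy.spatial.distance import pdist, squareform

model = models.alexnet(weights=None)   # load pretrained weights in real use
model.eval()

activations = []
def hook(module, inputs, output):
    # flatten each image's activation map into a feature vector
    activations.append(output.flatten(start_dim=1).detach())

# features[10] is the last convolutional layer in torchvision's AlexNet (illustrative choice)
handle = model.features[10].register_forward_hook(hook)

images = torch.randn(8, 3, 224, 224)   # stand-in for a preprocessed image batch
with torch.no_grad():
    model(images)
handle.remove()

features = torch.cat(activations).numpy()                 # images x units
rdm = squareform(pdist(features, metric="correlation"))   # images x images RDM
print(rdm.shape)
```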
Affiliation(s)
- Lukas Muttenthaler
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Machine Learning Group, Technical University of Berlin, Berlin, Germany
- Martin N. Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
202
Farzmahdi A, Fallah F, Rajimehr R, Ebrahimpour R. Task-dependent neural representations of visual object categories. Eur J Neurosci 2021; 54:6445-6462. PMID: 34480766; DOI: 10.1111/ejn.15440.
Abstract
What do we perceive in a glance of an object? If we are questioned about it, will our perception be affected? How does the task demand influence visual processing in the brain and, consequently, our behaviour? To address these questions, we conducted an object categorisation experiment with three tasks, one at the superordinate level ('animate/inanimate') and two at the basic levels ('face/body' and 'animal/human face') along with a passive task in which participants were not required to categorise objects. To control bottom-up information and eliminate the effect of sensory-driven dissimilarity, we used a particular set of animal face images as the identical target stimuli across all tasks. We then investigated the impact of top-down task demands on behaviour and brain representations. Behavioural results demonstrated a superordinate advantage in the reaction time, while the accuracy was similar for all categorisation levels. The event-related potentials (ERPs) for all categorisation levels were highly similar except for about 170 ms and after 300 ms from stimulus onset. In these time windows, the animal/human face categorisation, which required fine-scale discrimination, elicited a differential ERP response. Similarly, decoding analysis over all electrodes showed the highest peak value of task decoding around 170 ms, followed by a few significant timepoints, generally after 300 ms. Moreover, brain responses revealed task-related neural modulation during categorisation tasks compared with the passive task. Overall, these findings demonstrate different task-related effects on the behavioural response and brain representations. The early and late components of neural modulation could be linked to perceptual and top-down processing of object categories, respectively.
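The decoding analysis described above is of the kind sketched below: a classifier is trained on the pattern across electrodes separately at each time point, and its cross-validated accuracy traces when task information emerges. The data shapes and classifier are illustrative assumptions, not the authors' exact pipeline, and the random data here will hover around chance.

```python
# Time-resolved decoding sketch: classify the task from the electrode pattern at each time point.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 150        # epochs x electrodes x time points
X = rng.standard_normal((n_trials, n_channels, n_times))   # synthetic EEG epochs
y = rng.integers(0, 2, n_trials)                    # task label per trial

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()   # decode the task at time point t
    for t in range(n_times)
])
print("peak decoding accuracy:", accuracy.max().round(2))
```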
Affiliation(s)
- Amirhossein Farzmahdi
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Fatemeh Fallah
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
- Reza Rajimehr
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Reza Ebrahimpour
- Department of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
203
Marrazzo G, Vaessen MJ, de Gelder B. Decoding the difference between explicit and implicit body expression representation in high level visual, prefrontal and inferior parietal cortex. Neuroimage 2021; 243:118545. PMID: 34478822; DOI: 10.1016/j.neuroimage.2021.118545.
Abstract
Recent studies provide an increasing understanding of how visual object categories like faces or bodies are represented in the brain, and they have also raised the question of whether category-based or more dynamic, network-inspired models are more powerful. Two important and so far sidestepped issues in this debate are, first, how major category attributes like emotional expression directly influence category representation and, second, whether category and attribute representations are sensitive to task demands. This study investigated the impact of a crucial category attribute, emotional expression, on category area activity and whether this varies with the participants' task. Using fMRI, we measured BOLD responses while participants viewed whole-body expressions and performed either an explicit (emotion) or an implicit (shape) recognition task. Our results, based on multivariate methods, show that the type of task is the strongest determinant of brain activity and can be decoded in EBA, VLPFC and IPL. Brain activity was higher for the explicit task condition in VLPFC and was not emotion specific. This pattern suggests that during explicit recognition of the body expression, body category representation may be strengthened, and emotion- and action-related activity suppressed. Taken together, these results stress the importance of the task and of the role of category attributes for understanding the functional organization of high-level visual cortex.
Affiliation(s)
- Giuseppe Marrazzo
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands
- Maarten J Vaessen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Limburg 6200 MD, Maastricht, the Netherlands; Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
204
Mocz V, Vaziri-Pashkam M, Chun MM, Xu Y. Predicting Identity-Preserving Object Transformations across the Human Ventral Visual Stream. J Neurosci 2021; 41:7403-7419. PMID: 34253629; PMCID: PMC8412993; DOI: 10.1523/jneurosci.2137-20.2021.
Abstract
In everyday life, we have no trouble categorizing objects varying in position, size, and orientation. Previous fMRI research shows that higher-level object processing regions in the human lateral occipital cortex may link object responses from different affine states (i.e., size and viewpoint) through a general linear mapping function capable of predicting responses to novel objects. In this study, we extended this approach to examine the mapping for both Euclidean (e.g., position and size) and non-Euclidean (e.g., image statistics and spatial frequency) transformations across the human ventral visual processing hierarchy, including areas V1, V2, V3, V4, ventral occipitotemporal cortex, and lateral occipitotemporal cortex. The predicted pattern generated from a linear mapping function could capture a significant amount of the changes associated with the transformations throughout the ventral visual stream. The derived linear mapping functions were not category independent as performance was better for the categories included than those not included in training and better between two similar versus two dissimilar categories in both lower and higher visual regions. Consistent with object representations being stronger in higher than in lower visual regions, pattern selectivity and object category representational structure were somewhat better preserved in the predicted patterns in higher than in lower visual regions. There were no notable differences between Euclidean and non-Euclidean transformations. These findings demonstrate a near-orthogonal representation of object identity and these nonidentity features throughout the human ventral visual processing pathway with these nonidentity features largely untangled from the identity features early in visual processing.
SIGNIFICANCE STATEMENT Presently we still do not fully understand how object identity and nonidentity (e.g., position, size) information are simultaneously represented in the primate ventral visual system to form invariant representations. Previous work suggests that the human lateral occipital cortex may be linking different affine states of object representations through general linear mapping functions. Here, we show that across the entire human ventral processing pathway, we could link object responses in different states of nonidentity transformations through linear mapping functions for both Euclidean and non-Euclidean transformations. These mapping functions are not identity independent, suggesting that object identity and nonidentity features are represented in a near rather than a completely orthogonal manner.
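A minimal sketch of such a linear mapping function, assuming synthetic voxel patterns: ridge regression maps patterns from one object state to another and is evaluated on objects held out of training.

```python
# Rough sketch of a linear mapping between fMRI patterns of two object states,
# tested on held-out objects. Synthetic data; the study used estimated voxel
# response patterns per object and transformation.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_objects, n_voxels = 40, 300
state_a = rng.standard_normal((n_objects, n_voxels))            # patterns in state A
transform = np.eye(n_voxels) + 0.1 * rng.standard_normal((n_voxels, n_voxels))
state_b = state_a @ transform + 0.5 * rng.standard_normal((n_objects, n_voxels))

train, test = np.arange(30), np.arange(30, 40)                  # hold out 10 objects
mapping = Ridge(alpha=10.0).fit(state_a[train], state_b[train])
predicted = mapping.predict(state_a[test])

# score: mean voxel-wise correlation between predicted and measured held-out patterns
corrs = [np.corrcoef(predicted[:, v], state_b[test, v])[0, 1] for v in range(n_voxels)]
print("mean prediction correlation:", np.nanmean(corrs))
```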
Affiliation(s)
- Viola Mocz
- Visual Cognitive Neuroscience Lab, Department of Psychology, Yale University, New Haven, Connecticut 06520
- Maryam Vaziri-Pashkam
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20892
- Marvin M Chun
- Visual Cognitive Neuroscience Lab, Department of Psychology, Yale University, New Haven, Connecticut 06520
- Department of Neuroscience, Yale School of Medicine, New Haven, Connecticut 06520
- Yaoda Xu
- Visual Cognitive Neuroscience Lab, Department of Psychology, Yale University, New Haven, Connecticut 06520
205
Storrs KR, Kietzmann TC, Walther A, Mehrer J, Kriegeskorte N. Diverse Deep Neural Networks All Predict Human Inferior Temporal Cortex Well, After Training and Fitting. J Cogn Neurosci 2021; 33:2044-2064. PMID: 34272948; DOI: 10.1101/2020.05.07.082743.
Abstract
Deep neural networks (DNNs) trained on object recognition provide the best current models of high-level visual cortex. What remains unclear is how strongly experimental choices, such as network architecture, training, and fitting to brain data, contribute to the observed similarities. Here, we compare a diverse set of nine DNN architectures on their ability to explain the representational geometry of 62 object images in human inferior temporal cortex (hIT), as measured with fMRI. We compare untrained networks to their task-trained counterparts and assess the effect of cross-validated fitting to hIT, by taking a weighted combination of the principal components of features within each layer and, subsequently, a weighted combination of layers. For each combination of training and fitting, we test all models for their correlation with the hIT representational dissimilarity matrix, using independent images and subjects. Trained models outperform untrained models (accounting for 57% more of the explainable variance), suggesting that structured visual features are important for explaining hIT. Model fitting further improves the alignment of DNN and hIT representations (by 124%), suggesting that the relative prevalence of different features in hIT does not readily emerge from the Imagenet object-recognition task used to train the networks. The same models can also explain the disparate representations in primary visual cortex (V1), where stronger weights are given to earlier layers. In each region, all architectures achieved equivalently high performance once trained and fitted. The models' shared properties-deep feedforward hierarchies of spatially restricted nonlinear filters-seem more important than their differences, when modeling human visual representations.
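The comparison logic can be illustrated with a small reweighted-RSA sketch, assuming random layer features and a noisy target RDM; the split over dissimilarity pairs below is a simplification of the authors' cross-validation over independent images and subjects.

```python
# Reweighted RSA sketch: build RDMs from (random) DNN layer features, fit a
# non-negative weighting of layer RDMs to a target "hIT" RDM on half the pairs,
# and evaluate the Spearman correlation on the other half.
import numpy as np
from scipy.optimize import nnls
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_images, n_layers = 62, 4
layer_rdms = np.stack([pdist(rng.standard_normal((n_images, 100)), "correlation")
                       for _ in range(n_layers)])               # layers x pairs
hit_rdm = layer_rdms[2] + 0.3 * rng.standard_normal(layer_rdms.shape[1])

pairs = np.arange(layer_rdms.shape[1])
train, test = pairs[::2], pairs[1::2]                           # split dissimilarity pairs
weights, _ = nnls(layer_rdms[:, train].T, hit_rdm[train])       # non-negative layer weights
predicted = weights @ layer_rdms[:, test]
print("held-out Spearman r:", spearmanr(predicted, hit_rdm[test])[0])
```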
Affiliation(s)
- Katherine R Storrs
- Justus Liebig University Giessen, Germany
- Centre for Mind, Brain and Behaviour (CMBB), Research Campus Central Hessen
- Tim C Kietzmann
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
- Johannes Mehrer
- MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
206
Pezzulo G, Zorzi M, Corbetta M. The secret life of predictive brains: what's spontaneous activity for? Trends Cogn Sci 2021; 25:730-743. PMID: 34144895; PMCID: PMC8363551; DOI: 10.1016/j.tics.2021.05.007.
Abstract
Brains at rest generate dynamical activity that is highly structured in space and time. We suggest that spontaneous activity, as in rest or dreaming, underlies top-down dynamics of generative models. During active tasks, generative models provide top-down predictive signals for perception, cognition, and action. When the brain is at rest and stimuli are weak or absent, top-down dynamics optimize the generative models for future interactions by maximizing the entropy of explanations and minimizing model complexity. Spontaneous fluctuations of correlated activity within and across brain regions may reflect transitions between 'generic priors' of the generative model: low dimensional latent variables and connectivity patterns of the most common perceptual, motor, cognitive, and interoceptive states. Even at rest, brains are proactive and predictive.
Affiliation(s)
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Roma, Italy
- Marco Zorzi
- Department of General Psychology and Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; IRCCS San Camillo Hospital, Venice, Italy
- Maurizio Corbetta
- Department of Neuroscience and Padova Neuroscience Center (PNC), University of Padova, Padova, Italy; Venetian Institute of Molecular Medicine (VIMM), Fondazione Biomedica, Padova, Italy
207
Arcaro MJ, Livingstone MS. On the relationship between maps and domains in inferotemporal cortex. Nat Rev Neurosci 2021; 22:573-583. PMID: 34345018; PMCID: PMC8865285; DOI: 10.1038/s41583-021-00490-4.
Abstract
How does the brain encode information about the environment? Decades of research have led to the pervasive notion that the object-processing pathway in primate cortex consists of multiple areas that are each specialized to process different object categories (such as faces, bodies, hands, non-face objects and scenes). The anatomical consistency and modularity of these regions have been interpreted as evidence that these regions are innately specialized. Here, we propose that ventral-stream modules do not represent clusters of circuits that each evolved to process some specific object category particularly important for survival, but instead reflect the effects of experience on a domain-general architecture that evolved to be able to adapt, within a lifetime, to its particular environment. Furthermore, we propose that the mechanisms underlying the development of domains are both evolutionarily old and universal across cortex. Topographic maps are fundamental, governing the development of specializations across systems, providing a framework for brain organization.
208
Braunsdorf M, Blazquez Freches G, Roumazeilles L, Eichert N, Schurz M, Uithol S, Bryant KL, Mars RB. Does the temporal cortex make us human? A review of structural and functional diversity of the primate temporal lobe. Neurosci Biobehav Rev 2021; 131:400-410. PMID: 34480913; DOI: 10.1016/j.neubiorev.2021.08.032.
Abstract
Temporal cortex is a primate specialization that shows considerable variation in size, morphology, and connectivity across species. Human temporal cortex is involved in many behaviors that are considered especially well developed in humans, including semantic processing, language, and theory of mind. Here, we ask whether the involvement of temporal cortex in these behaviors can be explained in the context of the 'general' primate organization of the temporal lobe or whether the human temporal lobe contains unique specializations indicative of a 'step change' in the lineage leading to modern humans. We propose that many human behaviors can be explained as elaborations of temporal cortex functions observed in other primates. However, changes in temporal lobe white matter suggest increased integration of information within temporal cortex and between posterior temporal cortex and other association areas, which likely enable behaviors not possible in other species.
Affiliation(s)
- Marius Braunsdorf
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Guilherme Blazquez Freches
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Lea Roumazeilles
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Nicole Eichert
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom
- Matthias Schurz
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands; Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Sebo Uithol
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Katherine L Bryant
- Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom
- Rogier B Mars
- Donders Institute for Brain Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands; Wellcome Centre for Integrative Neuroimaging, Centre for Functional MRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, United Kingdom
209
Jaswal SM, De Bleser AKF, Handy TC. Misokinesia is a sensitivity to seeing others fidget that is prevalent in the general population. Sci Rep 2021; 11:17204. PMID: 34446737; PMCID: PMC8390668; DOI: 10.1038/s41598-021-96430-4.
Abstract
Misokinesia––or the ‘hatred of movements’––is a psychological phenomenon that is defined by a strong negative affective or emotional response to the sight of someone else’s small and repetitive movements, such as seeing someone fidget with a hand or foot. Among those who regularly experience misokinesia sensitivity, there is a growing grass-roots recognition of the challenges that it presents as evidenced by on-line support groups. Yet surprisingly, scientific research on the topic is lacking. This article is novel in systematically examining whether misokinesia sensitivity actually exists in the general population, and if so, whether there is individual variability in the intensity or extent of what sensitivities are reported. Across three studies that included 4100 participants, we confirmed the existence of misokinesia sensitivity in both student and non-student populations, with approximately one-third of our participants self-reporting some degree of sensitivity to seeing the repetitive, fidgeting behaviors of others as encountered in their daily lives. Moreover, individual variability in the range and intensity of sensitivities reported suggest that the negative social-affective impacts associated with misokinesia sensitivities may grow with age. Our findings thus confirm that a large segment of the general population may have a visual-social sensitivity that has received little formal recognition.
Affiliation(s)
- Sumeet M Jaswal
- Department of Psychology, University of British Columbia, 3406-2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
- Andreas K F De Bleser
- Faculty of Psychology and Educational Sciences, Ghent University, Campus Dunant, Henri Dunantlaan 2, 9000 Gent, Belgium
- Todd C Handy
- Department of Psychology, University of British Columbia, 3406-2136 West Mall, Vancouver, BC, V6T 1Z4, Canada
210
Ritchie JB, Zeman AA, Bosmans J, Sun S, Verhaegen K, Op de Beeck HP. Untangling the Animacy Organization of Occipitotemporal Cortex. J Neurosci 2021; 41:7103-7119. PMID: 34230104; PMCID: PMC8372013; DOI: 10.1523/jneurosci.2628-20.2021.
Abstract
Some of the most impressive functional specializations in the human brain are found in the occipitotemporal cortex (OTC), where several areas exhibit selectivity for a small number of visual categories, such as faces and bodies, and spatially cluster based on stimulus animacy. Previous studies suggest this animacy organization reflects the representation of an intuitive taxonomic hierarchy, distinct from the presence of face- and body-selective areas in OTC. Using human functional magnetic resonance imaging, we investigated the independent contribution of these two factors (the face-body division and taxonomic hierarchy) in accounting for the animacy organization of OTC and whether they might also be reflected in the architecture of several deep neural networks that have not been explicitly trained to differentiate taxonomic relations. We found that graded visual selectivity, based on animal resemblance to human faces and bodies, masquerades as an apparent animacy continuum, which suggests that taxonomy is not a separate factor underlying the organization of the ventral visual pathway.
SIGNIFICANCE STATEMENT Portions of the visual cortex are specialized to determine whether types of objects are animate in the sense of being capable of self-movement. Two factors have been proposed as accounting for this animacy organization: representations of faces and bodies and an intuitive taxonomic continuum of humans and animals. We performed an experiment to assess the independent contribution of both of these factors. We found that graded visual representations, based on animal resemblance to human faces and bodies, masquerade as an apparent animacy continuum, suggesting that taxonomy is not a separate factor underlying the organization of areas in the visual cortex.
Affiliation(s)
- J Brendan Ritchie
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Astrid A Zeman
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Joyce Bosmans
- Faculty of Medicine and Health Sciences, University of Antwerp, 2000 Antwerp, Belgium
- Shuo Sun
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Kirsten Verhaegen
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Hans P Op de Beeck
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
211
Taschereau-Dumouchel V, Cortese A, Lau H, Kawato M. Conducting decoded neurofeedback studies. Soc Cogn Affect Neurosci 2021; 16:838-848. PMID: 32367138; PMCID: PMC8343564; DOI: 10.1093/scan/nsaa063.
Abstract
Closed-loop neurofeedback has sparked great interest since its inception in the late 1960s. However, the field has historically faced various methodological challenges. Decoded fMRI neurofeedback may provide solutions to some of these problems. Notably, thanks to the recent advancements of machine learning approaches, it is now possible to target unconscious occurrences of specific multivoxel representations. In this tools of the trade paper, we discuss how to implement these interventions in rigorous double-blind placebo-controlled experiments. We aim to provide a step-by-step guide to address some of the most common methodological and analytical considerations. We also discuss tools that can be used to facilitate the implementation of new experiments. We hope that this will encourage more researchers to try out this powerful new intervention method.
Affiliation(s)
- Vincent Taschereau-Dumouchel
- Department of Decoded Neurofeedback, ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan
- Department of Psychology, UCLA, Los Angeles, CA 90095, USA
- Aurelio Cortese
- Department of Decoded Neurofeedback, ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan
- Hakwan Lau
- Department of Decoded Neurofeedback, ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan
- Department of Psychology, UCLA, Los Angeles, CA 90095, USA
- State Key Laboratory of Brain and Cognitive Sciences, University of Hong Kong, Hong Kong
- Brain Research Institute, UCLA, Los Angeles, CA 90095, USA
- Department of Psychology, University of Hong Kong, Pokfulam, Hong Kong
- Mitsuo Kawato
- Department of Decoded Neurofeedback, ATR Computational Neuroscience Laboratories, Kyoto 619-0288, Japan
- RIKEN Center for Advanced Intelligence Project, ATR Institute International, Kyoto, Japan
212
Zhang C, Duan XH, Wang LY, Li YL, Yan B, Hu GE, Zhang RY, Tong L. Dissociable Neural Representations of Adversarially Perturbed Images in Convolutional Neural Networks and the Human Brain. Front Neuroinform 2021; 15:677925. PMID: 34421567; PMCID: PMC8375771; DOI: 10.3389/fninf.2021.677925.
Abstract
Despite the remarkable similarities between convolutional neural networks (CNN) and the human brain, CNNs still fall behind humans in many visual tasks, indicating that there still exist considerable differences between the two systems. Here, we leverage adversarial noise (AN) and adversarial interference (AI) images to quantify the consistency between neural representations and perceptual outcomes in the two systems. Humans can successfully recognize AI images as the same categories as their corresponding regular images but perceive AN images as meaningless noise. In contrast, CNNs recognize AN images as belonging to the same categories as their corresponding regular images but classify AI images into wrong categories with surprisingly high confidence. We use functional magnetic resonance imaging to measure brain activity evoked by regular and adversarial images in the human brain, and compare it to the activity of artificial neurons in a prototypical CNN, AlexNet. In the human brain, we find that the representational similarity between regular and adversarial images largely echoes their perceptual similarity in all early visual areas. In AlexNet, however, the neural representations of adversarial images are inconsistent with network outputs in all intermediate processing layers, providing no neural foundations for the similarities at the perceptual level. Furthermore, we show that voxel-encoding models trained on regular images can successfully generalize to the neural responses to AI images but not AN images. These remarkable differences between the human brain and AlexNet in representation-perception association suggest that future CNNs should emulate both the behavior and the internal neural representations of the human brain.
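The representation-versus-output dissociation can be illustrated with a toy FGSM-style perturbation, sketched below with an untrained toy CNN and a random image (illustrative assumptions only). With a trained network, such a perturbation typically changes the predicted class while intermediate-layer representations stay close to those of the original image.

```python
# Toy sketch: an FGSM-style perturbation nudges the input in the direction that
# increases the classification loss, then we compare how much the intermediate
# representation moved versus whether the output class changed.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
    nn.Flatten(), nn.Linear(8 * 16, 10),
)
features = nn.Sequential(*list(net.children())[:-1])   # everything before the classifier

image = torch.rand(1, 3, 32, 32, requires_grad=True)   # stand-in for a regular image
label = torch.tensor([3])
loss = F.cross_entropy(net(image), label)
loss.backward()

adversarial = (image + 0.05 * image.grad.sign()).detach()   # FGSM step

with torch.no_grad():
    rep_sim = F.cosine_similarity(features(image.detach()).flatten(1),
                                  features(adversarial).flatten(1)).item()
    flipped = net(image.detach()).argmax(1) != net(adversarial).argmax(1)
print(f"representation similarity: {rep_sim:.3f}, output changed: {bool(flipped)}")
```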
Affiliation(s)
- Chi Zhang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Xiao-Han Duan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Lin-Yuan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Yong-Li Li
- People’s Hospital of Henan Province, Zhengzhou, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Guo-En Hu
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Ru-Yuan Zhang
- Institute of Psychology and Behavioral Science, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Li Tong
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
213
Guassi Moreira JF, McLaughlin KA, Silvers JA. Characterizing the Network Architecture of Emotion Regulation Neurodevelopment. Cereb Cortex 2021; 31:4140-4150. PMID: 33949645; PMCID: PMC8521747; DOI: 10.1093/cercor/bhab074.
Abstract
The ability to regulate emotions is key to goal attainment and well-being. Although much has been discovered about neurodevelopment and the acquisition of emotion regulation, very little of this work has leveraged information encoded in whole-brain networks. Here we employed a network neuroscience framework to parse the neural underpinnings of emotion regulation skill acquisition, while accounting for age, in a sample of children and adolescents (N = 70, 34 female, aged 8-17 years). Focusing on three key network metrics (network differentiation, modularity, and community number differences between active regulation and a passive emotional baseline), we found that the control network, the default mode network, and the limbic network were each related to emotion regulation ability while controlling for age. Greater network differentiation in the control and limbic networks was related to better emotion regulation ability. With regard to network community structure (modularity and community number), more communities and more crosstalk between modules (i.e., less modularity) in the control network were associated with better regulatory ability. By contrast, less crosstalk (i.e., greater modularity) between modules in the default mode network was associated with better regulatory ability. Together, these findings highlight whole-brain connectome features that support the acquisition of emotion regulation in youth.
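Two of the metrics named above (community number and modularity) can be computed as sketched below on a synthetic connectivity matrix with networkx; the proportional threshold and community-detection algorithm are illustrative choices, not the authors' pipeline.

```python
# Sketch: derive community number and modularity from a synthetic functional-connectivity graph.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

rng = np.random.default_rng(3)
n_nodes = 50
connectivity = np.abs(rng.standard_normal((n_nodes, n_nodes)))
connectivity = (connectivity + connectivity.T) / 2            # symmetric "correlation" matrix
np.fill_diagonal(connectivity, 0)

# keep only the strongest 10% of connections as edges (illustrative threshold)
adjacency = (connectivity > np.percentile(connectivity, 90)).astype(int)
graph = nx.from_numpy_array(adjacency)

communities = greedy_modularity_communities(graph)
print("community number:", len(communities))
print("modularity:", modularity(graph, communities))
```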
Affiliation(s)
- Jennifer A Silvers
- Department of Psychology, University of California, Los Angeles, CA 90095, USA
214
Kalafatis C, Modarres MH, Apostolou P, Marefat H, Khanbagi M, Karimi H, Vahabi Z, Aarsland D, Khaligh-Razavi SM. Validity and Cultural Generalisability of a 5-Minute AI-Based, Computerised Cognitive Assessment in Mild Cognitive Impairment and Alzheimer's Dementia. Front Psychiatry 2021; 12:706695. PMID: 34366938; PMCID: PMC8339427; DOI: 10.3389/fpsyt.2021.706695.
Abstract
Introduction: Early detection and monitoring of mild cognitive impairment (MCI) and Alzheimer's Disease (AD) patients are key to tackling dementia and providing benefits to patients, caregivers, healthcare providers and society. We developed the Integrated Cognitive Assessment (ICA), a 5-min, language-independent computerised cognitive test that employs an Artificial Intelligence (AI) model to improve its accuracy in detecting cognitive impairment. In this study, we aimed to evaluate the generalisability of the ICA in detecting cognitive impairment in MCI and mild AD patients. Methods: We studied the ICA in 230 participants. 95 healthy volunteers, 80 MCI, and 55 mild AD participants completed the ICA, Montreal Cognitive Assessment (MoCA) and Addenbrooke's Cognitive Examination (ACE) cognitive tests. Results: The ICA demonstrated convergent validity with MoCA (Pearson r=0.58, p<0.0001) and ACE (r=0.62, p<0.0001). The ICA AI model was able to detect cognitive impairment with an AUC of 81% for MCI patients, and 88% for mild AD patients. The AI model demonstrated improved performance with increased training data and showed generalisability in performance from one population to another. The ICA correlation of 0.17 (p = 0.01) with education years is considerably smaller than that of MoCA (r = 0.34, p < 0.0001) and ACE (r = 0.41, p < 0.0001), which displayed significant correlations. In a separate study, the ICA demonstrated no significant practice effect over the duration of the study. Discussion: The ICA can support clinicians by aiding accurate diagnosis of MCI and AD and is appropriate for large-scale screening of cognitive impairment. The ICA is unbiased by differences in language, culture, and education.
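The reported detection performance is an area under the ROC curve of the kind computed below; the features, classifier, and group sizes are synthetic stand-ins, not the ICA model itself.

```python
# Sketch of the detection metric: cross-validated AUC for separating cognitively
# impaired from healthy participants based on test features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n_healthy, n_impaired, n_features = 95, 135, 20
X = np.vstack([rng.normal(0.0, 1.0, (n_healthy, n_features)),
               rng.normal(0.6, 1.0, (n_impaired, n_features))])   # impaired group shifted
y = np.r_[np.zeros(n_healthy), np.ones(n_impaired)]               # 0 = healthy, 1 = impaired

probs = cross_val_predict(RandomForestClassifier(random_state=0), X, y,
                          cv=5, method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, probs):.2f}")
```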
Affiliation(s)
- Chris Kalafatis
- Cognetivity Ltd, London, United Kingdom
- South London & Maudsley NHS Foundation Trust, London, United Kingdom
- Department of Old Age Psychiatry, King's College London, London, United Kingdom
- Haniye Marefat
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Mahdiyeh Khanbagi
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Hamed Karimi
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Zahra Vahabi
- Tehran University of Medical Sciences, Tehran, Iran
- Dag Aarsland
- Department of Old Age Psychiatry, King's College London, London, United Kingdom
- Seyed-Mahdi Khaligh-Razavi
- Cognetivity Ltd, London, United Kingdom
- Department of Stem Cells and Developmental Biology, Cell Science Research Centre, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
215
Gardner JL, Merriam EP. Population Models, Not Analyses, of Human Neuroscience Measurements. Annu Rev Vis Sci 2021; 7.
Abstract
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building upon the modeling tradition in vision science using considerations of whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements.
Affiliation(s)
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California 94305, USA
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892, USA
216
Moran R, Dayan P, Dolan RJ. Efficiency and prioritization of inference-based credit assignment. Curr Biol 2021; 31:2747-2756.e6. PMID: 33887181; PMCID: PMC8279739; DOI: 10.1016/j.cub.2021.03.091.
Abstract
Organisms adapt to their environments by learning to approach states that predict rewards and avoid states associated with punishments. Knowledge about the affective value of states often relies on credit assignment (CA), whereby state values are updated on the basis of reward feedback. Remarkably, humans assign credit to states that are not observed but are instead inferred based on a cognitive map that represents structural knowledge of an environment. A pertinent example is authors attempting to infer the identity of anonymous reviewers to assign them credit or blame and, on this basis, inform future referee recommendations. Although inference is cognitively costly, it is unknown how it influences CA or how it is apportioned between hidden and observable states (for example, both anonymous and revealed reviewers). We addressed these questions in a task that provided choices between lotteries where each led to a unique pair of occasionally rewarding outcome states. On some trials, both states were observable (rendering inference nugatory), whereas on others, the identity of one of the states was concealed. Importantly, by exploiting knowledge of choice-state associations, subjects could infer the identity of this hidden state. We show that having to perform inference reduces state-value updates. Strikingly, and in violation of normative theories, this reduction in CA was selective for the observed outcome alone. These findings have implications for the operation of putative cognitive maps.
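One way to picture the reported effect is a toy value-learning model in which updates to an inferred (hidden) outcome state are attenuated relative to an observed one. The update rule, task structure, and attenuation parameter kappa below are illustrative assumptions, not the authors' fitted model.

```python
# Toy sketch of attenuated credit assignment for inferred states: values are updated
# from reward feedback, but updates to a state whose identity had to be inferred are
# scaled down by a factor kappa < 1.
import numpy as np

rng = np.random.default_rng(5)
values = np.zeros(4)            # value estimates for four outcome states
alpha, kappa = 0.3, 0.5         # learning rate; attenuation for inferred states
reward_probs = np.array([0.8, 0.2, 0.6, 0.4])

for _ in range(500):
    observed, hidden = rng.choice(4, size=2, replace=False)   # one state seen, one inferred
    for state, weight in [(observed, 1.0), (hidden, kappa)]:
        reward = rng.random() < reward_probs[state]
        values[state] += weight * alpha * (reward - values[state])

print("true reward probabilities:", reward_probs)
print("learned values:           ", values.round(2))
```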
Affiliation(s)
- Rani Moran
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, 10-12 Russell Square, London WC1B 5EH, UK; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3BG, UK
- Peter Dayan
- Max Planck Institute for Biological Cybernetics, Max Planck-Ring 8, 72076 Tübingen, Germany; University of Tübingen, 72074 Tübingen, Germany
- Raymond J Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, 10-12 Russell Square, London WC1B 5EH, UK; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3BG, UK
217
Grootswagers T, Robinson AK. Overfitting the Literature to One Set of Stimuli and Data. Front Hum Neurosci 2021; 15:682661. PMID: 34305552; PMCID: PMC8295535; DOI: 10.3389/fnhum.2021.682661.
Abstract
A large number of papers in Computational Cognitive Neuroscience are developing and testing novel analysis methods using one specific neuroimaging dataset and problematic experimental stimuli. Publication bias and confirmatory exploration will result in overfitting to the limited available data. We highlight the problems with this specific dataset and argue for the need to collect more good quality open neuroimaging data using a variety of experimental stimuli, in order to test the generalisability of current published results, and allow for more robust results in future work.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Sydney, NSW, Australia
- School of Psychology, Western Sydney University, Sydney, NSW, Australia
- School of Psychology, University of Sydney, Sydney, NSW, Australia
218
Yang J, Huber L, Yu Y, Bandettini PA. Linking cortical circuit models to human cognition with laminar fMRI. Neurosci Biobehav Rev 2021; 128:467-478. PMID: 34245758; DOI: 10.1016/j.neubiorev.2021.07.005.
Abstract
Laboratory animal research has provided significant knowledge into the function of cortical circuits at the laminar level, which has yet to be fully leveraged towards insights about human brain function on a similar spatiotemporal scale. The use of functional magnetic resonance imaging (fMRI) in conjunction with neural models provides new opportunities to gain important insights from current knowledge. During the last five years, human studies have demonstrated the value of high-resolution fMRI to study laminar-specific activity in the human brain. This is mostly performed at ultra-high-field strengths (≥ 7 T) and is known as laminar fMRI. Advancements in laminar fMRI are beginning to open new possibilities for studying questions in basic cognitive neuroscience. In this paper, we first review recent methodological advances in laminar fMRI and describe recent human laminar fMRI studies. Then, we discuss how the use of laminar fMRI can help bridge the gap between cortical circuit models and human cognition.
Affiliation(s)
- Jiajia Yang
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Laurentius Huber
- MR-Methods Group, Faculty of Psychology and Neuroscience, University of Maastricht, Maastricht, the Netherlands
- Yinghua Yu
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA
- Peter A Bandettini
- Section on Functional Imaging Methods, National Institute of Mental Health, Bethesda, MD, USA; Functional MRI Core Facility, National Institute of Mental Health, Bethesda, MD, USA
219
Topography of Visual Features in the Human Ventral Visual Pathway. Neurosci Bull 2021; 37:1454-1468. PMID: 34215969; DOI: 10.1007/s12264-021-00734-4.
Abstract
Visual object recognition in humans and nonhuman primates is achieved by the ventral visual pathway (ventral occipital-temporal cortex, VOTC), which shows a well-documented object domain structure. An on-going question is what type of information is processed in the higher-order VOTC that underlies such observations, with recent evidence suggesting effects of certain visual features. Combining computational vision models, fMRI experiment using a parametric-modulation approach, and natural image statistics of common objects, we depicted the neural distribution of a comprehensive set of visual features in the VOTC, identifying voxel sensitivities with specific feature sets across geometry/shape, Fourier power, and color. The visual feature combination pattern in the VOTC is significantly explained by their relationships to different types of response-action computation (fight-or-flight, navigation, and manipulation), as derived from behavioral ratings and natural image statistics. These results offer a comprehensive visual feature map in the VOTC and a plausible theoretical explanation as a mapping onto different types of downstream response-action systems.
220
Matsumoto A, Soshi T, Fujimaki N, Ihara AS. Distinctive responses in anterior temporal lobe and ventrolateral prefrontal cortex during categorization of semantic information. Sci Rep 2021; 11:13343. PMID: 34172800; PMCID: PMC8233387; DOI: 10.1038/s41598-021-92726-7.
Abstract
Semantic categorization is a fundamental ability in language as well as in interaction with the environment. However, it is unclear what cognitive and neural basis generates this flexible and context-dependent categorization of semantic information. We performed behavioral and fMRI experiments with a semantic priming paradigm to clarify this. Participants conducted semantic decision tasks in which a prime word preceded target words, using names of animals (mammals, birds, or fish). We focused on the categorization of unique marine mammals, which have characteristics of both mammals and fish. Behavioral experiments indicated that marine mammals were semantically closer to fish than to terrestrial mammals, inconsistent with their category membership. The fMRI results showed that the left anterior temporal lobe was sensitive to the semantic distance between prime and target words rather than to category membership, while the left ventrolateral prefrontal cortex was sensitive to the consistency of category membership of word pairs. We interpreted these results as evidence for the existence of dual processes for semantic categorization. The combination of bottom-up processing based on semantic characteristics in the left anterior temporal lobe and top-down processing based on task- and/or context-specific information in the left ventrolateral prefrontal cortex is required for the flexible categorization of semantic information.
Affiliation(s)
- Atsushi Matsumoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, and Osaka University, 588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Japan
- Kansai University of Welfare Sciences, Kashiwara, Japan
- Takahiro Soshi
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, and Osaka University, 588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Japan
- Kyoto University, Kyoto, Japan
- Norio Fujimaki
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, and Osaka University, 588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Japan
- Aya S Ihara
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, and Osaka University, 588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Japan
221
Jia X, Hong H, DiCarlo JJ. Unsupervised changes in core object recognition behavior are predicted by neural plasticity in inferior temporal cortex. eLife 2021; 10:e60830. PMID: 34114566; PMCID: PMC8324291; DOI: 10.7554/elife.60830.
Abstract
Temporal continuity of object identity is a feature of natural visual input and is potentially exploited - in an unsupervised manner - by the ventral visual stream to build the neural representation in inferior temporal (IT) cortex. Here, we investigated whether plasticity of individual IT neurons underlies human core object recognition behavioral changes induced with unsupervised visual experience. We built a single-neuron plasticity model combined with a previously established IT population-to-recognition-behavior-linking model to predict human learning effects. We found that our model, after constrained by neurophysiological data, largely predicted the mean direction, magnitude, and time course of human performance changes. We also found a previously unreported dependency of the observed human performance change on the initial task difficulty. This result adds support to the hypothesis that tolerant core object recognition in human and non-human primates is instructed - at least in part - by naturally occurring unsupervised temporal contiguity experience.
Affiliation(s)
- Xiaoxuan Jia
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Cambridge, United States
- Ha Hong
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Cambridge, United States
- Harvard-MIT Division of Health Sciences and Technology, Cambridge, United States
- James J DiCarlo
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- McGovern Institute for Brain Research, Cambridge, United States
- Center for Brains, Minds and Machines, Cambridge, United States
222
Nishida S, Blanc A, Maeda N, Kado M, Nishimoto S. Behavioral correlates of cortical semantic representations modeled by word vectors. PLoS Comput Biol 2021; 17:e1009138. PMID: 34161315; PMCID: PMC8260002; DOI: 10.1371/journal.pcbi.1009138.
Abstract
The quantitative modeling of semantic representations in the brain plays a key role in understanding the neural basis of semantic processing. Previous studies have demonstrated that word vectors, which were originally developed for use in the field of natural language processing, provide a powerful tool for such quantitative modeling. However, whether semantic representations in the brain revealed by the word vector-based models actually capture our perception of semantic information remains unclear, as there has been no study explicitly examining the behavioral correlates of the modeled brain semantic representations. To address this issue, we compared the semantic structure of nouns and adjectives in the brain estimated from word vector-based brain models with that evaluated from human behavior. The brain models were constructed using voxelwise modeling to predict the functional magnetic resonance imaging (fMRI) response to natural movies from semantic contents in each movie scene through a word vector space. The semantic dissimilarity of brain word representations was then evaluated using the brain models. Meanwhile, data on human behavior reflecting the perception of semantic dissimilarity between words were collected in psychological experiments. We found a significant correlation between brain model- and behavior-derived semantic dissimilarities of words. This finding suggests that semantic representations in the brain modeled via word vectors appropriately capture our perception of word meanings.
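A voxelwise encoding model of this general form can be sketched as below, with random vectors standing in for the word embeddings and fMRI responses; the ridge penalty and train/test split are arbitrary choices for illustration.

```python
# Sketch of a word-vector-based voxelwise encoding model: ridge regression maps a
# stimulus-by-feature matrix to voxel responses; accuracy is the correlation between
# predicted and measured responses on held-out stimuli.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_stimuli, n_dims, n_voxels = 300, 100, 50
word_vectors = rng.standard_normal((n_stimuli, n_dims))            # stimulus features
true_weights = rng.standard_normal((n_dims, n_voxels))
bold = word_vectors @ true_weights + 2.0 * rng.standard_normal((n_stimuli, n_voxels))

train, test = np.arange(250), np.arange(250, 300)
model = Ridge(alpha=100.0).fit(word_vectors[train], bold[train])
pred = model.predict(word_vectors[test])

acc = [np.corrcoef(pred[:, v], bold[test, v])[0, 1] for v in range(n_voxels)]
print("median voxel prediction r:", round(float(np.median(acc)), 2))
```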
Affiliation(s)
- Satoshi Nishida
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Antoine Blanc
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Graduate School of Medicine, Osaka University, Suita, Osaka, Japan
223
|
Takagi Y, Hunt LT, Woolrich MW, Behrens TEJ, Klein-Flügge MC. Adapting non-invasive human recordings along multiple task-axes shows unfolding of spontaneous and over-trained choice. eLife 2021; 10:e60988. [PMID: 33973522 PMCID: PMC8143794 DOI: 10.7554/elife.60988] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Accepted: 04/26/2021] [Indexed: 12/28/2022] Open
Abstract
Choices rely on a transformation of sensory inputs into motor responses. Using invasive single neuron recordings, the evolution of a choice process has been tracked by projecting population neural responses into state spaces. Here, we develop an approach that allows us to recover similar trajectories on a millisecond timescale in non-invasive human recordings. We selectively suppress activity related to three task-axes, relevant and irrelevant sensory inputs and response direction, in magnetoencephalography data acquired during context-dependent choices. Recordings from premotor cortex show a progression from processing sensory input to processing the response. In contrast to previous macaque recordings, information related to choice-irrelevant features is represented more weakly than choice-relevant sensory information. To test whether this mechanistic difference between species is caused by extensive over-training common in non-human primate studies, we trained humans on >20,000 trials of the task. Choice-irrelevant features were still weaker than relevant features in premotor cortex after over-training.
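The core operation, selectively suppressing activity related to a task axis, can be illustrated with a minimal sketch: estimate the multivariate axis that encodes a task variable, then project the data onto its orthogonal complement. The data and the regression-based axis estimate below are placeholders, not the paper's MEG pipeline.

```python
# Simplified sketch of removing the activity subspace associated with one
# task axis from multivariate (e.g., MEG) data before examining what remains.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_sensors = 200, 64
stimulus = rng.choice([-1.0, 1.0], size=n_trials)       # e.g., relevant input
axis_true = rng.normal(size=n_sensors)
data = np.outer(stimulus, axis_true) + rng.normal(size=(n_trials, n_sensors))

# Estimate the encoding axis by regressing sensor data on the task variable.
axis = data.T @ stimulus / (stimulus @ stimulus)
axis /= np.linalg.norm(axis)

# Project the data onto the orthogonal complement of that axis ("suppression").
suppressed = data - np.outer(data @ axis, axis)

print("variance along task axis before:", np.var(data @ axis).round(3))
print("variance along task axis after: ", np.var(suppressed @ axis).round(3))  # ~0
```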
Collapse
Affiliation(s)
- Yu Takagi
- Wellcome Centre for Integrative Neuroimaging (WIN), Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, United Kingdom
- Department of Neuropsychiatry, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
| | - Laurence Tudor Hunt
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, United Kingdom
- Wellcome Centre for Integrative Neuroimaging (WIN), Centre for Functional MRI of the Brain (FMRIB), University of Oxford, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
| | - Mark W Woolrich
- Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, United Kingdom
- Wellcome Centre for Integrative Neuroimaging (WIN), Centre for Functional MRI of the Brain (FMRIB), University of Oxford, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
| | - Timothy EJ Behrens
- Wellcome Centre for Integrative Neuroimaging (WIN), Centre for Functional MRI of the Brain (FMRIB), University of Oxford, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
- Wellcome Trust Centre for Neuroimaging, UCL Institute of Neurology, University College London (UCL), London, United Kingdom
| | - Miriam C Klein-Flügge
- Wellcome Centre for Integrative Neuroimaging (WIN), Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Integrative Neuroimaging (WIN), Centre for Functional MRI of the Brain (FMRIB), University of Oxford, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, Oxford, United Kingdom
| |
Collapse
|
224
|
Puccetti NA, Schaefer SM, van Reekum CM, Ong AD, Almeida DM, Ryff CD, Davidson RJ, Heller AS. Linking Amygdala Persistence to Real-World Emotional Experience and Psychological Well-Being. J Neurosci 2021; 41:3721-3730. [PMID: 33753544 PMCID: PMC8055079 DOI: 10.1523/jneurosci.1637-20.2021] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 02/03/2021] [Accepted: 02/24/2021] [Indexed: 11/21/2022] Open
Abstract
Neural dynamics in response to affective stimuli are linked to momentary emotional experiences. The amygdala, in particular, is involved in subjective emotional experience and assigning value to neutral stimuli. Because amygdala activity persistence following aversive events varies across individuals, some may evaluate subsequent neutral stimuli more negatively than others. This may lead to more frequent and long-lasting momentary emotional experiences, which may also be linked to self-evaluative measures of psychological well-being (PWB). Despite extant links between daily affect and PWB, few studies have directly explored the links between amygdala persistence, daily affective experience, and PWB. To that end, we examined data from 52 human adults (67% female) in the Midlife in the United States study who completed measures of PWB, daily affect, and functional MRI (fMRI). During fMRI, participants viewed affective images followed by a neutral facial expression, permitting quantification of individual differences in the similarity of amygdala representations of affective stimuli and neutral facial expressions that follow. Using representational similarity analysis, neural persistence following aversive stimuli was operationalized as similarity between the amygdala activation patterns while encoding negative images and the neutral facial expressions shown afterward. Individuals demonstrating less persistent activation patterns in the left amygdala to aversive stimuli reported more positive and less negative affect in daily life. Further, daily positive affect served as an indirect link between left amygdala persistence and PWB. These results clarify important connections between individual differences in brain function, daily experiences of affect, and well-being. SIGNIFICANCE STATEMENT At the intersection of affective neuroscience and psychology, researchers have aimed to understand how individual differences in the neural processing of affective events map onto real-world emotional experiences and evaluations of well-being. Using a longitudinal dataset from 52 adults in the Midlife in the United States (MIDUS) study, we provide an integrative model of affective functioning: less amygdala persistence following negative images predicts greater positive affect (PA) in daily life, which in turn predicts greater psychological well-being (PWB) seven years later. Thus, day-to-day experiences of PA comprise a promising intermediate step that links individual differences in neural dynamics to complex judgements of PWB.
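A minimal sketch of the persistence metric described above, computed on synthetic patterns: each trial's negative-image pattern is correlated with the pattern evoked by the neutral face that followed it, and the Fisher-z correlations are averaged.

```python
# Sketch of the "persistence" metric: similarity between multivoxel patterns
# evoked by negative images and by the neutral faces that follow them
# (synthetic patterns; in practice these come from single-trial amygdala data).
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_voxels = 60, 120

neg_patterns = rng.normal(size=(n_trials, n_voxels))
carryover = 0.4                                  # hypothetical persistence level
neutral_patterns = carryover * neg_patterns + rng.normal(size=(n_trials, n_voxels))

def persistence(negative, neutral):
    """Mean Fisher-z correlation between each negative-image pattern and the
    neutral-face pattern that followed it."""
    rs = [np.corrcoef(a, b)[0, 1] for a, b in zip(negative, neutral)]
    return np.mean(np.arctanh(rs))

print(f"amygdala persistence score: {persistence(neg_patterns, neutral_patterns):.2f}")
```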
Collapse
Affiliation(s)
- Nikki A Puccetti
- Department of Psychology, University of Miami, Coral Gables, Florida 33124
| | - Stacey M Schaefer
- Center for Healthy Minds, University of Wisconsin-Madison, Madison, Wisconsin 53703
| | - Carien M van Reekum
- School of Psychology and Clinical Language Science, University of Reading, Reading RG6 6AL, United Kingdom
| | - Anthony D Ong
- Department of Human Development, Cornell University, Ithaca, New York 14853
| | - David M Almeida
- Department of Human Development and Family Studies and Center for Healthy Aging, The Pennsylvania State University, University Park, Pennsylvania 16802
| | - Carol D Ryff
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin 53706
| | - Richard J Davidson
- Center for Healthy Minds, University of Wisconsin-Madison, Madison, Wisconsin 53703
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin 53706
| | - Aaron S Heller
- Department of Psychology, University of Miami, Coral Gables, Florida 33124
| |
Collapse
|
225
|
Qin S, Mudur N, Pehlevan C. Contrastive Similarity Matching for Supervised Learning. Neural Comput 2021; 33:1300-1328. [PMID: 33617744 DOI: 10.1162/neco_a_01374] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Accepted: 11/23/2020] [Indexed: 11/04/2022]
Abstract
We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
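The layer-wise goal can be illustrated schematically: push a hidden layer's similarity (Gram) matrix toward an interpolation of the similarity matrices of the layers below and above it. The snippet below optimizes that simplified objective directly on a representation matrix; it omits the paper's contrastive formulation and Hebbian/anti-Hebbian network dynamics.

```python
# Schematic version of the layer-wise goal: a hidden layer's similarity (Gram)
# matrix is pushed toward an interpolation between the similarity matrices of
# the layers below and above it. Simplified objective only; the paper derives
# a contrastive formulation with Hebbian/anti-Hebbian plasticity instead.
import numpy as np

def gram(X):
    """Similarity (Gram) matrix of a representation X with shape (samples, units)."""
    return X @ X.T

def similarity_matching_loss(X_prev, X_hidden, X_next, alpha=0.5):
    target = (1 - alpha) * gram(X_prev) + alpha * gram(X_next)
    return np.sum((gram(X_hidden) - target) ** 2)

rng = np.random.default_rng(4)
X_prev = rng.normal(size=(32, 20))           # representations in the layer below
X_next = rng.normal(size=(32, 10))           # representations in the layer above
X_hidden = rng.normal(size=(32, 15))         # hidden-layer representation to adapt

print(f"initial loss: {similarity_matching_loss(X_prev, X_hidden, X_next):.0f}")
lr = 1e-4
for step in range(300):
    target = 0.5 * (gram(X_prev) + gram(X_next))
    X_hidden -= lr * 4 * (gram(X_hidden) - target) @ X_hidden   # gradient step
print(f"final loss:   {similarity_matching_loss(X_prev, X_hidden, X_next):.0f}")
```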
Collapse
Affiliation(s)
- Shanshan Qin
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
| | - Nayantara Mudur
- Department of Physics, Harvard University, Cambridge, MA 02138, U.S.A.
| | - Cengiz Pehlevan
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, U.S.A.
| |
Collapse
|
226
|
Carpenter AC, Thakral PP, Preston AR, Schacter DL. Reinstatement of item-specific contextual details during retrieval supports recombination-related false memories. Neuroimage 2021; 236:118033. [PMID: 33836273 PMCID: PMC8375312 DOI: 10.1016/j.neuroimage.2021.118033] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2020] [Revised: 03/25/2021] [Accepted: 03/27/2021] [Indexed: 12/17/2022] Open
Abstract
Flexible retrieval mechanisms that allow us to infer relationships across events may also lead to memory errors or distortion when details of one event are misattributed to the related event. Here, we tested how making successful inferences alters representation of overlapping events, leading to false memories. Participants encoded overlapping associations ('AB' and 'BC'), each of which was superimposed on different indoor and outdoor scenes that were pre-exposed prior to associative learning. Participants were subsequently tested on both the directly learned pairs ('AB' and 'BC') and inferred relationships across pairs ('AC'). We predicted that when people make a correct inference, features associated with overlapping events may become integrated in memory. To test this hypothesis, participants completed a final detailed retrieval test, in which they had to recall the scene associated with initially learned 'AB' pairs (or 'BC' pairs). We found that the outcome of inference decisions impacted the degree to which neural patterns elicited during detailed 'AB' retrieval reflected reinstatement of the scene associated with the overlapping 'BC' event. After successful inference, neural patterns in the anterior hippocampus, posterior medial prefrontal cortex, and our content-reinstatement region (left inferior temporal gyrus) were more similar to the overlapping, yet incorrect 'BC' context relative to after unsuccessful inference. Further, greater hippocampal activity during inference was associated with greater reinstatement of the incorrect, overlapping context in our content-reinstatement region, which in turn tracked contextual misattributions during detailed retrieval. These results suggest recombining memories during successful inference can lead to misattribution of contextual details across related events, resulting in false memories.
Collapse
Affiliation(s)
- Alexis C Carpenter
- Department of Psychology and Center for Brain Science, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, United States.
| | - Preston P Thakral
- Department of Psychology and Center for Brain Science, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, United States; Department of Psychology and Neuroscience, Boston College, United States
| | - Alison R Preston
- Center for Learning and Memory and Department of Psychology, University of Texas at Austin, United States
| | - Daniel L Schacter
- Department of Psychology and Center for Brain Science, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, United States
| |
Collapse
|
227
|
Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. 2D or not 2D? An fMRI study of how dogs visually process objects. Anim Cogn 2021; 24:1143-1151. [PMID: 33772693 DOI: 10.1007/s10071-021-01506-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 03/09/2021] [Accepted: 03/12/2021] [Indexed: 10/21/2022]
Abstract
Given humans' habitual use of screens, they rarely consider potential differences when viewing two-dimensional (2D) stimuli and real-world versions of dimensional stimuli. Dogs also have access to many forms of screens and touchpads, with owners even subscribing to dog-directed content. Humans understand that 2D stimuli are representations of real-world objects, but do dogs? In canine cognition studies, 2D stimuli are almost always used to study what is normally 3D, like faces, and may assume that both 2D and 3D stimuli are represented in the brain the same way. Here, we used awake fMRI in 15 dogs to examine the neural mechanisms underlying dogs' perception of two- and three-dimensional objects after the dogs were trained on either two- or three-dimensional versions of the objects. Activation within reward processing regions and parietal cortex of the dog brain to 2D and 3D versions of objects was determined by their training experience, as dogs trained on one dimensionality showed greater differential activation within the dimension on which they were trained. These results show that dogs do not automatically generalize between two- and three-dimensional versions of object stimuli and suggest that future research consider the implicit assumptions when using pictures or videos.
Collapse
Affiliation(s)
- Ashley Prichard
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
| | - Raveena Chhibber
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
| | | | - Veronica Chiu
- Psychology Department, Emory University, Atlanta, GA, 30322, USA
| | - Mark Spivak
- Comprehensive Pet Therapy, Inc, Sandy Springs, GA, 30328, USA
| | - Gregory S Berns
- Psychology Department, Emory University, Atlanta, GA, 30322, USA.
| |
Collapse
|
228
|
Overlapping but distinct: Distal connectivity dissociates hand and tool processing networks. Cortex 2021; 140:1-13. [PMID: 33901719 DOI: 10.1016/j.cortex.2021.03.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Revised: 01/18/2021] [Accepted: 03/04/2021] [Indexed: 12/31/2022]
Abstract
The processes and organizational principles of information involved in object recognition have been a subject of intense debate. These research efforts led to the understanding that local computations and feedforward/feedback connections are essential to our representations and their organization. Recent data, however, has demonstrated that distal computations also play a role in how information is locally processed. Here we focus on how long-range connectivity and local functional organization of information are related, by exploring regions that show overlapping category-preferences for two categories and testing whether their connections are related with distal representations in a category-specific way. We used an approach that relates functional connectivity with distal areas to local voxel-wise category-preferences. Specifically, we focused on two areas that show an overlap in category-preferences for tools and hands-the inferior parietal lobule/anterior intraparietal sulcus (IPL/aIPS) and the posterior middle temporal gyrus/lateral occipital temporal cortex (pMTG/LOTC) - and how connectivity from these two areas relate to voxel-wise category-preferences in two ventral temporal regions dedicated to the processing of tools and hands separately-the left medial fusiform gyrus and the fusiform body area respectively-as well as across the brain. We show that the functional connections of the two overlap areas correlate with categorical preferences for each category independently. These results show that regions that process both tools and hands maintain object topography in a category-specific way. This potentially allows for a category-specific flow of information that is pertinent to computing object representations.
Collapse
|
229
|
Russo AG, Lührs M, Di Salle F, Esposito F, Goebel R. Towards semantic fMRI neurofeedback: navigating among mental states using real-time representational similarity analysis. J Neural Eng 2021; 18. [PMID: 33684900 DOI: 10.1088/1741-2552/abecc3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 03/08/2021] [Indexed: 11/12/2022]
Abstract
Objective. Real-time functional magnetic resonance imaging neurofeedback (rt-fMRI-NF) is a non-invasive MRI procedure allowing examined participants to learn to self-regulate brain activity by performing mental tasks. A novel two-step rt-fMRI-NF procedure is proposed whereby the feedback display is updated in real-time based on high-level representations of experimental stimuli (e.g. objects to imagine) via real-time representational similarity analysis of multi-voxel patterns of brain activity. Approach. In a localizer session, the stimuli become associated with anchored points on a two-dimensional representational space where distances approximate between-pattern (dis)similarities. In the NF session, participants modulate their brain response, displayed as a movable point, to engage in a specific neural representation. The developed method pipeline is verified in a proof-of-concept rt-fMRI-NF study at 7 T involving a single healthy participant imagining concrete objects. Based on this data and artificial data sets with similar (simulated) spatio-temporal structure and variable (injected) signal and noise, the dependence on noise is systematically assessed. Main results. The participant in the proof-of-concept study exhibited robust activation patterns in the localizer session and managed to control the neural representation of a stimulus towards the selected target in the NF session. The offline analyses validated the rt-fMRI-NF results, showing that the rapid convergence to the target representation is noise-dependent. Significance. Our proof-of-concept study introduces a new NF method allowing the participant to navigate among different mental states. Compared to traditional NF designs (e.g. using a thermometer display to set the level of the neural signal), the proposed approach provides content-specific feedback to the participant and extra degrees of freedom to the experimenter enabling real-time control of the neural activity towards a target brain state without suggesting a specific mental strategy to the subject.
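The two-step logic can be sketched as follows: localizer patterns yield a representational dissimilarity matrix that is embedded in two dimensions to anchor the stimuli, and the current multi-voxel pattern is then placed among those anchors. The data are synthetic and the placement rule is a simple heuristic, not the exact real-time procedure used in the study.

```python
# Sketch of the two-step logic: (1) localizer patterns -> RDM -> 2-D anchor
# space via MDS; (2) a new pattern is placed in that space relative to the
# anchors. Synthetic data; the placement rule is a simple heuristic, not the
# paper's real-time pipeline.
import numpy as np
from scipy.spatial.distance import pdist, squareform, cdist
from sklearn.manifold import MDS

rng = np.random.default_rng(5)
n_stimuli, n_voxels = 6, 500
localizer_patterns = rng.normal(size=(n_stimuli, n_voxels))

# Step 1: anchor the stimuli in a 2-D representational space.
rdm = squareform(pdist(localizer_patterns, metric="correlation"))
anchors = MDS(n_components=2, dissimilarity="precomputed",
              random_state=0).fit_transform(rdm)

# Step 2: place the current multi-voxel pattern among the anchors, weighting
# each anchor by the inverse of its dissimilarity to the new pattern.
current_pattern = localizer_patterns[2] + 0.5 * rng.normal(size=n_voxels)
d = cdist(current_pattern[None, :], localizer_patterns, metric="correlation")[0]
weights = 1.0 / (d + 1e-6)
feedback_point = (weights[:, None] * anchors).sum(axis=0) / weights.sum()

print("nearest anchor:", int(np.argmin(d)),
      "feedback coordinates:", feedback_point.round(2))
```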
Collapse
Affiliation(s)
- Andrea G Russo
- Department of Political and Communication Sciences, University of Salerno, Fisciano (Salerno), Italy; Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi (Salerno), Italy
| | - Michael Lührs
- Department of Cognitive Neuroscience, University of Maastricht, Maastricht, The Netherlands; Brain Innovation B.V., Maastricht, The Netherlands
| | - Francesco Di Salle
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi (Salerno), Italy
| | - Fabrizio Esposito
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi (Salerno), Italy; Department of Cognitive Neuroscience, University of Maastricht, Maastricht, The Netherlands; Department of Advanced Medical and Surgical Sciences, University of Campania 'Luigi Vanvitelli', Napoli, Italy
| | - Rainer Goebel
- Department of Cognitive Neuroscience, University of Maastricht, Maastricht, The Netherlands; Brain Innovation B.V., Maastricht, The Netherlands
| |
Collapse
|
230
|
Prichard A, Chhibber R, Athanassiades K, Chiu V, Spivak M, Berns GS. The mouth matters most: A functional magnetic resonance imaging study of how dogs perceive inanimate objects. J Comp Neurol 2021; 529:2987-2994. [PMID: 33745141 DOI: 10.1002/cne.25142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 02/24/2021] [Accepted: 03/15/2021] [Indexed: 11/12/2022]
Abstract
The perception and representation of objects in the world are foundational to all animals. The relative importance of objects' physical properties versus how the objects are interacted with continues to be debated. Neural evidence in humans and nonhuman primates suggests animate-inanimate and face-body dimensions of objects are represented in the temporal cortex. However, because primates have opposable thumbs and interact with objects in similar ways, the question remains as to whether this similarity represents the evolution of a common cognitive process or whether it reflects a similarity of physical interaction. Here, we used functional magnetic resonance imaging (fMRI) in dogs to test whether the type of interaction affects object processing in an animal that interacts primarily with its mouth. In Study 1, we identified object-processing regions of cortex by having dogs passively view movies of faces and objects. In Study 2, dogs were trained to interact with two new objects with either the mouth or the paw. Then, we measured responsivity in the object regions to the presentation of these objects. Mouth-objects elicited significantly greater activity in object regions than paw-objects. Mouth-objects were also associated with activity in somatosensory cortex, suggesting dogs were anticipating mouthing interactions. These findings suggest that object perception in dogs is affected by how dogs expect to interact with familiar objects.
Collapse
Affiliation(s)
- Ashley Prichard
- Psychology Department, Emory University, Atlanta, Georgia, USA
| | | | | | - Veronica Chiu
- Psychology Department, Emory University, Atlanta, Georgia, USA
| | - Mark Spivak
- Comprehensive Pet Therapy, Inc., Sandy Springs, Georgia, USA
| | - Gregory S Berns
- Psychology Department, Emory University, Atlanta, Georgia, USA
| |
Collapse
|
231
|
Semantic Knowledge of Famous People and Places Is Represented in Hippocampus and Distinct Cortical Networks. J Neurosci 2021; 41:2762-2779. [PMID: 33547163 DOI: 10.1523/jneurosci.2034-19.2021] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Revised: 01/14/2021] [Accepted: 01/26/2021] [Indexed: 11/21/2022] Open
Abstract
Studies have found that anterior temporal lobe (ATL) is critical for detailed knowledge of object categories, suggesting that it has an important role in semantic memory. However, in addition to information about entities, such as people and objects, semantic memory also encompasses information about places. We tested predictions stemming from the PMAT model, which proposes there are distinct systems that support different kinds of semantic knowledge: an anterior temporal (AT) network, which represents information about entities; and a posterior medial (PM) network, which represents information about places. We used representational similarity analysis to test for activation of semantic features when human participants viewed pictures of famous people and places, while controlling for visual similarity. We used machine learning techniques to quantify the semantic similarity of items based on encyclopedic knowledge in the Wikipedia page for each item and found that these similarity models accurately predict human similarity judgments. We found that regions within the AT network, including ATL and inferior frontal gyrus, represented detailed semantic knowledge of people. In contrast, semantic knowledge of places was represented within PM network areas, including precuneus, posterior cingulate cortex, angular gyrus, and parahippocampal cortex. Finally, we found that hippocampus, which has been proposed to serve as an interface between the AT and PM networks, represented fine-grained semantic similarity for both individual people and places. Our results provide evidence that semantic knowledge of people and places is represented separately in AT and PM areas, whereas hippocampus represents semantic knowledge of both categories. SIGNIFICANCE STATEMENT Humans acquire detailed semantic knowledge about people (e.g., their occupation and personality) and places (e.g., their cultural or historical significance). While research has demonstrated that brain regions preferentially respond to pictures of people and places, less is known about whether these regions preferentially represent semantic knowledge about specific people and places. We used machine learning techniques to develop a model of semantic similarity based on information available from Wikipedia, validating the model against similarity ratings from human participants. Using our computational model, we found that semantic knowledge about people and places is represented in distinct anterior temporal and posterior medial brain networks, respectively. We further found that hippocampus, an important memory center, represented semantic knowledge for both types of stimuli.
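A minimal sketch of the text-based similarity modeling: item-by-item semantic dissimilarities are derived from encyclopedic text (here with TF-IDF features on short placeholder descriptions) and compared against a neural RDM with a rank correlation; the study's actual text models and fMRI data are not reproduced here.

```python
# Sketch of building an item-by-item semantic (dis)similarity model from
# encyclopedic text and comparing it with a neural RDM. The short placeholder
# texts and TF-IDF features stand in for the article-based models in the study.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

wiki_texts = {
    "Marie Curie": "physicist chemist radioactivity Nobel Prize Poland France",
    "Albert Einstein": "physicist relativity Nobel Prize Germany Princeton",
    "Eiffel Tower": "Paris France landmark iron tower tourism",
    "Grand Canyon": "Arizona USA canyon river landmark national park",
}

tfidf = TfidfVectorizer().fit_transform(wiki_texts.values())
semantic_rdm = 1.0 - cosine_similarity(tfidf)           # model dissimilarity

# Placeholder neural RDM (in practice: correlation distances between fMRI
# activity patterns for the same items, per region of interest).
rng = np.random.default_rng(6)
neural_rdm = semantic_rdm + 0.3 * np.abs(rng.normal(size=semantic_rdm.shape))
neural_rdm = (neural_rdm + neural_rdm.T) / 2
np.fill_diagonal(neural_rdm, 0)

rho, _ = spearmanr(squareform(semantic_rdm, checks=False),
                   squareform(neural_rdm, checks=False))
print(f"model-brain RDM correlation: rho={rho:.2f}")
```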
Collapse
|
232
|
Orban GA, Lanzilotto M, Bonini L. From Observed Action Identity to Social Affordances. Trends Cogn Sci 2021; 25:493-505. [PMID: 33745819 DOI: 10.1016/j.tics.2021.02.012] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 02/24/2021] [Accepted: 02/27/2021] [Indexed: 01/08/2023]
Abstract
Others' observed actions cause continuously changing retinal images, making it challenging to build neural representations of action identity. The monkey anterior intraparietal area (AIP) and its putative human homologue (phAIP) host neurons selective for observed manipulative actions (OMAs). The neuronal activity of both AIP and phAIP allows a stable readout of OMA identity across visual formats, but human neurons exhibit greater invariance and generalize from observed actions to action verbs. These properties stem from the convergence in AIP of superior temporal signals concerning: (i) observed body movements; and (ii) the changes in the body-object relationship. We propose that evolutionarily preserved mechanisms underlie the specification of observed-actions identity and the selection of motor responses afforded by them, thereby promoting social behavior.
Collapse
Affiliation(s)
- G A Orban
- Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - M Lanzilotto
- Department of Psychology, University of Turin, Turin, Italy
| | - L Bonini
- Department of Medicine and Surgery, University of Parma, Parma, Italy.
| |
Collapse
|
233
|
Mehrer J, Spoerer CJ, Jones EC, Kriegeskorte N, Kietzmann TC. An ecologically motivated image dataset for deep learning yields better models of human vision. Proc Natl Acad Sci U S A 2021; 118:e2011417118. [PMID: 33593900 PMCID: PMC7923360 DOI: 10.1073/pnas.2011417118] [Citation(s) in RCA: 62] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Deep neural networks provide the current best models of visual information processing in the primate brain. Drawing on work from computer vision, the most commonly used networks are pretrained on data from the ImageNet Large Scale Visual Recognition Challenge. This dataset comprises images from 1,000 categories, selected to provide a challenging testbed for automated visual object recognition systems. Moving beyond this common practice, we here introduce ecoset, a collection of >1.5 million images from 565 basic-level categories selected to better capture the distribution of objects relevant to humans. Ecoset categories were chosen to be both frequent in linguistic usage and concrete, thereby mirroring important physical objects in the world. We test the effects of training on this ecologically more valid dataset using multiple instances of two neural network architectures: AlexNet and vNet, a novel architecture designed to mimic the progressive increase in receptive field sizes along the human ventral stream. We show that training on ecoset leads to significant improvements in predicting representations in human higher-level visual cortex and perceptual judgments, surpassing the previous state of the art. Significant and highly consistent benefits are demonstrated for both architectures on two separate functional magnetic resonance imaging (fMRI) datasets and behavioral data, jointly covering responses to 1,292 visual stimuli from a wide variety of object categories. These results suggest that computational visual neuroscience may take better advantage of the deep learning framework by using image sets that reflect the human perceptual and cognitive experience. Ecoset and trained network models are openly available to the research community.
Collapse
Affiliation(s)
- Johannes Mehrer
- MRC Cognition and Brain Sciences Unit, University of Cambridge, CB2 7EF Cambridge, United Kingdom
| | - Courtney J Spoerer
- MRC Cognition and Brain Sciences Unit, University of Cambridge, CB2 7EF Cambridge, United Kingdom
| | - Emer C Jones
- MRC Cognition and Brain Sciences Unit, University of Cambridge, CB2 7EF Cambridge, United Kingdom
| | - Nikolaus Kriegeskorte
- Department of Psychology, Zuckerman Institute, Columbia University, New York, NY 10027
| | - Tim C Kietzmann
- MRC Cognition and Brain Sciences Unit, University of Cambridge, CB2 7EF Cambridge, United Kingdom;
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 XZ Nijmegen, Netherlands
| |
Collapse
|
234
|
Comparison of neuronal responses in primate inferior-temporal cortex and feed-forward deep neural network model with regard to information processing of faces. J Comput Neurosci 2021; 49:251-257. [PMID: 33595764 DOI: 10.1007/s10827-021-00778-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 12/24/2020] [Accepted: 01/27/2021] [Indexed: 10/22/2022]
Abstract
Feed-forward deep neural networks have better performance in object categorization tasks than other models of computer vision. To understand the relationship between feed-forward deep networks and the primate brain, we investigated representations of upright and inverted faces in a convolutional deep neural network model and compared them with representations by neurons in the monkey anterior inferior-temporal cortex, area TE. We applied principal component analysis to feature vectors in each model layer to visualize the relationship between the vectors of the upright and inverted faces. The vectors of the upright and inverted monkey faces were more separated through the convolution layers. In the fully-connected layers, the separation among human individuals for upright faces was larger than for inverted faces. The Spearman correlation between each model layer and TE neurons reached a maximum at the fully-connected layers. These results indicate that the processing of faces in the fully-connected layers might resemble the asymmetric representation of upright and inverted faces by the TE neurons. The separation of upright and inverted faces might take place by feed-forward processing in the visual cortex, and separations among human individuals for upright faces, which were larger than those for inverted faces, might occur in area TE.
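The comparison logic can be sketched with synthetic stand-ins for the network activations and TE responses: per-layer feature vectors are reduced with PCA (as used to visualize upright versus inverted faces) and each layer's RDM is related to a neural RDM with a Spearman correlation.

```python
# Sketch of the comparison logic: per-layer feature vectors are reduced with
# PCA for visualization, and each layer's RDM is compared with a neural RDM
# via Spearman correlation. Features and neural data are synthetic stand-ins
# for CNN activations and TE responses.
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_stim = 24                                  # e.g., 12 upright + 12 inverted faces
layers = {f"layer{i}": rng.normal(size=(n_stim, 200)) for i in range(1, 6)}
neural_rdm = pdist(rng.normal(size=(n_stim, 80)), "correlation")  # TE stand-in

for name, feats in layers.items():
    pcs = PCA(n_components=2).fit_transform(feats)     # 2-D view per layer
    layer_rdm = pdist(feats, metric="correlation")
    rho, _ = spearmanr(layer_rdm, neural_rdm)
    print(f"{name}: layer-TE RDM correlation rho = {rho:+.2f} "
          f"(PC1 spread {np.ptp(pcs[:, 0]):.1f})")
```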
Collapse
|
235
|
Liu Y, McNally GP. Dopamine and relapse to drug seeking. J Neurochem 2021; 157:1572-1584. [PMID: 33486769 DOI: 10.1111/jnc.15309] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 01/04/2021] [Accepted: 01/13/2021] [Indexed: 12/29/2022]
Abstract
The actions of dopamine are essential to relapse to drug seeking but we still lack a precise understanding of how dopamine achieves these effects. Here we review recent advances from animal models in understanding how dopamine controls relapse to drug seeking. These advances have been enabled by important developments in understanding the basic neurochemical, molecular, anatomical, physiological and functional properties of the major dopamine pathways in the mammalian brain. The literature shows that although different forms of relapse to seeking different drugs of abuse each depend on dopamine, there are distinct dopamine mechanisms for relapse. Different circuit-level mechanisms, different populations of dopamine neurons and different activity profiles within these dopamine neurons, are important for driving different forms of relapse. This diversity highlights the need to better understand when, where and how dopamine contributes to relapse behaviours.
Collapse
Affiliation(s)
- Yu Liu
- School of Psychology, UNSW Sydney, Sydney, NSW, Australia
| | | |
Collapse
|
236
|
Correspondence between Monkey Visual Cortices and Layers of a Saliency Map Model Based on a Deep Convolutional Neural Network for Representations of Natural Images. eNeuro 2021; 8:ENEURO.0200-20.2020. [PMID: 33234544 PMCID: PMC7890521 DOI: 10.1523/eneuro.0200-20.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2020] [Revised: 11/08/2020] [Accepted: 11/12/2020] [Indexed: 11/21/2022] Open
Abstract
Attentional selection is a function that allocates the brain’s computational resources to the most important part of a visual scene at a specific moment. Saliency map models have been proposed as computational models to predict attentional selection within a spatial location. Recent saliency map models based on deep convolutional neural networks (DCNNs) exhibit the highest performance for predicting the location of attentional selection and human gaze, which reflect overt attention. Trained DCNNs potentially provide insight into the perceptual mechanisms of biological visual systems. However, the relationship between artificial and neural representations used for determining attentional selection and gaze location remains unknown. To understand the mechanism underlying saliency map models based on DCNNs and the neural system of attentional selection, we investigated the correspondence between layers of a DCNN saliency map model and monkey visual areas for natural image representations. We compared the characteristics of the responses in each layer of the model with those of the neural representation in the primary visual (V1), intermediate visual (V4), and inferior temporal (IT) cortices. Regardless of the DCNN layer level, the characteristics of the responses were consistent with that of the neural representation in V1. We found marked peaks of correspondence between V1 and the early level and higher-intermediate-level layers of the model. These results provide insight into the mechanism of the trained DCNN saliency map model and suggest that the neural representations in V1 play an important role in computing the saliency that mediates attentional selection, which supports the V1 saliency hypothesis.
Collapse
|
237
|
Lee D, Almeida J. Within-category representational stability through the lens of manipulable objects. Cortex 2021; 137:282-291. [PMID: 33662692 DOI: 10.1016/j.cortex.2020.12.026] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Revised: 09/14/2020] [Accepted: 12/10/2020] [Indexed: 11/28/2022]
Abstract
Our ability to recognize an object amongst many exemplars is one of our most important features, and one that putatively distinguishes humans from non-human animals and potentially from (current) computational and artificial intelligence models. We can recognize objects consistently regardless of when we see them suggesting that we have stable representations across time and different contexts. Importantly, little is known about how humans can replicate within-category object representations across time. Here, we investigate neural stability of within-category object representations by computing the similarity between representational geometries of activity patterns for 80 images of tools obtained on different functional magnetic resonance imaging (fMRI) scanning days. We show that within-category representational stability is observable in regions that span lateral and ventral temporal cortex, inferior and superior parietal cortex, and premotor cortex - regions typically associated with tool processing and visuospatial processing. We then focus on what kinds of representations best explain the representational geometries within these regions. We test the similarity of these geometries with those coming from the different layers of a convolutional neural network, and those coming from perceived and veridical visual similarity models. We find that regions supporting within-category representational stability show stronger relationship with higher-level visual/semantic features, suggesting that neural replicability is derived from perceived and higher-level visual information. Within category representational stability may thus originate from long-range cross talk between category-specific regions (and in this case strongly within ventral and lateral temporal cortex) over more abstract, rather than veridical/lower-level, visual (sensorial) representations, and perhaps in the service of object-centered representations.
Collapse
Affiliation(s)
- Dongha Lee
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; Korea Brain Research Institute, Daegu, Republic of Korea
| | - Jorge Almeida
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal; CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Portugal.
| |
Collapse
|
238
|
Cox CR, Rogers TT. Finding Distributed Needles in Neural Haystacks. J Neurosci 2021; 41:1019-1032. [PMID: 33334868 PMCID: PMC7880292 DOI: 10.1523/jneurosci.0904-20.2020] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 12/02/2020] [Accepted: 12/04/2020] [Indexed: 11/21/2022] Open
Abstract
The human cortex encodes information in complex networks that can be anatomically dispersed and variable in their microstructure across individuals. Using simulations with neural network models, we show that contemporary statistical methods for functional brain imaging-including univariate contrast, searchlight multivariate pattern classification, and whole-brain decoding with L1 or L2 regularization-each have critical and complementary blind spots under these conditions. We then introduce the sparse-overlapping-sets (SOS) LASSO-a whole-brain multivariate approach that exploits structured sparsity to find network-distributed information-and show in simulation that it captures the advantages of other approaches while avoiding their limitations. When applied to fMRI data to find neural responses that discriminate visually presented faces from other visual stimuli, each method yields a different result, but existing approaches all support the canonical view that face perception engages localized areas in posterior occipital and temporal regions. In contrast, SOS LASSO uncovers a network spanning all four lobes of the brain. The result cannot reflect spurious selection of out-of-system areas because decoding accuracy remains exceedingly high even when canonical face and place systems are removed from the dataset. When used to discriminate visual scenes from other stimuli, the same approach reveals a localized signal consistent with other methods-illustrating that SOS LASSO can detect both widely distributed and localized representational structure. Thus, structured sparsity can provide an unbiased method for testing claims of functional localization. For faces and possibly other domains, such decoding may reveal representations more widely distributed than previously suspected. SIGNIFICANCE STATEMENT Brain systems represent information as patterns of activation over neural populations connected in networks that can be widely distributed anatomically, variable across individuals, and intermingled with other networks. We show that four widespread statistical approaches to functional brain imaging have critical blind spots in this scenario and use simulations with neural network models to illustrate why. We then introduce a new approach designed specifically to find radically distributed representations in neural networks. In simulation and in fMRI data collected in the well studied domain of face perception, the new approach discovers extensive signal missed by the other methods-suggesting that prior functional imaging work may have significantly underestimated the degree to which neurocognitive representations are distributed and variable across individuals.
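To illustrate the structured-sparsity idea on which SOS LASSO builds, the sketch below fits a non-overlapping group-lasso logistic regression with proximal gradient steps on synthetic "whole-brain" data; SOS LASSO itself uses sparse, overlapping sets, so this is only a simplified stand-in, and which groups survive depends on the regularization strength.

```python
# Minimal illustration of structured sparsity for whole-brain decoding: a
# (non-overlapping) group-lasso logistic regression fit with proximal gradient
# descent. SOS LASSO generalizes this to sparse, overlapping sets; this sketch
# only illustrates group-level selection, not the authors' method.
import numpy as np

rng = np.random.default_rng(8)
n_trials, n_groups, group_size = 300, 20, 10
n_feat = n_groups * group_size
X = rng.normal(size=(n_trials, n_feat))

w_true = np.zeros(n_feat)
for g in (2, 7, 13):                      # signal distributed over a few "regions"
    w_true[g * group_size:(g + 1) * group_size] = rng.normal(size=group_size)
y = (X @ w_true + rng.normal(size=n_trials) > 0).astype(float)

def prox_group(w, thresh):
    """Block soft-thresholding: shrink whole groups toward zero."""
    out = w.copy()
    for g in range(n_groups):
        sl = slice(g * group_size, (g + 1) * group_size)
        norm = np.linalg.norm(out[sl])
        out[sl] = 0.0 if norm <= thresh else out[sl] * (1 - thresh / norm)
    return out

w, lr, lam = np.zeros(n_feat), 0.5, 0.1       # lam controls how many groups survive
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))        # logistic-loss gradient step
    grad = X.T @ (p - y) / n_trials
    w = prox_group(w - lr * grad, lr * lam)

selected = [g for g in range(n_groups)
            if np.linalg.norm(w[g * group_size:(g + 1) * group_size]) > 1e-8]
print("true signal groups:         ", [2, 7, 13])
print("groups with nonzero weights:", selected)
```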
Collapse
Affiliation(s)
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, Louisiana 70803
| | - Timothy T Rogers
- Department of Psychology, University of Wisconsin, Madison, Wisconsin 53706
| |
Collapse
|
239
|
Abstract
Comparative neuroscience is entering the era of big data. New high-throughput methods and data-sharing initiatives have resulted in the availability of large, digital data sets containing many types of data from ever more species. Here, we present a framework for exploiting the new possibilities offered. The multimodality of the data allows vertical translations, which are comparisons of different aspects of brain organization within a single species and across scales. Horizontal translations compare particular aspects of brain organization across species, often by building abstract feature spaces. Combining vertical and horizontal translations allows for more sophisticated comparisons, including relating principles of brain organization across species by contrasting horizontal translations, and for making formal predictions of unobtainable data based on observed results in a model species.
Collapse
Affiliation(s)
- Rogier B Mars
- Wellcome Centre for Integrative Neuroimaging, Centre for fMRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford OX3 9DU, United Kingdom; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 HR Nijmegen, The Netherlands
| | - Saad Jbabdi
- Wellcome Centre for Integrative Neuroimaging, Centre for fMRI of the Brain (FMRIB), Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford OX3 9DU, United Kingdom;
| | - Matthew F S Rushworth
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, United Kingdom
| |
Collapse
|
240
|
Expert Programmers Have Fine-Tuned Cortical Representations of Source Code. eNeuro 2021; 8:ENEURO.0405-20.2020. [PMID: 33318072 PMCID: PMC7877476 DOI: 10.1523/eneuro.0405-20.2020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 11/14/2020] [Accepted: 12/01/2020] [Indexed: 11/30/2022] Open
Abstract
Expertise enables humans to achieve outstanding performance on domain-specific tasks, and programming is no exception. Many studies have shown that expert programmers exhibit remarkable differences from novices in behavioral performance, knowledge structure, and selective attention. However, the underlying differences in the brain of programmers are still unclear. We here address this issue by associating the cortical representation of source code with individual programming expertise using a data-driven decoding approach. This approach enabled us to identify seven brain regions, widely distributed in the frontal, parietal, and temporal cortices, that have a tight relationship with programming expertise. In these brain regions, functional categories of source code could be decoded from brain activity and the decoding accuracies were significantly correlated with individual behavioral performances on a source-code categorization task. Our results suggest that programming expertise is built on fine-tuned cortical representations specialized for the domain of programming.
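A generic sketch of the data-driven decoding approach: within each (synthetic) participant, source-code categories are classified from activity patterns with cross-validation, and the resulting decoding accuracies are then correlated with a behavioral measure across participants.

```python
# Generic sketch of the decoding logic: cross-validated classification of
# stimulus categories from a region's activity patterns, then relating
# per-participant decoding accuracy to behavioral performance.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from scipy.stats import spearmanr

rng = np.random.default_rng(12)
n_subjects, n_trials, n_voxels, n_categories = 20, 80, 150, 4

accuracies, behavior = [], rng.normal(size=n_subjects)
for s in range(n_subjects):
    labels = rng.integers(n_categories, size=n_trials)
    signal = np.eye(n_categories)[labels] @ rng.normal(size=(n_categories, n_voxels))
    snr = 0.3 + 0.1 * behavior[s]             # toy expertise-dependent signal level
    patterns = snr * signal + rng.normal(size=(n_trials, n_voxels))
    acc = cross_val_score(LinearSVC(max_iter=5000), patterns, labels, cv=5).mean()
    accuracies.append(acc)

rho, p = spearmanr(accuracies, behavior)
print(f"decoding accuracy vs. behavior: rho={rho:.2f}, p={p:.3g}")
```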
Collapse
|
241
|
Abstract
Credit assignment (CA) to relevant actions poses a challenge because one is often flooded with reward feedback that is not easily causally attributed. We addressed this issue in a reinforcement learning framework wherein choice is mutually controlled by value-caching model-free (MF) and prospective, planning model-based (MB) systems. We find knowledge, stored in a cognitive map, filters exuberant reward feedback to guide CA in both systems but based on different attribute dimensions. In MF, CA is boosted for outcomes that are relevant (causally related) to one’s choice, whereas in MB, CA is enhanced for outcomes that attract greater attention during the deliberation process that preceded a choice. We consider normative and mechanistic accounts, including how these processes are instrumental to adaptation. An influential reinforcement learning framework proposes that behavior is jointly governed by model-free (MF) and model-based (MB) controllers. The former learns the values of actions directly from past encounters, and the latter exploits a cognitive map of the task to calculate these prospectively. Considerable attention has been paid to how these systems interact during choice, but how and whether knowledge of a cognitive map contributes to the way MF and MB controllers assign credit (i.e., to how they revaluate actions and states following the receipt of an outcome) remains underexplored. Here, we examine such sophisticated credit assignment using a dual-outcome bandit task. We provide evidence that knowledge of a cognitive map influences credit assignment in both MF and MB systems, mediating subtly different aspects of apparent relevance. Specifically, we show MF credit assignment is enhanced for those rewards that are related to a choice, and this contrasted with choice-unrelated rewards that reinforced subsequent choices negatively. This modulation is only possible based on knowledge of task structure. On the other hand, MB credit assignment was boosted for outcomes that impacted on differences in values between offered bandits. We consider mechanistic accounts and the normative status of these findings. We suggest the findings extend the scope and sophistication of cognitive map-based credit assignment during reinforcement learning, with implications for understanding behavioral control.
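The MF/MB distinction can be made concrete with a toy two-stage example (not the paper's dual-outcome bandit or its fitted model): the model-free learner caches action values directly from reward, whereas the model-based learner updates outcome values and recomputes action values through a known transition model.

```python
# Toy contrast between model-free and model-based credit assignment in a tiny
# two-stage task. MF: cache action values directly from reward. MB: combine a
# known transition model with learned outcome values to compute action values
# prospectively at choice time.
import numpy as np

rng = np.random.default_rng(9)
n_actions, n_outcomes = 2, 2
transitions = np.array([[0.8, 0.2],           # P(outcome | action)
                        [0.2, 0.8]])
reward_prob = np.array([0.9, 0.1])            # P(reward | outcome)

alpha = 0.2                                    # learning rate
q_mf = np.zeros(n_actions)                     # cached action values
v_outcome = np.zeros(n_outcomes)               # learned outcome values

for t in range(2000):
    action = rng.integers(n_actions)           # random exploration for clarity
    outcome = rng.choice(n_outcomes, p=transitions[action])
    reward = float(rng.random() < reward_prob[outcome])

    # Model-free CA: credit goes to the chosen action only.
    q_mf[action] += alpha * (reward - q_mf[action])
    # Model-based CA: credit updates the outcome's value; action values are
    # then recomputed through the transition model.
    v_outcome[outcome] += alpha * (reward - v_outcome[outcome])

q_mb = transitions @ v_outcome
print("MF action values:", q_mf.round(2))
print("MB action values:", q_mb.round(2))      # both favor action 0 here
```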
Collapse
|
242
|
FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. [PMID: 33452225 DOI: 10.1523/jneurosci.1449-20.2020] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 12/18/2020] [Accepted: 12/22/2020] [Indexed: 01/11/2023] Open
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, Social Traits, Gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information. SIGNIFICANCE STATEMENT Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
Collapse
|
243
|
Tamè L, Tucciarelli R, Sadibolova R, Sereno MI, Longo MR. Reconstructing neural representations of tactile space. Neuroimage 2021; 229:117730. [PMID: 33454399 DOI: 10.1016/j.neuroimage.2021.117730] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 11/18/2020] [Accepted: 01/07/2021] [Indexed: 01/29/2023] Open
Abstract
Psychophysical experiments have demonstrated large and highly systematic perceptual distortions of tactile space. Such a space can be referred to our experience of the spatial organisation of objects, at representational level, through touch, in analogy with the familiar concept of visual space. We investigated the neural basis of tactile space by analysing activity patterns induced by tactile stimulation of nine points on a 3 × 3 square grid on the hand dorsum using functional magnetic resonance imaging. We used a searchlight approach within pre-defined regions of interests to compute the pairwise Euclidean distances between the activity patterns elicited by tactile stimulation. Then, we used multidimensional scaling to reconstruct tactile space at the neural level and compare it with skin space at the perceptual level. Our reconstructions of the shape of skin space in contralateral primary somatosensory and motor cortices reveal that it is distorted in a way that matches the perceptual shape of skin space. This suggests that early sensorimotor areas critically contribute to the distorted internal representation of tactile space on the hand dorsum.
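A minimal sketch of the reconstruction pipeline on synthetic data: pairwise Euclidean distances between activity patterns for the nine grid locations are embedded with multidimensional scaling, and the recovered configuration is compared with the physical 3 x 3 grid via Procrustes alignment.

```python
# Sketch of reconstructing "tactile space" from neural pattern distances:
# pairwise Euclidean distances between activity patterns for the 9 grid
# locations are embedded with MDS and compared against the physical 3x3 grid.
# Patterns are synthetic; in the study they come from searchlight fMRI data.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.spatial import procrustes
from sklearn.manifold import MDS

rng = np.random.default_rng(10)
grid = np.array([(x, y) for y in range(3) for x in range(3)], dtype=float)

# Synthetic activity patterns whose geometry is a distorted copy of the grid.
distortion = np.array([[1.4, 0.0], [0.3, 0.8]])         # hypothetical skew/stretch
latent = grid @ distortion.T
patterns = latent @ rng.normal(size=(2, 100)) + 0.2 * rng.normal(size=(9, 100))

neural_rdm = squareform(pdist(patterns, metric="euclidean"))
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(neural_rdm)

# Procrustes alignment quantifies how far the reconstructed space departs
# from the physical skin space (0 = identical shape).
_, _, disparity = procrustes(grid, embedding)
print(f"shape disparity between skin space and neural tactile space: {disparity:.2f}")
```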
Collapse
Affiliation(s)
- Luigi Tamè
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK; School of Psychology, University of Kent, Canterbury CT2 7NP, UK.
| | - Raffaele Tucciarelli
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK
| | - Renata Sadibolova
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK; Department of Psychology, Goldsmith, University of London, London, UK
| | - Martin I Sereno
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK; University College London, University of London, London, UK; San Diego State University, San Diego, USA
| | - Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, London WC1E 7HX, UK.
| |
Collapse
|
244
|
Lu Z, Ku Y. NeuroRA: A Python Toolbox of Representational Analysis From Multi-Modal Neural Data. Front Neuroinform 2021; 14:563669. [PMID: 33424573 PMCID: PMC7787009 DOI: 10.3389/fninf.2020.563669] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 12/03/2020] [Indexed: 11/26/2022] Open
Abstract
In studies of cognitive neuroscience, multivariate pattern analysis (MVPA) is widely used as it offers richer information than traditional univariate analysis. Representational similarity analysis (RSA), as one method of MVPA, has become an effective decoding method based on neural data by calculating the similarity between different representations in the brain under different conditions. Moreover, RSA is suitable for researchers to compare data from different modalities and even bridge data from different species. However, previous toolboxes have been made to fit specific datasets. Here, we develop NeuroRA, a novel and easy-to-use toolbox for representational analysis. Our toolbox aims at conducting cross-modal data analysis from multi-modal neural data (e.g., EEG, MEG, fNIRS, fMRI, and other sources of neuroelectrophysiological data), behavioral data, and computer-simulated data. Compared with previous software packages, our toolbox is more comprehensive and powerful. Using NeuroRA, users can not only calculate the representational dissimilarity matrix (RDM), which reflects the representational similarity among different task conditions, but also conduct a representational analysis among different RDMs to achieve a cross-modal comparison. In addition, users can calculate neural pattern similarity (NPS), spatiotemporal pattern similarity (STPS), and inter-subject correlation (ISC) with this toolbox. NeuroRA also provides users with functions performing statistical analysis, storage, and visualization of results. We introduce the structure, modules, features, and algorithms of NeuroRA in this paper, as well as examples applying the toolbox in published datasets.
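For readers unfamiliar with the quantities listed above, the snippet below computes a condition-by-condition RDM and a leave-one-out inter-subject correlation in plain NumPy on synthetic data; it is not NeuroRA's API, only an illustration of what those measures are.

```python
# Plain-NumPy illustration of two quantities mentioned above, an RDM and
# inter-subject correlation (ISC). This is not NeuroRA's API; it only shows
# what those measures compute, on synthetic data.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(11)
n_conditions, n_channels, n_subjects, n_time = 8, 64, 10, 500

# RDM: 1 - Pearson correlation between condition patterns (e.g., EEG or fMRI).
patterns = rng.normal(size=(n_conditions, n_channels))
rdm = squareform(pdist(patterns, metric="correlation"))
print("RDM shape:", rdm.shape)

# ISC: correlate each subject's response time course with the mean of the
# remaining subjects (leave-one-out), then average.
timecourses = rng.normal(size=(n_subjects, n_time))
shared = 0.5 * rng.normal(size=n_time)
timecourses += shared                          # inject a shared signal
isc = np.mean([np.corrcoef(timecourses[s],
                           np.delete(timecourses, s, axis=0).mean(axis=0))[0, 1]
               for s in range(n_subjects)])
print(f"leave-one-out ISC: {isc:.2f}")
```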
Collapse
Affiliation(s)
- Zitong Lu
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, China; Peng Cheng Laboratory, Shenzhen, China; Shanghai Key Laboratory of Brain Functional Genomics, Shanghai Changning-East China Normal University (ECNU) Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
| | - Yixuan Ku
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, China.,Peng Cheng Laboratory, Shenzhen, China
245
Barron HC, Mars RB, Dupret D, Lerch JP, Sampaio-Baptista C. Cross-species neuroscience: closing the explanatory gap. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190633. [PMID: 33190601 PMCID: PMC7116399 DOI: 10.1098/rstb.2019.0633] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/20/2020] [Indexed: 12/17/2022] Open
Abstract
Neuroscience has seen substantial development in non-invasive methods available for investigating the living human brain. However, these tools are limited to coarse macroscopic measures of neural activity that aggregate the diverse responses of thousands of cells. To access neural activity at the cellular and circuit level, researchers instead rely on invasive recordings in animals. Recent advances in invasive methods now permit large-scale recording and circuit-level manipulations with exquisite spatio-temporal precision. Yet, there has been limited progress in relating these microcircuit measures to complex cognition and behaviour observed in humans. Contemporary neuroscience thus faces an explanatory gap between macroscopic descriptions of the human brain and microscopic descriptions in animal models. To close the explanatory gap, we propose adopting a cross-species approach. Despite dramatic differences in the size of mammalian brains, this approach is broadly justified by preserved homology. Here, we outline a three-armed approach for effective cross-species investigation that highlights the need to translate different measures of neural activity into a common space. We discuss how a cross-species approach has the potential to transform basic neuroscience while also benefiting neuropsychiatric drug development where clinical translation has, to date, seen minimal success. This article is part of the theme issue 'Key relationships between non-invasive functional neuroimaging and the underlying neuronal activity'.
Affiliation(s)
- Helen C. Barron
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Mansfield Road, Oxford OX1 3TH, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Rogier B. Mars
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Donders Institute for Brain, Cognition and Behavior, Radboud University, 6525 AJ Nijmegen, The Netherlands
- David Dupret
- Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, Mansfield Road, Oxford OX1 3TH, UK
- Jason P. Lerch
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada M5G 1L7
- Cassandra Sampaio-Baptista
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, FMRIB, John Radcliffe Hospital, Oxford OX3 9DU, UK
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
246
Moran R, Keramati M, Dolan RJ. Model based planners reflect on their model-free propensities. PLoS Comput Biol 2021; 17:e1008552. [PMID: 33411724 PMCID: PMC7817042 DOI: 10.1371/journal.pcbi.1008552] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 01/20/2021] [Accepted: 11/23/2020] [Indexed: 12/19/2022] Open
Abstract
Dual-system reinforcement learning theory proposes that behaviour is under the tutelage of a retrospective, value-caching, model-free (MF) system and a prospective, planning, model-based (MB) system. This architecture raises a question as to the degree to which, when devising a plan, an MB controller takes account of influences from its MF counterpart. We present evidence that such a sophisticated, self-reflective MB planner incorporates an anticipation of the influences its own MF proclivities exert on the execution of its planned future actions. Using a novel bandit task, wherein subjects were periodically allowed to design their environment, we show that reward assignments were constructed in a manner consistent with an MB system taking account of its MF propensities. Thus, in the task, participants assigned higher rewards to bandits that were momentarily associated with stronger MF tendencies. Our findings have implications for a range of decision-making domains, including drug abuse, pre-commitment, and the tension between short- and long-term decision horizons in economics.
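A toy sketch of the dual-system architecture referred to here, a value-caching MF learner combined with MB values at decision time, is given below. It is illustrative only: the weighting, learning rate, and bandit values are invented, and the paper's key "self-reflective" step, in which the planner anticipates its own MF bias, is not implemented.

```python
# Toy hybrid MB/MF agent: cached MF values updated by a TD rule,
# combined with MB values in a softmax choice rule.
import numpy as np

n_bandits, alpha, w = 4, 0.3, 0.6          # learning rate and MB weight (illustrative)
q_mf = np.zeros(n_bandits)                 # retrospective, value-caching system
q_mb = np.array([0.2, 0.5, 0.1, 0.7])      # prospective values from a (given) model

def choose(beta=3.0, rng=np.random.default_rng(2)):
    q = w * q_mb + (1 - w) * q_mf          # hybrid decision value
    p = np.exp(beta * q) / np.exp(beta * q).sum()
    return rng.choice(n_bandits, p=p)

a = choose()
reward = np.random.default_rng(3).random() < q_mb[a]    # placeholder outcome
q_mf[a] += alpha * (float(reward) - q_mf[a])            # TD update of the MF cache
```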
Affiliation(s)
- Rani Moran
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Mehdi Keramati
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Department of Psychology, City, University of London, London, United Kingdom
- Raymond J. Dolan
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
247
Levine SM, Schwarzbach JV. Individualizing Representational Similarity Analysis. Front Psychiatry 2021; 12:729457. [PMID: 34707520 PMCID: PMC8542717 DOI: 10.3389/fpsyt.2021.729457] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 09/10/2021] [Indexed: 11/13/2022] Open
Abstract
Representational similarity analysis (RSA) is a popular multivariate analysis technique in cognitive neuroscience that uses functional neuroimaging to investigate the informational content encoded in brain activity. As RSA is increasingly being used to investigate more clinically-geared questions, the focus of such translational studies turns toward the importance of individual differences and their optimization within the experimental design. In this perspective, we focus on two design aspects: applying individual vs. averaged behavioral dissimilarity matrices to multiple participants' neuroimaging data and ensuring the congruency between tasks when measuring behavioral and neural representational spaces. Incorporating these methods permits the detection of individual differences in representational spaces and yields a better-defined transfer of information from representational spaces onto multivoxel patterns. Such design adaptations are prerequisites for optimal translation of RSA to the field of precision psychiatry.
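The first design contrast (individual vs. averaged behavioural dissimilarity matrices) can be made concrete with a short sketch. The RDMs below are random placeholders, and the paired statistical comparison is only one of several reasonable choices.

```python
# Correlate each participant's neural RDM with (a) their own behavioural RDM
# and (b) the group-averaged behavioural RDM, then compare the two fits.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

rng = np.random.default_rng(4)
n_subj, n_cond = 20, 10
triu = np.triu_indices(n_cond, k=1)

behav_rdms = rng.random((n_subj, n_cond, n_cond))      # placeholder behavioural RDMs
neural_rdms = rng.random((n_subj, n_cond, n_cond))     # placeholder neural RDMs
group_rdm = behav_rdms.mean(axis=0)

own_fit = [spearmanr(neural_rdms[s][triu], behav_rdms[s][triu])[0] for s in range(n_subj)]
avg_fit = [spearmanr(neural_rdms[s][triu], group_rdm[triu])[0] for s in range(n_subj)]

print("individual vs. averaged model:", wilcoxon(own_fit, avg_fit))
```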
Affiliation(s)
- Seth M Levine
- Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Jens V Schwarzbach
- Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany
248
Gärdenfors P. Primary Cognitive Categories Are Determined by Their Invariances. Front Psychol 2020; 11:584017. [PMID: 33363496 PMCID: PMC7753358 DOI: 10.3389/fpsyg.2020.584017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Accepted: 11/13/2020] [Indexed: 11/18/2022] Open
Abstract
The world as we perceive it is structured into objects, actions and places that form parts of events. In this article, my aim is to explain why these categories are cognitively primary. From an empiricist and evolutionary standpoint, it is argued that the reduction of the complexity of sensory signals is based on the brain's capacity to identify various types of invariances that are evolutionarily relevant for the activities of the organism. The first aim of the article is to explain why places, objects and actions are primary cognitive categories in our constructions of the external world. It is shown that the invariances that determine these categories have their own separate characteristics and that they are, by and large, independent of each other. This separation is supported by what is known about the neural mechanisms. The second aim is to show that the category of events can be analyzed as being constituted of the primary categories. The category of numbers is briefly discussed. Some implications for computational models of the categories are also presented.
Affiliation(s)
- Peter Gärdenfors
- Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden; Faculty of Humanities, Palaeo-Research Institute, University of Johannesburg, Johannesburg, South Africa
249
Ubaldi S, Fairhall SL. fMRI-Indexed neural temporal tuning reveals the hierarchical organisation of the face and person selective network. Neuroimage 2020; 227:117690. [PMID: 33385559 PMCID: PMC7611695 DOI: 10.1016/j.neuroimage.2020.117690] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 12/24/2020] [Accepted: 12/25/2020] [Indexed: 11/04/2022] Open
Abstract
Recognising and knowing about conspecifics is vital to human interaction and is served in the brain by a well-characterised cortical network. Understanding the temporal dynamics of this network is critical to gaining insight into both hierarchical organisation and regional coordination. Here, we combine the high spatial resolution of fMRI with a paradigm that permits investigation of differential temporal tuning across cortical regions. We cognitively under- and overload the system using the rapid presentation (100–1200 ms) of famous faces and buildings. We observed an increase in activity as presentation rates slowed and a negative deflection when inter-stimulus intervals (ISIs) were extended to longer periods. The primary distinction in tuning patterns was between core (perceptual) and extended (non-perceptual) systems, but there was also evidence for nested hierarchies within systems, as well as indications of widespread parallel processing. Extended regions demonstrated common temporal tuning across regions, which may indicate coordinated activity as they cooperate to manifest the diverse cognitive representations accomplished by this network. With the support of an additional psychophysical study, we demonstrated that the ISIs necessary for different levels of semantic access are consistent with these temporal tuning patterns. Collectively, these results show that regions of the person-knowledge network operate over different timescales, consistent with hierarchical organisation.
Affiliation(s)
- Silvia Ubaldi
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN 38068, Italy
- Scott L Fairhall
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, Rovereto, TN 38068, Italy.
250
Alfred KL, Hillis ME, Kraemer DJM. Individual Differences in the Neural Localization of Relational Networks of Semantic Concepts. J Cogn Neurosci 2020; 33:390-401. [PMID: 33284078 DOI: 10.1162/jocn_a_01657] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Semantic concepts relate to each other to varying degrees to form a network of zero-order relations, and these zero-order relations serve as input into networks of general relation types as well as higher order relations. Previous work has studied the neural mapping of semantic concepts across domains, although much work remains to be done to understand how the localization and structure of those architectures differ depending on various individual differences in attentional bias toward different content presentation formats. Using an item-wise model of semantic distance of zero-order relations (Word2vec) between stimuli (presented both in word and picture forms), we used representational similarity analysis to identify individual differences in the neural localization of semantic concepts and how those localization differences can be predicted by individual variance in the degree to which individuals attend to word information instead of pictures. Importantly, there were no reliable representations of this zero-order semantic relational network when looking at the full group, and it was only through considering individual differences that a stable localization difference became evident. These results indicate that individual differences in the degree to which a person habitually attends to word information instead of picture information substantially affects the neural localization of zero-order semantic representations.
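The item-wise model-RDM step described here can be sketched as follows. The word vectors are random stand-ins for pretrained Word2vec embeddings, and the stimulus list and neural RDM are hypothetical.

```python
# Build a semantic model RDM from pairwise cosine distances between word
# vectors, then compare it with a neural RDM via rank correlation.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
stimuli = ["dog", "cat", "hammer", "violin", "apple", "chair"]   # hypothetical items
word_vecs = {w: rng.standard_normal(300) for w in stimuli}       # stand-ins for Word2vec vectors

model_rdm = squareform(pdist(np.array([word_vecs[w] for w in stimuli]), metric="cosine"))

neural_rdm = rng.random((len(stimuli), len(stimuli)))            # placeholder neural RDM
triu = np.triu_indices(len(stimuli), k=1)
rho, p = spearmanr(model_rdm[triu], neural_rdm[triu])
print(f"semantic-model fit: rho = {rho:.2f}")
```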