151
Rust NC, Jannuzi BGL. Identifying Objects and Remembering Images: Insights From Deep Neural Networks. Curr Dir Psychol Sci 2022. [DOI: 10.1177/09637214221083663]
Abstract
People have a remarkable ability to identify the objects that they are looking at, as well as remember the images that they have seen. Researchers know that high-level visual cortex contributes in important ways to supporting both of these functions, but developing models that describe how processing in high-level visual cortex supports these behaviors has been challenging. Recent breakthroughs in this modeling effort have arrived by way of the illustration that deep artificial neural networks trained to categorize objects, developed for computer vision purposes, reflect brainlike patterns of activity. Here we summarize how deep artificial neural networks have been used to gain important insights into the contributions of high-level visual cortex to object identification, as well as one characteristic of visual memory behavior: image memorability, the systematic variation with which some images are remembered better than others.
152
Coggan DD, Watson DM, Wang A, Brownbridge R, Ellis C, Jones K, Kilroy C, Andrews TJ. The representation of shape and texture in category-selective regions of ventral-temporal cortex. Eur J Neurosci 2022; 56:4107-4120. [PMID: 35703007] [PMCID: PMC9545892] [DOI: 10.1111/ejn.15737]
Abstract
Neuroimaging studies using univariate and multivariate approaches have shown that the fusiform face area (FFA) and parahippocampal place area (PPA) respond selectively to images of faces and places. The aim of this study was to determine the extent to which this selectivity to faces or places is based on the shape or texture properties of the images. Faces and houses were filtered to manipulate their texture properties, while preserving the shape properties (spatial envelope) of the images. In Experiment 1, multivariate pattern analysis (MVPA) showed that patterns of fMRI response to faces and houses in FFA and PPA were predicted by the shape properties, but not by the texture properties of the image. In Experiment 2, a univariate analysis (fMR‐adaptation) showed that responses in the FFA and PPA were sensitive to changes in both the shape and texture properties of the image. These findings can be explained by the spatial scale of the representation of images in the FFA and PPA. At a coarser scale (revealed by MVPA), the neural selectivity to faces and houses is sensitive to variation in the shape properties of the image. However, at a finer scale (revealed by fMR‐adaptation), the neural selectivity is sensitive to the texture properties of the image. By combining these neuroimaging paradigms, our results provide insights into the spatial scale of the neural representation of faces and places in the ventral‐temporal cortex.
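To make the multivariate logic concrete, here is a minimal, illustrative sketch of correlation-based MVPA on simulated voxel patterns: it asks whether cross-half pattern correlations group by shape or by texture. The condition labels and arrays are hypothetical and this is not the authors' actual pipeline.

```python
# Minimal sketch of correlation-based MVPA: do response patterns group by
# shape or by texture? Arrays are hypothetical (conditions x voxels).
import numpy as np

rng = np.random.default_rng(0)
n_vox = 200
# Hypothetical ROI patterns for 4 conditions in two independent halves of the data:
# rows: face/intact texture, face/filtered texture, house/intact texture, house/filtered texture
half1 = rng.normal(size=(4, n_vox))
half2 = rng.normal(size=(4, n_vox))

def pattern_corr(a, b):
    """Pearson correlation between two voxel patterns."""
    return np.corrcoef(a, b)[0, 1]

# Same-shape pairs (face-face, house-house) across the texture manipulation
same_shape = np.mean([pattern_corr(half1[0], half2[1]), pattern_corr(half1[1], half2[0]),
                      pattern_corr(half1[2], half2[3]), pattern_corr(half1[3], half2[2])])
# Same-texture pairs (intact-intact, filtered-filtered) across the shape categories
same_texture = np.mean([pattern_corr(half1[0], half2[2]), pattern_corr(half1[2], half2[0]),
                        pattern_corr(half1[1], half2[3]), pattern_corr(half1[3], half2[1])])

print(f"same-shape r = {same_shape:.3f}, same-texture r = {same_texture:.3f}")
# If same-shape correlations exceed same-texture correlations, the ROI's
# multivoxel patterns are organized by shape rather than texture.
```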
Affiliation(s)
- David D Coggan
- Department of Psychology, University of York, York, UK; Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
- Ao Wang
- Department of Psychology, University of York, York, UK
- Kathryn Jones
- Department of Psychology, University of York, York, UK
153
Ueda R. Neural Processing of Facial Attractiveness and Romantic Love: An Overview and Suggestions for Future Empirical Studies. Front Psychol 2022; 13:896514. [PMID: 35774950] [PMCID: PMC9239166] [DOI: 10.3389/fpsyg.2022.896514]
Abstract
Romantic love is universally observed in human communities, and the manner in which a person chooses a long-term romantic partner has been a central question in studies on close relationships. Numerous empirical psychological studies have demonstrated that facial attractiveness greatly impacts initial romantic attraction. This close link was further investigated by neuroimaging studies showing that both viewing attractive faces and having romantic thoughts recruit the reward system. However, it remains unclear how our brains integrate perceived facial attractiveness into initial romantic attraction. In addition, it remains unclear how our brains shape a persistent attraction to a particular person through interactions; this persistent attraction is hypothesized to contribute to a long-term relationship. After reviewing related studies, I introduce methodologies that could help address these questions.
Affiliation(s)
- Ryuhei Ueda
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan
154
Neural signature of the perceptual decision in the neural population responses of the inferior temporal cortex. Sci Rep 2022; 12:8628. [PMID: 35606516] [PMCID: PMC9127116] [DOI: 10.1038/s41598-022-12236-y]
Abstract
Rapid categorization of visual objects is critical for comprehending our complex visual world. The role of individual cortical neurons and neural populations in categorizing visual objects during passive vision has previously been studied. However, it is unclear whether and how perceptually guided behaviors affect the encoding of stimulus categories by neural population activity in the higher visual cortex. Here we studied the activity of the inferior temporal (IT) cortical neurons in macaque monkeys during both passive viewing and categorization of ambiguous body and object images. We found enhanced category information in the IT neural population activity during the correct, but not wrong, trials of the categorization task compared to the passive task. This encoding enhancement was task difficulty dependent with progressively larger values in trials with more ambiguous stimuli. Enhancement of IT neural population information for behaviorally relevant stimulus features suggests IT neural networks' involvement in perceptual decision-making behavior.
155
Kaniuth P, Hebart MN. Feature-reweighted representational similarity analysis: A method for improving the fit between computational models, brains, and behavior. Neuroimage 2022; 257:119294. [PMID: 35580810] [DOI: 10.1016/j.neuroimage.2022.119294]
Abstract
Representational Similarity Analysis (RSA) has emerged as a popular method for relating representational spaces from human brain activity, behavioral data, and computational models. RSA is based on the comparison of representational (dis-)similarity matrices (RDM or RSM), which characterize the pairwise (dis-)similarities of all conditions across all features (e.g. fMRI voxels or units of a model). However, classical RSA treats each feature as equally important. This 'equal weights' assumption contrasts with the flexibility of multivariate decoding, which reweights individual features for predicting a target variable. As a consequence, classical RSA may lead researchers to underestimate the correspondence between a model and a brain region and, in case of model comparison, may lead them to select an inferior model. The aim of this work is twofold: First, we sought to broadly test feature-reweighted RSA (FR-RSA) applied to computational models and reveal the extent to which reweighting model features improves RSM correspondence and affects model selection. Previous work suggested that reweighting can improve model selection in RSA but it has remained unclear to what extent these results generalize across datasets and data modalities. To draw more general conclusions, we utilized a range of publicly available datasets and three popular deep neural networks (DNNs). Second, we propose voxel-reweighted RSA, a novel use case of FR-RSA that reweights fMRI voxels, mirroring the rationale of multivariate decoding of optimally combining voxel activity patterns. We found that reweighting individual model units markedly improved the fit between model RSMs and target RSMs derived from several fMRI and behavioral datasets and affected model selection, highlighting the importance of considering FR-RSA. For voxel-reweighted RSA, improvements in RSM correspondence were even more pronounced, demonstrating the utility of this novel approach. We additionally show that classical noise ceilings can be exceeded when FR-RSA is applied and propose an updated approach for their computation. Taken together, our results broadly validate the use of FR-RSA for improving the fit between computational models, brain, and behavioral data, possibly allowing us to better adjudicate between competing computational models. Further, our results suggest that FR-RSA applied to brain measurement channels could become an important new method to assess the correspondence between representational spaces.
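The core idea of FR-RSA can be sketched in a few lines: learn weights over model units so that a weighted combination of unit-wise dissimilarities best predicts a target RDM. The sketch below uses simulated data and non-negative least squares as one possible fitting choice; it is not the authors' released implementation, and cross-validation is omitted for brevity.

```python
# Minimal sketch of feature-reweighted RSA (FR-RSA): learn non-negative weights
# over model units so that the weighted sum of unit-wise dissimilarities best
# predicts a target RDM. Data are simulated; this is not the authors' toolbox.
import numpy as np
from scipy.optimize import nnls
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_cond, n_units = 30, 50
model_act = rng.normal(size=(n_cond, n_units))   # model activations (conditions x units)
true_w = rng.uniform(0, 1, n_units)              # hidden "importance" of each unit

iu = np.triu_indices(n_cond, k=1)                # upper-triangle indices of the RDM

# Per-unit dissimilarity for every condition pair: squared activation difference
unit_rdms = (model_act[:, None, :] - model_act[None, :, :]) ** 2   # (cond, cond, units)
X = unit_rdms[iu]                                # (n_pairs, n_units) predictors

# Target RDM: weighted combination of unit RDMs plus noise (stands in for brain data)
y = X @ true_w + rng.normal(scale=0.5, size=X.shape[0])

# Classical RSA: every unit contributes equally
rdm_equal = X.mean(axis=1)
# FR-RSA: fit non-negative weights (cross-validation omitted for brevity)
w_hat, _ = nnls(X, y)
rdm_weighted = X @ w_hat

print("classical RSA r  =", round(pearsonr(rdm_equal, y)[0], 3))
print("reweighted RSA r =", round(pearsonr(rdm_weighted, y)[0], 3))
```

In practice the weights would be estimated and evaluated on independent folds of condition pairs so that the improvement over classical RSA is not driven by overfitting.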
Affiliation(s)
- Philipp Kaniuth
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Martin N Hebart
- Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
156
Ciarlo A, Russo AG, Ponticorvo S, Di Salle F, Lührs M, Goebel R, Esposito F. Semantic fMRI neurofeedback: a multi-subject study at 3 Tesla. J Neural Eng 2022; 19. [PMID: 35561669] [DOI: 10.1088/1741-2552/ac6f81]
Abstract
OBJECTIVE: Real-time fMRI neurofeedback is a non-invasive procedure allowing the self-regulation of brain functions via enhanced self-control of fMRI-based neural activation. In semantic real-time fMRI neurofeedback, an estimated relation between multivariate fMRI activation patterns and abstract mental states is exploited for a multi-dimensional feedback stimulus via real-time representational similarity analysis (rt-RSA). Here, we assessed the performance of this framework in a multi-subject, multi-session study on a 3T clinical MRI scanner.
APPROACH: Eighteen healthy volunteers underwent two semantic real-time fMRI neurofeedback sessions on two different days. In each session, participants were first requested to engage in specific mental states while local fMRI patterns of brain activity were recorded during stimulated mental imagery of concrete objects (pattern generation). The obtained neural representations were then to be replicated and modulated by the participants in subsequent runs of the same session under the guidance of an rt-RSA-generated visual feedback (pattern modulation). Performance indicators were derived from the rt-RSA output to assess individual abilities in replicating (and maintaining over time) a target pattern. Simulations were carried out to assess the impact of the geometric distortions implied by the low-dimensional representation of pattern dissimilarities in the visual feedback.
MAIN RESULTS: Sixteen subjects successfully completed both semantic real-time fMRI neurofeedback sessions. For some performance indicators, a significant improvement between the first and second runs and increasing modulation performance within runs were observed, whereas no improvements were found between sessions. Simulations confirmed that in a small percentage of cases the visual feedback could be affected by metric distortions due to the dimensionality reduction implicit in the rt-RSA approach.
SIGNIFICANCE: Our results prove the feasibility of semantic real-time fMRI neurofeedback at 3T, showing that subjects can successfully modulate and maintain a target mental state guided by rt-RSA-derived feedback. Further development is needed to encourage future clinical applications.
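As an illustration of the rt-RSA feedback principle, the sketch below computes correlation distances between a current multivoxel pattern and stored target patterns and projects them to two dimensions for display. All patterns are simulated and this is not the study's actual real-time software.

```python
# Minimal sketch of the rt-RSA feedback idea: compare the current multivoxel
# pattern with stored target patterns via correlation distance, then project the
# dissimilarity structure to 2-D so it can be displayed as feedback.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_vox = 300
targets = rng.normal(size=(4, n_vox))                     # target patterns from the "generation" runs
current = targets[1] + rng.normal(scale=0.8, size=n_vox)  # pattern estimated from the current block

patterns = np.vstack([targets, current])
rdm = squareform(pdist(patterns, metric="correlation"))   # pairwise correlation distances

# Low-dimensional (2-D) layout for the visual feedback display.
# Note: this projection can distort some distances, as discussed in the abstract.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(rdm)

print("distance of current pattern to each target:", np.round(rdm[-1, :4], 3))
print("2-D feedback coordinates:\n", np.round(coords, 2))
```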
Affiliation(s)
- Assunta Ciarlo
- University of Salerno - Baronissi Campus, Via S. Allende, Baronissi, Campania, 84081, ITALY
- Sara Ponticorvo
- University of Salerno - Baronissi Campus, Via S. Allende, Baronissi, Campania, 84081, ITALY
- Francesco Di Salle
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno - Baronissi Campus, Via S. Allende, Baronissi, Campania, 84081, ITALY
- Michael Lührs
- Department of Cognitive Neuroscience, Maastricht University, P.O. Box 616, Maastricht, Limburg, 6200 MD, NETHERLANDS
- Rainer Goebel
- Faculty of Psychology, University of Maastricht, P.O. Box 616, 6200 MD Maastricht, NETHERLANDS
- Fabrizio Esposito
- Department of Advanced Medical and Surgical Sciences, University of Campania Luigi Vanvitelli School of Medicine and Surgery, Piazza L. Miraglia, Napoli, 80138, ITALY
157
Petzka M, Chatburn A, Charest I, Balanos GM, Staresina BP. Sleep spindles track cortical learning patterns for memory consolidation. Curr Biol 2022; 32:2349-2356.e4. [PMID: 35561681] [DOI: 10.1016/j.cub.2022.04.045]
Abstract
Memory consolidation-the transformation of labile memory traces into stable long-term representations-is facilitated by post-learning sleep. Computational and biophysical models suggest that sleep spindles may play a key mechanistic role for consolidation, igniting structural changes at cortical sites involved in prior learning. Here, we tested the resulting prediction that spindles are most pronounced over learning-related cortical areas and that the extent of this learning-spindle overlap predicts behavioral measures of memory consolidation. Using high-density scalp electroencephalography (EEG) and polysomnography (PSG) in healthy volunteers, we first identified cortical areas engaged during a temporospatial associative memory task (power decreases in the alpha/beta frequency range, 6-20 Hz). Critically, we found that participant-specific topographies (i.e., spatial distributions) of post-learning sleep spindle amplitude correlated with participant-specific learning topographies. Importantly, the extent to which spindles tracked learning patterns further predicted memory consolidation across participants. Our results provide empirical evidence for a role of post-learning sleep spindles in tracking learning networks, thereby facilitating memory consolidation.
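The topography-overlap analysis can be illustrated with a short sketch: correlate learning and spindle topographies within each participant, then relate that overlap to consolidation across participants. All arrays below are simulated placeholders, not the study's data.

```python
# Minimal sketch of the topography-overlap logic: correlate each participant's
# electrode-wise learning map (alpha/beta power decrease) with their electrode-wise
# spindle-amplitude map, then relate that overlap to memory retention across
# participants. All arrays are simulated placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subj, n_elec = 20, 64
learning_topo = rng.normal(size=(n_subj, n_elec))                        # learning-related power change per electrode
spindle_topo = learning_topo * 0.5 + rng.normal(size=(n_subj, n_elec))   # spindle amplitude per electrode
retention = rng.normal(size=n_subj)                                      # behavioral consolidation score

# Within-participant spatial correlation between learning and spindle topographies
overlap = np.array([pearsonr(learning_topo[s], spindle_topo[s])[0] for s in range(n_subj)])

# Across participants: does stronger learning-spindle overlap predict better retention?
r, p = pearsonr(overlap, retention)
print(f"overlap-retention correlation: r = {r:.2f}, p = {p:.3f}")
```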
Affiliation(s)
- Marit Petzka
- School of Psychology and Centre for Human Brain Health, University of Birmingham, Birmingham, UK; Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Alex Chatburn
- Cognitive and Systems Neuroscience Research Hub, University of South Australia, Adelaide, SA, Australia
- Ian Charest
- Department of Psychology, University of Montreal, Montreal, QC, Canada
- George M Balanos
- School of Sport, Exercise and Rehabilitation, University of Birmingham, Birmingham, UK
- Bernhard P Staresina
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
158
Gurariy G, Mruczek REB, Snow JC, Caplovitz GP. Using High-Density Electroencephalography to Explore Spatiotemporal Representations of Object Categories in Visual Cortex. J Cogn Neurosci 2022; 34:967-987. [PMID: 35286384] [PMCID: PMC9169880] [DOI: 10.1162/jocn_a_01845]
Abstract
Visual object perception involves neural processes that unfold over time and recruit multiple regions of the brain. Here, we use high-density EEG to investigate the spatiotemporal representations of object categories across the dorsal and ventral pathways. In Experiment 1, human participants were presented with images from two animate object categories (birds and insects) and two inanimate categories (tools and graspable objects). In Experiment 2, participants viewed images of tools and graspable objects from a different stimulus set, one in which a shape confound that often exists between these categories (elongation) was controlled for. To explore the temporal dynamics of object representations, we employed time-resolved multivariate pattern analysis on the EEG time series data. This was performed at the electrode level as well as in source space of two regions of interest: one encompassing the ventral pathway and another encompassing the dorsal pathway. Our results demonstrate that shape, exemplar, and category information can be decoded from the EEG signal. Multivariate pattern analysis within source space revealed that both dorsal and ventral pathways contain information pertaining to shape, inanimate object categories, and animate object categories. Of particular interest, we note striking similarities obtained in both ventral stream and dorsal stream regions of interest. These findings provide insight into the spatiotemporal dynamics of object representation and contribute to a growing literature that has begun to redefine the traditional role of the dorsal pathway.
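A minimal sketch of time-resolved MVPA of this kind is shown below, assuming simulated EEG data (trials x channels x time) and a generic linear classifier rather than the authors' exact decoding setup.

```python
# Minimal sketch of time-resolved MVPA on EEG: train and test a linear classifier
# at every time point and track decoding accuracy over time. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_chan, n_time = 200, 64, 100
X = rng.normal(size=(n_trials, n_chan, n_time))
y = rng.integers(0, 2, n_trials)            # e.g. tools (0) vs graspable objects (1)
X[y == 1, :, 40:60] += 0.3                  # inject a category effect between samples 40-60

acc = np.zeros(n_time)
for t in range(n_time):
    clf = LogisticRegression(max_iter=1000)
    acc[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy %.2f at time index %d" % (acc.max(), acc.argmax()))
# Repeating this in sensor space and in source-space ROIs (ventral vs dorsal)
# yields the time courses of category information compared in the study.
```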
159
Recognition of pareidolic objects in developmental prosopagnosic and neurotypical individuals. Cortex 2022; 153:21-31. [DOI: 10.1016/j.cortex.2022.04.011]
160
Liu X, Zou X, Ji Z, Tian G, Mi Y, Huang T, Michael Wong K, Wu S. Neural feedback facilitates rough-to-fine information retrieval. Neural Netw 2022; 151:349-364. [DOI: 10.1016/j.neunet.2022.03.042]
161
van Baar JM, FeldmanHall O. The polarized mind in context: Interdisciplinary approaches to the psychology of political polarization. Am Psychol 2022; 77:394-408. [PMID: 34060885] [PMCID: PMC8630091] [DOI: 10.1037/amp0000814]
Abstract
Existing research into the psychological roots of political polarization centers around two main approaches: one studying cognitive traits that predict susceptibility to holding polarized beliefs and one studying contextual influences that spread and reinforce polarized attitudes. Although both accounts have made valuable progress, political polarization is neither a purely cognitive trait nor a contextual issue. We argue that a new approach aiming to uncover interactions between cognition and context will be fruitful for understanding how polarization arises. Furthermore, recent developments in neuroimaging methods can overcome long-standing issues of measurement and ecological validity to critically help identify in which psychological processing steps-e.g., attention, semantic understanding, emotion-polarization takes hold. This interdisciplinary research agenda can thereby provide new avenues for interventions against the political polarization that plagues democracies around the world. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Affiliation(s)
- Jeroen M. van Baar
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St, Providence, RI 02912, United States
- Oriel FeldmanHall
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, 190 Thayer St, Providence, RI 02912, United States
- Carney Institute for Brain Science, Brown University, 164 Angell Street, Providence, RI 02912, United States
162
Skocypec RM, Peterson MA. Semantic Expectation Effects on Object Detection: Using Figure Assignment to Elucidate Mechanisms. Vision (Basel) 2022; 6:vision6010019. [PMID: 35324604] [PMCID: PMC8953613] [DOI: 10.3390/vision6010019]
Abstract
Recent evidence suggesting that object detection is improved following valid rather than invalid labels implies that semantics influence object detection. It is not clear, however, whether the results index object detection or feature detection. Further, because control conditions were absent and labels and objects were repeated multiple times, the mechanisms are unknown. We assessed object detection via figure assignment, whereby objects are segmented from backgrounds. Masked bipartite displays depicting a portion of a mono-oriented object (a familiar configuration) on one side of a central border were shown once only for 90 or 100 ms. Familiar configuration is a figural prior. Accurate detection was indexed by reports of an object on the familiar configuration side of the border. Compared to control experiments without labels, valid labels improved accuracy and reduced response times (RTs) more for upright than inverted objects (Studies 1 and 2). Invalid labels denoting different superordinate-level objects (DSC; Study 1) or same superordinate-level objects (SSC; Study 2) reduced accuracy for upright displays only. Orientation dependency indicates that effects are mediated by activated object representations rather than features which are invariant over orientation. Following invalid SSC labels (Study 2), accurate detection RTs were longer than control for both orientations, implicating conflict between semantic representations that had to be resolved before object detection. These results demonstrate that object detection is not just affected by semantics, it entails semantics.
Affiliation(s)
- Rachel M. Skocypec
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
- Mary A. Peterson
- Visual Perception Lab, Department of Psychology, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Cognitive Science Program, School of Mind, Brain and Behavior, University of Arizona, Tucson, AZ 85721, USA
- Correspondence: (R.M.S.); (M.A.P.)
163
Ferko KM, Blumenthal A, Martin CB, Proklova D, Minos AN, Saksida LM, Bussey TJ, Khan AR, Köhler S. Activity in perirhinal and entorhinal cortex predicts perceived visual similarities among category exemplars with highest precision. eLife 2022; 11:66884. [PMID: 35311645] [PMCID: PMC9020819] [DOI: 10.7554/elife.66884]
Abstract
Vision neuroscience has made great strides in understanding the hierarchical organization of object representations along the ventral visual stream (VVS). How VVS representations capture fine-grained visual similarities between objects that observers subjectively perceive has received limited examination so far. In the current study, we addressed this question by focussing on perceived visual similarities among subordinate exemplars of real-world categories. We hypothesized that these perceived similarities are reflected with highest fidelity in neural activity patterns downstream from inferotemporal regions, namely in perirhinal (PrC) and anterolateral entorhinal cortex (alErC) in the medial temporal lobe. To address this issue with functional magnetic resonance imaging (fMRI), we administered a modified 1-back task that required discrimination between category exemplars as well as categorization. Further, we obtained observer-specific ratings of perceived visual similarities, which predicted behavioural discrimination performance during scanning. As anticipated, we found that activity patterns in PrC and alErC predicted the structure of perceived visual similarity relationships among category exemplars, including its observer-specific component, with higher precision than any other VVS region. Our findings provide new evidence that subjective aspects of object perception that rely on fine-grained visual differentiation are reflected with highest fidelity in the medial temporal lobe.
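The central comparison, relating an observer's perceived-similarity structure to a region's activity-pattern structure, can be sketched as follows; the ROI patterns and ratings below are simulated stand-ins for the real data.

```python
# Minimal sketch of the core comparison: does a region's pattern-dissimilarity
# structure predict an observer's perceived-similarity ratings for the same
# exemplars? Inputs are simulated placeholders for one ROI and one observer.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_exemplars, n_vox = 10, 150
roi_patterns = rng.normal(size=(n_exemplars, n_vox))      # one pattern per exemplar

neural_rdm = pdist(roi_patterns, metric="correlation")     # vectorized neural RDM
# Perceived dissimilarity (1 - rated similarity), partly tracking the neural RDM
perceived_rdm = neural_rdm + rng.normal(scale=0.3, size=neural_rdm.shape)

rho, p = spearmanr(neural_rdm, perceived_rdm)
print(f"neural-behavioral RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
# Repeating this for each ROI (e.g., PrC, alErC, earlier VVS regions) and comparing
# the correlations indicates where perceived similarity is captured most precisely.
```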
Affiliation(s)
- Kayla M Ferko
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada
- Anna Blumenthal
- Brain and Mind Institute, University of Western Ontario, London, Canada; Cervo Brain Research Center, University of Laval, Quebec, Canada
- Chris B Martin
- Department of Psychology, Florida State University, Tallahassee, United States
- Daria Proklova
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Alexander N Minos
- Brain and Mind Institute, University of Western Ontario, London, Canada
- Lisa M Saksida
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; Department of Physiology and Pharmacology, University of Western Ontario, London, Canada
- Timothy J Bussey
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; Department of Physiology and Pharmacology, University of Western Ontario, London, Canada
- Ali R Khan
- Brain and Mind Institute, University of Western Ontario, London, Canada; Robarts Research Institute Schulich School of Medicine and Dentistry, University of Western Ontario, London, Canada; School of Biomedical Engineering, University of Western Ontario, London, Canada; Department of Medical Biophysics, University of Western Ontario, London, Canada
- Stefan Köhler
- Brain and Mind Institute, University of Western Ontario, London, Canada; Department of Psychology, University of Western Ontario, London, Canada
164
Abstract
In human neuroscience, studies of cognition are rarely grounded in non-task-evoked, 'spontaneous' neural activity. Indeed, studies of spontaneous activity tend to focus predominantly on intrinsic neural patterns (for example, resting-state networks). Taking a 'representation-rich' approach bridges the gap between cognition and resting-state communities: this approach relies on decoding task-related representations from spontaneous neural activity, allowing quantification of the representational content and rich dynamics of such activity. For example, if we know the neural representation of an episodic memory, we can decode its subsequent replay during rest. We argue that such an approach advances cognitive research beyond a focus on immediate task demand and provides insight into the functional relevance of the intrinsic neural pattern (for example, the default mode network). This in turn enables a greater integration between human and animal neuroscience, facilitating experimental testing of theoretical accounts of intrinsic activity, and opening new avenues of research in psychiatry.
165
Liu N, Iijima A, Iwata Y, Ohashi K, Fujisawa N, Sasaoka T, Hasegawa I. Mental construction of object symbols from meaningless elements by Japanese macaques (Macaca fuscata). Sci Rep 2022; 12:3566. [PMID: 35246592] [PMCID: PMC8897398] [DOI: 10.1038/s41598-022-07563-z]
Abstract
When writing an object's name, humans mentally construct its spelling. This capacity critically depends on use of the dual-structured linguistic system, in which meaningful words are represented by combinations of meaningless letters. Here we search for the evolutionary origin of this capacity in primates by designing dual-structured bigram symbol systems where different combinations of meaningless elements represent different objects. Initially, we trained Japanese macaques (Macaca fuscata) in an object-bigram symbolization task and in a visually-guided bigram construction task. Subsequently, we conducted a probe test using a symbolic bigram construction task. From the initial trial of the probe test, the Japanese macaques could sequentially choose the two elements of a bigram that was not actually seen but signified by a visually presented object. Moreover, the animals' spontaneous choice order bias, developed through the visually-guided bigram construction learning, was immediately generalized to the symbolic bigram construction test. Learning of dual-structured symbols by the macaques possibly indicates pre-linguistic adaptations for the ability of mentally constructing symbols in the common ancestors of humans and Old World monkeys.
Affiliation(s)
- Nanxi Liu
- Department of Physiology, Niigata University School of Medicine, 1-757 Asahimachi St, Chuo-ku, Niigata, 951-8510, Japan
- Atsuhiko Iijima
- Department of Physiology, Niigata University School of Medicine, 1-757 Asahimachi St, Chuo-ku, Niigata, 951-8510, Japan; Graduate School of Science and Technology, Niigata University, Niigata, Japan; School of Health Sciences, Niigata University, Niigata, Japan; Neurophysiology & Biomedical Engineering Lab, Interdisciplinary Program of Biomedical Engineering, Assistive Technology and Art and Sports Sciences, Faculty of Engineering, Niigata University, 8050 2-no-chou, Ikarashi, Nishi-ku, Niigata, 950-2181, Japan
- Yutaka Iwata
- Department of Physiology, Niigata University School of Medicine, 1-757 Asahimachi St, Chuo-ku, Niigata, 951-8510, Japan; Graduate School of Science and Technology, Niigata University, Niigata, Japan
- Kento Ohashi
- Department of Physiology, Niigata University School of Medicine, 1-757 Asahimachi St, Chuo-ku, Niigata, 951-8510, Japan; Graduate School of Science and Technology, Niigata University, Niigata, Japan
- Isao Hasegawa
- Department of Physiology, Niigata University School of Medicine, 1-757 Asahimachi St, Chuo-ku, Niigata, 951-8510, Japan
166
The role of animal faces in the animate-inanimate distinction in the ventral temporal cortex. Neuropsychologia 2022; 169:108192. [PMID: 35245528] [DOI: 10.1016/j.neuropsychologia.2022.108192]
Abstract
Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects and that could potentially explain the animate-inanimate distinction in the VTC is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of animal's ability to move and think) is present in parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that faces are not necessary in order to observe high-level animacy information (e.g., agency) in parts of the VTC. A possible explanation could be that this animacy-related activity is driven not by faces per se, or the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC might treat the face as a proxy for agency, a ubiquitous feature of familiar animals.
167
The spatiotemporal neural dynamics of object location representations in the human brain. Nat Hum Behav 2022; 6:796-811. [PMID: 35210593] [PMCID: PMC9225954] [DOI: 10.1038/s41562-022-01302-0]
Abstract
To interact with objects in complex environments, we must know what they are and where they are in spite of challenging viewing conditions. Here, we investigated where, how and when representations of object location and category emerge in the human brain when objects appear on cluttered natural scene images using a combination of functional magnetic resonance imaging, electroencephalography and computational models. We found location representations to emerge along the ventral visual stream towards lateral occipital complex, mirrored by gradual emergence in deep neural networks. Time-resolved analysis suggested that computing object location representations involves recurrent processing in high-level visual cortex. Object category representations also emerged gradually along the ventral visual stream, with evidence for recurrent computations. These results resolve the spatiotemporal dynamics of the ventral visual stream that give rise to representations of where and what objects are present in a scene under challenging viewing conditions.
168
Karimi H, Marefat H, Khanbagi M, Kalafatis C, Modarres MH, Vahabi Z, Khaligh-Razavi SM. Temporal dynamics of animacy categorization in the brain of patients with mild cognitive impairment. PLoS One 2022; 17:e0264058. [PMID: 35196356] [PMCID: PMC8865635] [DOI: 10.1371/journal.pone.0264058]
Abstract
Electroencephalography (EEG) has been commonly used to measure brain alterations in Alzheimer’s Disease (AD). However, reported changes are limited to those obtained from using univariate measures, including activation level and frequency bands. To look beyond the activation level, we used multivariate pattern analysis (MVPA) to extract patterns of information from EEG responses to images in an animacy categorization task. Comparing healthy controls (HC) with patients with mild cognitive impairment (MCI), we found that the neural speed of animacy information processing is decreased in MCI patients. Moreover, we found critical time-points during which the representational pattern of animacy for MCI patients was significantly discriminable from that of HC, while the activation level remained unchanged. Together, these results suggest that the speed and pattern of animacy information processing provide clinically useful information as a potential biomarker for detecting early changes in MCI and AD patients.
Affiliation(s)
- Hamed Karimi
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Department of Mathematics and Computer Science, Amirkabir University of Technology, Tehran, Iran
- * E-mail: (HK); (SMKR)
- Haniyeh Marefat
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Mahdiyeh Khanbagi
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Chris Kalafatis
- South London & Maudsley NHS Foundation Trust, London, United Kingdom
- Department of Old Age Psychiatry, King’s College London, London, United Kingdom
- Cognetivity Ltd, London, United Kingdom
- Zahra Vahabi
- Department of Geriatric Medicine, Ziaeian Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Memory and Behavioral Neurology Division, Roozbeh Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Seyed-Mahdi Khaligh-Razavi
- Department of Stem Cells and Developmental Biology, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran, Iran
- Cognetivity Ltd, London, United Kingdom
- * E-mail: (HK); (SMKR)
169
Abstract
Categorization is the basis of thinking and reasoning. Through the analysis of infants’ gaze, we describe the trajectory through which visual object representations in infancy incrementally match categorical object representations as mapped onto adults’ visual cortex. Using a methodological approach that allows for a comparison of findings obtained with behavioral and brain measures in infants and adults, we identify the transition from visual exploration guided by perceptual salience to an organization of objects by categories, which begins with the animate–inanimate distinction in the first months of life and continues with a spurt of biologically relevant categories (human bodies, nonhuman bodies, nonhuman faces, small natural objects) through the second year of life.
Humans make sense of the world by organizing things into categories. When and how does this process begin? We investigated whether real-world object categories that spontaneously emerge in the first months of life match categorical representations of objects in the human visual cortex. Using eye tracking, we measured the differential looking time of 4-, 10-, and 19-mo-olds as they looked at pairs of pictures belonging to eight animate or inanimate categories (human/nonhuman, faces/bodies, real-world size big/small, natural/artificial). Taking infants’ looking times as a measure of similarity, for each age group, we defined a representational space where each object was defined in relation to others of the same or of a different category. This space was compared with hypothesis-based and functional MRI-based models of visual object categorization in the adults’ visual cortex. Analyses across different age groups showed that, as infants grow older, their looking behavior matches neural representations in ever-larger portions of the adult visual cortex, suggesting progressive recruitment and integration of more and more feature spaces distributed over the visual cortex. Moreover, the results characterize infants’ visual categorization as an incremental process with two milestones. Between 4 and 10 mo, visual exploration guided by saliency gives way to an organization according to the animate–inanimate distinction. Between 10 and 19 mo, a category spurt leads toward a mature organization. We propose that these changes underlie the coupling between seeing and thinking in the developing mind.
170
From words to phrases: neural basis of social event semantic composition. Brain Struct Funct 2022; 227:1683-1695. [PMID: 35184222] [PMCID: PMC9098591] [DOI: 10.1007/s00429-022-02465-2]
Abstract
Events are typically composed of at least actions and entities. Both actions and entities have been shown to be represented by neural structures respecting domain organizations in the brain, including those of social/animate (face and body; person-directed action) versus inanimate (man-made object or tool; object-directed action) concepts. It is unclear whether the brain combines actions and entities into events in a (relatively) domain-specific fashion or via domain-general mechanisms in regions that have been shown to support semantic and syntactic composition. We tested these hypotheses in a functional magnetic resonance imaging experiment in which two domains of verb-noun event phrases (social-person versus manipulation-artifact, e.g., “hug mother” versus “fold napkin”) and their component words were contrasted. We found a series of brain regions, namely the bilateral inferior occipital gyrus (IOG), inferior temporal gyrus (ITG), and anterior temporal lobe (ATL), that supported social phrase composition more strongly than manipulation phrase composition: these regions showed either stronger activation strength in a univariate contrast, stronger content representation in a representational similarity analysis, or a stronger relationship between the neural activation patterns of phrases and syntheses (additive and multiplicative) of the neural activity patterns of their word constituents. No regions showed evidence of phrase composition for both domains or stronger effects for manipulation phrases. These findings highlight the roles of the visual cortex and ATL in social event composition, suggesting a domain-preferring, rather than domain-general, mechanism of verbal event composition.
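The phrase-composition test, relating observed phrase patterns to additive and multiplicative syntheses of word patterns, can be illustrated with a brief sketch on simulated patterns; the variable names and mixing weights are hypothetical, not the study's estimates.

```python
# Minimal sketch of the composition test: compare the observed phrase pattern
# with additive and multiplicative syntheses of its constituent word patterns.
# Patterns are simulated; the real analysis used fMRI activity patterns.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
n_vox = 250
verb = rng.normal(size=n_vox)                       # e.g. pattern for "hug"
noun = rng.normal(size=n_vox)                       # e.g. pattern for "mother"
phrase = 0.6 * verb + 0.4 * noun + rng.normal(scale=0.5, size=n_vox)  # observed phrase pattern

additive = verb + noun
multiplicative = verb * noun

print("additive fit:       r = %.2f" % pearsonr(phrase, additive)[0])
print("multiplicative fit: r = %.2f" % pearsonr(phrase, multiplicative)[0])
# A reliable relationship between phrase patterns and either synthesis, assessed
# per region and per domain, is taken as evidence of compositional coding there.
```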
171
Jaworska K, Yan Y, van Rijsbergen NJ, Ince RAA, Schyns PG. Different computations over the same inputs produce selective behavior in algorithmic brain networks. eLife 2022; 11:73651. [PMID: 35174783] [PMCID: PMC8853655] [DOI: 10.7554/elife.73651]
Abstract
A key challenge in neuroimaging remains to understand where, when, and now particularly how human brain networks compute over sensory inputs to achieve behavior. To study such dynamic algorithms from mass neural signals, we recorded the magnetoencephalographic (MEG) activity of participants who resolved the classic XOR, OR, and AND functions as overt behavioral tasks (N = 10 participants/task, N-of-1 replications). Each function requires a different computation over the same inputs to produce the task-specific behavioral outputs. In each task, we found that source-localized MEG activity progresses through four computational stages identified within individual participants: (1) initial contralateral representation of each visual input in occipital cortex, (2) a joint linearly combined representation of both inputs in midline occipital cortex and right fusiform gyrus, followed by (3) nonlinear task-dependent input integration in temporal-parietal cortex, and finally (4) behavioral response representation in postcentral gyrus. We demonstrate the specific dynamics of each computation at the level of individual sources. The spatiotemporal patterns of the first two computations are similar across the three tasks; the last two computations are task specific. Our results therefore reveal where, when, and how dynamic network algorithms perform different computations over the same inputs to produce different behaviors.
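A quick way to see why the three tasks force different computations over the same inputs is to check linear separability: OR and AND can be solved by a linear readout of the two inputs, whereas XOR cannot. The sketch below demonstrates this with a perceptron; it is a toy illustration, not the authors' MEG analysis.

```python
# Toy demonstration: OR and AND are linearly separable over two binary inputs,
# XOR is not, so a linear readout succeeds for OR/AND but cannot solve XOR.
import numpy as np
from sklearn.linear_model import Perceptron

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])      # the two visual inputs
targets = {
    "OR":  np.array([0, 1, 1, 1]),
    "AND": np.array([0, 0, 0, 1]),
    "XOR": np.array([0, 1, 1, 0]),
}

for name, y in targets.items():
    clf = Perceptron(max_iter=1000, tol=None).fit(X, y)
    print(f"{name}: linear readout accuracy = {clf.score(X, y):.2f}")
# No linear readout reaches perfect accuracy on XOR, which is why a nonlinear
# integration stage (stage 3 in the abstract) is required before the response.
```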
Affiliation(s)
- Yuening Yan
- School of Psychology and Neuroscience, University of Glasgow
- Robin AA Ince
- School of Psychology and Neuroscience, University of Glasgow
172
Vinke LN, Bloem IM, Ling S. Saturating Nonlinearities of Contrast Response in Human Visual Cortex. J Neurosci 2022; 42:1292-1302. [PMID: 34921048] [PMCID: PMC8883860] [DOI: 10.1523/jneurosci.0106-21.2021]
Abstract
Response nonlinearities are ubiquitous throughout the brain, especially within sensory cortices where changes in stimulus intensity typically produce compressed responses. Although this relationship is well established in electrophysiological measurements, it remains controversial whether the same nonlinearities hold for population-based measurements obtained with human fMRI. We propose that these purported disparities are not contingent on measurement type and are instead largely dependent on the visual system state at the time of interrogation. We show that deploying a contrast adaptation paradigm permits reliable measurements of saturating sigmoidal contrast response functions (10 participants, 7 female). When not controlling the adaptation state, our results coincide with previous fMRI studies, yielding nonsaturating, largely linear contrast responses. These findings highlight the important role of adaptation in manifesting measurable nonlinear responses within human visual cortex, reconciling discrepancies reported in vision neuroscience, re-establishing the qualitative relationship between stimulus intensity and response across different neural measures, and supporting the concerted study of cortical gain control.
SIGNIFICANCE STATEMENT: Nonlinear stimulus-response relationships govern many essential brain functions, ranging from the sensory to cognitive level. Certain core response properties previously shown to be nonlinear with nonhuman electrophysiology recordings have yet to be reliably measured with human neuroimaging, prompting uncertainty and reconsideration. The results of this study stand to reconcile these incongruencies in the vision neurosciences, demonstrating the profound impact adaptation can have on brain activation throughout the early visual cortex. Moving forward, these findings facilitate the study of modulatory influences on sensory processing (i.e., arousal and attention) and help establish a closer link between neural recordings in animals and hemodynamic measurements from human fMRI, resuming a concerted effort to understand operations in the mammalian cortex.
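A saturating contrast response of the kind reported here is commonly parameterized with the Naka-Rushton (hyperbolic ratio) function; the sketch below fits that form to simulated data to show what a saturating versus near-linear fit looks like. It is an illustration under that assumed parameterization, not the authors' fitting code or data.

```python
# Minimal sketch of fitting a saturating contrast response function. The
# Naka-Rushton (hyperbolic ratio) form is a standard choice for this kind of
# sigmoidal saturation; the data points below are simulated, not from the paper.
import numpy as np
from scipy.optimize import curve_fit

def naka_rushton(c, r_max, c50, n, baseline):
    """R(c) = baseline + r_max * c^n / (c^n + c50^n)"""
    return baseline + r_max * c**n / (c**n + c50**n)

contrast = np.array([0.02, 0.05, 0.1, 0.2, 0.4, 0.8])       # stimulus contrast
rng = np.random.default_rng(7)
bold = naka_rushton(contrast, r_max=2.0, c50=0.15, n=2.0, baseline=0.1)
bold += rng.normal(scale=0.05, size=bold.shape)              # simulated % signal change

params, _ = curve_fit(naka_rushton, contrast, bold,
                      p0=[2.0, 0.2, 2.0, 0.0], maxfev=10000)
print("fit: r_max=%.2f, c50=%.2f, n=%.2f, baseline=%.2f" % tuple(params))
# A saturating fit (c50 well below maximum contrast, exponent > 1) captures the
# compressive response; near-linear data would instead push c50 toward very large values.
```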
Affiliation(s)
- Louis N Vinke
- Graduate Program for Neuroscience, Boston University, Boston, Massachusetts 02215
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts 02215
- Department of Psychiatry, Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, Massachusetts 02115
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, Massachusetts 02129
- Ilona M Bloem
- Psychological and Brain Sciences, Boston University, Boston, Massachusetts 02215
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts 02215
- Department of Psychology, New York University, New York City, New York 10012
- Sam Ling
- Psychological and Brain Sciences, Boston University, Boston, Massachusetts 02215
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts 02215
173
Kumar M, Anderson MJ, Antony JW, Baldassano C, Brooks PP, Cai MB, Chen PHC, Ellis CT, Henselman-Petrusek G, Huberdeau D, Hutchinson JB, Li YP, Lu Q, Manning JR, Mennen AC, Nastase SA, Richard H, Schapiro AC, Schuck NW, Shvartsman M, Sundaram N, Suo D, Turek JS, Turner D, Vo VA, Wallace G, Wang Y, Williams JA, Zhang H, Zhu X, Capotă M, Cohen JD, Hasson U, Li K, Ramadge PJ, Turk-Browne NB, Willke TL, Norman KA. BrainIAK: The Brain Imaging Analysis Kit. Aperture Neuro 2022; 1. [PMID: 35939268] [PMCID: PMC9351935] [DOI: 10.52294/31bb5b68-2184-411b-8c00-a1dacb61e1da]
Abstract
Functional magnetic resonance imaging (fMRI) offers a rich source of data for studying the neural basis of cognition. Here, we describe the Brain Imaging Analysis Kit (BrainIAK), an open-source, free Python package that provides computationally optimized solutions to key problems in advanced fMRI analysis. A variety of techniques are presently included in BrainIAK: intersubject correlation (ISC) and intersubject functional connectivity (ISFC), functional alignment via the shared response model (SRM), full correlation matrix analysis (FCMA), a Bayesian version of representational similarity analysis (BRSA), event segmentation using hidden Markov models, topographic factor analysis (TFA), inverted encoding models (IEMs), an fMRI data simulator that uses noise characteristics from real data (fmrisim), and some emerging methods. These techniques have been optimized to leverage the efficiencies of high-performance compute (HPC) clusters, and the same code can be seamlessly transferred from a laptop to a cluster. For each of the aforementioned techniques, we describe the data analysis problem that the technique is meant to solve and how it solves that problem; we also include an example Jupyter notebook for each technique and an annotated bibliography of papers that have used and/or described that technique. In addition to the sections describing various analysis techniques in BrainIAK, we have included sections describing the future applications of BrainIAK to real-time fMRI, tutorials that we have developed and shared online to facilitate learning the techniques in BrainIAK, computational innovations in BrainIAK, and how to contribute to BrainIAK. We hope that this manuscript helps readers to understand how BrainIAK might be useful in their research.
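As a flavor of what one of these analyses computes, below is a plain-NumPy sketch of leave-one-out intersubject correlation (ISC) on simulated data. BrainIAK ships optimized implementations of ISC and the other methods listed above, so this is a conceptual sketch rather than the package's API.

```python
# Conceptual sketch of leave-one-out intersubject correlation (ISC), one of the
# analyses BrainIAK implements in optimized form. Plain NumPy is used to show
# what the method computes; for real data one would use the BrainIAK routines.
import numpy as np

rng = np.random.default_rng(8)
n_trs, n_vox, n_subj = 300, 500, 10
shared = rng.normal(size=(n_trs, n_vox))             # stimulus-driven signal shared across subjects
data = shared[..., None] + rng.normal(scale=2.0, size=(n_trs, n_vox, n_subj))

def loo_isc(data):
    """ISC per voxel: correlate each subject with the mean of the remaining subjects."""
    n_subj = data.shape[-1]
    isc = np.zeros((data.shape[1], n_subj))
    for s in range(n_subj):
        others = data[..., np.arange(n_subj) != s].mean(axis=-1)
        for v in range(data.shape[1]):
            isc[v, s] = np.corrcoef(data[:, v, s], others[:, v])[0, 1]
    return isc.mean(axis=1)                           # average over left-out subjects

print("mean ISC across voxels: %.3f" % loo_isc(data).mean())
```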
Affiliation(s)
- Manoj Kumar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Michael J. Anderson
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- James W. Antony
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Paula P. Brooks
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Ming Bo Cai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Japan
- Po-Hsuan Cameron Chen
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Y. Peeta Li
- Department of Psychology, University of Oregon, Eugene, OR
- Qihong Lu
- Department of Psychology, Princeton University, Princeton, NJ
- Jeremy R. Manning
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH
- Anne C. Mennen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Samuel A. Nastase
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Hugo Richard
- Parietal Team, Inria, Neurospin, CEA, Université Paris-Saclay, France
- Anna C. Schapiro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA
- Nicolas W. Schuck
- Max Planck Research Group NeuroCode, Max Planck Institute for Human Development, Berlin, Germany
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany
- Michael Shvartsman
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Narayanan Sundaram
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Daniel Suo
- Department of Computer Science, Princeton University, Princeton, NJ
- Javier S. Turek
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- David Turner
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Vy A. Vo
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Grant Wallace
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Yida Wang
- Work done while at Parallel Computing Lab, Intel Corporation, Santa Clara, CA
- Jamal A. Williams
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Hejia Zhang
- Work done while at Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Xia Zhu
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Mihai Capotă
- Brain-Inspired Computing Lab, Intel Corporation, Hillsboro, OR
- Jonathan D. Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Uri Hasson
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
- Kai Li
- Department of Computer Science, Princeton University, Princeton, NJ
- Peter J. Ramadge
- Department of Electrical Engineering, and the Center for Statistics and Machine Learning, Princeton University, Princeton, NJ
- Kenneth A. Norman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ
- Department of Psychology, Princeton University, Princeton, NJ
174
Mei N, Santana R, Soto D. Informative neural representations of unseen contents during higher-order processing in human brains and deep artificial networks. Nat Hum Behav 2022; 6:720-731. [PMID: 35115676] [DOI: 10.1038/s41562-021-01274-7]
Abstract
A framework to pinpoint the scope of unconscious processing is critical to improve models of visual consciousness. Previous research observed brain signatures of unconscious processing in visual cortex, but these were not reliably identified. Further, whether unconscious contents are represented in high-level stages of the ventral visual stream and linked parieto-frontal areas remains unknown. Using a within-subject, high-precision functional magnetic resonance imaging approach, we show that unconscious contents can be decoded from multi-voxel patterns that are highly distributed alongside the ventral visual pathway and also involving parieto-frontal substrates. Classifiers trained with multi-voxel patterns of conscious items generalized to predict the unconscious counterparts, indicating that their neural representations overlap. These findings suggest revisions to models of consciousness such as the neuronal global workspace. We then provide a computational simulation of visual processing/representation without perceptual sensitivity by using deep neural networks performing a similar visual task. The work provides a framework for pinpointing the representation of unconscious knowledge across different task domains.
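The key generalization test, training a decoder on consciously seen trials and testing it on unseen (unconscious) trials, can be sketched as follows with simulated multivoxel patterns; the signal strengths and classifier choice are assumptions for illustration, not the study's exact analysis.

```python
# Minimal sketch of the cross-condition generalization test: train a decoder on
# multivoxel patterns from consciously seen items and test it on unseen
# (unconscious) trials. Data are simulated stand-ins for the fMRI patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n_trials, n_vox = 120, 300
signal = rng.normal(size=n_vox)                   # hypothetical content-related pattern

def make_trials(strength):
    y = rng.integers(0, 2, n_trials)
    X = rng.normal(size=(n_trials, n_vox)) + np.outer(y * 2 - 1, signal) * strength
    return X, y

X_seen, y_seen = make_trials(strength=0.4)        # conscious trials (stronger signal)
X_unseen, y_unseen = make_trials(strength=0.15)   # unconscious trials (weaker signal)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_seen, y_seen)
print("seen -> unseen generalization accuracy: %.2f" % clf.score(X_unseen, y_unseen))
# Above-chance generalization implies that conscious and unconscious
# representations of the same contents overlap in the measured region.
```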
Affiliation(s)
- Ning Mei
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Roberto Santana
- Computer Science and Artificial Intelligence Department, University of Basque Country, San Sebastian, Spain
- David Soto
- Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
|
175
|
Yip HMK, Cheung LYT, Ngan VSH, Wong YK, Wong ACN. The Effect of Task on Object Processing revealed by EEG decoding. Eur J Neurosci 2022; 55:1174-1199. [PMID: 35023230 DOI: 10.1111/ejn.15598] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2020] [Revised: 01/05/2022] [Accepted: 01/10/2022] [Indexed: 12/01/2022]
Abstract
Recent studies showed that task demand affects object representations in higher-level visual areas and beyond, but not so much in earlier areas. There are, however, limitations in those studies, including the relatively weak manipulation of task due to the use of familiar real-life objects, the low temporal resolution of fMRI, and the emphasis on the amount rather than the source of information carried by brain activations. In the current study, observers categorized images of artificial objects along one of two orthogonal dimensions, shape and texture, while their brain activity was recorded with electroencephalography (EEG). Results showed that object processing along the texture dimension was affected by task demand starting from a relatively late time (320-370 ms time window) after image onset. The findings are consistent with the view that task exerts an effect on the later phases of object processing.
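A standard way to ask "when" task affects object information in EEG is time-resolved decoding. The sketch below illustrates that general approach under stated assumptions (illustrative data shapes and classifier; not the authors' pipeline).

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 120, 64, 300      # e.g., epoched EEG around image onset
eeg = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)             # e.g., texture category of each trial

accuracy = np.empty(n_times)
for t in range(n_times):                          # decode category at each time point
    accuracy[t] = cross_val_score(LinearSVC(max_iter=2000), eeg[:, :, t], labels, cv=5).mean()
# Running this separately for each task and comparing the accuracy time courses
# would reveal when task demand starts to modulate the decodable object information.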
Affiliation(s)
- Hoi Ming Ken Yip
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
- Leo Y T Cheung
- Department of Educational Psychology, Faculty of Education, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
- Vince S H Ngan
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
- Yetta Kwailing Wong
- Department of Educational Psychology, Faculty of Education, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
- Alan C N Wong
- Department of Psychology, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong
|
176
|
Neurons in the pigeon visual network discriminate between faces, scrambled faces, and sine grating images. Sci Rep 2022; 12:589. [PMID: 35022466 PMCID: PMC8755821 DOI: 10.1038/s41598-021-04559-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Accepted: 12/24/2021] [Indexed: 11/13/2022] Open
Abstract
Discriminating between object categories (e.g., conspecifics, food, potential predators) is a critical function of the primate and bird visual systems. We examined whether a similar hierarchical organization in the ventral stream that operates for processing faces in monkeys also exists in the avian visual system. We performed electrophysiological recordings from the pigeon Wulst of the thalamofugal pathway, in addition to the entopallium (ENTO) and mesopallium ventrolaterale (MVL) of the tectofugal pathway, while pigeons viewed images of faces, scrambled controls, and sine gratings. A greater proportion of MVL neurons fired to the stimuli, and linear discriminant analysis revealed that the population response of MVL neurons distinguished between the stimuli with greater capacity than ENTO and Wulst neurons. While MVL neurons displayed the greatest response selectivity, in contrast to the primate system no neurons were strongly face-selective and some responded best to the scrambled images. These findings suggest that MVL is primarily involved in processing the local features of images, much like the early visual cortex.
|
177
|
Keller AS, Jagadeesh AV, Bugatus L, Williams LM, Grill-Spector K. Attention enhances category representations across the brain with strengthened residual correlations to ventral temporal cortex. Neuroimage 2022; 249:118900. [PMID: 35021039 PMCID: PMC8947761 DOI: 10.1016/j.neuroimage.2022.118900] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 01/06/2022] [Accepted: 01/08/2022] [Indexed: 11/05/2022] Open
Abstract
How does attention enhance neural representations of goal-relevant stimuli while suppressing representations of ignored stimuli across regions of the brain? While prior studies have shown that attention enhances visual responses, we lack a cohesive understanding of how selective attention modulates visual representations across the brain. Here, we used functional magnetic resonance imaging (fMRI) while participants performed a selective attention task on superimposed stimuli from multiple categories and used a data-driven approach to test how attention affects both decodability of category information and residual correlations (after regressing out stimulus-driven variance) with category-selective regions of ventral temporal cortex (VTC). Our data reveal three main findings. First, when two objects are simultaneously viewed, the category of the attended object can be decoded more readily than the category of the ignored object, with the greatest attentional enhancements observed in occipital and temporal lobes. Second, after accounting for the response to the stimulus, the correlation in the residual brain activity between a cortical region and a category-selective region of VTC was elevated when that region’s preferred category was attended vs. ignored, and more so in the right occipital, parietal, and frontal cortices. Third, we found that the stronger the residual correlations between a given region of cortex and VTC, the better visual category information could be decoded from that region. These findings suggest that heightened residual correlations by selective attention may reflect the sharing of information between sensory regions and higher-order cortical regions to provide attentional enhancement of goal-relevant information.
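The "residual correlation" measure described here amounts to regressing stimulus-driven variance out of two regions' time courses and correlating what remains. A minimal sketch of that computation follows (illustrative names and shapes, not the study's code).

import numpy as np

rng = np.random.default_rng(2)
n_vols = 400
design = rng.standard_normal((n_vols, 6))     # stimulus/task regressors (plus confounds)
roi_a = rng.standard_normal(n_vols)           # mean time course of some cortical region
roi_vtc = rng.standard_normal(n_vols)         # category-selective VTC region

def residualize(y, X):
    """Return y minus its least-squares projection onto the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

res_corr = np.corrcoef(residualize(roi_a, design), residualize(roi_vtc, design))[0, 1]
# The study asks whether this residual coupling is higher when the VTC region's
# preferred category is attended versus ignored.
print("residual correlation:", res_corr)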
Affiliation(s)
- Arielle S Keller
- Department of Psychiatry and Behavioral Sciences, Stanford University, CA 94305, USA; Neurosciences Graduate Program, Stanford University, CA 94305, USA
- Lior Bugatus
- Department of Psychology, Stanford University, CA 94305, USA
- Leanne M Williams
- Department of Psychiatry and Behavioral Sciences, Stanford University, CA 94305, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, CA 94305, USA; Wu Tsai Neurosciences Institute, Stanford University, CA 94305, USA
|
178
|
|
179
|
Mahon BZ. Domain-specific connectivity drives the organization of object knowledge in the brain. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:221-244. [PMID: 35964974 PMCID: PMC11498098 DOI: 10.1016/b978-0-12-823493-8.00028-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The goal of this chapter is to review neuropsychological and functional MRI findings that inform a theory of the causes of functional specialization for semantic categories within occipito-temporal cortex, the ventral visual processing pathway. The occipito-temporal pathway supports visual object processing and recognition. The theoretical framework that drives this review considers visual object recognition through the lens of how "downstream" systems interact with the outputs of visual recognition processes. Those downstream processes include conceptual interpretation, grasping and object use, navigating and orienting in an environment, physical reasoning about the world, and inferring future actions and the inner mental states of agents. The core argument of this chapter is that innately constrained connectivity between occipito-temporal areas and other regions of the brain is the basis for the emergence of neural specificity for a limited number of semantic domains in the brain.
Affiliation(s)
- Bradford Z Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, United States
|
180
|
Ta D, Tu Y, Lu ZL, Wang Y. Quantitative characterization of the human retinotopic map based on quasiconformal mapping. Med Image Anal 2022; 75:102230. [PMID: 34666194 PMCID: PMC8678293 DOI: 10.1016/j.media.2021.102230] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 07/11/2021] [Accepted: 09/10/2021] [Indexed: 01/03/2023]
Abstract
The retinotopic map depicts the cortical neurons' response to visual stimuli on the retina and has contributed significantly to our understanding of the human visual system. Although recent advances in high-field functional magnetic resonance imaging (fMRI) have made it possible to generate the in vivo retinotopic map with great detail, quantifying the map remains challenging. Existing quantification methods do not preserve surface topology and often introduce large geometric distortions to the map. In this study, we developed a new framework based on computational conformal geometry and quasiconformal Teichmüller theory to quantify the retinotopic map. Specifically, we introduced a general pipeline, consisting of cortical surface conformal parameterization, surface-spline-based cortical activation signal smoothing, and vertex-wise Beltrami coefficient-based map description. After correcting most of the violations of the topological conditions, the result was a "Beltrami coefficient map" (BCM) that rigorously and completely characterizes the retinotopic map by quantifying the local quasiconformal mapping distortion at each visual field location. The BCM provided topological and fully reconstructable retinotopic maps. We successfully applied the new framework to analyze the V1 retinotopic maps from the Human Connectome Project (n=181), the largest state-of-the-art retinotopy dataset currently available. With unprecedented precision, we found that the V1 retinotopic map was quasiconformal and the local mapping distortions were similar across observers. The new framework can be applied to other visual areas and retinotopic maps of individuals with and without eye diseases, and improve our understanding of visual cortical organization in normal and clinical populations.
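For readers unfamiliar with the quantity behind the "Beltrami coefficient map", the standard textbook definition from quasiconformal theory is sketched below (generic notation, not necessarily the paper's own):

\mu(z) = \frac{\partial f/\partial \bar{z}}{\partial f/\partial z}, \qquad \frac{\partial f}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right), \quad \frac{\partial f}{\partial z} = \frac{1}{2}\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right)

Here f is the retinotopic mapping written in a local complex coordinate z = x + iy; \mu = 0 for a conformal (angle-preserving) map, and 0 < |\mu| < 1 quantifies the local quasiconformal distortion that the BCM stores at each visual field location.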
Affiliation(s)
- Duyan Ta
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Yanshuai Tu
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
- Zhong-Lin Lu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China; Center for Neural Science and Department of Psychology, New York University, New York, NY, USA; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Yalin Wang
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ, USA
|
181
|
Nenning KH, Langs G. Machine learning in neuroimaging: from research to clinical practice. RADIOLOGIE (HEIDELBERG, GERMANY) 2022; 62:1-10. [PMID: 36044070 PMCID: PMC9732070 DOI: 10.1007/s00117-022-01051-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 07/07/2022] [Indexed: 12/14/2022]
Abstract
Neuroimaging is critical in clinical care and research, enabling us to investigate the brain in health and disease. There is a complex link between the brain's morphological structure, physiological architecture, and the corresponding imaging characteristics. The shape, function, and relationships between various brain areas change during development and throughout life, disease, and recovery. Like few other areas, neuroimaging benefits from advanced analysis techniques to fully exploit imaging data for studying the brain and its function. Recently, machine learning has started to contribute (a) to anatomical measurements, detection, segmentation, and quantification of lesions and disease patterns, (b) to the rapid identification of acute conditions such as stroke, or (c) to the tracking of imaging changes over time. As our ability to image and analyze the brain advances, so does our understanding of its intricate relationships and their role in therapeutic decision-making. Here, we review the current state of the art in using machine learning techniques to exploit neuroimaging data for clinical care and research, providing an overview of clinical applications and their contribution to fundamental computational neuroscience.
Affiliation(s)
- Karl-Heinz Nenning
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute, Orangeburg, NY, USA
- Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Georg Langs
- Department of Biomedical Imaging and Image-guided Therapy, Computational Imaging Research Lab, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
|
182
|
Zhang J, Jiang Y, Song Y, Zhang P, He S. Spatial tuning of face part representations within face-selective areas revealed by high-field fMRI. eLife 2021; 10:e70925. [PMID: 34964711 PMCID: PMC8716104 DOI: 10.7554/elife.70925] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 12/11/2021] [Indexed: 11/20/2022] Open
Abstract
Regions sensitive to specific object categories, as well as organized spatial patterns sensitive to different features, have been found across the whole ventral temporal cortex (VTC). However, it is unclear how, within each object-category region, specific feature representations are organized to support object identification. Would object features, such as object parts, be represented with fine-scale spatial tuning within category-specific regions? Here, we used high-field 7T fMRI to examine the spatial tuning to different face parts within each face-selective region. Our results show consistent spatial tuning of face parts across individuals: within the right posterior fusiform face area (pFFA) and right occipital face area (OFA), the posterior portion of each region was biased toward eyes, while the anterior portion was biased toward mouth and chin stimuli. These results demonstrate that, within the occipital and fusiform face-processing regions, there is systematic spatial tuning to different face parts that supports further computations combining them.
Affiliation(s)
- Jiedong Zhang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Jiang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yunjie Song
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Peng Zhang
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Sheng He
- Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Minnesota, Minneapolis, United States
|
183
|
Jang H, McCormack D, Tong F. Noise-trained deep neural networks effectively predict human vision and its neural responses to challenging images. PLoS Biol 2021; 19:e3001418. [PMID: 34882676 PMCID: PMC8659651 DOI: 10.1371/journal.pbio.3001418] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Accepted: 09/20/2021] [Indexed: 11/18/2022] Open
Abstract
Deep neural networks (DNNs) for object classification have been argued to provide the most promising model of the visual system, accompanied by claims that they have attained or even surpassed human-level performance. Here, we evaluated whether DNNs provide a viable model of human vision when tested with challenging noisy images of objects, sometimes presented at the very limits of visibility. We show that popular state-of-the-art DNNs perform in a qualitatively different manner than humans—they are unusually susceptible to spatially uncorrelated white noise and less impaired by spatially correlated noise. We implemented a noise training procedure to determine whether noise-trained DNNs exhibit more robust responses that better match human behavioral and neural performance. We found that noise-trained DNNs provide a better qualitative match to human performance; moreover, they reliably predict human recognition thresholds on an image-by-image basis. Functional neuroimaging revealed that noise-trained DNNs provide a better correspondence to the pattern-specific neural representations found in both early visual areas and high-level object areas. A layer-specific analysis of the DNNs indicated that noise training led to broad-ranging modifications throughout the network, with greater benefits of noise robustness accruing in progressively higher layers. Our findings demonstrate that noise-trained DNNs provide a viable model to account for human behavioral and neural responses to objects in challenging noisy viewing conditions. Further, they suggest that robustness to noise may be acquired through a process of visual learning. Unlike human observers, deep neural networks fail to recognize objects in severe visual noise. This study develops noise-trained networks and shows that these networks better predict human performance and neural responses in the visual cortex to challenging noisy object images.
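The core manipulation, training networks on images corrupted by spatially uncorrelated versus spatially correlated noise, can be illustrated with a small augmentation sketch (an assumption about the general approach, not the authors' code; image sizes and noise parameters are placeholders).

import numpy as np

rng = np.random.default_rng(3)

def add_white_noise(img, sd=0.3):
    """Pixelwise Gaussian noise (spatially uncorrelated)."""
    return np.clip(img + rng.normal(0.0, sd, img.shape), 0.0, 1.0)

def add_correlated_noise(img, sd=0.3, kernel=7):
    """Spatially correlated noise: white noise smoothed with a box filter."""
    noise = rng.normal(0.0, sd, img.shape)
    pad = kernel // 2
    padded = np.pad(noise, pad, mode="reflect")
    smoothed = np.zeros_like(noise)
    for dy in range(kernel):
        for dx in range(kernel):
            smoothed += padded[dy:dy + noise.shape[0], dx:dx + noise.shape[1]]
    return np.clip(img + smoothed / kernel**2, 0.0, 1.0)

img = rng.random((64, 64))                      # placeholder grayscale image in [0, 1]
noisy_versions = [add_white_noise(img), add_correlated_noise(img)]
# During noise training, such corrupted images would be mixed with clean images
# before being fed to the network, encouraging noise-robust recognition.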
Affiliation(s)
- Hojin Jang
- Psychology Department and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, Tennessee, United States of America
- * E-mail: (HJ); (FT)
- Devin McCormack
- Psychology Department and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, Tennessee, United States of America
- Frank Tong
- Psychology Department and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, Tennessee, United States of America
- * E-mail: (HJ); (FT)
|
184
|
Moon A, He C, Ditta AS, Cheung OS, Wu R. Rapid category selectivity for animals versus man-made objects: An N2pc study. Int J Psychophysiol 2021; 171:20-28. [PMID: 34856220 DOI: 10.1016/j.ijpsycho.2021.11.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 08/24/2021] [Accepted: 11/25/2021] [Indexed: 10/19/2022]
Abstract
Visual recognition occurs rapidly at multiple categorization levels, including the superordinate level (e.g., animal), basic level (e.g., cat), or exemplar level (e.g., my cat). Visual search for animals is faster than for man-made objects, even when the images from those categories have comparable gist statistics (i.e., low- or mid-level visual information), which suggests that higher-level, conceptual influences may support this search advantage for animals. However, it remains unclear whether the search advantage can be explained in part by early visual search processes via the N2pc ERP component, which emerges earlier than behavioral responses, across different categorization levels. Participants searched for 1) an exact image (e.g., a specific squirrel image, Exemplar-level Search), 2) any images of an item (e.g., any squirrels, Basic-level Search), or 3) any items in a category (e.g., any animals, Superordinate-level Search). In addition to Target Present trials, Foil trials measured involuntary attentional selection of task-irrelevant images related to the targets (e.g., other squirrel images when searching for a specific squirrel image, or other animals when searching for squirrels). ERP results revealed 1) a larger N2pc amplitude during Foil trials in Exemplar-level Search for animals than man-made objects, and 2) faster onset latencies for animal search than man-made object search across all categorization levels. These results suggest that the search advantage for animals over man-made objects emerges early, and that attentional selection is more biased toward the basic-level (e.g., squirrel) for animals than for man-made objects during visual search.
Affiliation(s)
- Austin Moon
- Department of Psychology, University of California, Riverside, United States of America
- Chenxi He
- INSERM, U992, Cognitive Neuroimaging Unit, Gif/Yvette, France
- Annie S Ditta
- Department of Psychology, University of California, Riverside, United States of America
- Olivia S Cheung
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
- Rachel Wu
- Department of Psychology, University of California, Riverside, United States of America
|
185
|
Moerel M, Yacoub E, Gulban OF, Lage-Castellanos A, De Martino F. Using high spatial resolution fMRI to understand representation in the auditory network. Prog Neurobiol 2021; 207:101887. [PMID: 32745500 PMCID: PMC7854960 DOI: 10.1016/j.pneurobio.2020.101887] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 05/27/2020] [Accepted: 07/15/2020] [Indexed: 12/23/2022]
Abstract
Following rapid methodological advances, ultra-high field (UHF) functional and anatomical magnetic resonance imaging (MRI) has been repeatedly and successfully used for the investigation of the human auditory system in recent years. Here, we review this work and argue that UHF MRI is uniquely suited to shed light on how sounds are represented throughout the network of auditory brain regions. That is, the provided gain in spatial resolution at UHF can be used to study the functional role of the small subcortical auditory processing stages and details of cortical processing. Further, by combining high spatial resolution with the versatility of MRI contrasts, UHF MRI has the potential to localize the primary auditory cortex in individual hemispheres. This is a prerequisite to study how sound representation in higher-level auditory cortex evolves from that in early (primary) auditory cortex. Finally, the access to independent signals across auditory cortical depths, as afforded by UHF, may reveal the computations that underlie the emergence of an abstract, categorical sound representation based on low-level acoustic feature processing. Efforts on these research topics are underway. Here we discuss promises as well as challenges that come with studying these research questions using UHF MRI, and provide a future outlook.
Affiliation(s)
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, the Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA
- Omer Faruk Gulban
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA; Brain Innovation B.V., Maastricht, the Netherlands
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Department of NeuroInformatics, Cuban Center for Neuroscience, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, the Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, USA
|
186
|
Chen C, Lou Y, Li H, Yuan J, Yang J, Winskel H, Qin S. Distinct neural-behavioral correspondence within face processing and attention networks for the composite face effect. Neuroimage 2021; 246:118756. [PMID: 34848297 DOI: 10.1016/j.neuroimage.2021.118756] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 11/14/2021] [Accepted: 11/22/2021] [Indexed: 11/29/2022] Open
Abstract
The composite face effect (CFE) is recognized as a hallmark for holistic face processing, but our knowledge remains sparse about its cognitive and neural loci. Using functional magnetic resonance imaging with independent localizer and complete composite face task, we here investigated its neural-behavioral correspondence within face processing and attention networks. Complementing classical comparisons, we adopted a dimensional reduction approach to explore the core cognitive constructs of the behavioral CFE measurement. Our univariate analyses found an alignment effect in regions associated with both the extended face processing network and attention networks. Further representational similarity analyses based on the Euclidian distances among all experimental conditions were used to identify cortical regions with reliable neural-behavioral correspondences. Multidimensional scaling and hierarchical clustering analyses for neural-behavioral correspondence data revealed two principal components underlying the behavioral CFE effect, which fit best to the neural responses in the bilateral insula and medial frontal gyrus. These findings highlight the distinct neurocognitive contributions of both face processing and attentional networks to the behavioral CFE outcome, which bridge the gaps between face recognition and attentional control models.
Affiliation(s)
- Changming Chen
- School of Education, Chongqing Normal University, Chongqing 401331, China
- Yixue Lou
- Department of Psychology, Faculty of Education and Psychology, University of Jyvaskyla, Jyväskylä 40014, Finland; Faculty of Psychology, Southwest University, Chongqing 400715, China
- Hong Li
- School of Psychology, South China Normal University, Guangzhou 510631, China; Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
- Jiajin Yuan
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
- Jiemin Yang
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
- Heather Winskel
- Psychology, James Cook University, Singapore Campus, 387380, Singapore
- Shaozheng Qin
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Chinese Institute for Brain Research, Beijing, China
|
187
|
Cesanek E, Zhang Z, Ingram JN, Wolpert DM, Flanagan JR. Motor memories of object dynamics are categorically organized. eLife 2021; 10:71627. [PMID: 34796873 PMCID: PMC8635978 DOI: 10.7554/elife.71627] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 11/18/2021] [Indexed: 11/13/2022] Open
Abstract
The ability to predict the dynamics of objects, linking applied force to motion, underlies our capacity to perform many of the tasks we carry out on a daily basis. Thus, a fundamental question is how the dynamics of the myriad objects we interact with are organized in memory. Using a custom-built three-dimensional robotic interface that allowed us to simulate objects of varying appearance and weight, we examined how participants learned the weights of sets of objects that they repeatedly lifted. We find strong support for the novel hypothesis that motor memories of object dynamics are organized categorically, in terms of families, based on covariation in their visual and mechanical properties. A striking prediction of this hypothesis, supported by our findings and not predicted by standard associative map models, is that outlier objects with weights that deviate from the family-predicted weight will never be learned despite causing repeated lifting errors.
Affiliation(s)
- Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- Zhaoran Zhang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- James N Ingram
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- Daniel M Wolpert
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Neuroscience, Columbia University, New York, NY, United States
- J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
|
188
|
One object, two networks? Assessing the relationship between the face and body-selective regions in the primate visual system. Brain Struct Funct 2021; 227:1423-1438. [PMID: 34792643 DOI: 10.1007/s00429-021-02420-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/22/2021] [Indexed: 10/19/2022]
Abstract
Faces and bodies are often treated as distinct categories that are processed separately by face- and body-selective brain regions in the primate visual system. These regions occupy distinct regions of visual cortex and are often thought to constitute independent functional networks. Yet faces and bodies are part of the same object and their presence inevitably covary in naturalistic settings. Here, we re-evaluate both the evidence supporting the independent processing of faces and bodies and the organizational principles that have been invoked to explain this distinction. We outline four hypotheses ranging from completely separate networks to a single network supporting the perception of whole people or animals. The current evidence, especially in humans, is compatible with all of these hypotheses, making it presently unclear how the representation of faces and bodies is organized in the cortex.
|
189
|
Rose O, Johnson J, Wang B, Ponce CR. Visual prototypes in the ventral stream are attuned to complexity and gaze behavior. Nat Commun 2021; 12:6723. [PMID: 34795262 PMCID: PMC8602238 DOI: 10.1038/s41467-021-27027-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Accepted: 11/01/2021] [Indexed: 01/02/2023] Open
Abstract
Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual cortex respond to edges and curvature. Still, it remains unclear what other information-rich features are encoded by neurons in more anterior cortical regions (e.g., inferotemporal cortex). Here, we use a generative deep neural network to synthesize images guided by neuronal responses from across the visuocortical hierarchy, using floating microelectrode arrays in areas V1, V4 and inferotemporal cortex of two macaque monkeys. We hypothesize these images ("prototypes") represent such predicted information-rich features. Prototypes vary across areas, show moderate complexity, and resemble salient visual attributes and semantic content of natural images, as indicated by the animals' gaze behavior. This suggests the code for object recognition represents compressed features of behavioral relevance, an underexplored aspect of efficient coding.
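The closed-loop synthesis described here, searching the latent space of an image generator so that the rendered image maximizes a recorded neuron's response, can be sketched as a simple evolutionary loop. Everything below is an illustrative stand-in (a toy fitness function in place of the recorded firing rate, and no actual generator network).

import numpy as np

rng = np.random.default_rng(4)
latent_dim, pop_size, n_generations = 32, 20, 50
target = rng.standard_normal(latent_dim)          # stands in for the neuron's preference

def neuron_response(code):
    """Placeholder for the measured firing rate to the image decoded from `code`."""
    return -np.sum((code - target) ** 2)

population = rng.standard_normal((pop_size, latent_dim))
for _ in range(n_generations):
    fitness = np.array([neuron_response(c) for c in population])
    parents = population[np.argsort(fitness)[-pop_size // 2:]]       # keep the best half
    children = parents + 0.2 * rng.standard_normal(parents.shape)    # mutate
    population = np.vstack([parents, children])

prototype_code = population[np.argmax([neuron_response(c) for c in population])]
# In the experiment, the fitness is the neuron's recorded response and each code is
# rendered into an image by a generative network on every iteration; the final image
# is the "prototype" for that recording site.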
Affiliation(s)
- Olivia Rose
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- James Johnson
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Binxu Wang
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
- Carlos R Ponce
- Department of Neuroscience, Washington University School of Medicine, St. Louis, MO, USA
- Department of Neurobiology, Harvard Medical School, Boston, MA, USA
|
190
|
Hayashi R, Yamashita O, Yamada T, Kawaguchi H, Higo N. Diffuse Optical Tomography Using fNIRS Signals Measured from the Skull Surface of the Macaque Monkey. Cereb Cortex Commun 2021; 3:tgab064. [PMID: 35072075 PMCID: PMC8767783 DOI: 10.1093/texcom/tgab064] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 11/02/2021] [Accepted: 11/03/2021] [Indexed: 11/29/2022] Open
Abstract
Diffuse optical tomography (DOT), as a functional near-infrared spectroscopy (fNIRS) technique, can estimate three-dimensional (3D) images of the functional hemodynamic response in brain volume from measured optical signals. In this study, we applied DOT algorithms to the fNIRS data recorded from the surface of macaque monkeys' skulls while the animals performed food retrieval tasks using either the left or right hand under head-free conditions. The hemodynamic response images, reconstructed by DOT with a high sampling rate and fine voxel size, demonstrated significant activations in the upper-limb regions of the primary motor area in the central sulcus and in premotor and parietal areas contralateral to the hands used in the tasks. The results were also reliable in terms of consistency across different recording dates. Time-series analyses of each brain area revealed that activity in the premotor area preceded that in the primary motor area, consistent with previous physiological studies. Therefore, the fNIRS–DOT protocol demonstrated in this study provides reliable 3D functional brain images over a period of days under head-free conditions for region-of-interest–based time-series analysis.
Affiliation(s)
- Ryusuke Hayashi
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba-shi, Ibaraki 305-8568, Japan
- Okito Yamashita
- Computational Brain Dynamics Team, Center for Advanced Intelligence Project, RIKEN, Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Neural Information Analysis Laboratories, Department of Computational Brain Imaging, ATR, 2-2-2 Hikaridai Seika-cho, Sorakugun, Kyoto 619-0288, Japan
- Toru Yamada
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba-shi, Ibaraki 305-8568, Japan
- Hiroshi Kawaguchi
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba-shi, Ibaraki 305-8568, Japan
- Noriyuki Higo
- Neurorehabilitation Research Group, Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba-shi, Ibaraki 305-8568, Japan
|
191
|
Bonnasse-Gahot L, Nadal JP. Categorical Perception: A Groundwork for Deep Learning. Neural Comput 2021; 34:437-475. [PMID: 34758487 DOI: 10.1162/neco_a_01454] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2020] [Accepted: 07/26/2021] [Indexed: 11/04/2022]
Abstract
Classification is one of the major tasks that deep learning is successfully tackling. Categorization is also a fundamental cognitive ability. A well-known perceptual consequence of categorization in humans and other animals, categorical perception, is notably characterized by a within-category compression and a between-category separation: two items, close in input space, are perceived closer if they belong to the same category than if they belong to different categories. Elaborating on experimental and theoretical results in cognitive science, here we study categorical effects in artificial neural networks. We combine a theoretical analysis that makes use of mutual and Fisher information quantities and a series of numerical simulations on networks of increasing complexity. These formal and numerical analyses provide insights into the geometry of the neural representation in deep layers, with expansion of space near category boundaries and contraction far from category boundaries. We investigate categorical representation by using two complementary approaches: one mimics experiments in psychophysics and cognitive neuroscience by means of morphed continua between stimuli of different categories, while the other introduces a categoricality index that, for each layer in the network, quantifies the separability of the categories at the neural population level. We show on both shallow and deep neural networks that category learning automatically induces categorical perception. We further show that the deeper a layer, the stronger the categorical effects. As an outcome of our study, we propose a coherent view of the efficacy of different heuristic practices of the dropout regularization technique. More generally, our view, which finds echoes in the neuroscience literature, insists on the differential impact of noise in any given layer depending on the geometry of the neural representation that is being learned, that is, on how this geometry reflects the structure of the categories.
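The idea of a layer-wise "categoricality index" can be illustrated with a simple between- versus within-category distance ratio; the sketch below is one plausible formalization under stated assumptions (the paper's exact definition may differ, and the simulated activations are placeholders).

import numpy as np

def categoricality(activations, labels):
    """activations: (n_items, n_units) layer responses; labels: (n_items,) category labels."""
    acts, labels = np.asarray(activations), np.asarray(labels)
    dists = np.linalg.norm(acts[:, None, :] - acts[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(labels), k=1)            # unique item pairs only
    within = dists[iu][same[iu]].mean()
    between = dists[iu][~same[iu]].mean()
    return between / within                            # > 1 indicates between-category expansion

rng = np.random.default_rng(5)
offsets = rng.standard_normal((3, 128))                # one mean pattern per category
acts = rng.standard_normal((60, 128)) + np.repeat(np.eye(3), 20, axis=0) @ offsets
labels = np.repeat(np.arange(3), 20)
print("categoricality index:", categoricality(acts, labels))
# Computing this index layer by layer in a trained network would show the reported
# strengthening of categorical effects with depth.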
Affiliation(s)
- Laurent Bonnasse-Gahot
- Centre d'Analyse et de Mathématique Sociales, École des Hautes Études en Sciences Sociales, 75006 Paris, France
- Jean-Pierre Nadal
- Centre d'Analyse et de Mathématique Sociales, École des Hautes Études en Sciences Sociales, 75006 Paris, France, and Laboratoire de Physique de l'ENS, Université de Paris, École Normale Supérieure, 75006 Paris, France
|
192
|
Guo LL, Oghli YS, Frost A, Niemeier M. Multivariate Analysis of Electrophysiological Signals Reveals the Time Course of Precision Grasps Programs: Evidence for Nonhierarchical Evolution of Grasp Control. J Neurosci 2021; 41:9210-9222. [PMID: 34551938 PMCID: PMC8570828 DOI: 10.1523/jneurosci.0992-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2021] [Revised: 09/13/2021] [Accepted: 09/16/2021] [Indexed: 11/21/2022] Open
Abstract
Current understanding of the neural processes underlying human grasping suggests that grasp computations involve gradients of higher to lower level representations and, relatedly, visual to motor processes. However, it is unclear whether these processes evolve in a strictly canonical manner from higher to intermediate and to lower levels given that this knowledge importantly relies on functional imaging, which lacks temporal resolution. To examine grasping in fine temporal detail here we used multivariate EEG analysis. We asked participants to grasp objects while controlling the time at which crucial elements of grasp programs were specified. We first specified the orientation with which participants should grasp objects, and only after a delay we instructed participants about which effector to use to grasp, either the right or the left hand. We also asked participants to grasp with both hands because bimanual and left-hand grasping share intermediate-level grasp representations. We observed that grasp programs evolved in a canonical manner from visual representations, which were independent of effectors to motor representations that distinguished between effectors. However, we found that intermediate representations of effectors that partially distinguished between effectors arose after representations that distinguished among all effector types. Our results show that grasp computations do not proceed in a strictly hierarchically canonical fashion, highlighting the importance of the fine temporal resolution of EEG for a comprehensive understanding of human grasp control.SIGNIFICANCE STATEMENT A long-standing assumption of the grasp computations is that grasp representations progress from higher to lower level control in a regular, or canonical, fashion. Here, we combined EEG and multivariate pattern analysis to characterize the temporal dynamics of grasp representations while participants viewed objects and were subsequently cued to execute an unimanual or bimanual grasp. Interrogation of the temporal dynamics revealed that lower level effector representations emerged before intermediate levels of grasp representations, thereby suggesting a partially noncanonical progression from higher to lower and then to intermediate level grasp control.
Affiliation(s)
- Lin Lawrence Guo
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Yazan Shamli Oghli
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Adam Frost
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Matthias Niemeier
- Department of Psychology, University of Toronto Scarborough, Toronto, Ontario M1C 1A4, Canada
- Centre for Vision Research, York University, Toronto, Ontario M4N 3M6, Canada
- Vision: Science to Applications, York University, Toronto, Ontario M3J 1P3, Canada
|
193
|
Klímová M, Bloem IM, Ling S. The specificity of orientation-tuned normalization within human early visual cortex. J Neurophysiol 2021; 126:1536-1546. [PMID: 34550028 PMCID: PMC8794056 DOI: 10.1152/jn.00203.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 09/20/2021] [Accepted: 09/20/2021] [Indexed: 11/22/2022] Open
Abstract
Normalization within visual cortex is modulated by contextual influences; stimuli sharing similar features suppress each other more than dissimilar stimuli. This feature-tuned component of suppression depends on multiple factors, including the orientation content of stimuli. Indeed, pairs of stimuli arranged in a center-surround configuration attenuate each other's response to a greater degree when oriented collinearly than when oriented orthogonally. Although numerous studies have examined the nature of surround suppression at these two extremes, far less is known about how the strength of tuned normalization varies as a function of continuous changes in orientation similarity, particularly in humans. In this study, we used functional magnetic resonance imaging (fMRI) to examine the bandwidth of orientation-tuned suppression within human visual cortex. Blood-oxygen-level-dependent (BOLD) responses were acquired as participants viewed a full-field circular stimulus composed of wedges of orientation-bandpass filtered noise. This stimulus configuration allowed us to parametrically vary orientation differences between neighboring wedges in gradual steps between collinear and orthogonal. We found the greatest suppression for collinearly arranged stimuli with a gradual increase in BOLD response as the orientation content became more dissimilar. We quantified the tuning width of orientation-tuned suppression, finding that the voxel-wise bandwidth of orientation tuned normalization was between 20° and 30°, and did not differ substantially between early visual areas. Voxel-wise analyses revealed that suppression width covaried with retinotopic preference, with the tightest bandwidths at outer eccentricities. Having an estimate of orientation-tuned suppression bandwidth can serve to constrain models of tuned normalization, establishing the precise degree to which suppression strength depends on similarity between visual stimulus components.NEW & NOTEWORTHY Neurons in the early visual cortex are subject to divisive normalization, but the feature-tuning aspect of this computation remains understudied, particularly in humans. We investigated orientation tuning of normalization in human early visual cortex using fMRI and estimated the bandwidth of the tuned normalization function across observers. Our findings provide a characterization of tuned normalization in early visual cortex that could help constrain models of divisive normalization in vision.
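A generic way to write the orientation-tuned divisive normalization that such bandwidth estimates constrain (a textbook-style form, not necessarily the model fit in this study) is:

R_i = \frac{\gamma \, E_i^{\,n}}{\sigma^{n} + \sum_{j} w(\Delta\theta_{ij})\, E_j^{\,n}}, \qquad w(\Delta\theta) = \exp\!\left(-\frac{\Delta\theta^{2}}{2\,\sigma_w^{2}}\right)

where E_j is the drive from stimulus component j, \Delta\theta_{ij} is the orientation difference between components i and j, and \sigma_w is the suppression bandwidth, which this study estimates at roughly 20 to 30 degrees in early visual areas.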
Affiliation(s)
- Michaela Klímová
- Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Ilona M Bloem
- Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
- Department of Psychology, New York University, New York City, New York
- Sam Ling
- Department of Psychological and Brain Sciences, Boston University, Boston, Massachusetts
- Center for Systems Neuroscience, Boston University, Boston, Massachusetts
|
194
|
Abstract
A central goal of neuroscience is to understand the representations formed by brain activity patterns and their connection to behaviour. The classic approach is to investigate how individual neurons encode stimuli and how their tuning determines the fidelity of the neural representation. Tuning analyses often use the Fisher information to characterize the sensitivity of neural responses to small changes of the stimulus. In recent decades, measurements of large populations of neurons have motivated a complementary approach, which focuses on the information available to linear decoders. The decodable information is captured by the geometry of the representational patterns in the multivariate response space. Here we review neural tuning and representational geometry with the goal of clarifying the relationship between them. The tuning induces the geometry, but different sets of tuned neurons can induce the same geometry. The geometry determines the Fisher information, the mutual information and the behavioural performance of an ideal observer in a range of psychophysical tasks. We argue that future studies can benefit from considering both tuning and geometry to understand neural codes and reveal the connections between stimuli, brain activity and behaviour.
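As a concrete anchor for the tuning-geometry link discussed here, the population Fisher information for tuning curves f(\theta) with stimulus-independent Gaussian noise covariance \Sigma takes the standard form:

J(\theta) = \mathbf{f}'(\theta)^{\mathsf{T}} \, \Sigma^{-1} \, \mathbf{f}'(\theta)

so sensitivity depends on how the tuning-curve derivatives (the local direction along the representational manifold) align with the noise covariance; this is one way in which the geometry of representational patterns, rather than any single neuron's tuning, determines the information available to a decoder.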
|
195
|
Ritchie JB, Lee Masson H, Bracci S, Op de Beeck HP. The unreliable influence of multivariate noise normalization on the reliability of neural dissimilarity. Neuroimage 2021; 245:118686. [PMID: 34728244 DOI: 10.1016/j.neuroimage.2021.118686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 10/21/2021] [Accepted: 10/26/2021] [Indexed: 10/19/2022] Open
Abstract
Representational similarity analysis (RSA) is a key element in the multivariate pattern analysis toolkit. The central construct of the method is the representational dissimilarity matrix (RDM), which can be generated for datasets from different modalities (neuroimaging, behavior, and computational models) and directly correlated in order to evaluate their second-order similarity. Given the inherent noisiness of neuroimaging signals it is important to evaluate the reliability of neuroimaging RDMs in order to determine whether these comparisons are meaningful. Recently, multivariate noise normalization (NNM) has been proposed as a widely applicable method for boosting signal estimates for RSA, regardless of choice of dissimilarity metrics, based on evidence that the analysis improves the within-subject reliability of RDMs (Guggenmos et al. 2018; Walther et al. 2016). We revisited this issue with three fMRI datasets and evaluated the impact of NNM on within- and between-subject reliability and RSA effect sizes using multiple dissimilarity metrics. We also assessed its impact across regions of interest from the same dataset, its interaction with spatial smoothing, and compared it to GLMdenoise, which has also been proposed as a method that improves signal estimates for RSA (Charest et al. 2018). We found that across these tests the impact of NNM was highly variable, as also seems to be the case for other analysis choices. Overall, we suggest being conservative before adding steps and complexities to the (pre)processing pipeline for RSA.
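The normalization step being evaluated is commonly implemented by whitening condition patterns with a regularized noise covariance estimated from GLM residuals before building the RDM. The sketch below shows that generic recipe (not the paper's exact pipeline; shapes and the shrinkage value are illustrative).

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(6)
n_conditions, n_voxels, n_resid = 12, 80, 200
patterns = rng.standard_normal((n_conditions, n_voxels))    # condition beta patterns
residuals = rng.standard_normal((n_resid, n_voxels))        # GLM residual time points

cov = np.cov(residuals, rowvar=False)
shrink = 0.4                                                 # simple shrinkage toward the diagonal
cov_reg = (1 - shrink) * cov + shrink * np.diag(np.diag(cov))
whitener = fractional_matrix_power(cov_reg, -0.5).real       # noise-whitening transform

rdm = squareform(pdist(patterns @ whitener, metric="correlation"))
# The paper asks whether this step reliably improves within- and between-subject
# reliability of such RDMs; in their datasets the benefit was inconsistent.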
Affiliation(s)
- J Brendan Ritchie
- Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, 3000 Leuven, Flemish Brabant, Belgium
- Haemy Lee Masson
- Department of Cognitive Science, Johns Hopkins University, Baltimore, USA
- Stefania Bracci
- Centre for Mind/Brain Sciences, University of Trento, Rovereto, Italy
- Hans P Op de Beeck
- Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, 3000 Leuven, Flemish Brabant, Belgium
|
196
|
Zheng L, Gao Z, McAvan AS, Isham EA, Ekstrom AD. Partially overlapping spatial environments trigger reinstatement in hippocampus and schema representations in prefrontal cortex. Nat Commun 2021; 12:6231. [PMID: 34711830 PMCID: PMC8553856 DOI: 10.1038/s41467-021-26560-w] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Accepted: 10/11/2021] [Indexed: 01/17/2023] Open
Abstract
When we remember a city that we have visited, we retrieve places related to finding our goal but also non-target locations within this environment. Yet, understanding how the human brain implements the neural computations underlying holistic retrieval remains unsolved, particularly for shared aspects of environments. Here, human participants learned and retrieved details from three partially overlapping environments while undergoing high-resolution functional magnetic resonance imaging (fMRI). Our findings show reinstatement of stores even when they are not related to a specific trial probe, providing evidence for holistic environmental retrieval. For stores shared between cities, we find evidence for pattern separation (representational orthogonalization) in hippocampal subfield CA2/3/DG and repulsion in CA1 (differentiation beyond orthogonalization). Additionally, our findings demonstrate that medial prefrontal cortex (mPFC) stores representations of the common spatial structure, termed schema, across environments. Together, our findings suggest how unique and common elements of multiple spatial environments are accessed computationally and neurally.
Affiliation(s)
- Li Zheng
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Zhiyao Gao
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK
- Andrew S. McAvan
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Eve A. Isham
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
- Arne D. Ekstrom
- Department of Psychology, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA; Evelyn McKnight Brain Institute, University of Arizona, 1503 E. University Blvd., Tucson, AZ 85721, USA
|
197
|
RUBubbles as a novel tool to study categorization learning. Behav Res Methods 2021; 54:1778-1793. [PMID: 34671917 PMCID: PMC9374653 DOI: 10.3758/s13428-021-01695-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/23/2021] [Indexed: 11/08/2022]
Abstract
Grouping objects into discrete categories affects how we perceive the world and represents a crucial element of cognition. Categorization is a widespread phenomenon that has been thoroughly studied. However, investigating categorization learning poses several requirements on the stimulus set in order to control which stimulus feature is used and to prevent mere stimulus-response associations or rote learning. Previous studies have used a wide variety of both naturalistic and artificial categories, the latter having several advantages such as better control and more direct manipulation of stimulus features. We developed a novel stimulus type to study categorization learning, which allows a high degree of customization at low computational costs and can thus be used to generate large stimulus sets very quickly. 'RUBubbles' are designed as visual artificial category stimuli that consist of an arbitrary number of colored spheres arranged in 3D space. They are generated using custom MATLAB code in which several stimulus parameters can be adjusted and controlled separately, such as number of spheres, position in 3D-space, sphere size, and color. Various algorithms for RUBubble generation can be combined with distinct behavioral training protocols to investigate different characteristics and strategies of categorization learning, such as prototype- vs. exemplar-based learning, different abstraction levels, or the categorization of a sensory continuum and category exceptions. All necessary MATLAB code is freely available as open-source code and can be customized or expanded depending on individual needs. RUBubble stimuli can be controlled purely programmatically or via a graphical user interface without MATLAB license or programming experience.
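The toolbox itself is MATLAB; as a language-neutral illustration of the kind of parametric stimulus it generates, the Python sketch below builds category exemplars as jittered copies of a prototype arrangement of colored spheres in 3D (names, parameters, and the jitter scheme are assumptions, not the RUBubbles API).

import numpy as np

rng = np.random.default_rng(7)

def make_prototype(n_spheres=8):
    """A category prototype: sphere centers, radii, and RGB colors."""
    return {
        "pos": rng.uniform(-1.0, 1.0, (n_spheres, 3)),
        "radius": rng.uniform(0.1, 0.3, n_spheres),
        "color": rng.uniform(0.0, 1.0, (n_spheres, 3)),
    }

def make_exemplar(prototype, jitter=0.15):
    """Sample an exemplar by perturbing the prototype's positions and colors."""
    return {
        "pos": prototype["pos"] + rng.normal(0.0, jitter, prototype["pos"].shape),
        "radius": prototype["radius"],
        "color": np.clip(prototype["color"] + rng.normal(0.0, jitter, prototype["color"].shape), 0.0, 1.0),
    }

category_a = make_prototype()
exemplars_a = [make_exemplar(category_a) for _ in range(20)]   # one training set
# Varying the jitter, or morphing between two prototypes, gives direct control over
# within-category similarity and category boundaries, supporting prototype- versus
# exemplar-based learning designs of the sort described in the abstract.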
|
198
|
Seijdel N, Scholte HS, de Haan EHF. Visual features drive the category-specific impairments on categorization tasks in a patient with object agnosia. Neuropsychologia 2021; 161:108017. [PMID: 34487736 DOI: 10.1016/j.neuropsychologia.2021.108017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 08/30/2021] [Accepted: 08/31/2021] [Indexed: 01/18/2023]
Abstract
Object and scene recognition both require mapping incoming sensory information onto existing conceptual knowledge about the world. A notable finding in brain-damaged patients is that they may show differentially impaired performance for specific categories, such as "living exemplars". While numerous patients with category-specific impairments have been reported, the explanations for these deficits remain controversial. In the current study, we investigate the ability of a brain-injured patient (MS) with a well-established category-specific impairment of semantic memory to perform two categorization experiments: 'natural' vs. 'manmade' scenes (experiment 1) and objects (experiment 2). Our findings show that the pattern of categorical impairment does not respect the natural versus manmade distinction. This suggests that the impairments may be better explained by differences in visual features than by category membership. Using Deep Convolutional Neural Networks (DCNNs) as 'artificial animal models', we explored this idea further. Results indicated that DCNNs with 'lesions' in higher-order layers showed similar response patterns, with decreased relative performance for manmade scenes (experiment 1) and natural objects (experiment 2), even though they have no semantic category knowledge apart from a mapping between pictures and labels. Collectively, these results suggest that the direction of category effects depends, at least in MS's case, largely on the degree of perceptual differentiation called for rather than on semantic knowledge.
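The abstract does not specify how the network 'lesions' were implemented. A common approach, shown here as a hedged sketch rather than the authors' code, is to silence a random fraction of feature channels in a late layer of a pretrained network (here a torchvision ResNet-18) via a forward hook and compare intact versus lesioned classification performance per stimulus category. The choice of layer, the lesion fraction, and the random input batch are placeholders.

```python
import torch
import torchvision.models as models

torch.manual_seed(0)

# Load a pretrained DCNN and put it in evaluation mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

lesion_fraction = 0.5
layer = model.layer4  # a late, "higher-order" stage of the network

def lesion_hook(module, inputs, output):
    # "Lesion": zero a random subset of feature channels
    # (re-sampled on each forward pass).
    n_channels = output.shape[1]
    n_lesioned = int(lesion_fraction * n_channels)
    idx = torch.randperm(n_channels)[:n_lesioned]
    output = output.clone()
    output[:, idx] = 0.0
    return output

handle = layer.register_forward_hook(lesion_hook)

# Placeholder batch; replace with preprocessed natural / manmade stimuli
# to compare category-wise accuracy drops.
images = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    lesioned_logits = model(images)
handle.remove()
with torch.no_grad():
    intact_logits = model(images)
print(lesioned_logits.shape, intact_logits.shape)
```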
Collapse
Affiliation(s)
- Noor Seijdel
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands.
| | - H Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands
| | - Edward H F de Haan
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands; Amsterdam Brain & Cognition (ABC) Center, University of Amsterdam, Amsterdam, the Netherlands
| |
Collapse
|
199
|
Kaanders P, Nili H, O'Reilly JX, Hunt L. Medial Frontal Cortex Activity Predicts Information Sampling in Economic Choice. J Neurosci 2021; 41:8403-8413. [PMID: 34413207 PMCID: PMC8496191 DOI: 10.1523/jneurosci.0392-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 06/17/2021] [Accepted: 08/07/2021] [Indexed: 01/05/2023] Open
Abstract
Decision-making requires agents to decide not only what to choose but also how much information to sample before committing to a choice. Previously established frameworks for economic choice argue for a deliberative process of evidence accumulation across time. These frameworks tacitly acknowledge a role for information sampling, in that decisions are only made once sufficient evidence is acquired, yet few experiments have explicitly placed information sampling under the participant's control. Here, we use fMRI to investigate the neural basis of information sampling in economic choice by allowing participants (n = 30, sex not recorded) to actively sample information in a multistep decision task. We show that medial frontal cortex (MFC) activity is predictive of further information sampling before choice. Choice difficulty (inverse value difference, keeping sensory difficulty constant) was also encoded in MFC, but this effect was explained away by the inclusion of information sampling as a coregressor in the general linear model. A distributed network of regions across the prefrontal cortex encoded key features of the sampled information at the time it was presented. We propose that MFC is an important controller of the extent to which information is gathered before committing to an economic choice. This role may explain why MFC activity has been associated with evidence accumulation in previous studies in which information sampling was an implicit rather than explicit feature of the decision. SIGNIFICANCE STATEMENT: The decisions we make are determined by the information we have sampled before committing to a choice. Accumulator frameworks of decision-making tacitly acknowledge the need to sample further information during the evidence accumulation process until a decision boundary is reached. However, relatively few studies explicitly place this decision to sample further information under the participant's control. In this fMRI study, we find that MFC activity is related to information sampling decisions in a multistep economic choice task. This suggests that an important role of evidence representations within MFC may be to guide adaptive sequential decisions to sample further information before committing to a final decision.
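The "explained away" result is a standard consequence of adding a correlated coregressor to a general linear model. The toy simulation below (not the authors' fMRI pipeline; all variables are simulated and hypothetically named) shows how an apparent choice-difficulty effect on an MFC-like signal can vanish once information sampling is included as a coregressor.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials = 200

# Simulated trial-wise regressors: choice difficulty and information
# sampling are correlated; the "MFC signal" here is driven by sampling only.
difficulty = rng.standard_normal(n_trials)
sampling = 0.8 * difficulty + 0.6 * rng.standard_normal(n_trials)
mfc_signal = 1.0 * sampling + rng.standard_normal(n_trials)

# GLM with difficulty alone: difficulty appears to be encoded.
m1 = sm.OLS(mfc_signal, sm.add_constant(difficulty)).fit()

# GLM with information sampling as a coregressor: the difficulty effect
# is absorbed by the sampling regressor.
X = sm.add_constant(np.column_stack([difficulty, sampling]))
m2 = sm.OLS(mfc_signal, X).fit()

print("difficulty alone:     beta = %.2f, p = %.3f" % (m1.params[1], m1.pvalues[1]))
print("with sampling:        beta = %.2f, p = %.3f" % (m2.params[1], m2.pvalues[1]))
print("sampling coregressor: beta = %.2f, p = %.3f" % (m2.params[2], m2.pvalues[2]))
```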
Collapse
Affiliation(s)
- Paula Kaanders
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, England
| | - Hamed Nili
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, England
| | - Jill X O'Reilly
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, England
| | - Laurence Hunt
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, England
- Department of Psychiatry, University of Oxford, Oxford OX3 7JX, England
| |
Collapse
|
200
|
Wang X, Bi Y. Idiosyncratic Tower of Babel: Individual Differences in Word-Meaning Representation Increase as Word Abstractness Increases. Psychol Sci 2021; 32:1617-1635. [PMID: 34546824 DOI: 10.1177/09567976211003877] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Humans primarily rely on language to communicate, on the basis of a shared understanding of the basic building blocks of communication: words. Do we mean the same things when we use the same words? Although cognitive neural research on semantics has revealed the common principles of word-meaning representation, the factors underlying the potential individual variations in word meanings are unknown. Here, we empirically characterized the intersubject consistency of 90 words across 20 adult subjects (10 female) using both behavioral measures (rating-based semantic-relationship patterns) and neuroimaging measures (word-evoked brain activity patterns). Across both the behavioral and neuroimaging experiments, we showed that the magnitude of individual disagreements on word meanings could be modeled on the basis of how much language or sensory experience is associated with a word and that this variation increases with word abstractness. Uncovering the cognitive and neural origins of word-meaning disagreements across individuals has implications for potential mechanisms to modulate such disagreements.
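One simple way to quantify the behavioral side of this result (a sketch under assumptions, not the authors' exact analysis) is to compute, for each word, the average between-subject correlation of its semantic-relatedness profile, then test whether that intersubject consistency declines with word abstractness. All data below are simulated placeholders.

```python
import numpy as np
from scipy.stats import spearmanr
from itertools import combinations

rng = np.random.default_rng(7)
n_subjects, n_words = 20, 90

# Hypothetical data: each subject's word-by-word semantic-relatedness ratings
# and a per-word abstractness score (both simulated here).
ratings = rng.standard_normal((n_subjects, n_words, n_words))
ratings = (ratings + ratings.transpose(0, 2, 1)) / 2  # symmetric rating matrices
abstractness = rng.uniform(1, 7, size=n_words)

# Intersubject consistency per word: mean pairwise correlation, across
# subjects, of that word's relatedness profile with all other words.
consistency = np.zeros(n_words)
for w in range(n_words):
    profiles = np.delete(ratings[:, w, :], w, axis=1)  # drop self-relatedness
    rs = [spearmanr(profiles[i], profiles[j])[0]
          for i, j in combinations(range(n_subjects), 2)]
    consistency[w] = np.mean(rs)

# Test whether consistency decreases as abstractness increases.
rho, p = spearmanr(abstractness, consistency)
print(f"rho = {rho:.3f}, p = {p:.3f}")
```

The same logic extends to the neuroimaging measure by replacing the rating profiles with word-evoked brain activity patterns.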
Collapse
Affiliation(s)
- Xiaosha Wang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University.,IDG/McGovern Institute for Brain Research, Beijing Normal University.,Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University
| | - Yanchao Bi
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University.,IDG/McGovern Institute for Brain Research, Beijing Normal University.,Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University.,Chinese Institute for Brain Research, Beijing, China
| |
Collapse
|