401
Spigler G, Wilson SP. Familiarization: A theory of repetition suppression predicts interference between overlapping cortical representations. PLoS One 2017; 12:e0179306. PMID: 28604787; PMCID: PMC5467900; DOI: 10.1371/journal.pone.0179306.
Abstract
Repetition suppression refers to a reduction in the cortical response to a novel stimulus that results from repeated presentation of the stimulus. We demonstrate repetition suppression in a well established computational model of cortical plasticity, according to which the relative strengths of lateral inhibitory interactions are modified by Hebbian learning. We present the model as an extension to the traditional account of repetition suppression offered by sharpening theory, which emphasises the contribution of afferent plasticity, by instead attributing the effect primarily to plasticity of intra-cortical circuitry. In support, repetition suppression is shown to emerge in simulations with plasticity enabled only in intra-cortical connections. We show in simulation how an extended 'inhibitory sharpening theory' can explain the disruption of repetition suppression reported in studies that include an intermediate phase of exposure to additional novel stimuli composed of features similar to those of the original stimulus. The model suggests a re-interpretation of repetition suppression as a manifestation of the process by which an initially distributed representation of a novel object becomes a more localist representation. Thus, inhibitory sharpening may constitute a more general process by which representation emerges from cortical re-organisation.
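The mechanism can be illustrated in a few lines. Below is a minimal sketch, not the authors' published model: lateral inhibitory weights between co-active units grow by a Hebbian rule, so the settled response to a repeated stimulus shrinks across presentations. All sizes, rates, and the relaxation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
W = np.zeros((n, n))                     # lateral inhibitory weights (assumed zero init)
x = np.abs(rng.normal(size=n))           # fixed afferent drive for one stimulus
eta = 0.002                              # assumed Hebbian learning rate

totals = []
for presentation in range(10):
    r = np.zeros(n)
    for _ in range(100):                 # relax to a steady response
        r = np.maximum(r + 0.1 * (x - W @ r - r), 0.0)
    totals.append(r.sum())
    W += eta * np.outer(r, r)            # co-active units come to inhibit each other more
    np.fill_diagonal(W, 0.0)

print([round(t, 2) for t in totals])     # total response declines with repetition
```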
Affiliation(s)
- Giacomo Spigler
- Sheffield Robotics, The University of Sheffield, Sheffield, United Kingdom
- Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Stuart P. Wilson
- Sheffield Robotics, The University of Sheffield, Sheffield, United Kingdom
- Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
402
Ceylan ME, Dönmez A, Ünsalver BÖ, Evrensel A, Kaya Yertutanol FD. The Soul, as an Uninhibited Mental Activity, is Reduced into Consciousness by Rules of Quantum Physics. Integr Psychol Behav Sci 2017; 51:582-597. PMID: 28597248; DOI: 10.1007/s12124-017-9395-5.
Abstract
This paper is an effort to describe, in neuroscientific terms, one of the most ambiguous concepts of the universe-the soul. Previous efforts to understand what the soul is and where it may exist have accepted the soul as a subjective and individual entity. We will make two additions to this view: (1) The soul is a result of uninhibited mental activity and lacks spatial and temporal information; (2) The soul is an undivided whole and, to become divided, the soul has to be reduced into unconscious and conscious mental events. This reduction process parallels the maturation of the frontal cortex and GABA becoming the main inhibitory neurotransmitter. As examples of uninhibited mental activity, we will discuss the perceptual differences of a newborn, individuals undergoing dissociation, and individuals induced by psychedelic drugs. Then, we will explain the similarities between the structure of the universe and the structure of the brain, and we propose that consideration of the rules of quantum physics is necessary to understand how the soul is reduced into consciousness.
Affiliation(s)
- Mehmet Emin Ceylan
- Departments of Psychology and Philosophy, Üsküdar University, İstanbul, Turkey
- Aslıhan Dönmez
- Department of Psychology, Üsküdar University, İstanbul, Turkey
- Barış Önen Ünsalver
- Vocational School of Health Services, Department of Medical Documentation and Secretariat, Üsküdar University, İstanbul, Turkey.
- Üsküdar Üniversitesi, Haluk Türksoy Sok. No: 14 Altunizade Mah. PK:34662, Üsküdar, İstanbul, Turkey.
- Alper Evrensel
- Department of Psychology, Üsküdar University, İstanbul, Turkey
403
Cichy RM, Khosla A, Pantazis D, Oliva A. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks. Neuroimage 2017; 153:346-358. PMID: 27039703; PMCID: PMC5542416; DOI: 10.1016/j.neuroimage.2016.03.063.
Abstract
Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved the emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain.
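The model-brain comparison follows standard representational similarity logic: correlate time-resolved MEG dissimilarity structures with a network layer's dissimilarity structure. A minimal sketch on random placeholder data; shapes and variable names are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_times, n_scenes = 120, 48
meg = rng.normal(size=(n_times, n_scenes, 306))   # sensor patterns per time point
layer = rng.normal(size=(n_scenes, 4096))         # DNN-layer activations per scene

model_rdm = pdist(layer, metric='correlation')    # scene dissimilarities in the layer
fit = np.array([
    spearmanr(pdist(meg[t], metric='correlation'), model_rdm).correlation
    for t in range(n_times)
])
# peaks in `fit` mark when the neural scene representation resembles this layer
```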
Affiliation(s)
- Radoslaw Martin Cichy
- Department of Education and Psychology, Free University Berlin, Berlin, Germany; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA.
- Aditya Khosla
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Aude Oliva
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
404
Stable and Dynamic Coding for Working Memory in Primate Prefrontal Cortex. J Neurosci 2017; 37:6503-6516. PMID: 28559375; PMCID: PMC5511881; DOI: 10.1523/jneurosci.3364-16.2017.
Abstract
Working memory (WM) provides the stability necessary for high-level cognition. Influential theories typically assume that WM depends on the persistence of stable neural representations, yet increasing evidence suggests that neural states are highly dynamic. Here we apply multivariate pattern analysis to explore the population dynamics in primate lateral prefrontal cortex (PFC) during three variants of the classic memory-guided saccade task (recorded in four animals). We observed the hallmark of dynamic population coding across key phases of a working memory task: sensory processing, memory encoding, and response execution. Throughout both these dynamic epochs and the memory delay period, however, the neural representational geometry remained stable. We identified two characteristics that jointly explain these dynamics: (1) time-varying changes in the subpopulation of neurons coding for task variables (i.e., dynamic subpopulations); and (2) time-varying selectivity within neurons (i.e., dynamic selectivity). These results indicate that even in a very simple memory-guided saccade task, PFC neurons display complex dynamics to support stable representations for WM. SIGNIFICANCE STATEMENT Flexible, intelligent behavior requires the maintenance and manipulation of incoming information over various time spans. For short time spans, this faculty is labeled “working memory” (WM). Dominant models propose that WM is maintained by stable, persistent patterns of neural activity in prefrontal cortex (PFC). However, recent evidence suggests that neural activity in PFC is dynamic, even while the contents of WM remain stably represented. Here, we explored the neural dynamics in PFC during a memory-guided saccade task. We found evidence for dynamic population coding in various task epochs, despite striking stability in the neural representational geometry of WM. Furthermore, we identified two distinct cellular mechanisms that contribute to dynamic population coding.
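The population-dynamics claim rests on the temporal-generalization method: train a decoder at one time point and test it at all others. Diagonal-only accuracy indicates dynamic coding; a broad square of above-chance accuracy indicates a stable code. A minimal sketch on placeholder data, not the authors' code; the LDA classifier and all shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_trials, n_times, n_units = 200, 40, 60
X = rng.normal(size=(n_trials, n_times, n_units))  # pseudo-population activity
y = rng.integers(0, 2, n_trials)                   # e.g. remembered saccade target

train = rng.random(n_trials) < 0.8                 # held-out trials for testing
gen = np.zeros((n_times, n_times))
for t_tr in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X[train, t_tr], y[train])
    for t_te in range(n_times):
        gen[t_tr, t_te] = clf.score(X[~train, t_te], y[~train])
# inspect `gen`: diagonal dominance = dynamic coding; square block = stable coding
```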
405
Horikawa T, Kamitani Y. Generic decoding of seen and imagined objects using hierarchical visual features. Nat Commun 2017; 8:15037. PMID: 28530228; PMCID: PMC5458127; DOI: 10.1038/ncomms15037.
Abstract
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
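The generic-decoding idea can be sketched compactly: regress feature values on fMRI patterns, then identify a category by comparing the predicted feature vector with candidate category features. A toy version on random placeholder data; ridge regression and all shapes here are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
fmri_tr = rng.normal(size=(150, 500))          # training trials x voxels
feat_tr = rng.normal(size=(150, 1000))         # matching CNN features of the seen images

decoder = Ridge(alpha=100.0).fit(fmri_tr, feat_tr)
pred = decoder.predict(rng.normal(size=(1, 500))).ravel()   # one seen/imagined trial

cat_feats = rng.normal(size=(50, 1000))        # mean features of 50 candidate categories
sims = cat_feats @ pred / (np.linalg.norm(cat_feats, axis=1) * np.linalg.norm(pred))
print("identified category:", sims.argmax())   # need not appear in decoder training
```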
Affiliation(s)
- Tomoyasu Horikawa
- ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan
- Yukiyasu Kamitani
- ATR Computational Neuroscience Laboratories, 2-2-2 Hikaridai, Seika, Soraku, Kyoto 619-0288, Japan; Graduate School of Informatics, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan
406
Development of visual category selectivity in ventral visual cortex does not require visual experience. Proc Natl Acad Sci U S A 2017; 114:E4501-E4510. PMID: 28507127; DOI: 10.1073/pnas.1612862114.
Abstract
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience.
407
Wegrzyn M, Vogt M, Kireclioglu B, Schneider J, Kissler J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS One 2017; 12:e0177239. PMID: 28493921; PMCID: PMC5426715; DOI: 10.1371/journal.pone.0177239.
Abstract
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, at a fine-grained level, which physical features observers rely on most when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed us to group the expressions in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
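One simple way to score each tile's diagnostic value is to compare how often it was visible on correct versus incorrect trials. The sketch below uses random placeholder data and an assumed 6x8 tile layout; the study's exact computation may differ.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_tiles = 500, 48
visible = rng.random((n_trials, n_tiles)) < 0.4   # tiles uncovered when observer responded
correct = rng.random(n_trials) < 0.6              # whether the emotion label was right

# diagnostic value: excess visibility of each tile on correct vs incorrect trials
diagnostic = visible[correct].mean(axis=0) - visible[~correct].mean(axis=0)
print(diagnostic.reshape(6, 8).round(2))          # map scores back onto the tile grid
```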
Affiliation(s)
- Martin Wegrzyn
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Maria Vogt
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Julia Schneider
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
408
Nothing more than a pair of curvatures: A common mechanism for the detection of both radial and non-radial frequency patterns. Vision Res 2017; 134:18-25. DOI: 10.1016/j.visres.2017.03.005.
409
Distributed representations of action sequences in anterior cingulate cortex: A recurrent neural network approach. Psychon Bull Rev 2017; 25:302-321. DOI: 10.3758/s13423-017-1280-1.
410
Grootswagers T, Wardle SG, Carlson TA. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data. J Cogn Neurosci 2017; 29:677-697. DOI: 10.1162/jocn_a_01068.
Abstract
Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain–computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to “decode” different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
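The basic pipeline the tutorial describes, decoding at each time point with cross-validation, can be sketched in a few lines on random placeholder epochs. The classifier, scaling, and cross-validation choices made below are exactly the kind of options the tutorial compares; none of them is prescribed by the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_trials, n_chan, n_times = 240, 64, 50
X = rng.normal(size=(n_trials, n_chan, n_times))  # MEG/EEG epochs (placeholder)
y = np.repeat([0, 1], n_trials // 2)              # two stimulus classes

# decode class from the channel pattern at each time point separately
acc = np.array([
    cross_val_score(make_pipeline(StandardScaler(), LinearSVC()),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# `acc` traces decodability over time; compare against the 50% chance level
```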
Affiliation(s)
- Tijl Grootswagers
- Macquarie University, Sydney, Australia
- ARC Centre of Excellence in Cognition and its Disorders
- University of Sydney
- Susan G. Wardle
- Macquarie University, Sydney, Australia
- ARC Centre of Excellence in Cognition and its Disorders
- Thomas A. Carlson
- ARC Centre of Excellence in Cognition and its Disorders
- University of Sydney
411
Diedrichsen J, Kriegeskorte N. Representational models: A common framework for understanding encoding, pattern-component, and representational-similarity analysis. PLoS Comput Biol 2017; 13:e1005508. PMID: 28437426; PMCID: PMC5421820; DOI: 10.1371/journal.pcbi.1005508.
Abstract
Representational models specify how activity patterns in populations of neurons (or, more generally, in multivariate brain-activity measurements) relate to sensory stimuli, motor responses, or cognitive processes. In an experimental context, representational models can be defined as hypotheses about the distribution of activity profiles across experimental conditions. Currently, three different methods are being used to test such hypotheses: encoding analysis, pattern component modeling (PCM), and representational similarity analysis (RSA). Here we develop a common mathematical framework for understanding the relationship of these three methods, which share one core commonality: all three evaluate the second moment of the distribution of activity profiles, which determines the representational geometry, and thus how well any feature can be decoded from population activity. Using simulated data for three different experimental designs, we compare the power of the methods to adjudicate between competing representational models. PCM implements a likelihood-ratio test and therefore provides the most powerful test if its assumptions hold. However, the other two approaches-when conducted appropriately-can perform similarly. In encoding analysis, the linear model needs to be appropriately regularized, which effectively imposes a prior on the activity profiles. With such a prior, an encoding model specifies a well-defined distribution of activity profiles. In RSA, the unequal variances and statistical dependencies of the dissimilarity estimates need to be taken into account to reach near-optimal power in inference. The three methods render different aspects of the information explicit (e.g. single-response tuning in encoding analysis and population-response representational dissimilarity in RSA) and have specific advantages in terms of computational demands, ease of use, and extensibility. The three methods are properly construed as complementary components of a single data-analytical toolkit for understanding neural representations on the basis of multivariate brain-activity data.
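The shared core quantity is easy to state: for a condition-by-channel matrix of activity estimates U with P channels, the second moment is G = U U' / P, and squared representational distances follow as d_ij = G_ii + G_jj - 2 G_ij. A minimal sketch on placeholder data; shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n_cond, n_vox = 8, 200
U = rng.normal(size=(n_cond, n_vox))      # condition-by-voxel activity estimates

G = U @ U.T / n_vox                       # second moment of the activity profiles
# squared Euclidean representational distances follow directly from G,
# which is why encoding, PCM, and RSA test the same representational geometry
d2 = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G
```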
Affiliation(s)
- Jörn Diedrichsen
- Brain and Mind Institute, Department for Computer Science, Department for Statistical and Actuarial Science, Western University, London, Canada
412
Reithler J, Peters JC, Goebel R. Characterizing object- and position-dependent response profiles to uni- and bilateral stimulus configurations in human higher visual cortex: a 7T fMRI study. Neuroimage 2017; 152:551-562. PMID: 28336425; DOI: 10.1016/j.neuroimage.2017.03.038.
Abstract
Visual scenes are initially processed via segregated neural pathways dedicated to either of the two visual hemifields. Although higher-order visual areas are generally believed to utilize invariant object representations (abstracted away from features such as stimulus position), recent findings suggest they retain more spatial information than previously thought. Here, we assessed the nature of such higher-order object representations in human cortex using high-resolution fMRI at 7T, supported by corroborative 3T data. We show that multi-voxel activation patterns in both the contra- and ipsilateral hemisphere can be exploited to successfully classify the object category of unilaterally presented stimuli. Moreover, robustly identified rank order-based response profiles demonstrated a strong contralateral bias which frequently outweighed object category preferences. Finally, we contrasted different combinatorial operations to predict the responses during bilateral stimulation conditions based on responses to their constituent unilateral elements. Results favored a max operation predominantly reflecting the contralateral stimuli. The current findings extend previous work by showing that configuration-dependent modulations in higher-order visual cortex responses as observed in single unit activity have a counterpart in human neural population coding. They furthermore corroborate the emerging view that position coding is a fundamental functional characteristic of ventral visual stream processing.
Affiliation(s)
- Joel Reithler
- Cognitive Neuroscience Department, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Maastricht Brain Imaging Center (M-BIC), Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands.
- Judith C Peters
- Cognitive Neuroscience Department, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Maastricht Brain Imaging Center (M-BIC), Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Rainer Goebel
- Cognitive Neuroscience Department, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Maastricht Brain Imaging Center (M-BIC), Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
413
Ince RA, Giordano BL, Kayser C, Rousselet GA, Gross J, Schyns PG. A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp 2017; 38:1541-1573. PMID: 27860095; PMCID: PMC5324576; DOI: 10.1002/hbm.23471.
Abstract
We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc.
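The estimator's recipe, rank-transform each variable to uniform, Gaussianize via the inverse normal CDF, then use the closed-form Gaussian entropy, can be sketched for the one-dimensional case as follows. The published toolbox adds multivariate support and bias correction; this is only an illustrative reduction.

```python
import numpy as np
from scipy.special import ndtri          # inverse standard normal CDF
from scipy.stats import rankdata

def copnorm(x):
    """Copula-normalization: ranks -> uniform -> standard normal."""
    return ndtri(rankdata(x) / (len(x) + 1))

def gcmi_1d(x, y):
    """Lower-bound MI estimate (bits) between two 1-D variables via the Gaussian copula."""
    rho = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1 - rho ** 2)

rng = np.random.default_rng(7)
stim = rng.normal(size=2000)
resp = stim + rng.normal(scale=2.0, size=2000)   # noisy "neural" response (placeholder)
print(gcmi_1d(stim, resp))                       # effect size on a common meaningful scale
```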
Affiliation(s)
- Robin A.A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Bruno L. Giordano
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
414
Khalighinejad B, Cruzatto da Silva G, Mesgarani N. Dynamic Encoding of Acoustic Features in Neural Responses to Continuous Speech. J Neurosci 2017; 37:2176-2185. PMID: 28119400; PMCID: PMC5338759; DOI: 10.1523/jneurosci.2383-16.2017.
Abstract
Humans are unique in their ability to communicate using spoken language. However, it remains unclear how the speech signal is transformed and represented in the brain at different stages of the auditory pathway. In this study, we characterized electroencephalography responses to continuous speech by obtaining the time-locked responses to phoneme instances (phoneme-related potential). We showed that responses to different phoneme categories are organized by phonetic features. We found that each instance of a phoneme in continuous speech produces multiple distinguishable neural responses occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Comparing the patterns of phoneme similarity in the neural responses and the acoustic signals confirms a repetitive appearance of acoustic distinctions of phonemes in the neural data. Analysis of the phonetic and speaker information in neural activations revealed that different time intervals jointly encode the acoustic similarity of both phonetic and speaker categories. These findings provide evidence for a dynamic neural transformation of low-level speech features as they propagate along the auditory pathway, and form an empirical framework to study the representational changes in learning, attention, and speech disorders. SIGNIFICANCE STATEMENT We characterized the properties of evoked neural responses to phoneme instances in continuous speech. We show that each instance of a phoneme in continuous speech produces several observable neural responses at different times occurring as early as 50 ms and as late as 400 ms after the phoneme onset. Each temporal event explicitly encodes the acoustic similarity of phonemes, and linguistic and nonlinguistic information are best represented at different time intervals. Finally, we show a joint encoding of phonetic and speaker information, where the neural representation of speakers is dependent on phoneme category. These findings provide compelling new evidence for dynamic processing of speech sounds in the auditory pathway.
Affiliation(s)
- Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, New York 10027
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
415
Contini EW, Wardle SG, Carlson TA. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions. Neuropsychologia 2017; 105:165-176. PMID: 28215698; DOI: 10.1016/j.neuropsychologia.2017.02.013.
Abstract
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
Affiliation(s)
- Erika W Contini
- Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia.
- Susan G Wardle
- Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia
- Thomas A Carlson
- Department of Cognitive Science, Macquarie University, Sydney, Australia; ARC Centre of Excellence in Cognition and its Disorders and Perception in Action Research Centre, Macquarie University, Australia; School of Psychology, University of Sydney, Australia
416
Neural Representations of Observed Actions Generalize across Static and Dynamic Visual Input. J Neurosci 2017; 37:3056-3071. PMID: 28209734; DOI: 10.1523/jneurosci.2496-16.2017.
Abstract
People interact with entities in the environment in distinct and categorizable ways (e.g., kicking is making contact with foot). We can recognize these action categories across variations in actors, objects, and settings; moreover, we can recognize them from both dynamic and static visual input. However, the neural systems that support action recognition across these perceptual differences are unclear. Here, we used multivoxel pattern analysis of fMRI data to identify brain regions that support visual action categorization in a format-independent way. Human participants were scanned while viewing eight categories of interactions (e.g., pulling) depicted in two visual formats: (1) visually controlled videos of two interacting actors and (2) visually varied photographs selected from the internet involving different actors, objects, and settings. Action category was decodable across visual formats in bilateral inferior parietal, bilateral occipitotemporal, left premotor, and left middle frontal cortex. In most of these regions, the representational similarity of action categories was consistent across subjects and visual formats, a property that can contribute to a common understanding of actions among individuals. These results suggest that the identified brain regions support action category codes that are important for action recognition and action understanding. SIGNIFICANCE STATEMENT Humans tend to interpret the observed actions of others in terms of categories that are invariant to incidental features: whether a girl pushes a boy or a button and whether we see it in real-time or in a single snapshot, it is still pushing. Here, we investigated the brain systems that facilitate the visual recognition of these action categories across such differences. Using fMRI, we identified several areas of parietal, occipitotemporal, and frontal cortex that exhibit action category codes that are similar across viewing of dynamic videos and still photographs. Our results provide strong evidence for the involvement of these brain regions in recognizing the way that people interact physically with objects and other people.
417
Khaligh-Razavi SM, Henriksson L, Kay K, Kriegeskorte N. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models. J Math Psychol 2017; 76:184-197. PMID: 28298702; PMCID: PMC5341758; DOI: 10.1016/j.jmp.2016.10.007.
Abstract
Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
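The fixed-versus-mixed contrast reduces to whether a linear re-weighting of the model features is fitted before comparing representational geometries. A compact sketch on random placeholder data, with ridge regression standing in for the paper's regularized fitting; all names and shapes are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
F_tr, F_te = rng.normal(size=(100, 300)), rng.normal(size=(50, 300))   # model features
V_tr, V_te = rng.normal(size=(100, 150)), rng.normal(size=(50, 150))   # voxel responses

# fixed RSA: test the feature space as-is against the brain's geometry
fixed_r = spearmanr(pdist(F_te, 'correlation'), pdist(V_te, 'correlation')).correlation

# mixed RSA: fit one weight per feature and voxel on training images, then
# compare the predicted representational geometry on held-out images
mixed = Ridge(alpha=1000.0).fit(F_tr, V_tr)
mixed_r = spearmanr(pdist(mixed.predict(F_te), 'correlation'),
                    pdist(V_te, 'correlation')).correlation
print(fixed_r, mixed_r)
```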
Affiliation(s)
- Seyed-Mahdi Khaligh-Razavi
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Computer Science & Artificial intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Linda Henriksson
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Kendrick Kay
- Department of Psychology, Washington University in St. Louis, St. Louis, MO, USA
418
Park J, Park S. Conjoint representation of texture ensemble and location in the parahippocampal place area. J Neurophysiol 2017; 117:1595-1607. PMID: 28123006; DOI: 10.1152/jn.00338.2016.
Abstract
Texture provides crucial information about the category or identity of a scene. Nonetheless, not much is known about how the texture information in a scene is represented in the brain. Previous studies have shown that the parahippocampal place area (PPA), a scene-selective part of visual cortex, responds to simple patches of texture ensemble. However, in natural scenes textures exist in spatial context within a scene. Here we tested two hypotheses that make different predictions on how textures within a scene context are represented in the PPA. The Texture-Only hypothesis suggests that the PPA represents texture ensemble (i.e., the kind of texture) as is, irrespective of its location in the scene. On the other hand, the Texture and Location hypothesis suggests that the PPA represents texture and its location within a scene (e.g., ceiling or wall) conjointly. We tested these two hypotheses across two experiments, using different but complementary methods. In experiment 1, by using multivoxel pattern analysis (MVPA) and representational similarity analysis, we found that the representational similarity of the PPA activation patterns was significantly explained by the Texture-Only hypothesis but not by the Texture and Location hypothesis. In experiment 2, using a repetition suppression paradigm, we found no repetition suppression for scenes that had the same texture ensemble but differed in location (supporting the Texture and Location hypothesis). On the basis of these results, we propose a framework that reconciles contrasting results from MVPA and repetition suppression and draw conclusions about how texture is represented in the PPA. NEW & NOTEWORTHY This study investigates how the parahippocampal place area (PPA) represents texture information within a scene context. We claim that texture is represented in the PPA at multiple levels: the texture ensemble information at the across-voxel level and the conjoint information of texture and its location at the within-voxel level. The study proposes a working hypothesis that reconciles contrasting results from multivoxel pattern analysis and repetition suppression, suggesting that the methods are complementary to each other but not necessarily interchangeable.
Affiliation(s)
- Jeongho Park
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland
- Soojin Park
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland; Department of Psychology, Yonsei University, Seoul, South Korea
419
Cichy RM, Teng S. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160108. PMID: 28044019; PMCID: PMC5206276; DOI: 10.1098/rstb.2016.0108.
Abstract
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’.
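Pillar (i), cross-classification, has a particularly simple skeleton: train a decoder on one modality or condition and test it on another; above-chance transfer implies a representation tolerant to the change in input. A sketch on placeholder data; the classifier choice and shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(9)
X_vis, y_vis = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)  # visual trials
X_aud, y_aud = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)  # auditory trials

clf = LinearSVC().fit(X_vis, y_vis)                 # train in one modality
print("cross-decoding accuracy:", clf.score(X_aud, y_aud))  # test in the other
```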
Affiliation(s)
- Santani Teng
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
420
Cohen MA, Alvarez GA, Nakayama K, Konkle T. Visual search for object categories is predicted by the representational architecture of high-level visual cortex. J Neurophysiol 2017; 117:388-402. PMID: 27832600; PMCID: PMC5236111; DOI: 10.1152/jn.00569.2016.
Abstract
Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system. NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
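The brain/behavior test amounts to correlating a neural dissimilarity structure with pairwise search performance. A sketch on random placeholder data; the study's prediction is that the more dissimilar two categories' neural patterns, the faster the search, so the correlation below is signed accordingly.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(10)
n_cat = 8                                       # 8 categories -> 28 pairwise conditions
neural = rng.normal(size=(n_cat, 100))          # per-category patterns in one sector
search_rt = rng.normal(size=n_cat * (n_cat - 1) // 2)   # mean RT per category pair

neural_dissim = pdist(neural, 'correlation')    # pairwise neural dissimilarity
print(spearmanr(neural_dissim, -search_rt).correlation)  # expect a positive value
```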
Affiliation(s)
- Michael A Cohen
- McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- George A Alvarez
- Department of Psychology, Harvard University, Cambridge, Massachusetts
- Ken Nakayama
- Department of Psychology, Harvard University, Cambridge, Massachusetts
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, Massachusetts
421
Idiosyncratic Patterns of Representational Similarity in Prefrontal Cortex Predict Attentional Performance. J Neurosci 2016; 37:1257-1268. PMID: 28028199; DOI: 10.1523/jneurosci.1407-16.2016.
Abstract
The efficiency of finding an object in a crowded environment depends largely on the similarity of nontargets to the search target. Models of attention theorize that the similarity is determined by representations stored within an "attentional template" held in working memory. However, the degree to which the contents of the attentional template are individually unique and where those idiosyncratic representations are encoded in the brain are unknown. We investigated this problem using representational similarity analysis of human fMRI data to measure the common and idiosyncratic representations of famous face morphs during an identity categorization task; data from the categorization task were then used to predict performance on a separate identity search task. We hypothesized that the idiosyncratic categorical representations of the continuous face morphs would predict their distractibility when searching for each target identity. The results identified that patterns of activation in the lateral prefrontal cortex (LPFC) as well as in face-selective areas in the ventral temporal cortex were highly correlated with the patterns of behavioral categorization of face morphs and search performance that were common across subjects. However, the individually unique components of the categorization behavior were reliably decoded only in right LPFC. Moreover, the neural pattern in right LPFC successfully predicted idiosyncratic variability in search performance, such that reaction times were longer when distractors had a higher probability of being categorized as the target identity. These results suggest that the prefrontal cortex encodes individually unique components of categorical representations that are also present in attentional templates for target search. SIGNIFICANCE STATEMENT Everyone's perception of the world is uniquely shaped by personal experiences and preferences. Using functional MRI, we show that individual differences in the categorization of face morphs between two identities could be decoded from the prefrontal cortex and the ventral temporal cortex. Moreover, the individually unique representations in prefrontal cortex predicted idiosyncratic variability in attentional performance when looking for each identity in the "crowd" of another morphed face in a separate search task. Our results reveal that the representation of task-related information in prefrontal cortex is individually unique and preserved across categorization and search performance. This demonstrates the possibility of predicting individual behaviors across tasks with patterns of brain activity.
422
Memory consolidation reconfigures neural pathways involved in the suppression of emotional memories. Nat Commun 2016; 7:13375. PMID: 27898050; PMCID: PMC5141344; DOI: 10.1038/ncomms13375.
Abstract
The ability to suppress unwanted emotional memories is crucial for human mental health. Through consolidation over time, emotional memories often become resistant to change. However, how consolidation impacts the effectiveness of emotional memory suppression is still unknown. Using event-related fMRI while concurrently recording skin conductance, we investigated the neurobiological processes underlying the suppression of aversive memories before and after overnight consolidation. Here we report that consolidated aversive memories retain their emotional reactivity and become more resistant to suppression. Suppression of consolidated memories involves higher prefrontal engagement, and less concomitant hippocampal and amygdala disengagement. In parallel, we show a shift away from hippocampal-dependent representational patterns to distributed neocortical representational patterns in the suppression of aversive memories after consolidation. These findings demonstrate rapid changes in emotional memory organization with overnight consolidation, and suggest possible neurobiological bases underlying the resistance to suppression of emotional memories in affective disorders. As memories consolidate over time, they become resistant to change, though how this impacts the volitional suppression of memories is not known. Liu and colleagues show that, after overnight consolidation, aversive memories exhibit distributed prefrontal representations and are harder to suppress.
423
Abstract
We often engage in two concurrent but unrelated activities, such as driving on a quiet road while listening to the radio. When we do so, does our brain split into functionally distinct entities? To address this question, we imaged brain activity with fMRI in experienced drivers engaged in a driving simulator while listening either to global positioning system instructions (integrated task) or to a radio show (split task). We found that, compared with the integrated task, the split task was characterized by reduced multivariate functional connectivity between the driving and listening networks. Furthermore, the integrated information content of the two networks, predicting their joint dynamics above and beyond their independent dynamics, was high in the integrated task and zero in the split task. Finally, individual subjects' ability to switch between high and low information integration predicted their driving performance across integrated and split tasks. This study raises the possibility that under certain conditions of daily life, a single brain may support two independent functional streams, a "functional split brain" similar to what is observed in patients with an anatomical split.
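The "integrated information content" idea, that the joint past of the two networks predicts their joint future above and beyond their independent pasts, can be loosely sketched as an R-squared comparison. This is an illustrative stand-in on placeholder data, not the paper's exact measure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(11)
T = 2000
x = rng.normal(size=(T, 5))    # activity of the "driving" network (placeholder)
y = rng.normal(size=(T, 5))    # activity of the "listening" network (placeholder)
past = np.hstack([x[:-1], y[:-1]])
future = np.hstack([x[1:], y[1:]])

joint_r2 = LinearRegression().fit(past, future).score(past, future)
split_r2 = np.mean([
    LinearRegression().fit(x[:-1], x[1:]).score(x[:-1], x[1:]),
    LinearRegression().fit(y[:-1], y[1:]).score(y[:-1], y[1:]),
])
print(joint_r2 - split_r2)     # near zero would suggest a "functional split"
```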
424
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. PMID: 27708566; PMCID: PMC5030253; DOI: 10.3389/fnhum.2016.00435.
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. On first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex on speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
Collapse
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
425
|
Kheradpisheh SR, Ghodrati M, Ganjtabesh M, Masquelier T. Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition. Sci Rep 2016; 6:32672. [PMID: 27601096 PMCID: PMC5013454 DOI: 10.1038/srep32672] [Citation(s) in RCA: 73] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2015] [Accepted: 08/11/2016] [Indexed: 11/08/2022] Open
Abstract
Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstract features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.
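One standard way to make such model-human comparisons concrete is to correlate representational dissimilarity matrices (RDMs). The sketch below is a generic illustration with simulated features, not the benchmark used in the paper; `layer_features` and `human_rdm` are assumed placeholders.

```python
# Hedged sketch: compare a DCNN layer's representational geometry to a
# human-derived RDM (e.g., from confusion data). Inputs are simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images = 40
layer_features = rng.standard_normal((n_images, 512))  # one row per image
human_rdm = pdist(rng.standard_normal((n_images, 5)))  # placeholder judgments

model_rdm = pdist(layer_features, metric="correlation")  # 1 - Pearson r
rho, p = spearmanr(model_rdm, human_rdm)
print(f"model-human RDM correlation: rho={rho:.2f}, p={p:.3f}")
```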
Collapse
Affiliation(s)
- Saeed Reza Kheradpisheh
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
- CERCO UMR 5549, CNRS – Université de Toulouse, F-31300, France
| | - Masoud Ghodrati
- Department of Physiology, Monash University, Clayton, Australia 3800
- Neuroscience Program, Biomedicine Discovery Institute, Monash University
| | - Mohammad Ganjtabesh
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
| | - Timothée Masquelier
- CERCO UMR 5549, CNRS – Université de Toulouse, F-31300, France
- INSERM, U968, Paris, F-75012, France
- Sorbonne Universités, UPMC Univ Paris 06, UMR-S 968, Institut de la Vision, Paris, F-75012, France
- CNRS, UMR-7210, Paris, F-75012, France
| |
Collapse
|
426
|
Bellmund JL, Deuker L, Navarro Schröder T, Doeller CF. Grid-cell representations in mental simulation. eLife 2016; 5:e17089. [PMID: 27572056 PMCID: PMC5005038 DOI: 10.7554/elife.17089] [Citation(s) in RCA: 94] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2016] [Accepted: 07/27/2016] [Indexed: 01/10/2023] Open
Abstract
Anticipating the future is a key motif of the brain, possibly supported by mental simulation of upcoming events. Rodent single-cell recordings suggest the ability of spatially tuned cells to represent subsequent locations. Grid-like representations have been observed in the human entorhinal cortex during virtual and imagined navigation. However, it remains unknown whether grid-like representations contribute to mental simulation in the absence of imagined movement. Participants imagined directions between building locations in a large-scale virtual-reality city while undergoing fMRI, without re-exposure to the environment. Using multi-voxel pattern analysis, we provide evidence for representations of absolute imagined direction at a resolution of 30° in the parahippocampal gyrus, consistent with the head-direction system. Furthermore, we capitalize on the six-fold rotational symmetry of grid-cell firing to demonstrate a 60° periodic pattern-similarity structure in the entorhinal cortex. Our findings imply a role of the entorhinal grid-system in mental simulation and future thinking beyond spatial navigation. DOI:http://dx.doi.org/10.7554/eLife.17089.001
eLife digest: Recordings of brain activity in moving rats have found neurons that fire when the rat is at specific locations. These neurons are known as grid cells because their activity produces a grid-like pattern. A separate group of neurons, called head direction cells, represents the rat’s facing direction. Functional magnetic resonance imaging (fMRI) studies that have tracked brain activity in humans as they navigate virtual environments have found similar grid-like and direction-related responses. A recent study showed grid-like responses even if the people being studied just imagined moving around an arena while lying still. Theoretical work suggests that spatially tuned cells might generally be important for our ability to imagine and simulate future events. However, it is not clear whether these location- and direction-responsive cells are active when people do not visualize themselves moving. Bellmund et al. used fMRI to track brain activity in volunteers as they imagined different views in a virtual reality city. Before the fMRI experiment, the volunteers completed extensive training where they learned the layout of the city and the names of its buildings. Then, during the fMRI experiment, the volunteers had to imagine themselves standing in front of certain buildings and facing different directions. Crucially, they did not imagine themselves moving between these buildings. By using representational similarity analysis, which compares patterns of brain activity, Bellmund et al. could distinguish between the directions the volunteers were imagining. Activity patterns in the parahippocampal gyrus (a brain region known to be important for navigation) were more similar when participants were imagining similar directions. The fMRI results also show grid-like responses in a brain area called entorhinal cortex, which is known to contain grid cells. While participants were imagining, this region exhibited activity patterns with a six-fold symmetry, as Bellmund et al. predicted from the characteristic firing patterns of grid cells. The findings presented by Bellmund et al. provide evidence that suggests that grid cells are involved in planning how to navigate, and so support previous theoretical assumptions. The computations of these cells might contribute to other kinds of thinking too, such as remembering the past or imagining future events.
DOI:http://dx.doi.org/10.7554/eLife.17089.002
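The six-fold logic can be illustrated in a few lines: pattern similarity between imagined directions should be higher when the angular difference between them is a multiple of 60°. The sketch below uses simulated patterns; the actual study's estimation of grid orientation and its statistics are more involved.

```python
# Minimal sketch, assuming one multi-voxel pattern per sampled imagined
# direction; patterns are simulated placeholders here.
import numpy as np

directions = np.arange(0, 360, 30)                      # sampled directions
rng = np.random.default_rng(2)
patterns = rng.standard_normal((len(directions), 200))  # placeholder voxels

sim = np.corrcoef(patterns)                             # pattern similarity
aligned, misaligned = [], []
for i in range(len(directions)):
    for j in range(i + 1, len(directions)):
        diff = abs(directions[i] - directions[j]) % 360
        diff = min(diff, 360 - diff)                    # angular difference
        (aligned if diff % 60 == 0 else misaligned).append(sim[i, j])

# a grid-like code predicts aligned > misaligned pattern similarity
print(np.mean(aligned) - np.mean(misaligned))
```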
Collapse
Affiliation(s)
- Jacob LS Bellmund
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Lorena Deuker
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Bochum, Germany
| | - Tobias Navarro Schröder
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| | - Christian F Doeller
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Kavli Institute for Systems Neuroscience and Centre for Neural Computation, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
427
|
Abstract
Recent advances in neuroscience have given us unprecedented insight into the neural mechanisms of false memory, showing that artificial memories can be inserted into the memory cells of the hippocampus in a way that is indistinguishable from true memories. However, this alone is not enough to explain how false memories can arise naturally in the course of our daily lives. Cognitive psychology has demonstrated that many instances of false memory, both in the laboratory and the real world, can be attributed to semantic interference. Whereas previous studies have found that a diverse set of regions show some involvement in semantic false memory, none have revealed the nature of the semantic representations underpinning the phenomenon. Here we use fMRI with representational similarity analysis to search for a neural code consistent with semantic false memory. We find clear evidence that false memories emerge from a similarity-based neural code in the temporal pole, a region that has been called the "semantic hub" of the brain. We further show that each individual has a partially unique semantic code within the temporal pole, and this unique code can predict idiosyncratic patterns of memory errors. Finally, we show that the same neural code can also predict variation in true-memory performance, consistent with an adaptive perspective on false memory. Taken together, our findings reveal the underlying structure of neural representations of semantic knowledge, and how this semantic structure can both enhance and distort our memories.
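As a generic illustration of the RSA logic used here, the sketch below tests whether a region's pattern geometry aligns with a semantic feature space; simulated word patterns and features stand in for the real data, and an item's mean similarity to the studied set is shown as one plausible false-memory predictor, not the paper's exact model.

```python
# Hedged RSA sketch with simulated placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_words = 30
voxel_patterns = rng.standard_normal((n_words, 150))    # word-evoked patterns
semantic_features = rng.standard_normal((n_words, 50))  # e.g., word embeddings

neural_rdm = pdist(voxel_patterns, metric="correlation")
semantic_rdm = pdist(semantic_features, metric="cosine")
rho, _ = spearmanr(neural_rdm, semantic_rdm)
print(f"neural-semantic RDM alignment: rho={rho:.2f}")

# a lure's false-alarm risk could be modelled as its mean semantic
# similarity to the studied items (similarity = 1 - distance)
similarity = 1 - squareform(semantic_rdm)
lure_risk = similarity[0, 1:].mean()   # word 0 treated as the lure
```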
Collapse
|
428
|
Oosterhof NN, Connolly AC, Haxby JV. CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave. Front Neuroinform 2016; 10:27. [PMID: 27499741 PMCID: PMC4956688 DOI: 10.3389/fninf.2016.00027] [Citation(s) in RCA: 387] [Impact Index Per Article: 43.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2016] [Accepted: 07/04/2016] [Indexed: 11/23/2022] Open
Abstract
Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance imaging (fMRI) data, and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages, which treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method. These can be used to address both data-driven and hypothesis-driven questions about neural organization and representations, both within and across space, time, frequency bands, neuroimaging modalities, individuals, and species. It uses a uniform data representation of fMRI data in the volume or on the surface, and of M/EEG data at the sensor and source level. Through various external toolboxes, it directly supports reading and writing a variety of fMRI and M/EEG neuroimaging formats, and, where applicable, can convert between them. As a result, it can be integrated readily in existing pipelines and used with existing preprocessed datasets. CoSMoMVPA overloads the traditional volumetric searchlight concept to support neighborhoods for M/EEG and surface-based fMRI data, which supports localization of multivariate effects of interest across space, time, and frequency dimensions. CoSMoMVPA also provides a generalized approach to multiple comparison correction across these dimensions using Threshold-Free Cluster Enhancement with state-of-the-art clustering and permutation techniques. CoSMoMVPA is highly modular and uses abstractions to provide a uniform interface for a variety of MVP measures. Typical analyses require a few lines of code, making it accessible to beginner users. At the same time, expert programmers can easily extend its functionality. CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org Source code: https://github.com/CoSMoMVPA/CoSMoMVPA
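CoSMoMVPA itself is a Matlab/GNU Octave toolbox, so the snippet below is only a language-neutral sketch of the searchlight idea it implements: for every location, gather a small neighborhood of features and score a cross-validated classifier on it. The shapes, names, and brute-force neighborhood search are illustrative assumptions, not the toolbox's API.

```python
# Conceptual volumetric searchlight in Python (not the CoSMoMVPA API).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_samples, n_voxels = 80, 200
data = rng.standard_normal((n_samples, n_voxels))
labels = rng.integers(0, 2, n_samples)
coords = rng.uniform(0, 40, (n_voxels, 3))   # voxel coordinates in mm

radius = 8.0
scores = np.zeros(n_voxels)
for center in range(n_voxels):
    d = np.linalg.norm(coords - coords[center], axis=1)
    hood = np.flatnonzero(d <= radius)       # spherical neighborhood
    scores[center] = cross_val_score(
        LinearSVC(max_iter=5000), data[:, hood], labels, cv=4
    ).mean()
# `scores` is an information map: classification accuracy per center
```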
Collapse
Affiliation(s)
| | - Andrew C Connolly
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - James V Haxby
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| |
Collapse
|
429
|
Axelrod V. On the domain-specificity of the visual and non-visual face-selective regions. Eur J Neurosci 2016; 44:2049-63. [PMID: 27255921 DOI: 10.1111/ejn.13290] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 05/21/2016] [Accepted: 05/24/2016] [Indexed: 11/27/2022]
Abstract
What happens in our brains when we see a face? The neural mechanisms of face processing - namely, the face-selective regions - have been extensively explored. Research has traditionally focused on visual cortex face-regions; more recently, the role of face-regions outside the visual cortex (i.e., non-visual-cortex face-regions) has been acknowledged as well. The major quest today is to reveal the functional role of each of these regions in face processing. To make progress in this direction, it is essential to understand the extent to which the face-regions, and particularly the non-visual-cortex face-regions, process only faces (i.e., face-specific, domain-specific processing) or are rather involved in more domain-general cognitive processing. In the current functional MRI study, we systematically examined the activity of the whole face-network during a face-unrelated reading task (i.e., meaningful written sentences with content unrelated to faces/people, and non-words). We found that the non-visual-cortex (i.e., right lateral prefrontal cortex and posterior superior temporal sulcus), but not the visual cortex face-regions, responded significantly more strongly to sentences than to non-words. In general, some degree of sentence selectivity was found in all non-visual-cortex face-regions. The present results highlight the possibility that processing in the non-visual-cortex face-selective regions might not be exclusively face-specific, but rather partly or even fully domain-general. In this paper, we illustrate how knowledge about domain-general processing in face-regions can help advance our general understanding of face processing mechanisms. Our results therefore suggest that the problem of face processing should be approached in the broader scope of cognition in general.
Collapse
Affiliation(s)
- Vadim Axelrod
- UCL Institute of Cognitive Neuroscience, University College London, London, UK; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 52900, Israel
| |
Collapse
|
430
|
Madl T, Franklin S, Chen K, Trappl R, Montaldi D. Exploring the Structure of Spatial Representations. PLoS One 2016; 11:e0157343. [PMID: 27347681 PMCID: PMC4922593 DOI: 10.1371/journal.pone.0157343] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2016] [Accepted: 05/29/2016] [Indexed: 11/22/2022] Open
Abstract
It has been suggested that the map-like representations that support human spatial memory are fragmented into sub-maps with local reference frames, rather than being unitary and global. However, the principles underlying the structure of these ‘cognitive maps’ are not well understood. We propose that the structure of the representations of navigation space arises from clustering within individual psychological spaces, i.e. from a process that groups together objects that are close in these spaces. Building on the ideas of representational geometry and similarity-based representations in cognitive science, we formulate methods for learning dissimilarity functions (metrics) characterizing participants’ psychological spaces. We show that these learned metrics, together with a probabilistic model of clustering based on the Bayesian cognition paradigm, allow prediction of participants’ cognitive map structures in advance. Apart from insights into spatial representation learning in human cognition, these methods could facilitate novel computational tools capable of using human-like spatial concepts. We also compare several features influencing spatial memory structure, including spatial distance, visual similarity and functional similarity, and report strong correlations between these dimensions and the grouping probability in participants’ spatial representations, providing further support for clustering in spatial memory.
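The two ingredients of this approach, learning a weighted dissimilarity metric from judgments and clustering objects under it, can be sketched as follows. The features, weights, and the hierarchical clustering used here are illustrative assumptions; the paper's own models (including its Bayesian clustering) differ in detail.

```python
# Hedged sketch: (1) estimate dimension weights of a "psychological"
# distance from pairwise dissimilarity judgments, (2) cluster under it.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
n_objects = 20
# candidate dimensions: e.g., spatial distance, visual and functional similarity
features = rng.standard_normal((n_objects, 3))

pairs_i, pairs_j = np.triu_indices(n_objects, k=1)
diffs = np.abs(features[pairs_i] - features[pairs_j])   # per-dimension gaps
true_w = np.array([0.6, 0.3, 0.1])                      # hidden "true" weights
judgments = diffs @ true_w + 0.05 * rng.standard_normal(len(diffs))

# least-squares estimate of the dimension weights (the learned metric)
w, *_ = np.linalg.lstsq(diffs, judgments, rcond=None)

dissim = diffs @ w                                      # condensed distances
clusters = fcluster(linkage(dissim, method="average"), t=3, criterion="maxclust")
print("learned weights:", np.round(w, 2), "cluster labels:", clusters)
```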
Collapse
Affiliation(s)
- Tamas Madl
- School of Computer Science, University of Manchester, Manchester, United Kingdom
- Austrian Research Institute for Artificial Intelligence, Vienna, Austria
- * E-mail:
| | - Stan Franklin
- Institute for Intelligent Systems, University of Memphis, Memphis, United States of America
| | - Ke Chen
- School of Computer Science, University of Manchester, Manchester, United Kingdom
| | - Robert Trappl
- Austrian Research Institute for Artificial Intelligence, Vienna, Austria
| | - Daniela Montaldi
- School of Psychological Sciences, University of Manchester, Manchester, United Kingdom
| |
Collapse
|
431
|
Xiao X, Dong Q, Chen C, Xue G. Neural pattern similarity underlies the mnemonic advantages for living words. Cortex 2016; 79:99-111. [DOI: 10.1016/j.cortex.2016.03.016] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2015] [Revised: 01/08/2016] [Accepted: 03/16/2016] [Indexed: 12/14/2022]
|
432
|
Roth ZN. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System. Front Integr Neurosci 2016; 10:16. [PMID: 27242455 PMCID: PMC4876365 DOI: 10.3389/fnint.2016.00016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2015] [Accepted: 03/14/2016] [Indexed: 11/13/2022] Open
Abstract
Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level, early visual cortex (EVC) activity enables accurate decoding of stimulus locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.
Collapse
Affiliation(s)
- Zvi N Roth
- The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel; Department of Neurobiology, The Hebrew University, Jerusalem, Israel
| |
Collapse
|
433
|
Cichy RM, Pantazis D, Oliva A. Similarity-Based Fusion of MEG and fMRI Reveals Spatio-Temporal Dynamics in Human Cortex During Visual Object Recognition. Cereb Cortex 2016; 26:3563-3579. [PMID: 27235099 PMCID: PMC4961022 DOI: 10.1093/cercor/bhw135] [Citation(s) in RCA: 99] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Every human cognitive function, such as visual object recognition, is realized in a complex spatio-temporal activity pattern in the brain. Current brain imaging techniques in isolation cannot resolve the brain's spatio-temporal dynamics, because they provide either high spatial or temporal resolution but not both. To overcome this limitation, we developed an integration approach that uses representational similarities to combine measurements of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to yield a spatially and temporally integrated characterization of neuronal activation. Applying this approach to 2 independent MEG-fMRI data sets, we observed that neural activity first emerged in the occipital pole at 50-80 ms, before spreading rapidly and progressively in the anterior direction along the ventral and dorsal visual streams. Further region-of-interest analyses established that dorsal and ventral regions showed MEG-fMRI correspondence in representations later than early visual cortex. Together, these results provide a novel and comprehensive, spatio-temporally resolved view of the rapid neural dynamics during the first few hundred milliseconds of object vision. They further demonstrate the feasibility of spatially unbiased representational similarity-based fusion of MEG and fMRI, promising new insights into how the brain computes complex cognitive functions.
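At its computational core, the fusion reduces to correlating a time-resolved MEG RDM with a region's fMRI RDM at every time point; the resulting time course indicates when that region's representational geometry emerges. The sketch below uses simulated stand-ins for both modalities.

```python
# Hedged sketch of similarity-based MEG-fMRI fusion on simulated data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_cond, n_sensors, n_times = 20, 100, 60
meg = rng.standard_normal((n_times, n_cond, n_sensors))
fmri_rdm = pdist(rng.standard_normal((n_cond, 300)), "correlation")  # e.g., EVC

fusion = np.array([
    spearmanr(pdist(meg[t], "correlation"), fmri_rdm)[0]
    for t in range(n_times)
])
# the peak latency of `fusion` approximates when the region's
# representational geometry appears in the MEG signal
print("peak fusion at time index:", int(fusion.argmax()))
```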
Collapse
Affiliation(s)
- Radoslaw Martin Cichy
- Computer Science and Artificial Intelligence Laboratory; Department of Education and Psychology, Free University Berlin, Berlin, Germany
| | | | - Aude Oliva
- Computer Science and Artificial Intelligence Laboratory
| |
Collapse
|
434
|
Naselaris T, Kay KN. Resolving Ambiguities of MVPA Using Explicit Models of Representation. Trends Cogn Sci 2015; 19:551-554. [PMID: 26412094 DOI: 10.1016/j.tics.2015.07.005] [Citation(s) in RCA: 75] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2015] [Revised: 07/14/2015] [Accepted: 07/20/2015] [Indexed: 11/19/2022]
Abstract
We advocate a shift in emphasis within cognitive neuroscience from multivariate pattern analysis (MVPA) to the design and testing of explicit models of neural representation. With such models, it becomes possible to identify the specific representations encoded in patterns of brain activity and to map them across the brain.
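A minimal version of the explicit-model workflow the authors advocate: fit a voxelwise encoding model on training stimuli and evaluate it on held-out stimuli. The features and responses below are simulated, and ridge regression is one common choice of estimator, not the only one.

```python
# Hedged voxelwise encoding-model sketch on simulated data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.standard_normal((200, 40))           # stimulus features (the model)
W = rng.standard_normal((40, 500))           # hidden "true" voxel tuning
Y = X @ W + rng.standard_normal((200, 500))  # simulated voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
enc = Ridge(alpha=10.0).fit(X_tr, Y_tr)
pred = enc.predict(X_te)

# per-voxel held-out accuracy: where the model predicts responses well,
# the representation it posits is (by this criterion) encoded there
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print("median held-out voxel correlation:", np.median(r))
```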
Collapse
|
435
|
Perceptual similarity of visual patterns predicts dynamic neural activation patterns measured with MEG. Neuroimage 2016; 132:59-70. [DOI: 10.1016/j.neuroimage.2016.02.019] [Citation(s) in RCA: 68] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2015] [Revised: 02/05/2016] [Accepted: 02/09/2016] [Indexed: 12/14/2022] Open
|
436
|
Dubois J, Adolphs R. Building a Science of Individual Differences from fMRI. Trends Cogn Sci 2016; 20:425-443. [PMID: 27138646 DOI: 10.1016/j.tics.2016.03.014] [Citation(s) in RCA: 410] [Impact Index Per Article: 45.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2016] [Revised: 03/28/2016] [Accepted: 03/31/2016] [Indexed: 11/19/2022]
Abstract
To date, fMRI research has been concerned primarily with evincing generic principles of brain function through averaging data from multiple subjects. Given rapid developments in both hardware and analysis tools, the field is now poised to study fMRI-derived measures in individual subjects, and to relate these to psychological traits or genetic variations. We discuss issues of validity, reliability and statistical assessment that arise when the focus shifts to individual subjects and that are applicable also to other imaging modalities. We emphasize that individual assessment of neural function with fMRI presents specific challenges and necessitates careful consideration of anatomical and vascular between-subject variability as well as sources of within-subject variability.
Collapse
Affiliation(s)
- Julien Dubois
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA.
| | - Ralph Adolphs
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
| |
Collapse
|
437
|
Handjaras G, Ricciardi E, Leo A, Lenci A, Cecchetti L, Cosottini M, Marotta G, Pietrini P. How concepts are encoded in the human brain: A modality independent, category-based cortical organization of semantic knowledge. Neuroimage 2016; 135:232-42. [PMID: 27132545 DOI: 10.1016/j.neuroimage.2016.04.063] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2015] [Revised: 02/12/2016] [Accepted: 04/26/2016] [Indexed: 11/25/2022] Open
Abstract
How conceptual knowledge is represented in the human brain remains to be determined. To address the differential role of low-level sensory-based and high-level abstract features in semantic processing, we combined behavioral studies of linguistic production and brain activity measures by functional magnetic resonance imaging in sighted and congenitally blind individuals while they performed a property-generation task with concrete nouns from eight categories, presented through visual and/or auditory modalities. Patterns of neural activity within a large semantic cortical network that comprised parahippocampal, lateral occipital, temporo-parieto-occipital and inferior parietal cortices correlated with linguistic production and were independent of both the modality of stimulus presentation (visual or auditory) and (the lack of) visual experience. In contrast, selected modality-dependent differences were observed only when the analysis was limited to the individual regions within the semantic cortical network. We conclude that conceptual knowledge in the human brain relies on a distributed, modality-independent cortical representation that integrates the partial category- and modality-specific information retained at a regional level.
Collapse
Affiliation(s)
- Giacomo Handjaras
- Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa 56126, Italy
| | - Emiliano Ricciardi
- Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa 56126, Italy
| | - Andrea Leo
- Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa 56126, Italy
| | - Alessandro Lenci
- Department of Philology, Literature, and Linguistics, University of Pisa, Pisa 56126, Italy
| | - Luca Cecchetti
- Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa 56126, Italy
| | | | - Giovanna Marotta
- Department of Philology, Literature, and Linguistics, University of Pisa, Pisa 56126, Italy
| | - Pietro Pietrini
- Dept. Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa 56126, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa 56126, Italy; IMT School for Advanced Studies Lucca, Lucca 55100, Italy.
| |
Collapse
|
438
|
Ritchie JB, Carlson TA. Neural Decoding and "Inner" Psychophysics: A Distance-to-Bound Approach for Linking Mind, Brain, and Behavior. Front Neurosci 2016; 10:190. [PMID: 27199652 PMCID: PMC4848306 DOI: 10.3389/fnins.2016.00190] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2016] [Accepted: 04/18/2016] [Indexed: 11/13/2022] Open
Abstract
A fundamental challenge for cognitive neuroscience is characterizing how the primitives of psychological theory are neurally implemented. Attempts to meet this challenge are a manifestation of what Fechner called "inner" psychophysics: the theory of the precise mapping between mental quantities and the brain. In his own time, inner psychophysics remained an unrealized ambition for Fechner. We suggest that, today, multivariate pattern analysis (MVPA), or neural "decoding," methods provide a promising starting point for developing an inner psychophysics. A cornerstone of these methods is the simple linear classifier applied to neural activity in high-dimensional activation spaces. We describe an approach to inner psychophysics based on the shared architecture of linear classifiers and observers under decision boundary models such as signal detection theory. Under this approach, distance from a decision boundary through activation space, as estimated by linear classifiers, can be used to predict reaction time in accordance with signal detection theory, and distance-to-bound models of reaction time. Our "neural distance-to-bound" approach is potentially quite general, and simple to implement. Furthermore, our recent work on visual object recognition suggests it is empirically viable. We believe the approach constitutes an important step along the path to an inner psychophysics that links mind, brain, and behavior.
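The core quantity is straightforward to compute: a linear classifier's signed distance from its decision boundary, which this account predicts should relate inversely to reaction time. In the simulated sketch below the distance-RT link is built in by assumption, so the correlation only demonstrates the mechanics, not the empirical result.

```python
# Hedged sketch of the neural distance-to-bound idea on simulated data.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
n_trials, n_voxels = 200, 100
labels = rng.integers(0, 2, n_trials)
patterns = rng.standard_normal((n_trials, n_voxels)) + 0.8 * labels[:, None]

clf = LinearSVC(max_iter=5000).fit(patterns, labels)
dist = clf.decision_function(patterns)     # signed distance to the boundary

# assumed linking model: trials far from the bound yield faster responses
rt = 600 - 40 * np.abs(dist) + 20 * rng.standard_normal(n_trials)

print("distance-RT correlation:", np.corrcoef(np.abs(dist), rt)[0, 1])
```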
Collapse
Affiliation(s)
- J Brendan Ritchie
- Laboratory of Biological Psychology, Brain and Cognition Unit, KU Leuven, Leuven, Belgium; Department of Philosophy, University of Maryland, College Park, MD, USA
| | - Thomas A Carlson
- Perception in Action Research Centre, Department of Cognitive Science, Macquarie University, Sydney, NSW, Australia; ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, NSW, Australia
| |
Collapse
|
439
|
Kragel PA, LaBar KS. Decoding the Nature of Emotion in the Brain. Trends Cogn Sci 2016; 20:444-455. [PMID: 27133227 DOI: 10.1016/j.tics.2016.03.011] [Citation(s) in RCA: 178] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2016] [Revised: 03/28/2016] [Accepted: 03/30/2016] [Indexed: 10/21/2022]
Abstract
A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This 'new look' into emotion representation promises to improve and reformulate neurobiological models of affect.
Collapse
Affiliation(s)
- Philip A Kragel
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA
| | - Kevin S LaBar
- Department of Psychology and Neuroscience, Duke University, Durham, NC 27708, USA.
| |
Collapse
|
440
|
Experience-dependent hippocampal pattern differentiation prevents interference during subsequent learning. Nat Commun 2016; 7:11066. [PMID: 27925613 PMCID: PMC4820837 DOI: 10.1038/ncomms11066] [Citation(s) in RCA: 102] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2015] [Accepted: 02/16/2016] [Indexed: 11/09/2022] Open
Abstract
The hippocampus is believed to reduce memory interference by disambiguating neural representations of similar events. However, there is limited empirical evidence linking representational overlap in the hippocampus to memory interference. Likewise, it is not fully understood how learning influences overlap among hippocampal representations. Using pattern-based fMRI analyses, we tested for a bidirectional relationship between memory overlap in the human hippocampus and learning. First, we show that learning drives hippocampal representations of similar events apart from one another. These changes are not explained by task demands to discriminate similar stimuli and are fully absent in visual cortical areas that feed into the hippocampus. Second, we show that lower representational overlap in the hippocampus benefits subsequent learning by preventing interference between similar memories. These findings reveal targeted experience-dependent changes in hippocampal representations of similar events and provide a critical link between memory overlap in the hippocampus and behavioural expressions of memory interference.
Collapse
|
441
|
Goda N, Yokoi I, Tachibana A, Minamimoto T, Komatsu H. Crossmodal Association of Visual and Haptic Material Properties of Objects in the Monkey Ventral Visual Cortex. Curr Biol 2016; 26:928-34. [DOI: 10.1016/j.cub.2016.02.003] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2015] [Revised: 12/15/2015] [Accepted: 02/01/2016] [Indexed: 02/03/2023]
|
442
|
Cusack R, Ball G, Smyser CD, Dehaene-Lambertz G. A neural window on the emergence of cognition. Ann N Y Acad Sci 2016; 1369:7-23. [PMID: 27164193 PMCID: PMC4874873 DOI: 10.1111/nyas.13036] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2015] [Revised: 01/23/2016] [Accepted: 02/11/2016] [Indexed: 11/30/2022]
Abstract
Can babies think? A fundamental challenge for cognitive neuroscience is to answer when brain functions begin and in what form they first emerge. This is challenging with behavioral tasks, as it is difficult to communicate to an infant what a task requires, and motor function is impoverished, making execution of the appropriate response difficult. To circumvent these requirements, neuroimaging provides a complementary route for assessing the emergence of cognition. Starting from the prerequisites of cognitive function and building stepwise, we review when the cortex forms and when it becomes gyrified and regionally differentiated. We then discuss when white matter tracts mature and when functional brain networks arise. Finally, we assess the responsiveness of these brain systems to external events. We find that many cognitive systems are observed surprisingly early. Some emerge before birth, with activations in the frontal lobe even in the first months of gestation. These discoveries are changing our understanding of the nature of cognitive networks and their early function, transforming cognitive neuroscience, and opening new windows for education and investigation. Infant neuroimaging also has tremendous clinical potential, for both detecting atypical development and facilitating earlier intervention. Finally, we discuss the key technical developments that are enabling this nascent field.
Collapse
Affiliation(s)
- Rhodri Cusack
- Brain and Mind Institute, Western University, London, Ontario, Canada
| | - Gareth Ball
- Centre for the Developing Brain, King’s College London, London, United Kingdom
| | - Christopher D. Smyser
- Departments of Neurology, Pediatrics and Radiology, Washington University, St Louis, Missouri
| | - Ghislaine Dehaene-Lambertz
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, CNRS, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, Gif/Yvette, France
| |
Collapse
|
443
|
Chiou R, Lambon Ralph MA. The anterior temporal cortex is a primary semantic source of top-down influences on object recognition. Cortex 2016; 79:75-86. [PMID: 27088615 PMCID: PMC4884670 DOI: 10.1016/j.cortex.2016.03.007] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 02/01/2016] [Accepted: 03/09/2016] [Indexed: 11/30/2022]
Abstract
Perception emerges from a dynamic interplay between feed-forward sensory input and feedback modulation along the cascade of neural processing. Prior knowledge, a major form of top-down modulatory signal, benefits perception by enabling efficacious inference and resolving ambiguity, particularly under circumstances of degraded visual input. Despite semantic information being a potentially critical source of this top-down influence, to date, the core neural substrate of semantic knowledge (the anterolateral temporal lobe – ATL) has not been considered as a key component of the feedback system. Here we provide direct evidence of its significance for visual cognition – the ATL underpins the semantic aspect of object recognition, amalgamating sensory-based (amount of accumulated sensory input) and semantic-based (representational proximity between exemplars and typicality of appearance) influences. Using transcranial theta-burst stimulation combined with a novel visual identification paradigm, we demonstrate that the left ATL contributes to discrimination between visual objects. Crucially, its contribution is especially vital under situations where semantic knowledge is most needed for supplementing deficiency of input (brief visual exposure), discerning analogously-coded exemplars (close representational distance), and resolving discordance (target appearance violating the statistical typicality of its category). Our findings characterise functional properties of the ATL in object recognition: this neural structure is summoned to augment the visual system when the latter is overtaxed by challenging conditions (insufficient input, overlapped neural coding, and conflict between incoming signal and expected configuration). This suggests a need to revisit current theories of object recognition, incorporating the ATL that interfaces high-level vision with semantic knowledge.
Collapse
Affiliation(s)
- Rocco Chiou
- The Neuroscience and Aphasia Research Unit (NARU), School of Psychological Sciences, University of Manchester, England, UK.
| | - Matthew A Lambon Ralph
- The Neuroscience and Aphasia Research Unit (NARU), School of Psychological Sciences, University of Manchester, England, UK.
| |
Collapse
|
444
|
Guntupalli JS, Hanke M, Halchenko YO, Connolly AC, Ramadge PJ, Haxby JV. A Model of Representational Spaces in Human Cortex. Cereb Cortex 2016; 26:2919-2934. [PMID: 26980615 PMCID: PMC4869822 DOI: 10.1093/cercor/bhw068] [Citation(s) in RCA: 119] [Impact Index Per Article: 13.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Current models of the functional architecture of human cortex emphasize areas that capture coarse-scale features of cortical topography but provide no account for population responses that encode information in fine-scale patterns of activity. Here, we present a linear model of shared representational spaces in human cortex that captures fine-scale distinctions among population responses with response-tuning basis functions that are common across brains and models cortical patterns of neural responses with individual-specific topographic basis functions. We derive a common model space for the whole cortex using a new algorithm, searchlight hyperalignment, and complex, dynamic stimuli that provide a broad sampling of visual, auditory, and social percepts. The model aligns representations across brains in occipital, temporal, parietal, and prefrontal cortices, as shown by between-subject multivariate pattern classification and intersubject correlation of representational geometry, indicating that structural principles for shared neural representations apply across widely divergent domains of information. The model provides a rigorous account for individual variability of well-known coarse-scale topographies, such as retinotopy and category selectivity, and goes further to account for fine-scale patterns that are multiplexed with coarse-scale topographies and carry finer distinctions.
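At its core, mapping one subject's responses into a reference space is an orthogonal Procrustes problem; the sketch below shows that single alignment step on simulated data. The full method adds searchlights and iterative template construction, which are omitted here.

```python
# Hedged sketch of the basic hyperalignment step (orthogonal Procrustes).
import numpy as np

rng = np.random.default_rng(9)
n_stimuli, n_voxels = 100, 60
reference = rng.standard_normal((n_stimuli, n_voxels))      # template space
R_true, _ = np.linalg.qr(rng.standard_normal((n_voxels, n_voxels)))
subject = reference @ R_true + 0.1 * rng.standard_normal((n_stimuli, n_voxels))

# solve R = argmin ||subject @ R - reference||_F over orthogonal R
U, _, Vt = np.linalg.svd(subject.T @ reference)
R = U @ Vt
aligned = subject @ R
print("relative alignment error:",
      np.linalg.norm(aligned - reference) / np.linalg.norm(reference))
```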
Collapse
Affiliation(s)
- J Swaroop Guntupalli
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| | - Michael Hanke
- Department of Psychology, University of Magdeburg, Magdeburg 39106, Germany
| | - Yaroslav O Halchenko
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| | - Andrew C Connolly
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
| | - Peter J Ramadge
- Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA
| | - James V Haxby
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Trentino 38068, Italy
| |
Collapse
|
445
|
Love BC. Cognitive Models as Bridge between Brain and Behavior. Trends Cogn Sci 2016; 20:247-248. [PMID: 26947873 DOI: 10.1016/j.tics.2016.02.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2016] [Accepted: 02/25/2016] [Indexed: 11/27/2022]
Abstract
How can disparate neural and behavioral measures be integrated? Turner and colleagues propose joint modeling as a solution. Joint modeling mutually constrains the interpretation of brain and behavioral measures by exploiting their covariation structure. Simultaneous estimation allows for more accurate prediction than would be possible by considering these measures in isolation.
Collapse
Affiliation(s)
- Bradley C Love
- Experimental Psychology, University College London, London, UK.
| |
Collapse
|
446
|
Leo A, Handjaras G, Bianchi M, Marino H, Gabiccini M, Guidi A, Scilingo EP, Pietrini P, Bicchi A, Santello M, Ricciardi E. A synergy-based hand control is encoded in human motor cortical areas. eLife 2016; 5:e13420. [PMID: 26880543 PMCID: PMC4786436 DOI: 10.7554/elife.13420] [Citation(s) in RCA: 69] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2015] [Accepted: 02/13/2016] [Indexed: 01/17/2023] Open
Abstract
How the human brain controls hand movements to carry out different tasks is still debated. The concept of synergy has been proposed to indicate functional modules that may simplify the control of hand postures by simultaneously recruiting sets of muscles and joints. However, whether and to what extent synergic hand postures are encoded as such at a cortical level remains unknown. Here, we combined kinematic, electromyography, and brain activity measures obtained by functional magnetic resonance imaging while subjects performed a variety of movements towards virtual objects. Hand postural information, encoded through kinematic synergies, was represented in cortical areas devoted to hand motor control and successfully discriminated individual grasping movements, significantly outperforming alternative somatotopic or muscle-based models. Importantly, hand postural synergies were predicted by neural activation patterns within primary motor cortex. These findings support a novel cortical organization for hand movement control and open potential applications for brain-computer interfaces and neuroprostheses. DOI:http://dx.doi.org/10.7554/eLife.13420.001
eLife digest: The human hand can perform an enormous range of movements with great dexterity. Some common everyday actions, such as grasping a coffee cup, involve the coordinated movement of all four fingers and thumb. Others, such as typing, rely on the ability of individual fingers to move relatively independently of one another. This flexibility is possible in part because of the complex anatomy of the hand, with its 27 bones and their connecting joints and muscles. But with this complexity comes a huge number of possibilities. Any movement-related task – such as picking up a cup – can be achieved via many different combinations of muscle contractions and joint positions. So how does the brain decide which muscles and joints to use? One theory is that the brain simplifies this problem by encoding particularly useful patterns of joint movements as distinct units or “synergies”. A given task can then be performed by selecting from a small number of synergies, avoiding the need to choose between huge numbers of options every time movement is required. Leo et al. now provide the first direct evidence for the encoding of synergies by the human brain. Volunteers lying inside a brain scanner reached towards virtual objects – from tennis rackets to toothpicks – while activity was recorded from the area of the brain that controls hand movements. As predicted, the scans showed specific and reproducible patterns of activity. Analysing these patterns revealed that each corresponded to a particular combination of joint positions. These activity patterns, or synergies, could even be ‘decoded’ to work out which type of movement a volunteer had just performed. Future experiments should examine how the brain combines synergies with sensory feedback to allow movements to be adjusted as they occur. Such findings could help to develop brain-computer interfaces and systems for controlling the movement of artificial limbs.
DOI:http://dx.doi.org/10.7554/eLife.13420.002
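Kinematic synergies of the kind used here are commonly extracted with PCA over recorded joint angles; the sketch below shows that step on simulated postures. The resulting synergy scores are what one would then relate to motor-cortex activity patterns; all sizes and names are illustrative.

```python
# Hedged sketch: extract postural synergies from joint-angle data via PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
n_grasps, n_joints = 120, 15                 # e.g., 15 recorded joint angles
latent = rng.standard_normal((n_grasps, 3))  # a few underlying synergies
mixing = rng.standard_normal((3, n_joints))
postures = latent @ mixing + 0.1 * rng.standard_normal((n_grasps, n_joints))

pca = PCA(n_components=5).fit(postures)
print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
synergy_scores = pca.transform(postures)     # per-grasp synergy weights
```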
Collapse
Affiliation(s)
- Andrea Leo
- Laboratory of Clinical Biochemistry and Molecular Biology, University of Pisa, Pisa, Italy; Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
| | - Giacomo Handjaras
- Laboratory of Clinical Biochemistry and Molecular Biology, University of Pisa, Pisa, Italy
| | - Matteo Bianchi
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia, Genova, Italy
| | - Hamal Marino
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
| | - Marco Gabiccini
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia, Genova, Italy; Department of Civil and Industrial Engineering, University of Pisa, Pisa, Italy
| | - Andrea Guidi
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
| | - Enzo Pasquale Scilingo
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Department of Information Engineering, University of Pisa, Pisa, Italy
| | - Pietro Pietrini
- Laboratory of Clinical Biochemistry and Molecular Biology, University of Pisa, Pisa, Italy; Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa, Italy; IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Antonio Bicchi
- Research Center 'E. Piaggio', University of Pisa, Pisa, Italy; Advanced Robotics Department, Istituto Italiano di Tecnologia, Genova, Italy
| | - Marco Santello
- School of Biological and Health Systems Engineering, Arizona State University, Tempe, United States
| | - Emiliano Ricciardi
- Laboratory of Clinical Biochemistry and Molecular Biology, University of Pisa, Pisa, Italy; Research Center 'E. Piaggio', University of Pisa, Pisa, Italy
| |
Collapse
|
447
|
Salmela VR, Henriksson L, Vanni S. Radial Frequency Analysis of Contour Shapes in the Visual Cortex. PLoS Comput Biol 2016; 12:e1004719. [PMID: 26866917 PMCID: PMC4750910 DOI: 10.1371/journal.pcbi.1004719] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2015] [Accepted: 12/17/2015] [Indexed: 12/18/2022] Open
Abstract
Cumulative psychophysical evidence suggests that the shape of closed contours is analysed by means of their radial frequency components (RFC). However, neurophysiological evidence for RFC-based representations is still missing. We investigated the representation of radial frequency in the human visual cortex with functional magnetic resonance imaging. We parametrically varied the radial frequency, amplitude and local curvature of contour shapes. The stimuli evoked clear responses across visual areas in the univariate analysis, but the response magnitude did not depend on radial frequency or local curvature. Searchlight-based, multivariate representational similarity analysis revealed RFC-specific response patterns in areas V2d, V3d, V3AB, and IPS0. Interestingly, RFC-specific representations were not found in hV4 or LO, traditionally associated with visual shape analysis. The modulation amplitude of the shapes did not affect the responses in any visual area. Local curvature, SF-spectrum and contrast energy related representations were found across visual areas, but without the visual-area specificity that was found for RFC. The results suggest that the radial frequency of a closed contour is one of the cortical shape analysis dimensions, represented in the early and mid-level visual areas.
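Radial frequency contours have a compact parametric definition, which the snippet below implements: a base circle whose radius is sinusoidally modulated around the polar angle with a given frequency, amplitude, and phase. Parameter values are arbitrary illustrations, not the study's stimulus set.

```python
# Generating a radial-frequency (RFC) contour.
import numpy as np

def rfc_contour(radius=1.0, frequency=5, amplitude=0.15, phase=0.0, n=360):
    """x, y coordinates of r(theta) = radius * (1 + A*sin(f*theta + phase))."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = radius * (1 + amplitude * np.sin(frequency * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

x, y = rfc_contour(frequency=5, amplitude=0.15)   # a 5-lobed closed shape
```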
Collapse
Affiliation(s)
- Viljami R. Salmela
- Institute of Behavioural Sciences, Division of Cognitive and Neuropsychology, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- * E-mail:
| | - Linda Henriksson
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
| | - Simo Vanni
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland
- Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
| |
Collapse
|
449
|
Representational similarity encoding for fMRI: Pattern-based synthesis to predict brain activity using stimulus-model-similarities. Neuroimage 2015; 128:44-53. [PMID: 26732404 DOI: 10.1016/j.neuroimage.2015.12.035] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2015] [Revised: 11/18/2015] [Accepted: 12/19/2015] [Indexed: 11/24/2022] Open
Abstract
Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently computationally cheap), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.
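The central trick can be sketched in a few lines: synthesize a held-out stimulus's pattern as a similarity-weighted combination of training patterns, with the weights read off the stimulus model rather than fitted. The normalization below is a simplifying assumption; the published method's weighting scheme differs in detail.

```python
# Hedged sketch of similarity-based encoding (no weight fitting).
import numpy as np

rng = np.random.default_rng(11)
model = rng.standard_normal((50, 30))         # model features, 50 stimuli
W = rng.standard_normal((30, 400))
brain = model @ W + 0.2 * rng.standard_normal((50, 400))  # voxel patterns

train, test = np.arange(49), 49
sims = model[train] @ model[test]             # model similarity to training set
weights = sims / np.abs(sims).sum()           # simple normalization (assumed)
predicted = weights @ brain[train]            # synthesized test pattern

print("prediction r:", np.corrcoef(predicted, brain[test])[0, 1])
```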
450
Myers NE, Rohenkohl G, Wyart V, Woolrich MW, Nobre AC, Stokes MG. Testing sensory evidence against mnemonic templates. eLife 2015; 4:e09000. [PMID: 26653854 PMCID: PMC4755744 DOI: 10.7554/elife.09000] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2015] [Accepted: 12/13/2015] [Indexed: 11/16/2022] Open
Abstract
Most perceptual decisions require comparisons between current input and an internal template. Classic studies propose that templates are encoded in the sustained activity of sensory neurons. However, stimulus encoding is itself dynamic, tracing a complex trajectory through activity space. Which part of this trajectory is pre-activated to reflect the template? Here we recorded magneto- and electroencephalography during a visual target-detection task, and used pattern analyses to decode template, stimulus, and decision-variable representations. Our findings ran counter to the dominant model of sustained pre-activation. Instead, template information emerged transiently around stimulus onset and quickly subsided. Cross-generalization between stimulus and template coding, indicating a shared neural representation, occurred only briefly. Our results are compatible with the proposal that template representation relies on a matched filter, transforming input into task-appropriate output. This proposal was consistent with a signed difference response at the perceptual decision stage, which can be explained by a simple neural model.

Imagine searching for your house keys on a cluttered desk. Your eyes scan different items until they eventually find the keys you are looking for. How the brain represents an internal template of the target of your search (the keys, in this example) has been a much-debated topic in neuroscience for the past 30 years. Previous research has indicated that neurons specialized for detecting the sought-after object when it is in view are also pre-activated when we are seeking it. This would mean that these 'template' neurons are active the entire time that we are searching. Myers et al. recorded brain activity from human volunteers using a non-invasive technique called magnetoencephalography (MEG) as they tried to detect when a particular shape appeared on a computer screen. The patterns of brain activity could be analyzed to identify the template that observers had in mind, and to trace when it became active. This revealed that the template was only activated around the time when a target was likely to appear, after which the activation pattern quickly subsided. Myers et al. also found that holding a template in mind largely activated different groups of neurons from those activated when seeing the same shape appear on a computer screen. This is contrary to the idea that the same cells are responsible both for maintaining a template and for perceiving its presence in our surroundings. The brief activation of the template suggests that templates may come online mainly to filter new sensory evidence to detect targets. This mechanism could be advantageous because it lowers the amount of neural activity (and hence energy) needed for the task. Although this points to a more efficient way in which the brain searches for targets, these findings need to be replicated using other methods and task settings to confirm whether the brain generally uses templates in this way.
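The cross-generalization test mentioned in the abstract can be made concrete with a temporal generalization analysis: train a classifier on stimulus-period patterns at one time point, then test it on template-period patterns at every time point. The sketch below shows this logic on simulated MEG-like epochs; the array shapes, effect sizes, and the choice of LDA are assumptions for illustration, not the authors' exact pipeline (which would also cross-validate across trials).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Illustrative epochs: trials x sensors x time points, for the stimulus
# period and the template (pre-stimulus) period of each trial.
n_trials, n_sensors, n_times = 100, 30, 40
y = rng.integers(0, 2, n_trials)
X_stim = rng.normal(size=(n_trials, n_sensors, n_times)) + 0.4 * y[:, None, None]
X_tmpl = rng.normal(size=(n_trials, n_sensors, n_times)) + 0.1 * y[:, None, None]

# Cross-generalization matrix: train on stimulus coding at t_train, test on
# template-period data at t_test. Above-chance accuracy off the diagonal
# indicates a representation shared between stimulus and template.
acc = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_stim[:, :, t_train], y)
    for t_test in range(n_times):
        acc[t_train, t_test] = clf.score(X_tmpl[:, :, t_test], y)

print(acc.mean())  # ~0.5 everywhere would indicate no shared coding
```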
Affiliation(s)
- Nicholas E Myers: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Valentin Wyart: Laboratoire de Neurosciences Cognitives, Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
- Mark W Woolrich: Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom; Oxford Centre for Functional MRI of the Brain, University of Oxford, Oxford, United Kingdom
- Anna C Nobre: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Mark G Stokes: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom