1
Gelens F, Äijälä J, Roberts L, Komatsu M, Uran C, Jensen MA, Miller KJ, Ince RAA, Garagnani M, Vinck M, Canales-Johnson A. Distributed representations of prediction error signals across the cortical hierarchy are synergistic. Nat Commun 2024; 15:3941. [PMID: 38729937 PMCID: PMC11087548 DOI: 10.1038/s41467-024-48329-7] [Received: 07/12/2023] [Accepted: 04/26/2024]
Abstract
A relevant question concerning inter-areal communication in the cortex is whether such interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
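The net balance between synergy and redundancy described above can be illustrated with interaction information (co-information), a simpler relative of the partial-information-decomposition measures used in such analyses. A minimal sketch on toy binary signals (the variable names and the sign convention are illustrative assumptions, not taken from the paper):

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) estimated from a sequence of hashable symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), plug-in estimate from samples."""
    return entropy(list(xs)) + entropy(list(ys)) - entropy(list(zip(xs, ys)))

def interaction_info(xs, ys, ss):
    """II(X;Y;S) = I(X,Y;S) - I(X;S) - I(Y;S).
    Positive values indicate net synergy, negative values net redundancy."""
    joint = list(zip(xs, ys))
    return mutual_info(joint, ss) - mutual_info(xs, ss) - mutual_info(ys, ss)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 50000)   # toy "auditory" signal
y = rng.integers(0, 2, 50000)   # toy "frontal" signal

s_xor = x ^ y   # stimulus recoverable only from the pair: net synergy (II near +1 bit)
s_red = x       # stimulus duplicated in both signals when y = x: net redundancy (II near -1 bit)

print(interaction_info(x, y, s_xor))
print(interaction_info(x, x, s_red))
```

With the XOR target, neither signal alone carries stimulus information yet the pair is fully informative (net synergy); when both signals duplicate the stimulus, the pair carries no more than either alone (net redundancy).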
Affiliation(s)
- Frank Gelens
  - Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT, Amsterdam, The Netherlands
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Juho Äijälä
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Louis Roberts
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
  - Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
- Misako Komatsu
  - Laboratory for Haptic Perception and Cognitive Physiology, RIKEN Brain Science Institute, Saitama, 351-0198, Japan
- Cem Uran
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
  - Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Michael A Jensen
  - Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Kai J Miller
  - Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Robin A A Ince
  - School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QB, Scotland, UK
- Max Garagnani
  - Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
  - Brain Language Lab, Freie Universität Berlin, 14195, Berlin, Germany
- Martin Vinck
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
  - Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Andres Canales-Johnson
  - Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
  - Neuropsychology and Cognitive Neurosciences Research Center, Faculty of Health Sciences, Universidad Católica del Maule, 3460000, Talca, Chile
2
Ventura P, Pascual M, Cruz F, Araújo S. From Perugino to Picasso revisited: Electrophysiological responses to faces in paintings from different art styles. Neuropsychologia 2024; 193:108742. [PMID: 38056623 DOI: 10.1016/j.neuropsychologia.2023.108742] [Received: 10/01/2023] [Revised: 11/27/2023] [Accepted: 11/30/2023]
Abstract
Behavioral research (Ventura et al., 2023) suggested that pictorial representations of faces varying along a realism-distortion spectrum elicit holistic processing, as natural faces do. Whether holistic neural responses to faces are engaged similarly, however, remains underexplored. In the present study, we evaluated the neural correlates of naturalistic and artistic face processing by exploring electrophysiological responses to faces in photographs versus in four major painting styles. The N170 response to faces in photographs was indistinguishable from that elicited by faces in the Renaissance style (depicting the most realistic faces), whilst both categories elicited a larger N170 than faces in the other art styles (post-impressionism, expressionism, and cubism), with a gradation in brain activity. The present evidence suggests that visual processing may become finer grained as the depicted face becomes more realistic. Despite behavioral equivalence, the neural mechanisms for holistic processing of natural faces and faces in diverse art styles are not equivalent.
Affiliation(s)
- Paulo Ventura
  - Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Mariona Pascual
  - Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Francisco Cruz
  - Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Susana Araújo
  - Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
3
Yan Y, Zhan J, Garrod O, Cui X, Ince RAA, Schyns PG. Strength of predicted information content in the brain biases decision behavior. Curr Biol 2023; 33:5505-5514.e6. [PMID: 38065096 DOI: 10.1016/j.cub.2023.10.042] [Received: 08/22/2023] [Revised: 10/11/2023] [Accepted: 10/23/2023]
Abstract
Prediction-for-perception theories suggest that the brain predicts incoming stimuli to facilitate their categorization [1-17]. However, it remains unknown what the information contents of these predictions are, which hinders mechanistic explanations. This is because typical approaches cast predictions as an underconstrained contrast between two categories [18-24], e.g., faces versus cars, which could lead to predictions of features specific to faces or cars, or features from both categories. Here, to pinpoint the information contents of predictions and thus their mechanistic processing in the brain, we identified the features that enable two different categorical perceptions of the same stimuli. We then trained multivariate classifiers to discern, from dynamic MEG brain responses, the features tied to each perception. With an auditory cueing design, we reveal where, when, and how the brain reactivates visual category features (versus the typical category contrast) before the stimulus is shown. We demonstrate that the predictions of category features have a more direct influence (bias) on subsequent decision behavior in participants than the typical category contrast. Specifically, these predictions are more precisely localized in the brain (lateralized), are more specifically driven by the auditory cues, and their reactivation strength before a stimulus presentation exerts a greater bias on how the individual participant later categorizes this stimulus. By characterizing the specific information contents that the brain predicts and then processes, our findings provide new insights into the brain's mechanisms of prediction for perception.
Affiliation(s)
- Yuening Yan
  - School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Jiayu Zhan
  - School of Psychological and Cognitive Sciences, Peking University, 5 Yiheyuan Road, Beijing 100871, China
- Oliver Garrod
  - School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Xuan Cui
  - School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Robin A A Ince
  - School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
- Philippe G Schyns
  - School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, UK
4
Itier RJ, Durston AJ. Mass-univariate analysis of scalp ERPs reveals large effects of gaze fixation location during face processing that only weakly interact with face emotional expression. Sci Rep 2023; 13:17022. [PMID: 37813928 PMCID: PMC10562468 DOI: 10.1038/s41598-023-44355-5] [Received: 02/02/2023] [Accepted: 10/06/2023]
Abstract
Decoding others' facial expressions is critical for social functioning. To clarify the neural correlates of expression perception depending on where we look on the face, three combined gaze-contingent ERP experiments were analyzed using robust mass-univariate statistics. Regardless of task, fixation location impacted face processing from 50 to 350 ms, maximally around 120 ms, reflecting retinotopic mapping around the C2 and P1 components. Fixation location also had a major impact on the N170-P2 interval, while only weak effects were seen at the face-sensitive N170 peak. These results question the widespread assumption that faces are processed holistically into an indecomposable perceptual whole around the N170. Rather, face processing is a complex and view-dependent process that continues well beyond the N170. Expression and fixation location interacted weakly during the P1-N170 interval, supporting a role for the mouth and left eye in decoding fearful and happy expressions. Expression effects were weakest at the N170 peak but strongest around P2, especially for fear, reflecting task-independent affective processing. The results suggest that the N170 reflects a transition between processes rather than the maximum of a holistic face processing stage. Focus on this peak should be replaced by data-driven analyses of the whole epoch using robust statistics to fully unravel the early visual processing of faces and their affective content.
Affiliation(s)
- Roxane J Itier
  - Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
- Amie J Durston
  - Department of Psychology, University of Waterloo, 200 University Avenue West, Waterloo, ON, N2L 3G1, Canada
5
Yan Y, Zhan J, Ince RAA, Schyns PG. Network Communications Flexibly Predict Visual Contents That Enhance Representations for Faster Visual Categorization. J Neurosci 2023; 43:5391-5405. [PMID: 37369588 PMCID: PMC10359031 DOI: 10.1523/jneurosci.0156-23.2023] [Received: 01/25/2023] [Revised: 05/25/2023] [Accepted: 05/30/2023]
Abstract
Models of visual cognition generally assume that brain networks predict the contents of a stimulus to facilitate its subsequent categorization. However, understanding prediction and categorization at a network level has remained challenging, partly because we need to reverse engineer their information processing mechanisms from the dynamic neural signals. Here, we used connectivity measures that can isolate the communications of a specific content to reconstruct these network mechanisms in each individual participant (N = 11, both sexes). Each was cued to the spatial location (left vs right) and contents [low spatial frequency (LSF) vs high spatial frequency (HSF)] of a predicted Gabor stimulus that they then categorized. Using each participant's concurrently measured MEG, we reconstructed networks that predict and categorize LSF versus HSF contents for behavior. We found that predicted contents flexibly propagate top down from temporal to lateralized occipital cortex, depending on task demands, under supervisory control of prefrontal cortex. When they reach lateralized occipital cortex, predictions enhance the bottom-up LSF versus HSF representations of the stimulus, all the way from occipital-ventral-parietal to premotor cortex, in turn producing faster categorization behavior. Importantly, content communications are subsets (i.e., 55-75%) of the signal-to-signal communications typically measured between brain regions. Hence, our study isolates functional networks that process the information of cognitive functions.

SIGNIFICANCE STATEMENT: An enduring cognitive hypothesis states that our perception is influenced not only by bottom-up sensory input but also by top-down expectations. However, cognitive explanations of the dynamic brain network mechanisms that flexibly predict and categorize the visual input according to task demands remain elusive. We addressed them in a predictive experimental design by isolating the network communications of cognitive contents from all other communications. Our methods revealed a Prediction Network that flexibly communicates contents from temporal to lateralized occipital cortex, with explicit frontal control, and an occipital-ventral-parietal-frontal Categorization Network that represents the predicted contents from the shown stimulus more sharply, leading to faster behavior. Our framework and results therefore shed new light on the cognitive information processing carried by dynamic brain activity.
Affiliation(s)
- Yuening Yan
  - School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
- Jiayu Zhan
  - School of Psychological and Cognitive Sciences, Peking University, Beijing 100871, China
- Robin A A Ince
  - School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
- Philippe G Schyns
  - School of Psychology and Neuroscience, University of Glasgow, G12 8QB Glasgow, United Kingdom
6
Celotto M, Bím J, Tlaie A, De Feo V, Lemke S, Chicharro D, Nili H, Bieler M, Hanganu-Opatz IL, Donner TH, Brovelli A, Panzeri S. An information-theoretic quantification of the content of communication between brain regions. bioRxiv 2023:2023.06.14.544903. [PMID: 37398375 PMCID: PMC10312682 DOI: 10.1101/2023.06.14.544903]
Abstract
Quantifying the amount, content and direction of communication between brain regions is key to understanding brain function. Traditional methods to analyze brain activity based on the Wiener-Granger causality principle quantify the overall information propagated by neural activity between simultaneously recorded brain regions, but do not reveal the information flow about specific features of interest (such as sensory stimuli). Here, we develop a new information-theoretic measure termed Feature-specific Information Transfer (FIT), quantifying how much information about a specific feature flows between two regions. FIT merges the Wiener-Granger causality principle with information-content specificity. We first derive FIT and prove analytically its key properties. We then illustrate and test them with simulations of neural activity, demonstrating that FIT identifies, within the total information flowing between regions, the information that is transmitted about specific features. We then analyze three neural datasets obtained with different recording methods, magneto- and electro-encephalography, and spiking activity, to demonstrate the ability of FIT to uncover the content and direction of information flow between brain regions beyond what can be discerned with traditional analytical methods. FIT can improve our understanding of how brain regions communicate by uncovering previously hidden feature-specific information flow.
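The core ingredient the abstract describes, discounting feature information a receiver merely inherits from a sender's past, can be sketched with a conditional mutual information on toy discrete data. This is only a hand-rolled illustration of the Wiener-Granger-style conditioning that FIT builds on, not the authors' FIT estimator; the names and the toy dynamics are assumptions:

```python
import numpy as np
from collections import Counter

def H(*cols):
    """Joint Shannon entropy (bits) of one or more discrete sample columns."""
    counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mi(a, b):
    """I(A;B) = H(A) + H(B) - H(A,B)."""
    return H(a) + H(b) - H(a, b)

def cmi(a, b, c):
    """I(A;B|C) = H(A,C) + H(B,C) - H(C) - H(A,B,C)."""
    return H(a, c) + H(b, c) - H(c) - H(a, b, c)

rng = np.random.default_rng(1)
n = 50000
s = rng.integers(0, 2, n)                          # stimulus feature, one per trial
x_past = np.where(rng.random(n) < 0.9, s, 1 - s)   # sender region encodes S noisily
y_now = x_past.copy()                              # receiver inherits the sender's past

info_in_receiver = mi(s, y_now)            # plain I(S; Y_t): clearly positive
transfer_residual = cmi(s, y_now, x_past)  # I(S; Y_t | X_past): vanishes
```

Here I(S; Y_t) is clearly positive, but conditioning on the sender's past removes it entirely, correctly attributing the receiver's feature information to transmission from the sender.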
Affiliation(s)
- Marco Celotto
  - Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
  - Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
  - Department of Pharmacy and Biotechnology, University of Bologna, Bologna, Italy
- Jan Bím
  - Datamole, s. r. o., Vitezne namesti 577/2 Dejvice, 160 00 Praha 6, Czech Republic
- Alejandro Tlaie
  - Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
- Vito De Feo
  - Artificial Intelligence Team, Future Health Technology, and Brain-Computer Interfaces laboratories, School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, UK
- Stefan Lemke
  - Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, United States
- Daniel Chicharro
  - Department of Computer Science, City, University of London, London, UK
- Hamed Nili
  - Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Malte Bieler
  - Mobile Technology Lab, School of Economics, Innovation and Technology, University College Kristiania, Oslo, Norway
- Ileana L. Hanganu-Opatz
  - Institute of Developmental Neurophysiology, Center for Molecular Neurobiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Tobias H. Donner
  - Section Computational Cognitive Neuroscience, Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Andrea Brovelli
  - Institut de Neurosciences de la Timone, UMR 7289, Aix Marseille Université, CNRS, Marseille, France
- Stefano Panzeri
  - Department of Excellence for Neural Information Processing, Center for Molecular Neurobiology (ZMNH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
  - Neural Computation Laboratory, Istituto Italiano di Tecnologia, Rovereto (TN), Italy
7
Impact of face outline, parafoveal feature number and feature type on early face perception in a gaze-contingent paradigm: A mass-univariate re-analysis of ERP data. NeuroImage: Reports 2022. [DOI: 10.1016/j.ynirp.2022.100148]
8
Higgins C, van Es MWJ, Quinn AJ, Vidaurre D, Woolrich MW. The relationship between frequency content and representational dynamics in the decoding of neurophysiological data. Neuroimage 2022; 260:119462. [PMID: 35872176 PMCID: PMC10565838 DOI: 10.1016/j.neuroimage.2022.119462] [Received: 03/03/2022] [Revised: 07/04/2022] [Accepted: 07/08/2022]
Abstract
Decoding of high temporal resolution, stimulus-evoked neurophysiological data is increasingly used to test theories about how the brain processes information. However, a fundamental relationship between the frequency spectra of the neural signal and the subsequent decoding accuracy timecourse is not widely recognised. We show that, in commonly used instantaneous signal decoding paradigms, each sinusoidal component of the evoked response is translated to double its original frequency in the subsequent decoding accuracy timecourses. We therefore recommend, where researchers use instantaneous signal decoding paradigms, that more aggressive low pass filtering is applied with a cut-off at one quarter of the sampling rate, to eliminate representational alias artefacts. However, this does not negate the accompanying interpretational challenges. We show that these can be resolved by decoding paradigms that utilise both a signal's instantaneous magnitude and its local gradient information as features for decoding. On a publicly available MEG dataset, this results in decoding accuracy metrics that are higher, more stable over time, and free of the technical and interpretational challenges previously characterised. We anticipate that a broader awareness of these fundamental relationships will enable stronger interpretations of decoding results by linking them more clearly to the underlying signal characteristics that drive them.
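The doubling effect can be reproduced in a few lines of simulation (the parameters are arbitrary illustrations, not taken from the paper): a 5 Hz sinusoidal class difference yields a decoding-accuracy timecourse whose spectral peak sits at 10 Hz, because accuracy falls to chance at every zero crossing of the evoked difference, i.e., twice per cycle:

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f = 200.0, 5.0                       # sampling rate and evoked frequency (Hz)
t = np.arange(0, 2.0, 1.0 / fs)          # 2 s epoch
evoked = np.sin(2 * np.pi * f * t)       # class-A mean; class-B mean is its mirror image

# Single-channel trials: condition mean plus Gaussian noise
a = evoked + rng.normal(0.0, 1.0, (400, t.size))
b = -evoked + rng.normal(0.0, 1.0, (400, t.size))

# Instantaneous decoding: learn the discriminant sign per timepoint on a
# training split, then score held-out trials at each timepoint
a_tr, a_te = a[:200], a[200:]
b_tr, b_te = b[:200], b[200:]
w = np.sign(a_tr.mean(axis=0) - b_tr.mean(axis=0))
acc = 0.5 * ((a_te * w > 0).mean(axis=0) + (b_te * w < 0).mean(axis=0))

# The accuracy timecourse oscillates at twice the evoked frequency
spec = np.abs(np.fft.rfft(acc - acc.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak_freq = freqs[np.argmax(spec)]       # lands at 2 * f, not f
```

Low-pass filtering the raw signal at one quarter of the sampling rate, as the authors recommend, keeps such doubled components below the Nyquist frequency of the accuracy timecourse.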
Affiliation(s)
- Cameron Higgins
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Mats W J van Es
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Andrew J Quinn
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Diego Vidaurre
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
  - Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Mark W Woolrich
  - Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
9
Combrisson E, Allegra M, Basanisi R, Ince RAA, Giordano B, Bastin J, Brovelli A. Group-level inference of information-based measures for the analyses of cognitive brain networks from neurophysiological data. Neuroimage 2022; 258:119347. [PMID: 35660460 DOI: 10.1016/j.neuroimage.2022.119347] [Received: 08/14/2021] [Revised: 05/24/2022] [Accepted: 05/30/2022]
Abstract
The reproducibility crisis in neuroimaging, in particular in the case of underpowered studies, has introduced doubts about our ability to reproduce, replicate and generalize findings. As a response, we have seen the emergence of suggested guidelines and principles, known as Good Scientific Practice, for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also represents a striking point against reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, to perform group-level inferences on non-negative measures of information encompassing metrics from information theory, machine learning, or measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving ground truth, together with test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions and inter-areal functional connectivity to measures summarizing network properties. We also present an open-source Python toolbox called Frites that implements the proposed statistical pipeline using information-theoretic metrics, such as single-trial functional connectivity estimations, for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention, as its robustness and flexibility could be the starting point toward the uniformization of statistical approaches.
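The group-level permutation logic can be sketched generically. This is an illustrative reimplementation of the idea, not the Frites API; the measure, toy data, and function names are assumptions:

```python
import numpy as np

def group_perm_test(measure, xs, ys, n_perm=500, seed=0):
    """Fixed-effect group-level permutation test for a non-negative
    dependence measure (a generic sketch, not the Frites implementation).

    xs, ys : lists with one pair of (n_trials,) arrays per subject.
    measure(x, y) -> scalar >= 0, e.g. a mutual-information estimate.
    Trial labels are shuffled within each subject to build the null
    distribution of the group-mean statistic.
    """
    rng = np.random.default_rng(seed)
    observed = np.mean([measure(x, y) for x, y in zip(xs, ys)])
    null = np.array([
        np.mean([measure(x, rng.permutation(y)) for x, y in zip(xs, ys)])
        for _ in range(n_perm)
    ])
    p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)  # avoids p = 0
    return observed, p_value

# Toy usage: squared correlation as the non-negative measure, 10 subjects
r2 = lambda x, y: np.corrcoef(x, y)[0, 1] ** 2
rng = np.random.default_rng(3)
xs = [rng.normal(size=200) for _ in range(10)]
ys = [x + rng.normal(size=200) for x in xs]      # genuine effect in every subject
obs, p = group_perm_test(r2, xs, ys, n_perm=200)
```

Because non-negative measures are biased upward on finite samples, comparing the group mean against a within-subject shuffle null, rather than against zero, is what keeps the inference valid.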
Affiliation(s)
- Etienne Combrisson
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
- Michele Allegra
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
  - Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, via Marzolo 8, 35131 Padova, Italy
  - Padua Neuroscience Center, Università di Padova, via Orus 2, 35131 Padova, Italy
- Ruggero Basanisi
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
- Robin A A Ince
  - School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Bruno Giordano
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
- Julien Bastin
  - Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France
- Andrea Brovelli
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
10
Daube C, Xu T, Zhan J, Webb A, Ince RA, Garrod OG, Schyns PG. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity. Patterns (N Y) 2021; 2:100348. [PMID: 34693374 PMCID: PMC8515012 DOI: 10.1016/j.patter.2021.100348] [Received: 10/14/2020] [Revised: 11/30/2020] [Accepted: 08/20/2021]
Abstract
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
Affiliation(s)
- Christoph Daube
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Tian Xu
  - Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, England, UK
- Jiayu Zhan
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Andrew Webb
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A.A. Ince
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G.B. Garrod
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G. Schyns
  - Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
11
Schmid PC, Amodio DM. Effects of high and low power on the visual encoding of faces. Soc Neurosci 2021; 16:293-306. [PMID: 33740878 DOI: 10.1080/17470919.2021.1906745]
Abstract
The experience of power is typically associated with social disengagement, yet power has also been shown to facilitate configural visual encoding - a process that supports the initial perception of a human face. To investigate this apparent contradiction, we directly tested whether power influences the visual encoding of faces. Two experiments, using neural and psychophysical assessments, revealed that low power impeded both first-order configural processing (the encoding of a stimulus as a face, assessed by the N170 event-related potential) and second-order configural processing (the encoding of feature distances within configuration, assessed using the face inversion paradigm), relative to high-power and control conditions. Power did not significantly affect facial feature encoding. Results reveal an early and automatic effect of low power on face perception, characterized primarily by diminished face processing. These findings suggest a novel interplay between visual and cognitive processes in power's influence on social behavior.
Affiliation(s)
- Petra C Schmid
  - Department of Management, Technology, and Economics, Swiss Federal Institute of Technology (ETH Zurich), Zurich, Switzerland
- David M Amodio
  - Department of Psychology, New York University, New York, NY, USA
  - Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
12
Vision: Face-Centered Representations in the Brain. Curr Biol 2020; 30:R1277-R1278. [PMID: 33080203 DOI: 10.1016/j.cub.2020.07.086]
Abstract
A longstanding debate in the face recognition field concerns the format of face representations in the brain. New face research clarifies some of this mystery by revealing a face-centered format in a patient with a left splenium lesion of the corpus callosum who perceives the right side of faces as 'melted'.
13
Faghel-Soubeyrand S, Lecomte T, Bravo MA, Lepage M, Potvin S, Abdel-Baki A, Villeneuve M, Gosselin F. Abnormal visual representations associated with confusion of perceived facial expression in schizophrenia with social anxiety disorder. npj Schizophrenia 2020; 6:28. [PMID: 33004809 PMCID: PMC7529755 DOI: 10.1038/s41537-020-00116-1] [Received: 04/01/2020] [Accepted: 08/19/2020]
Abstract
Deficits in social functioning are especially severe amongst schizophrenia individuals with the prevalent comorbidity of social anxiety disorder (SZ&SAD). Yet, the mechanisms underlying the recognition of facial expressions of emotion-a hallmark of social cognition-are practically unexplored in SZ&SAD. Here, we aim to reveal the visual representations SZ&SAD (n = 16) and controls (n = 14) rely on for facial expression recognition. We ran a total of 30,000 trials of a facial expression categorization task with Bubbles, a data-driven technique. Results showed that SZ&SAD's ability to categorize facial expressions was impaired compared to controls. More severe negative symptoms (flat affect, apathy, reduced social drive) were associated with more impaired emotion recognition ability, and with more biases in attributing neutral affect to faces. Higher social anxiety symptoms, on the other hand, were found to enhance reaction speed to neutral and angry faces. Most importantly, Bubbles showed that these abnormalities could be explained by inefficient visual representations of emotions: compared to controls, SZ&SAD subjects relied less on fine facial cues (high spatial frequencies) and more on coarse facial cues (low spatial frequencies). SZ&SAD participants also never relied on the eye regions (only on the mouth) to categorize facial expressions. We discuss how possible interactions between early (low sensitivity to coarse information) and late stages of the visual system (overreliance on these coarse features) might disrupt SZ&SAD's recognition of facial expressions. Our findings offer perceptual mechanisms through which comorbid SZ&SAD impairs crucial aspects of social cognition, as well as functional psychopathology.
Affiliation(s)
- Simon Faghel-Soubeyrand
- Département de Psychologie, Université de Montréal, Montréal, Canada; School of Psychology, University of Birmingham, Birmingham, United Kingdom.
- Tania Lecomte
- Département de Psychologie, Université de Montréal, Montréal, Canada
- Martin Lepage
- Department of Psychiatry, McGill University, Montréal, Canada
- Stéphane Potvin
- Département de Psychiatrie, Université de Montréal, Montréal, Canada
- Amal Abdel-Baki
- Centre hospitalier de l'Université de Montréal-Hôpital Notre-Dame, Montréal, Canada
- Marie Villeneuve
- Institut universitaire en santé mentale de Montréal, Montréal, Canada
- Frédéric Gosselin
- Département de Psychologie, Université de Montréal, Montréal, Canada
|
14
|
Gandhi TK, Tsourides K, Singhal N, Cardinaux A, Jamal W, Pantazis D, Kjelgaard M, Sinha P. Autonomic and Electrophysiological Evidence for Reduced Auditory Habituation in Autism. J Autism Dev Disord 2020; 51:2218-2228. [PMID: 32926307 DOI: 10.1007/s10803-020-04636-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
It is estimated that nearly 90% of children on the autism spectrum exhibit sensory atypicalities. What aspects of sensory processing are affected in autism? Although sensory processing can be studied along multiple dimensions, two of the most basic ones involve examining instantaneous sensory responses and how the responses change over time. These correspond to the dimensions of 'sensitivity' and 'habituation'. Results thus far have indicated that autistic individuals do not differ systematically from controls in sensory acuity/sensitivity. However, data from studies of habituation have been equivocal. We have studied habituation in autism using two measures: galvanic skin response (GSR) and magneto-encephalography (MEG). We report data from two independent studies. The first study was conducted with 13 autistic and 13 age-matched neurotypical young adults and used GSR to assess response to an extended metronomic sequence. The second study involved 24 participants (12 with an ASD diagnosis), different from those in study 1, spanning the pre-adolescent to young adult age range, and used MEG. Both studies reveal consistent patterns of reduced habituation in autistic participants. These results suggest that autism, through mechanisms that are yet to be elucidated, compromises a fundamental aspect of sensory processing, at least in the auditory domain. We discuss the implications for understanding sensory hypersensitivities, a hallmark phenotypic feature of autism, recently proposed theoretical accounts, and potential relevance for early detection of risk for autism.
Affiliation(s)
- Tapan K Gandhi
- Department of Electrical Engineering, Indian Institute of Technology, New Delhi, 110016, India.
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- Kleovoulos Tsourides
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Nidhi Singhal
- Open Doors School, Action for Autism, New Delhi, 110 054, India
- Annie Cardinaux
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Wasifa Jamal
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Dimitrios Pantazis
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Margaret Kjelgaard
- Communication Sciences and Disorders, Bridgewater State University, Bridgewater, MA, 02325, USA
- Pawan Sinha
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
|
15
|
Shephard E, Milosavljevic B, Mason L, Elsabbagh M, Tye C, Gliga T, Jones EJ, Charman T, Johnson MH. Neural and behavioural indices of face processing in siblings of children with autism spectrum disorder (ASD): A longitudinal study from infancy to mid-childhood. Cortex 2020; 127:162-179. [PMID: 32200288 PMCID: PMC7254063 DOI: 10.1016/j.cortex.2020.02.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2019] [Revised: 12/13/2019] [Accepted: 02/19/2020] [Indexed: 02/08/2023]
Abstract
Impaired face processing is proposed to play a key role in the early development of autism spectrum disorder (ASD) and to be an endophenotypic trait which indexes genetic risk for the disorder. However, no published work has examined the development of face processing abilities from infancy into the school-age years and how they relate to ASD symptoms in individuals with or at high-risk for ASD. In this novel study we investigated neural and behavioural measures of face processing at age 7 months and again in mid-childhood (age 7 years) as well as social-communication and sensory symptoms in siblings at high (n = 42) and low (n = 35) familial risk for ASD. In mid-childhood, high-risk siblings showed atypical P1 and N170 event-related potential correlates of face processing and, for high-risk boys only, poorer face and object recognition ability compared to low-risk siblings. These neural and behavioural atypicalities were associated with each other and with higher social-communication and sensory symptoms in mid-childhood. Additionally, more atypical neural correlates of object (but not face) processing in infancy were associated with less right-lateralised (more atypical) N170 amplitudes and greater social-communication problems in mid-childhood. The implications for models of face processing in ASD are discussed.
Affiliation(s)
- Elizabeth Shephard
- Department of Child & Adolescent Psychiatry, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK.
- Bosiljka Milosavljevic
- Department of Psychology, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Luke Mason
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- Mayada Elsabbagh
- Montreal Neurology Institute and Hospital, McGill University, Canada
- Charlotte Tye
- Department of Child & Adolescent Psychiatry, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Teodora Gliga
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK; University of East Anglia, Norwich, UK
- Emily J. H. Jones
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK
- Tony Charman
- Department of Psychology, Institute of Psychiatry, Psychology & Neuroscience, King's College London, UK
- Mark H Johnson
- Centre for Brain and Cognitive Development, Birkbeck, University of London, UK; Department of Psychology, Cambridge University, UK
|
16
|
Caplette L, Ince RAA, Jerbi K, Gosselin F. Disentangling presentation and processing times in the brain. Neuroimage 2020; 218:116994. [PMID: 32474082 DOI: 10.1016/j.neuroimage.2020.116994] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 05/16/2020] [Accepted: 05/22/2020] [Indexed: 11/30/2022] Open
Abstract
Visual object recognition seems to occur almost instantaneously. However, not only does it require hundreds of milliseconds of processing, but our eyes also typically fixate the object for hundreds of milliseconds. Consequently, information reaching our eyes at different moments is processed in the brain together. Moreover, information received at different moments during fixation is likely to be processed differently, notably because different features might be selectively attended at different moments. Here, we introduce a novel reverse correlation paradigm that allows us to uncover with millisecond precision the processing time course of specific information received on the retina at specific moments. Using faces as stimuli, we observed that processing at several electrodes and latencies was different depending on the moment at which information was received. Some of these variations were caused by a disruption occurring 160-200 ms after the face onset, suggesting a role of the N170 ERP component in gating information processing; others hinted at temporal compression and integration mechanisms. Importantly, the observed differences were not explained by simple adaptation or repetition priming; they were modulated by the task and correlated with differences in behavior. These results suggest that top-down routines of information sampling are applied to the continuous visual input, even within a single eye fixation.
Affiliation(s)
- Laurent Caplette
- Department of Psychology, Université de Montréal, Montréal, Qc, Canada.
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Karim Jerbi
- Department of Psychology, Université de Montréal, Montréal, Qc, Canada
- Frédéric Gosselin
- Department of Psychology, Université de Montréal, Montréal, Qc, Canada
|
17
|
Schyns PG, Zhan J, Jack RE, Ince RAA. Revealing the information contents of memory within the stimulus information representation framework. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190705. [PMID: 32248774 PMCID: PMC7209912 DOI: 10.1098/rstb.2019.0705] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The information contents of memory are the cornerstone of the most influential models in cognition. To illustrate, consider that in predictive coding, a prediction implies that specific information is propagated down from memory through the visual hierarchy. Likewise, recognizing the input implies that sequentially accrued sensory evidence is successfully matched with memorized information (categorical knowledge). Although the existing models of prediction, memory, sensory representation and categorical decision are all implicitly cast within an information processing framework, it remains a challenge to precisely specify what this information is, and therefore where, when and how the architecture of the brain dynamically processes it to produce behaviour. Here, we review a framework that addresses these challenges for the studies of perception and categorization: stimulus information representation (SIR). We illustrate how SIR can reverse engineer the information contents of memory from behavioural and brain measures in the context of specific cognitive tasks that involve memory. We discuss two specific lessons from this approach that generally apply to memory studies: the importance of task, to constrain what the brain does, and of stimulus variations, to identify the specific information contents that are memorized, predicted, recalled and replayed. This article is part of the Theo Murphy meeting issue ‘Memory reactivation: replaying events past, present and future’.
Affiliation(s)
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK; School of Psychology, University of Glasgow, Scotland G12 8QB, UK
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
- Rachael E Jack
- School of Psychology, University of Glasgow, Scotland G12 8QB, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, UK
|
18
|
Nemrodov D, Ling S, Nudnou I, Roberts T, Cant JS, Lee ACH, Nestor A. A multivariate investigation of visual word, face, and ensemble processing: Perspectives from EEG‐based decoding and feature selection. Psychophysiology 2019; 57:e13511. [DOI: 10.1111/psyp.13511] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 10/11/2019] [Accepted: 11/13/2019] [Indexed: 01/24/2023]
Affiliation(s)
- Dan Nemrodov
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Shouyu Ling
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Ilya Nudnou
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Tyler Roberts
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Jonathan S. Cant
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C. H. Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
|
19
|
Jaworska K, Yi F, Ince RAA, van Rijsbergen NJ, Schyns PG, Rousselet GA. Healthy aging delays the neural processing of face features relevant for behavior by 40 ms. Hum Brain Mapp 2019; 41:1212-1225. [PMID: 31782861 PMCID: PMC7268067 DOI: 10.1002/hbm.24869] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Revised: 10/16/2019] [Accepted: 11/10/2019] [Indexed: 12/18/2022] Open
Abstract
Fast and accurate face processing is critical for everyday social interactions, but it declines and becomes delayed with age, as measured by both neural and behavioral responses. Here, we addressed the critical challenge of understanding how aging changes neural information processing mechanisms to delay behavior. Young (20-36 years) and older (60-86 years) adults performed the basic social interaction task of detecting a face versus noise while we recorded their electroencephalogram (EEG). In each participant, using a new information-theoretic framework, we reconstructed the features supporting face detection behavior, and also where, when and how EEG activity represents them. We found that occipital-temporal pathway activity dynamically represents the eyes of the face images for behavior ~170 ms poststimulus, with a 40 ms delay in older adults that underlies their 200 ms behavioral deficit of slower reaction times. Our results therefore demonstrate how aging can change the neural information processing mechanisms that underlie behavioral slowdown.
Affiliation(s)
- Katarzyna Jaworska
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Fei Yi
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
|
20
|
Elucidating the Neural Representation and the Processing Dynamics of Face Ensembles. J Neurosci 2019; 39:7737-7747. [PMID: 31413074 DOI: 10.1523/jneurosci.0471-19.2019] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 08/02/2019] [Accepted: 08/06/2019] [Indexed: 11/21/2022] Open
Abstract
Extensive behavioral work has documented the ability of the human visual system to extract summary representations from face ensembles (e.g., the average identity of a crowd of faces). Yet, the nature of such representations, their underlying neural mechanisms, and their temporal dynamics await elucidation. Here, we examine summary representations of facial identity in human adults (of both sexes) with the aid of pattern analyses, as applied to EEG data, along with behavioral testing. Our findings confirm the ability of the visual system to form such representations both explicitly and implicitly (i.e., with or without the use of specific instructions). We show that summary representations, rather than individual ensemble constituents, can be decoded from neural signals elicited by ensemble perception, we describe the properties of such representations by appeal to multidimensional face space constructs, and we visualize their content through neural-based image reconstruction. Further, we show that the temporal profile of ensemble processing diverges systematically from that of single faces, consistent with a slower, more gradual accumulation of perceptual information. Thus, our findings reveal the representational basis of ensemble processing, its fine-grained visual content, and its neural dynamics.
SIGNIFICANCE STATEMENT: Humans encounter groups of faces, or ensembles, in a variety of environments. Previous behavioral research has investigated how humans process face ensembles as well as the types of summary representations that can be derived from them, such as average emotion, gender, and identity. However, the neural mechanisms mediating these processes are unclear. Here, we demonstrate that ensemble representations, with different facial identity summaries, can be decoded and even visualized from neural data through multivariate analyses. These results provide, to our knowledge, the first detailed investigation into the status and the visual content of neural ensemble representations of faces. Further, the current findings shed light on the temporal dynamics of face ensembles and their relationship with single-face processing.
|
21
|
From eye to face: The impact of face outline, feature number, and feature saliency on the early neural response to faces. Brain Res 2019; 1722:146343. [PMID: 31336099 DOI: 10.1016/j.brainres.2019.146343] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Revised: 07/12/2019] [Accepted: 07/19/2019] [Indexed: 11/22/2022]
Abstract
The LIFTED model of early face perception postulates that the face-sensitive N170 event-related potential may reflect underlying neural inhibition mechanisms which serve to regulate holistic and featural processing. It remains unclear, however, what specific factors impact these neural inhibition processes. Here, N170 peak responses were recorded whilst adults maintained fixation on a single eye using a gaze-contingent paradigm, and the presence/absence of a face outline, as well as the number and type of parafoveal features within the outline, were manipulated. N170 amplitudes and latencies were reduced when a single eye was fixated within a face outline compared to fixation on the same eye in isolation, demonstrating that the simple presence of a face outline is sufficient to elicit a shift towards a more face-like neural response. A monotonic decrease in the N170 amplitude and latency was observed with increasing numbers of parafoveal features, and the type of feature(s) present in parafovea further modulated this early face response. These results support the idea of neural inhibition exerted by parafoveal features onto the foveated feature as a function of the number, and possibly the nature, of parafoveal features. Specifically, the results suggest the use of a feature saliency framework (eyes > mouth > nose) at the neural level, such that the parafoveal eye may play a role in down-regulating the response to the other eye (in fovea) more so than the nose or the mouth. These results confirm the importance of parafoveal features and the face outline in the neural inhibition mechanism, and provide further support for a feature saliency mechanism guiding early face perception.
|
22
|
Abstract
This paper has developed a neuromarketing framework measuring the relationship between products and services in product–service systems (PSSs), particularly regarding its impact on PSS decision making. We divided the PSSs into different levels of product and service combinations in order to identify the impact of the various elements in PSS on decision making, particularly the key factor that induces significant variation in the purchase rate. The experiments showed the neural mechanisms behind the value perception of PSSs; this has been indicated by the appearance of N170, which is related to the cognition processing of familiarity and similarity. It is concluded that the perceived value of the product-oriented PSS is mainly determined by the product attribute, as the promotional effect of service has been clarified. The results explain the psychological and neurological activities that take place when consumers are browsing product–service bundles, which may help corporations better understand the relationships among the components in product–service bundles, providing insight for PSS innovation and service design.
|
23
|
Zhan J, Ince RAA, van Rijsbergen N, Schyns PG. Dynamic Construction of Reduced Representations in the Brain for Perceptual Decision Behavior. Curr Biol 2019; 29:319-326.e4. [PMID: 30639108 PMCID: PMC6345582 DOI: 10.1016/j.cub.2018.11.049] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2018] [Revised: 10/23/2018] [Accepted: 11/20/2018] [Indexed: 01/03/2023]
Abstract
Over the past decade, extensive studies of the brain regions that support face, object, and scene recognition suggest that these regions have a hierarchically organized architecture that spans the occipital and temporal lobes [1-14], where visual categorizations unfold over the first 250 ms of processing [15-19]. This same architecture is flexibly involved in multiple tasks that require task-specific representations (e.g., categorizing the same object as "a car" or "a Porsche"). While we partly understand where and when these categorizations happen in the occipito-ventral pathway, the next challenge is to unravel how these categorizations happen. That is, how does high-dimensional input collapse in the occipito-ventral pathway to become low-dimensional representations that guide behavior? To address this, we investigated what information the brain processes in a visual perception task and visualized the dynamic representation of this information in brain activity. To do so, we developed stimulus information representation (SIR), an information theoretic framework, to tease apart stimulus information that supports behavior from that which does not. We then tracked the dynamic representations of both in magneto-encephalographic (MEG) activity. Using SIR, we demonstrate that a rapid (∼170 ms) reduction of behaviorally irrelevant information occurs in the occipital cortex and that representations of the information that supports distinct behaviors are constructed in the right fusiform gyrus (rFG). Our results thus highlight how SIR can be used to investigate the component processes of the brain by considering interactions between three variables (stimulus information, brain activity, behavior), rather than just two, as is the current norm.
Affiliation(s)
- Jiayu Zhan
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Nicola van Rijsbergen
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Scotland G12 8QB, United Kingdom; School of Psychology, University of Glasgow, 62 Hillhead Street, Glasgow, Scotland G12 8QB, United Kingdom.
|
24
|
Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 09/22/2018] [Accepted: 09/30/2018] [Indexed: 11/27/2022] Open
|
25
|
Dupuis-Roy N, Faghel-Soubeyrand S, Gosselin F. Time course of the use of chromatic and achromatic facial information for sex categorization. Vision Res 2018; 157:36-43. [PMID: 30201473 DOI: 10.1016/j.visres.2018.08.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Revised: 07/29/2018] [Accepted: 08/29/2018] [Indexed: 11/27/2022]
Abstract
The most useful facial features for sex categorization are the eyes, the eyebrows, and the mouth. Dupuis-Roy et al. reported a large positive correlation between the use of the mouth region and rapid correct answers [Journal of Vision 9 (2009) 1-8]. Given the chromatic information in this region, they hypothesized that the extraction of chromatic and achromatic cues may have different time courses. Here, we tested this hypothesis directly: 110 participants categorized the sex of 300 face images whose chromatic and achromatic content was partially revealed through time (200 ms) and space using randomly located spatio-temporal Gaussian apertures (i.e. the Bubbles technique). This also allowed us to directly compare, for the first time, the relative importance of chromatic and achromatic facial cues for sex categorization. Results showed that face-sex categorization relies mostly on achromatic (luminance) information concentrated in the eye and eyebrow regions, especially the left eye and eyebrow. Additional analyses indicated that chromatic information located in the mouth/philtrum region was used earlier-peaking as early as 35 ms after stimulus onset-than achromatic information in the eye regions-peaking between 165 and 176 ms after stimulus onset-as was speculated by Dupuis-Roy et al. A non-linear analysis failed to support Yip and Sinha's proposal that processing of chromatic variations can improve subsequent processing of achromatic spatial cues, possibly via surface segmentation [Perception 31 (2002) 995-1003]. Instead, we argue that the brain prioritizes chromatic information to compensate for the sluggishness of chromatic processing in early visual areas, and allow chromatic and achromatic information to reach higher-level visual areas simultaneously.
Affiliation(s)
- N Dupuis-Roy
- Département de psychologie, Université de Montréal, Canada
- F Gosselin
- Département de psychologie, Université de Montréal, Canada.
|
26
|
Park H, Ince RAA, Schyns PG, Thut G, Gross J. Representational interactions during audiovisual speech entrainment: Redundancy in left posterior superior temporal gyrus and synergy in left motor cortex. PLoS Biol 2018; 16:e2006558. [PMID: 30080855 PMCID: PMC6095613 DOI: 10.1371/journal.pbio.2006558] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Revised: 08/16/2018] [Accepted: 07/24/2018] [Indexed: 11/24/2022] Open
Abstract
Integration of multimodal sensory information is fundamental to many aspects of human behavior, but the neural mechanisms underlying these processes remain mysterious. For example, during face-to-face communication, we know that the brain integrates dynamic auditory and visual inputs, but we do not yet understand where and how such integration mechanisms support speech comprehension. Here, we quantify representational interactions between dynamic audio and visual speech signals and show that different brain regions exhibit different types of representational interaction. With a novel information theoretic measure, we found that theta (3-7 Hz) oscillations in the posterior superior temporal gyrus/sulcus (pSTG/S) represent auditory and visual inputs redundantly (i.e., represent common features of the two), whereas the same oscillations in left motor and inferior temporal cortex represent the inputs synergistically (i.e., the instantaneous relationship between audio and visual inputs is also represented). Importantly, redundant coding in the left pSTG/S and synergistic coding in the left motor cortex predict behavior, i.e., speech comprehension performance. Our findings therefore demonstrate that processes classically described as integration can have different statistical properties and may reflect distinct mechanisms that occur in different brain regions to support audiovisual speech comprehension.
Affiliation(s)
- Hyojin Park
- School of Psychology, Centre for Human Brain Health (CHBH), University of Birmingham, Birmingham, United Kingdom
- Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G. Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignalanalysis, University of Muenster, Muenster, Germany
|
27
|
Accuracy of Rats in Discriminating Visual Objects Is Explained by the Complexity of Their Perceptual Strategy. Curr Biol 2018; 28:1005-1015.e5. [PMID: 29551414 PMCID: PMC5887110 DOI: 10.1016/j.cub.2018.02.037] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 01/17/2018] [Accepted: 02/15/2018] [Indexed: 11/20/2022]
Abstract
Despite their growing popularity as models of visual functions, it remains unclear whether rodents are capable of deploying advanced shape-processing strategies when engaged in visual object recognition. In rats, for instance, pattern vision has been reported to range from mere detection of overall object luminance to view-invariant processing of discriminative shape features. Here we sought to clarify how refined object vision is in rodents, and how variable the complexity of their visual processing strategy is across individuals. To this aim, we measured how well rats could discriminate a reference object from 11 distractors, which spanned a spectrum of image-level similarity to the reference. We also presented the animals with random variations of the reference, and processed their responses to these stimuli to derive subject-specific models of rat perceptual choices. Our models successfully captured the highly variable discrimination performance observed across subjects and object conditions. In particular, they revealed that the animals that succeeded with the most challenging distractors were those that integrated the wider variety of discriminative features into their perceptual strategies. Critically, these strategies were largely preserved when the rats were required to discriminate outlined and scaled versions of the stimuli, thus showing that rat object vision can be characterized as a transformation-tolerant, feature-based filtering process. Overall, these findings indicate that rats are capable of advanced processing of shape information, and point to the rodents as powerful models for investigating the neuronal underpinnings of visual object recognition and other high-level visual functions. 
- The ability of rats to discriminate visual objects varies greatly across subjects
- Such variability is accounted for by the diversity of rat perceptual strategies
- Animals building richer perceptual templates achieve higher accuracy
- Perceptual strategies remain largely invariant across object transformations
|
28
|
Giordano BL, Ince RAA, Gross J, Schyns PG, Panzeri S, Kayser C. Contributions of local speech encoding and functional connectivity to audio-visual speech perception. eLife 2017; 6. [PMID: 28590903 PMCID: PMC5462535 DOI: 10.7554/elife.24763] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Accepted: 05/07/2017] [Indexed: 11/13/2022] Open
Abstract
Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI:http://dx.doi.org/10.7554/eLife.24763.001
When listening to someone in a noisy environment, such as a cocktail party, we can understand the speaker more easily if we can also see his or her face. Movements of the lips and tongue convey additional information that helps the listener’s brain separate out syllables, words and sentences. However, exactly where in the brain this effect occurs and how it works remain unclear. To find out, Giordano et al. scanned the brains of healthy volunteers as they watched clips of people speaking. The clarity of the speech varied between clips. Furthermore, in some of the clips the lip movements of the speaker corresponded to the speech in question, whereas in others the lip movements were nonsense babble.
As expected, the volunteers performed better on a word recognition task when the speech was clear and when the lip movements agreed with the spoken dialogue. Watching the video clips stimulated rhythmic activity in multiple regions of the volunteers’ brains, including areas that process sound and areas that plan movements. Speech is itself rhythmic, and the volunteers’ brain activity synchronized with the rhythms of the speech they were listening to. Seeing the speaker’s face increased this degree of synchrony. However, it also made it easier for sound-processing regions within the listeners’ brains to transfer information to one another. Notably, only the latter effect predicted improved performance on the word recognition task. This suggests that seeing a person’s face makes it easier to understand his or her speech by boosting communication between brain regions, rather than through effects on individual areas. Further work is required to determine where and how the brain encodes lip movements and speech sounds. The next challenge will be to identify where these two sets of information interact, and how the brain merges them together to generate the impression of specific words. DOI:http://dx.doi.org/10.7554/eLife.24763.002
Affiliation(s)
- Bruno L Giordano
- Institut de Neurosciences de la Timone UMR 7289, Aix Marseille Université - Centre National de la Recherche Scientifique, Marseille, France; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Robin A A Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Philippe G Schyns
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Stefano Panzeri
- Neural Computation Laboratory, Center for Neuroscience and Cognitive Systems, Istituto Italiano di Tecnologia, Rovereto, Italy
- Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
|