1. Duyck S, Costantino AI, Bracci S, Op de Beeck H. A computational deep learning investigation of animacy perception in the human brain. Commun Biol 2024; 7:1718. [PMID: 39741161] [DOI: 10.1038/s42003-024-07415-8]
Abstract
The functional organization of the human object vision pathway distinguishes between animate and inanimate objects. To understand animacy perception, we explore the case of zoomorphic objects resembling animals. While the perception of these objects as animal-like seems obvious to humans, such "Animal bias" is a striking discrepancy between the human brain and deep neural networks (DNNs). We computationally investigated the potential origins of this bias. We successfully induced this bias in DNNs trained explicitly with zoomorphic objects. Alternative training schedules failed to cause an Animal bias. We considered the superordinate distinction between animate and inanimate classes, the sensitivity for faces and bodies, the bias for shape over texture, the role of ecologically valid categories, recurrent connections, and language-informed visual processing. These findings provide computational support that the Animal bias for zoomorphic objects is a unique property of human perception yet can be explained by human learning history.
Affiliation(s)
- Stefanie Duyck
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Andrea I Costantino
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Stefania Bracci
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Hans Op de Beeck
- Brain and Cognition, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
2. Weber V, Ruch S, Skieresz NH, Rothen N, Reber TP. Correlates of implicit semantic processing as revealed by representational similarity analysis applied to EEG. iScience 2024; 27:111149. [PMID: 39524349] [PMCID: PMC11546129] [DOI: 10.1016/j.isci.2024.111149]
Abstract
Most researchers agree that some stages of object recognition can proceed implicitly. Implicit recognition occurs when an object is automatically and unintentionally encoded and represented in the brain even though the object is irrelevant to the current task. No consensus has been reached, however, on the level of semantic abstraction that implicit processing can reach. An informative method to explore the level of abstraction and the time courses of informational content in neural representations is representational similarity analysis (RSA). Here, we apply RSA to EEG data recorded while participants processed semantics of visually presented objects. Explicit focus on semantics was given when participants classified images of objects as manmade or natural. For implicit processing of semantics, participants judged the location of images on the screen. The category animate/inanimate as well as more concrete categories (e.g., birds, fruit, musical instruments, etc.) are processed implicitly, whereas the category manmade/natural is not.
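The RSA logic described in this abstract can be sketched in a few lines: build a neural representational dissimilarity matrix (RDM) from condition-wise response patterns, build a model RDM from a candidate category structure (e.g., animate/inanimate), and rank-correlate their lower triangles. The data below are random stand-ins, not the study's EEG recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: response patterns for 8 object conditions x 32 EEG channels
# at one timepoint (a real analysis would repeat this per timepoint).
patterns = rng.standard_normal((8, 32))

# Neural RDM: 1 - Pearson correlation between condition patterns.
neural_rdm = 1.0 - np.corrcoef(patterns)

# Model RDM for an animate/inanimate distinction (first 4 conditions animate).
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

def rankdata(a):
    """Average ranks with tie handling (needed because the model RDM is binary)."""
    a = np.asarray(a, dtype=float)
    sorter = np.argsort(a)
    inv = np.empty_like(sorter)
    inv[sorter] = np.arange(len(a))
    s = a[sorter]
    obs = np.r_[True, s[1:] != s[:-1]]
    dense = obs.cumsum()[inv]
    count = np.r_[np.nonzero(obs)[0], len(a)]
    return 0.5 * (count[dense] + count[dense - 1] + 1)

# Spearman correlation between the RDMs, computed on the lower triangles only
# (the matrices are symmetric with a zero diagonal).
tri = np.tril_indices(8, k=-1)
rho = np.corrcoef(rankdata(neural_rdm[tri]), rankdata(model_rdm[tri]))[0, 1]
```

With real data, repeating this per timepoint yields the time course of category information that such studies report.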
Affiliation(s)
- Vincent Weber
- Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Simon Ruch
- Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Nicole H. Skieresz
- Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- The LINE (Laboratory for Investigative Neurophysiology), Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Nicolas Rothen
- Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Thomas P. Reber
- Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Department of Epileptology, University of Bonn Medical Centre, Bonn, Germany
3. Grootswagers T, Robinson AK, Shatek SM, Carlson TA. Mapping the dynamics of visual feature coding: Insights into perception and integration. PLoS Comput Biol 2024; 20:e1011760. [PMID: 38190390] [PMCID: PMC10798643] [DOI: 10.1371/journal.pcbi.1011760]
Abstract
The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, interactions, and their relationship to human perception, we investigated neural responses and perceptual similarity judgements to a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system, such that all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, Australia
- Amanda K. Robinson
- Queensland Brain Institute, The University of Queensland, Brisbane, Australia
- Sophia M. Shatek
- School of Psychology, The University of Sydney, Sydney, Australia
4. Carota F, Schoffelen JM, Oostenveld R, Indefrey P. Parallel or sequential? Decoding conceptual and phonological/phonetic information from MEG signals during language production. Cogn Neuropsychol 2023; 40:298-317. [PMID: 38105574] [DOI: 10.1080/02643294.2023.2283239]
Abstract
Speaking requires the temporally coordinated planning of core linguistic information, from conceptual meaning to articulation. Recent neurophysiological results suggested that these operations involve a cascade of neural events with subsequent onset times, whilst competing evidence suggests early parallel neural activation. To test these hypotheses, we examined the sources of neuromagnetic activity recorded from 34 participants overtly naming 134 images from 4 object categories (animals, tools, foods and clothes). Within each category, word length and phonological neighbourhood density were co-varied to target phonological/phonetic processes. Multivariate pattern analyses (MVPA) searchlights in source space decoded object categories in occipitotemporal and middle temporal cortex, and phonological/phonetic variables in left inferior frontal (BA 44) and motor cortex early on. The findings suggest early activation of multiple variables due to intercorrelated properties and interactivity of processing, thus raising important questions about the representational properties of target words during the preparatory time enabling overt speaking.
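The multivariate decoding step behind such searchlight analyses can be illustrated with a minimal cross-validated classifier: split trials into folds, fit a nearest-class-mean decoder on the training folds, and score it on the held-out fold. All data below are random stand-ins for source-space MEG patterns, not the study's recordings.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_class, n_features, n_folds = 40, 30, 5

# Stand-in source-space patterns for two object categories (e.g. animals
# vs tools), with a small mean difference between the classes.
X = np.vstack([rng.standard_normal((n_per_class, n_features)) + 0.5,
               rng.standard_normal((n_per_class, n_features)) - 0.5])
y = np.repeat([0, 1], n_per_class)

# Shuffle once, then cross-validate a nearest-class-mean decoder.
order = rng.permutation(len(y))
X, y = X[order], y[order]
folds = np.array_split(np.arange(len(y)), n_folds)

accuracies = []
for test_idx in folds:
    train = np.setdiff1d(np.arange(len(y)), test_idx)
    mu0 = X[train][y[train] == 0].mean(axis=0)  # class means from training folds
    mu1 = X[train][y[train] == 1].mean(axis=0)
    # Predict the nearer class mean for each held-out trial.
    pred = (np.linalg.norm(X[test_idx] - mu1, axis=1)
            < np.linalg.norm(X[test_idx] - mu0, axis=1)).astype(int)
    accuracies.append((pred == y[test_idx]).mean())

mean_accuracy = float(np.mean(accuracies))
```

A searchlight analysis repeats this at every source location (and timepoint), mapping where and when category information is decodable.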
Affiliation(s)
- Francesca Carota
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Jan-Mathijs Schoffelen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Robert Oostenveld
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- NatMEG, Karolinska Institutet, Stockholm, Sweden
- Peter Indefrey
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Cognitive Neuroscience, Radboud University, Nijmegen, The Netherlands
- Institut für Sprache und Information, Heinrich Heine University, Düsseldorf, Germany
5. Mendez MF. A Functional and Neuroanatomical Model of Dehumanization. Cogn Behav Neurol 2023; 36:42-47. [PMID: 36149395] [PMCID: PMC9991937] [DOI: 10.1097/wnn.0000000000000316]
Abstract
The dehumanization of others is a major scourge of mankind; however, despite its significance, physicians have little understanding of the neurobiological mechanisms for this behavior. We can learn much about dehumanization from its brain-behavior localization and its manifestations in people with brain disorders. Dehumanization, the act of denying human qualities to others, takes two major forms. Animalistic dehumanization (also called infrahumanization) results from increased inhibition of prepotent tendencies for emotional feelings and empathy for others. The mechanism may be increased activity in the inferior frontal gyrus. In contrast, mechanistic dehumanization results from a loss of perception of basic human nature and decreased mind-attribution. The mechanism may be hypofunction of a mentalization network centered in the ventromedial prefrontal cortex and adjacent subgenual anterior cingulate cortex. Whereas developmental factors may promote animalistic dehumanization, brain disorders, such as frontotemporal dementia, primarily promote mechanistic dehumanization. The consideration of these two processes as distinct, with different neurobiological origins, could help guide efforts to mitigate expression of this behavior.
Affiliation(s)
- Mario F. Mendez
- Department of Neurology, University of California Los Angeles, Los Angeles, California
- Psychiatry and Behavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, California
- Neurology Service, Neurobehavior Unit, V.A. Greater Los Angeles Healthcare System, Los Angeles, California
6. Lee J, Jung M, Lustig N, Lee J. Neural representations of the perception of handwritten digits and visual objects from a convolutional neural network compared to humans. Hum Brain Mapp 2023; 44:2018-2038. [PMID: 36637109] [PMCID: PMC9980894] [DOI: 10.1002/hbm.26189]
Abstract
We investigated neural representations for visual perception of 10 handwritten digits and six visual objects from a convolutional neural network (CNN) and humans using functional magnetic resonance imaging (fMRI). Once our CNN model was fine-tuned using a pre-trained VGG16 model to recognize the visual stimuli from the digit and object categories, representational similarity analysis (RSA) was conducted using neural activations from fMRI and feature representations from the CNN model across all 16 classes. The encoded neural representation of the CNN model exhibited the hierarchical topography mapping of the human visual system. The feature representations in the lower convolutional (Conv) layers showed greater similarity with the neural representations in the early visual areas and parietal cortices, including the posterior cingulate cortex. The feature representations in the higher Conv layers were encoded in the higher-order visual areas, including the ventral/medial/dorsal stream and middle temporal complex. The neural representations in the classification layers were observed mainly in the ventral stream visual cortex (including the inferior temporal cortex), superior parietal cortex, and prefrontal cortex. There was a surprising similarity between the neural representations from the CNN model and the neural representations for human visual perception in the context of the perception of digits versus objects, particularly in the primary visual and associated areas. This study also illustrates the uniqueness of human visual perception. Unlike the CNN model, the neural representation of digits and objects for humans is more widely distributed across the whole brain, including the frontal and temporal areas.
Affiliation(s)
- Juhyeon Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Minyoung Jung
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Niv Lustig
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Jong-Hwan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
7. Disentangling five dimensions of animacy in human brain and behaviour. Commun Biol 2022; 5:1247. [PMCID: PMC9663603] [DOI: 10.1038/s42003-022-04194-y]
Abstract
Distinguishing animate from inanimate things is of great behavioural importance. Despite distinct brain and behavioural responses to animate and inanimate things, it remains unclear which object properties drive these responses. Here, we investigate the importance of five object dimensions related to animacy ("being alive", "looking like an animal", "having agency", "having mobility", and "being unpredictable") in brain (fMRI, EEG) and behaviour (property and similarity judgements) of 19 participants. We used a stimulus set of 128 images, optimized by a genetic algorithm to disentangle these five dimensions. The five dimensions explained much variance in the similarity judgements. Each dimension explained significant variance in the brain representations (except, surprisingly, "being alive"), however, to a lesser extent than in behaviour. Different brain regions sensitive to animacy may represent distinct dimensions, either as accessible perceptual stepping stones toward detecting whether something is alive or because they are of behavioural importance in their own right.
8. Roy Chowdhury P, Singh Wadhwa A, Tyagi N. Brain inspired face recognition: A computational framework. Cogn Syst Res 2022. [DOI: 10.1016/j.cogsys.2022.11.006]
9. Mattioni S, Rezk M, Battal C, Vadlamudi J, Collignon O. Impact of blindness onset on the representation of sound categories in occipital and temporal cortices. eLife 2022; 11:e79370. [PMID: 36070354] [PMCID: PMC9451537] [DOI: 10.7554/elife.79370]
Abstract
The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of uni- and multivoxel analyses applied to fMRI data, we extend our previous findings, comprehensively investigating how early and late acquired blindness impact the cortical regions coding for the deprived and the remaining senses. First, we show enhanced univariate response to sounds in part of the occipital cortex of both blind groups that is concomitant to reduced auditory responses in temporal regions. We then reveal that the representation of the sound categories in the occipital and temporal regions is more similar in blind subjects compared to sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the 'human voice' category that we observed in the temporal cortex of all sighted and blind groups is enhanced in occipital regions in blind groups, suggesting that the representation of vocal information is more similar between the occipital and temporal regions in blind compared to sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g. pitch, harmonicity) in occipital and temporal regions but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.
Affiliation(s)
- Stefania Mattioni
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Department of Brain and Cognition, KU Leuven, Leuven, Belgium
- Mohamed Rezk
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Ceren Battal
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Jyothirmayi Vadlamudi
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Olivier Collignon
- Institute for research in Psychology (IPSY) & Neuroscience (IoNS), Louvain Bionics, Crossmodal Perception and Plasticity Laboratory, University of Louvain (UCLouvain), Louvain-la-Neuve, Belgium
- Center for Mind/Brain Studies, University of Trento, Trento, Italy
- School of Health Sciences, HES-SO Valais-Wallis, Sion, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
10. Grootswagers T, McKay H, Varlet M. Unique contributions of perceptual and conceptual humanness to object representations in the human brain. Neuroimage 2022; 257:119350. [PMID: 35659994] [DOI: 10.1016/j.neuroimage.2022.119350]
Abstract
The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.
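The "unique variance" logic in this abstract can be illustrated with a hierarchical-regression sketch: fit the neural data with the full set of predictors and with each reduced set, and take the drop in R² as a predictor's unique contribution. All data here are simulated stand-ins, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 190  # e.g. the lower-triangle entries of a 20 x 20 dissimilarity matrix

# Simulated predictors: perceptual and conceptual humanness dissimilarities,
# plus a neural dissimilarity that genuinely depends on both.
perceptual = rng.standard_normal(n)
conceptual = rng.standard_normal(n)
neural = 0.6 * perceptual + 0.4 * conceptual + rng.standard_normal(n)

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Unique variance of each predictor: full-model R^2 minus the R^2 of the
# model that leaves that predictor out.
full = r_squared([perceptual, conceptual], neural)
unique_conceptual = full - r_squared([perceptual], neural)
unique_perceptual = full - r_squared([conceptual], neural)
```

Because the reduced models are nested in the full model, each unique contribution is non-negative by construction; a value clearly above zero is what "explaining unique variance" means here.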
Affiliation(s)
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
- Harriet McKay
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
11. Shatek SM, Robinson AK, Grootswagers T, Carlson TA. Capacity for movement is an organisational principle in object representations. Neuroimage 2022; 261:119517. [PMID: 35901917] [DOI: 10.1016/j.neuroimage.2022.119517]
Abstract
The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.
Affiliation(s)
- Sophia M Shatek
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
- Amanda K Robinson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
- Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Thomas A Carlson
- School of Psychology, University of Sydney, Camperdown, NSW 2006, Australia
12. The role of animal faces in the animate-inanimate distinction in the ventral temporal cortex. Neuropsychologia 2022; 169:108192. [PMID: 35245528] [DOI: 10.1016/j.neuropsychologia.2022.108192]
Abstract
Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects and that could potentially explain the animate-inanimate distinction in the VTC is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of animal's ability to move and think) is present in parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that faces are not necessary in order to observe high-level animacy information (e.g., agency) in parts of the VTC. A possible explanation could be that this animacy-related activity is driven not by faces per se, or the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC might treat the face as a proxy for agency, a ubiquitous feature of familiar animals.
13. Shi R, Zhao Y, Cao Z, Liu C, Kang Y, Zhang J. Categorizing objects from MEG signals using EEGNet. Cogn Neurodyn 2021; 16:365-377. [PMID: 35401863] [PMCID: PMC8934895] [DOI: 10.1007/s11571-021-09717-7]
Abstract
Magnetoencephalography (MEG) signals have demonstrated their practical application to reading human minds. Current neural decoding studies have made great progress in building subject-wise decoding models to extract and discriminate the temporal/spatial features in neural signals. In this paper, we used a compact convolutional neural network, EEGNet, to build a common decoder across subjects, which deciphered the categories of objects (faces, tools, animals, and scenes) from MEG data. This study investigated the influence of the spatiotemporal structure of MEG on EEGNet's classification performance. Furthermore, we replaced EEGNet's convolution layers with two sets of parallel convolution structures to extract the spatial and temporal features simultaneously. Our results showed that the organization of MEG data fed into the EEGNet has an effect on EEGNet classification accuracy, and that the parallel convolution structures in EEGNet are beneficial for extracting and fusing spatial and temporal MEG features. The classification accuracy demonstrated that the EEGNet succeeds in building a common decoder model across subjects, and outperforms several state-of-the-art feature fusing methods.
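The parallel spatial/temporal convolution idea can be sketched without any deep-learning framework: one branch convolves each sensor's time series with a temporal kernel, the other applies spatial filters (weighted sums across sensors) at every timepoint, and the pooled outputs are concatenated. Shapes and kernels below are arbitrary stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_times = 64, 100
epoch = rng.standard_normal((n_channels, n_times))  # one stand-in MEG trial

# Temporal branch: convolve every channel with a shared temporal kernel,
# as a 1D temporal convolution layer would.
t_kernel = rng.standard_normal(9)
temporal = np.stack([np.convolve(ch, t_kernel, mode="valid") for ch in epoch])

# Spatial branch: a bank of 4 spatial filters, each a weighted sum over all
# channels at each timepoint (a full-depth, length-1-in-time convolution).
s_filters = rng.standard_normal((4, n_channels))
spatial = s_filters @ epoch

# Fuse the two branches by concatenating their time-pooled features; a
# trained network would feed these into a classification layer.
features = np.concatenate([temporal.mean(axis=1), spatial.mean(axis=1)])
```

In a real model the kernels and filters are learned; running the two branches in parallel, rather than stacking them, is what lets spatial and temporal features be extracted simultaneously.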
Affiliation(s)
- Ran Shi
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yanyu Zhao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Zhiyuan Cao
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Chunyu Liu
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Yi Kang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Jiacai Zhang
- School of Artificial Intelligence, Beijing Normal University, Beijing, 100875, China
- Engineering Research Center of Intelligent Technology and Educational Application, Ministry of Education, Beijing, 100875, China
14. Ritchie JB, Zeman AA, Bosmans J, Sun S, Verhaegen K, Op de Beeck HP. Untangling the Animacy Organization of Occipitotemporal Cortex. J Neurosci 2021; 41:7103-7119. [PMID: 34230104] [PMCID: PMC8372013] [DOI: 10.1523/jneurosci.2628-20.2021]
Abstract
Some of the most impressive functional specializations in the human brain are found in the occipitotemporal cortex (OTC), where several areas exhibit selectivity for a small number of visual categories, such as faces and bodies, and spatially cluster based on stimulus animacy. Previous studies suggest this animacy organization reflects the representation of an intuitive taxonomic hierarchy, distinct from the presence of face- and body-selective areas in OTC. Using human functional magnetic resonance imaging, we investigated the independent contribution of these two factors (the face-body division and the taxonomic hierarchy) in accounting for the animacy organization of OTC, and whether they might also be reflected in the architecture of several deep neural networks that have not been explicitly trained to differentiate taxonomic relations. We found that graded visual selectivity, based on animal resemblance to human faces and bodies, masquerades as an apparent animacy continuum, which suggests that taxonomy is not a separate factor underlying the organization of the ventral visual pathway.

SIGNIFICANCE STATEMENT: Portions of the visual cortex are specialized to determine whether types of objects are animate in the sense of being capable of self-movement. Two factors have been proposed as accounting for this animacy organization: representations of faces and bodies and an intuitive taxonomic continuum of humans and animals. We performed an experiment to assess the independent contribution of both of these factors. We found that graded visual representations, based on animal resemblance to human faces and bodies, masquerade as an apparent animacy continuum, suggesting that taxonomy is not a separate factor underlying the organization of areas in the visual cortex.
Affiliation(s)
- J Brendan Ritchie
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Astrid A Zeman
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Joyce Bosmans
- Faculty of Medicine and Health Sciences, University of Antwerp, 2000 Antwerp, Belgium
- Shuo Sun
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Kirsten Verhaegen
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
- Hans P Op de Beeck
- Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, Katholieke Universiteit Leuven, 3000 Leuven, Belgium
15. Reaction times predict dynamic brain representations measured with MEG for only some object categorisation tasks. Neuropsychologia 2020; 151:107687. [PMID: 33212137] [DOI: 10.1016/j.neuropsychologia.2020.107687]
Abstract
Behavioural categorisation reaction times (RTs) provide a useful way to link behaviour to brain representations measured with neuroimaging. In this framework, objects are assumed to be represented in a multidimensional activation space, with the distances between object representations indicating their degree of neural similarity. Faster RTs have been reported to correlate with greater distances from a classification decision boundary for animacy. Objects inherently belong to more than one category, yet it is not known whether the RT-distance relationship, and its evolution over the time-course of the neural response, is similar across different categories. Here we used magnetoencephalography (MEG) to address this question. Our stimuli included typically animate and inanimate objects, as well as more ambiguous examples (i.e., robots and toys). We conducted four semantic categorisation tasks on the same stimulus set assessing animacy, living, moving, and human-similarity concepts, and linked the categorisation RTs to MEG time-series decoding data. Our results show a sustained RT-distance relationship throughout the time course of object processing for not only animacy, but also categorisation according to human-similarity. Interestingly, this sustained RT-distance relationship was not observed for the living and moving category organisations, despite comparable classification accuracy of the MEG data across all four category organisations. Our findings show that behavioural RTs predict representational distance for an organisational principle other than animacy, however further research is needed to determine why this relationship is observed only for some category organisations and not others.
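The RT-distance framework described above can be sketched with simulated data: place stimuli in a multidimensional activation space, define a linear category boundary from the class means, and correlate each stimulus's distance from that boundary with its categorisation RT. Everything below is a toy stand-in for the MEG decoding analysis, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_per, dim = 50, 20

# Stand-in activation patterns for animate and inanimate stimuli.
animate = rng.standard_normal((n_per, dim)) + 0.8
inanimate = rng.standard_normal((n_per, dim)) - 0.8

# Linear boundary from the class means (a nearest-mean classifier); each
# stimulus's distance is its projection onto the unit normal of the boundary.
w = animate.mean(axis=0) - inanimate.mean(axis=0)
w /= np.linalg.norm(w)
midpoint = (animate.mean(axis=0) + inanimate.mean(axis=0)) / 2.0
distances = np.abs((np.vstack([animate, inanimate]) - midpoint) @ w)

# Simulated RTs: responses are faster the farther a stimulus sits from the
# boundary, which is the relationship the framework predicts.
rts = 600.0 - 20.0 * distances + 5.0 * rng.standard_normal(len(distances))

# A negative correlation reproduces the reported RT-distance relationship.
r = np.corrcoef(distances, rts)[0, 1]
```

In the time-resolved version, the boundary is refit on the neural patterns at each timepoint, tracing how long the RT-distance relationship is sustained.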